It’s funny how the things that turn out to be important are somehow never the things you thought would turn out to be important, like the importance of a Jeopardy-playing computer program to what it means to be ‘human’. Tomorrow, the face-off between I.B.M.’s Watson computer and the two best human Jeopardy players will be broadcast—and Watson is expected to win. This is technologically a big deal, as explained brilliantly in this Mashable article, because of the awesome computing power needed to get computers to parse the puns and allusions characteristic of Jeopardy answers. It’s humanistically a big deal because, until tomorrow (maybe), playing Jeopardy was one of those things that only humans were thought to do well. Before Watson, only hominids were thought to be any good with homonyms.
What does this have to do with metaphor? Well, the kinds of things Watson and his human opponents will be parsing tomorrow are the same kinds of things that go into metaphors: loose associations, punning relationships, sidelong and sidereal correlations. Until now, computers have not been very good at making these kinds of intuitive connections, as the wealth of useless information thrown up by the simplest Google search demonstrates. If Watson can do it, though, that is one giant leap for computerkind…
There has been research into teaching computers to understand metaphors, usually involving the painstaking compilation of crucial keywords. Watson may well turn out to be the first proof of concept. And there is no reason, theoretically at least, why computers shouldn’t understand metaphors. The one crucial ingredient is context.
You don’t have to look to computers for examples of poor metaphor comprehension. Children are pretty poor at comprehending complex metaphors, too, at least until they reach adolescence. As children’s knowledge of the world grows, though, so does their metaphorical range. The same is true for adults. Any metaphor is comprehensible only to the extent that the domains from which it is drawn are familiar. If you have the context, you can figure out the meaning.
The lack of essential context is what perplexed the crew of the Starship Enterprise when they encountered the Tamarians in the “Darmok” episode of Star Trek: The Next Generation. The Tamarians speak a language no one has yet been able to fully decipher. The Tamarian tongue is so elusive because it is so allusive, consisting entirely of metaphors from the alien race’s mythology and history. In Tamarian, for example, “cooperation” is expressed by the phrase “Darmok and Jalad at Tanagra” because Tamarian folklore includes the tale of Darmok and Jalad, two warriors who banded together to fight a common foe on the island of Tanagra. Other Tamarian metaphors include “Darmok on the ocean” for loneliness, “Shaka, when the walls fell” for failure, “The river Temarc in winter” for silence, “Sokath, his eyes open” for understanding, and “Kiteo, his eyes closed” for refusal to understand. In comprehending metaphor, context is king. There’s absolutely no reason why a computer can’t do it.
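The Tamarian examples make the point concrete enough to sketch in code. Here is a toy Python illustration (not anything Watson actually does): the metaphors themselves are just lookups in a shared cultural context, and a listener without that context, like the Enterprise crew, simply draws a blank. The `interpret` function and its fallback message are hypothetical, invented for this sketch.

```python
# The shared cultural context a Tamarian speaker assumes: metaphors from
# the race's mythology, mapped to the meanings listed above.
TAMARIAN_CONTEXT = {
    "Darmok and Jalad at Tanagra": "cooperation",
    "Darmok on the ocean": "loneliness",
    "Shaka, when the walls fell": "failure",
    "The river Temarc in winter": "silence",
    "Sokath, his eyes open": "understanding",
    "Kiteo, his eyes closed": "refusal to understand",
}

def interpret(phrase, context):
    """Return the metaphor's meaning if it exists in the shared context;
    otherwise admit incomprehension, as the Enterprise crew had to."""
    return context.get(phrase, "incomprehensible (missing context)")

print(interpret("Shaka, when the walls fell", TAMARIAN_CONTEXT))
print(interpret("Shaka, when the walls fell", {}))  # listener lacks the lore
```

The same phrase succeeds or fails depending entirely on what the listener already knows, which is the whole argument in miniature: with the context, the meaning is trivial to recover; without it, no amount of parsing helps.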
And it’s funny how books you’d almost forgotten turn out to be unexpectedly relevant, like The Body Electric, which (like this piece on Watson in the NYTimes) explores how computers could be considered living things—given the right senses and enough context.