Pros Sort Computer Translating

Byline: Ann Geracimos, THE WASHINGTON TIMES

When it comes to wordplay, computer scientist Bonnie Dorr is a master. An associate professor at the University of Maryland, she works with language - sometimes several languages at once.

Her focus these days is inventing software that can summarize assorted documents - any means of communication, written or spoken - by machine, do it as quickly and efficiently as possible, and even do it cross-lingually. This would allow a person to take documents in Arabic, for example, translate them by machine into English and then summarize the contents in English to determine which documents should be sent on for what she calls "high-quality human translation."
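As a rough illustration of that workflow - a sketch only, with hypothetical translate() and summarize() placeholders standing in for real machine-translation and summarization components, not Ms. Dorr's actual system - the triage might look like this:

```python
def translate(text, source="ar", target="en"):
    # Placeholder: a real system would call a machine-translation
    # engine here; this sketch just passes the text through.
    return text

def summarize(text, max_chars=72):
    # Placeholder: crude truncation stands in for real summarization.
    return text[:max_chars]

def triage(documents, is_relevant):
    """Translate each document by machine, summarize it in English,
    and keep only those whose summaries suggest the document is
    worth sending on for high-quality human translation."""
    selected = []
    for doc in documents:
        english = translate(doc)   # rough machine translation
        gist = summarize(english)  # short English summary
        if is_relevant(gist):      # human or automated screening step
            selected.append(doc)
    return selected
```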

Professionals elsewhere are working on grouping documents selectively in clusters, she says. Her expertise is directed at finding ways to produce summaries of documents relevant to a particular need for easy access.

"Machine translations still have issues," Ms. Dorr cautions. "If you get a general idea of the document first, you have a better idea of what you want."

The field of multilingual machine translation has advanced rapidly in the past 20 years. Carnegie Mellon University's School of Computer Science has a separate Center for Machine Translation within a new Language Technologies Institute. Several dozen computer systems exist for translating as many as 38 languages, but the quality of each varies.

Machine translation and summarization are different realms, Ms. Dorr says, "but each uses components of the other's technology, so we do a lot of different things with each of those."

Summarization - her primary focus for the past few years - is "taking documents and trying to find a very short summary for them of, say, 20 words, or about 72 characters, including punctuation and spacing."
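The character budget itself is easy to picture. A toy sketch - purely illustrative, not her method - that packs leading words into a fixed limit, counting spaces and punctuation toward the total:

```python
def headline(words, budget=72):
    """Pack words into a summary no longer than `budget` characters,
    counting the joining spaces toward the limit."""
    out = []
    length = 0
    for w in words:
        extra = len(w) + (1 if out else 0)  # +1 for the joining space
        if length + extra > budget:
            break
        out.append(w)
        length += extra
    return " ".join(out)

text = "Machine translation still has issues researchers say".split()
print(headline(text))             # fits within the 72-character budget
print(headline(text, budget=30))  # "Machine translation still has"
```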

It has taken a long while to get to this point, with many competing methods tried along the way. She describes the most successful as a hybrid, a combination of linguistic and numerical approaches. The numerical approach employs statistics - basically, counting words - while the linguistic approach largely uses symbols.

"The statistical approach is better at getting content, the linguistic at getting the form," she says.

The work is being done at the University of Maryland laboratory known as CLIP (short for Computational Linguistics and Information Processing), part of the university's Institute for Advanced Computer Studies. Ms. Dorr's colleagues include members of the departments of linguistics and library science.

Chinese and Arabic are the top languages with which they work, but Spanish and Korean also are used. The "documents" involved include broadcast news as well as print forms of communication.

"Interestingly, Arabic and Spanish have more in common structurally than I realized," Ms. Dorr says. "And Chinese is closer to English in some ways. It's configurational - having a subject, verb and object the way we tend to have [in English]. ... Arabic and Spanish are not in the same family, but there seem to be similarities at least lexically. In English, you would say, 'I fear,' but in Spanish you would say, 'I have fear of it.' And you might find something like that in Arabic."

When people ask how the system handles figures of speech such as metaphors, she - metaphorically - throws up her hands. "We have a hard enough time with non-subtleties," she says. "We have so many things to work on that are pretty difficult. …