To a great extent the problem of processing natural language is the problem of relating text or speech to an internal framework of knowledge representations. In the past, artificial intelligence workers concentrated their efforts principally on linguistic issues because certain properties of language, such as lexical composition and syntactic structure, lend themselves quite readily to formal analysis. It is evident, however, that systems capable of language production and comprehension must be as rich in knowledge representations as they are in linguistic elements, and they must be able to link the two domains in a computationally effective manner. Unfortunately, knowledge, unlike speech or text, does not come to us in a prepackaged notational system, nor is it open to direct inspection and recording. To produce a formalism for knowledge we must infer both the content that is stored and the mode of its internal representation. These tasks have become part of the main objective of natural-language research.
A great deal of the work on knowledge representation in the 1970's focused on the delineation of "primitives" that could be used to store semantic information. Arguments in favor of the use of primitives were based on empirical evidence drawn from recall experiments with human subjects as well as computational demonstrations of the versatility of