Still, Fig. 2.3 does show how the basic ideas interrelate in an RLC system. Knowledge structures are used to drive the analysis in an error-tolerant way, to judge the reasonability of what is produced, and to organize the results in coherent form. There is a very strong top-down component to this system. Things that are predicted are used to fill out the structure that predicted them. Things that are not predicted either generate new structures that drive the analysis, or are part of phrasal frames that tie the text together.
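The control cycle just described can be sketched as follows. This is only an illustration of the idea, not any actual program from this research: the structure class, the slot names, and the trigger table are all invented for the example.

```python
# Sketch of expectation-driven analysis: predicted items fill the
# structure that predicted them; unpredicted items may trigger new
# structures that then drive further analysis. All names are illustrative.

class Structure:
    def __init__(self, name, slots):
        self.name = name
        self.slots = dict.fromkeys(slots)      # unfilled slots = predictions

    def expectations(self):
        return [s for s, v in self.slots.items() if v is None]

# Hypothetical trigger table: an unpredicted word spawns a new structure.
TRIGGERS = {"restaurant": ("RESTAURANT-VISIT", ["patron", "food"])}

def predicts(slot, word):
    # Stand-in test; a real analyzer would consult semantic requirements.
    return (slot == "patron" and word == "John") or \
           (slot == "food" and word == "lobster")

def analyze(words):
    active = []
    for word in words:
        placed = False
        for st in active:                       # top-down: try predictions first
            for slot in st.expectations():
                if predicts(slot, word):        # prediction met: fill the slot
                    st.slots[slot] = word
                    placed = True
                    break
            if placed:
                break
        if not placed and word in TRIGGERS:     # unpredicted: build a new structure
            name, slots = TRIGGERS[word]
            active.append(Structure(name, slots))
    return active
```

Running `analyze(["restaurant", "John", "lobster"])` would build a RESTAURANT-VISIT structure from the unpredicted word and then let its predictions absorb the remaining input.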
Any new structure built is tested for reasonability in the domain, and modified if something is wrong and there is a known way to reinterpret it. This check-and-correct facility must be part of any near-term RLC program, I think, although its function in human understanding is more controversial. In an RLC program, things have to be re-formed because of mistakes in the text, indirect and metaphoric constructions, and mistakes and inadequacies in the program. In my research, I assume that people almost always find the best interpretation first, and look for ways they might do this. But any practical RLC program built now will have to take a longer way round, I fear.
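The check-and-correct cycle can be made concrete in a few lines. This is a minimal sketch of the idea, not the actual facility: the reasonability test and the reinterpretation rule below are invented placeholders for real domain knowledge.

```python
# Sketch of check-and-correct: test a newly built structure for
# reasonability; if it fails, try each known reinterpretation and keep
# the first one that passes. All domain knowledge here is illustrative.

def check_and_correct(structure, reasonable, reinterpretations):
    if reasonable(structure):
        return structure
    for fix in reinterpretations:           # known ways to reinterpret
        candidate = fix(structure)
        if candidate is not None and reasonable(candidate):
            return candidate
    return None                             # no known repair: give up

# Toy domain rule: "drink" requires a liquid object.
LIQUIDS = {"milk", "coffee"}

def reasonable(s):
    return s["action"] != "drink" or s["object"] in LIQUIDS

def metaphoric_fix(s):
    # Illustrative reinterpretation: "drinking in the scenery" as perceiving.
    if s["action"] == "drink":
        return {**s, "action": "perceive"}
    return None

result = check_and_correct({"action": "drink", "object": "scenery"},
                           reasonable, [metaphoric_fix])
```

Here the literal reading fails the domain test, so the known metaphoric reinterpretation is tried and accepted, which is the "modified if something is wrong and there is a known way to reinterpret it" step in miniature.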