If we want to build a useful AGI tool, it will need a large vocabulary – say 50,000 words. The obvious base on which to build that vocabulary is a dictionary.

This has been tried before, and failed. 

Dictionaries have several problems: the definitions are frequently circular, the handling of verbs barely rises above transitive/intransitive, and dictionaries built from the contributions of many people lack discipline. As a measure of their limited relevance, dictionaries play little part in how people actually learn a language.

We can point to billions of people learning their native language; we cannot point to one team of people who have successfully built a machine that can read and understand language. Why not? It is essentially a single-mind problem. The mind of a child is exposed to hundreds of thousands of instances of language use and fits them all together into an integrated whole, in a way we do not understand. Some problems, like designing and building a new aircraft or a bridge, break down readily into relatively independent tasks. Learning a language is not one of them – the pieces all fit and work together far too tightly for that. We have to use a single mind to put all the pieces together, rather than a team, which makes the process slow.

Circularity – an example:

Amazement: a feeling of great surprise or wonder

Astonishment: great surprise

Surprise: a feeling of mild astonishment or shock caused by something unexpected 

Shock: a feeling of disturbed surprise resulting from an upsetting event

“unexpected” provides a path out of circularity.
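To make the circularity concrete: treat each headword as a node and each word used in its definition as an edge, and a circular definition is simply a cycle in that graph, while a word like “unexpected” that needs no further definition is an exit from it. Here is a minimal sketch – the data is a toy reduction of the example above, not a real dictionary, and the code is only illustrative:

```python
# Toy reduction of the amazement/astonishment/surprise/shock example above:
# each headword maps to the content words used in its definition.
from typing import Dict, List, Optional, Set

definitions: Dict[str, List[str]] = {
    "amazement":    ["surprise", "wonder"],
    "astonishment": ["surprise"],
    "surprise":     ["astonishment", "shock", "unexpected"],
    "shock":        ["surprise", "upsetting"],
    "unexpected":   [],   # treated as a primitive -- the path out of the circle
}

def find_cycle(word: str, graph: Dict[str, List[str]],
               path: List[str], seen: Set[str]) -> Optional[List[str]]:
    """Depth-first search returning the first definitional cycle reachable from `word`."""
    if word in path:                      # looped back onto our own path: a circular definition
        return path[path.index(word):] + [word]
    if word in seen or word not in graph:
        return None
    seen.add(word)
    for used in graph[word]:
        cycle = find_cycle(used, graph, path + [word], seen)
        if cycle:
            return cycle
    return None

print(find_cycle("amazement", definitions, [], set()))
# -> ['surprise', 'astonishment', 'surprise']: surprise and astonishment define each other
```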

Expectation: a strong belief that something will happen or be the case in the future

This is where intelligence enters the equation. Without intelligence, expectation is only ever that what has already happened will happen again (machine learning, say). If the unconscious mind (or our emulation of it) is continually updating its expectations, intelligence emerges.
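One way to picture “continually updating expectations” – purely as a sketch, not the author’s mechanism – is a store of what has followed what so far, in which old evidence fades and new evidence keeps reshaping the prediction:

```python
# A minimal sketch: expectations as "what comes next" predictions that are
# updated continuously, rather than frozen at training time.
from collections import defaultdict
from typing import Optional

class ExpectationWeb:
    """Keeps, for each context, a running expectation of what follows it.
    Recent evidence is weighted more heavily, so expectations keep adapting."""
    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.counts = defaultdict(lambda: defaultdict(float))

    def observe(self, context: str, outcome: str) -> None:
        # Fade old evidence, then record the new observation.
        for o in self.counts[context]:
            self.counts[context][o] *= self.decay
        self.counts[context][outcome] += 1.0

    def expect(self, context: str) -> Optional[str]:
        """The currently most expected outcome for this context, if any."""
        outcomes = self.counts[context]
        return max(outcomes, key=outcomes.get) if outcomes else None

web = ExpectationWeb()
web.observe("storm", "rain")
web.observe("storm", "rain")
print(web.expect("storm"))   # -> 'rain': expectation is what has already happened
web.observe("storm", "calm")
web.observe("storm", "calm")
web.observe("storm", "calm")
print(web.expect("storm"))   # -> 'calm': the expectation has updated with new evidence
```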

If we use a dictionary as a basis, do a lot of work on it to fix its shortcomings (already done), and provide a dynamic web of expectations (in progress), we should expect to hit paydirt. Even before that happens, complex text treated this way explains itself by revealing the exact meaning of each word, or of agglomerations of words, and the resulting structure can be activated, just as what a person reads can be activated in their mind.
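The dictionary-plus-expectations system itself is not shown here, so purely as a rough stand-in for how text might “explain itself” by attaching the right sense to each word, here is a classic Lesk-style overlap sketch, using a hypothetical two-sense entry for “shock” (this is a well-known technique, not the author’s method):

```python
# Pick, for a word in context, the dictionary sense whose definition shares
# the most words with its neighbours. Illustrative only.
from typing import Dict, List

# Hypothetical mini-dictionary: each headword maps to a list of candidate senses.
senses: Dict[str, List[str]] = {
    "shock": [
        "a feeling of disturbed surprise resulting from an upsetting event",
        "a sudden discharge of electricity through a part of the body",
    ],
}

def resolve(word: str, context: List[str]) -> str:
    """Pick the sense of `word` whose definition overlaps the context most."""
    def overlap(definition: str) -> int:
        return len(set(definition.split()) & set(context))
    return max(senses[word], key=overlap)

print(resolve("shock", ["unexpected", "news", "was", "upsetting", "to", "her"]))
# -> the 'disturbed surprise' sense, because its definition shares 'upsetting' with the context
```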