The Doors of Perception is an iconic book in which Aldous Huxley, while exploring the hallucinogenic properties of mescaline, came to the unusual conclusion that any sentient, cognitive being has instant and automatic access to all possible knowledge in the universe. However, the neural faculties that are expected to process this information and render it useful in a socio-cognitive context are in danger of being overwhelmed by the sheer mass of data -- as is occasionally evident in what we refer to as 'madness' -- and so they are protected and isolated from it by the mind, which works as a gate or valve that reduces the quantum of information the brain is eventually exposed to. This mind, or gate, or valve is what Huxley refers to as the Doors of Perception: one that can be opened wider through mental techniques (Yoga?) or through hallucinogenic drugs, to reveal a greater amount of "significance" or meaning in the sterile reams of data that are otherwise available. While the jury may still be out on the veracity of this hypothesis, the emergence of the World Wide Web has created for us an intriguing analogy that is well worth exploring.
There is no doubt that the Web represents a very large amount of information -- whether in the form of specialised packets as in Wikipedia, in the form of semi-structured information at social networking sites, or in the rather unstructured format of billions of web pages scattered all across the internet. Almost any kind of information is certainly available out there, and it is also true that, barring some secret documents, anyone can in principle access most of it. As a thought experiment (like Einstein's imagined ride on a beam of light!), one could -- again in principle, and of course in a moment of madness -- print out all that information, but then what? Can any human being, any sentient, cognitive being, ever hope to make any sense of that mountain of paper on his desk? Obviously not.
So what do we do? We use a search engine to act as a filter that reduces the amount of information dumped on our digital desktop, and then we navigate through it using our own intelligence to reach our goal. This combination of search engine and our own intelligent interpretation of the data is, in my opinion, analogous to what Huxley would refer to as the Doors of Perception. Can we improve on this man-machine hybrid and create a better search-and-interpret product? Can we keep improving it until it starts to resemble the mind more closely? To do so we need to understand that the interpretation part consists of two components: a semantic module that helps us distinguish between similar-sounding but contextually different words, e.g. 'bus' as in a vehicle and 'bus' as in a communication channel, and a natural language processor that can do a bi-directional, context-sensitive translation between human phrases and a computer query-and-reporting language.
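To make that division of labour concrete, here is a minimal sketch of what such an interpretation layer might look like. Everything here is hypothetical -- the class and method names are my own invention for illustration, not any existing API:

```python
class SearchEngine:
    """Stand-in for a real engine such as Google."""
    def search(self, query: str) -> list[str]:
        raise NotImplementedError  # would return raw documents for a query

class SemanticModule:
    """Picks out the intended sense of an ambiguous word, e.g. 'bus'
    (vehicle) versus 'bus' (communication channel)."""
    def filter(self, documents: list[str], intended_sense: str) -> list[str]:
        # Crude placeholder: keep documents mentioning the intended sense.
        return [d for d in documents if intended_sense in d.lower()]

class NaturalLanguageProcessor:
    """Bi-directional translation between human phrases and machine queries."""
    def to_query(self, phrase: str) -> str:
        return phrase.strip().rstrip("?").strip()  # human phrase -> query
    def to_report(self, results: list[str]) -> str:
        return "\n".join(results)                  # results -> readable answer
```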
So if I were to ask "What trees grow in the Amazon?", the natural language processor (NLP) should be able to convert it into a search command and pass it to the search engine (SE). The SE should retrieve information and pass it to the semantic interpreter (SI), which would eliminate information about both Amazon, the retailer, and Amazon, the female warriors of Greek myth, leaving only information about the Amazon rainforest. Finally, the NLP should be able to repackage this information into a neat list of trees that grow in the Amazon and show it on the screen, with URLs pointing to more information in Wikipedia!
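Using the hypothetical components sketched above, the whole round trip might look something like this -- again an illustration of the proposed flow, not working search code:

```python
def answer(question: str, nlp: NaturalLanguageProcessor,
           se: SearchEngine, si: SemanticModule) -> str:
    query = nlp.to_query(question)        # "What trees grow in the Amazon?"
    raw_results = se.search(query)        # everything the engine finds
    relevant = si.filter(raw_results,     # keep the rainforest; drop the
                         "rainforest")    # retailer and the warriors
    return nlp.to_report(relevant)        # a neat, human-readable list
```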
Is this Utopian? Not really.
Google has a powerful search engine and Wikipedia provides a great information base to start with -- so the SE is certainly in place. Natural language processing is a well-established branch of study, and combined with the advances made in language translation, there is a good probability that near-human dialogue can be achieved without too much effort. What is clearly missing is the ability to introduce semantic interpretation, and this is where we enter the realms of artificial intelligence -- a clichéd phrase that I have been trying to avoid for quite some time.
How does the system distinguish between Amazon the retailer and Amazon the jungle? Or between Bengal, the state, and Bengal, the iconic tiger of a football team? One way would be to learn by observation. If I were to search for Bengal and then follow the links toward Bengal the state, then in my case, and only in my case, Bengal means the state. But if someone in Cincinnati were to search for Bengal and then follow the links for the football team, then in his case, and only in his case, Bengal maps to the iconic tiger. How on earth would the semantic interpreter distinguish between the two of us?
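One way to picture this learning-by-observation is a simple per-session tally of which sense a searcher ends up clicking on. This is a toy sketch under my own assumed names; a real system would be far more elaborate:

```python
from collections import Counter, defaultdict

class ClickObserver:
    """Tallies, per anonymous session, which sense of an ambiguous
    term the user followed after searching for it."""
    def __init__(self):
        self._counts = defaultdict(Counter)  # (session, term) -> sense tally

    def record_click(self, session: str, term: str, sense: str) -> None:
        self._counts[(session, term)][sense] += 1

    def likely_sense(self, session: str, term: str):
        tally = self._counts[(session, term)]
        return tally.most_common(1)[0][0] if tally else None

obs = ClickObserver()
obs.record_click("calcutta-user", "Bengal", "state")
obs.record_click("cincinnati-user", "Bengal", "tiger")
print(obs.likely_sense("calcutta-user", "Bengal"))    # -> 'state'
print(obs.likely_sense("cincinnati-user", "Bengal"))  # -> 'tiger'
```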
An obvious answer would be to first identify us, say through our Google login ID, but that would compromise privacy. Instead, can we use a clever combination of the history of past searches on the word Bengal (or Amazon, for that matter) to establish some probability figures for Bengal => State vs Bengal => Tiger, which can then be refined or otherwise tested through a dialogue established by the natural language processor (for example, "Did you mean the tiger or the state?")? Difficult, but certainly not impossible, considering the number of AI-style chatbots that are available. In fact, techniques like market basket analysis from the world of data mining can be tailored to identify concepts that are more closely related.
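A back-of-the-envelope version of that idea: derive sense probabilities from a log of past searches, and fall back to the NLP's clarifying dialogue when no single sense is dominant. The counts and the ask_user stub below are invented purely for illustration:

```python
# Invented counts: how past searches for "Bengal" were finally resolved.
history = {
    ("Bengal", "state"): 1200,
    ("Bengal", "tiger"): 800,
}

def sense_probabilities(term: str, history: dict) -> dict:
    relevant = {s: n for (t, s), n in history.items() if t == term}
    total = sum(relevant.values())
    return {s: n / total for s, n in relevant.items()}

def ask_user(prompt: str) -> str:
    # Stand-in for the clarifying dialogue the NLP would conduct.
    return input(prompt + " ")

def resolve(term: str, history: dict, threshold: float = 0.9) -> str:
    probs = sense_probabilities(term, history)
    best, p = max(probs.items(), key=lambda kv: kv[1])
    if p >= threshold:
        return best  # confident enough: answer without asking
    return ask_user(f"Did you mean {' or '.join(probs)}?")
```

With the counts above, resolve("Bengal", history) finds P(state) = 0.6, below the threshold, and so asks the question rather than guessing -- exactly the refine-through-dialogue behaviour suggested above.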
Basically, what we are looking for is a bolt-on product that will sit on top of the Google search engine and act as an interface between the human world and the world of digital information. Its main job would be to reduce the information that is available on the web, identify and isolate the relevant portions, and pass them on to the curious human who triggered the search (or "thought"?).
Would that be a new product from Google? Google DoPe? Google Doors of Perception?