Followers of the Sanatan Dharma, or "Hindus", in general, and those who believe in the philosophy of Advaita Vedanta as propounded by Sankaracharya in particular, see the world as an illusion ("Maya") that hides an underlying unity of universal consciousness, or Brahman.
This concept has been explored in greater detail in
The Road to pSingularity, but one of its key ideas is that there exists a central consciousness and that all individual sentient entities (or people) are part of it. The link between the universal ("Brahman") and the individual ("Atman") is lost in the illusion of "Maya", and only the true adept, after significant effort, realises its existence and, in doing so, succeeds in uniting his individual Atman with the universal Brahman through the process of Yoga, or union.
This post does not try to establish the authenticity of this point of view. Instead, it explores how technology is moving to replicate this concept of the Brahman, the universal consciousness.
A centralised pool of knowledge, or at least information, from which multiple agents can draw inferences is not a new concept. A traditional library or encyclopaedia fits the model reasonably well, and of course the idea has been taken to a high degree of sophistication by Wikipedia. In fact, the Wikipedia model has the added advantage that multiple people can not only pull information out but also put information in for others to benefit from, in the spirit of Web 2.0. In an earlier post,
Google DoPe and the Doors of Perception, I explored how Wikipedia could be used as a back-end repository of information if we could come up with a reasonably sophisticated front-end client that can draw inferences ("knowledge", "wisdom") from the raw data ("information") stored there.
As a continuation of this line of thought, and as a practical implementation of the idea, we now have Raputya: a central, cloud-based server, connected to and accessible from the Internet, that can be used as a repository of artificial intelligence by robots distributed across the world.
Artificial intelligence is a difficult subject that the world has been grappling with since the early 1960s. Some people have tried to create intelligence through very complex algorithms, while others have preferred to rely on massive amounts of data. Irrespective of the approach, what each robot needs is easy access to a complex computing facility that helps it (a) understand stimuli from the environment and (b) respond in a manner that best meets its requirements. It does not matter whether the robot is welding a piece of machinery or answering a question typed at a keyboard -- the core functionality is restricted to sensing inputs and responding appropriately. This is like any computer program, except that in AI the range of inputs is extremely large and, more importantly, unpredictable, so standard rule-based, or algorithmic, approaches cannot meet even the simplest requirements of AI without enormous machines.
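To make the limitation concrete, here is a minimal sketch of the sense-and-respond loop described above, written as a fixed rule table. The function name and the stimulus strings are invented for illustration; no real robot API is implied.

```python
def rule_based_policy(stimulus: str) -> str:
    """A fixed rule table: workable only when inputs are few and predictable."""
    rules = {
        "part_detected": "weld",
        "question_received": "answer",
    }
    # Any stimulus outside the table defeats the rule-based approach --
    # exactly the limitation pointed out above: the real world's inputs
    # are too many and too unpredictable to enumerate in advance.
    return rules.get(stimulus, "no_action")

print(rule_based_policy("part_detected"))     # -> weld
print(rule_based_policy("unexpected_event"))  # -> no_action
```

The table works for the two stimuli it anticipates and fails silently on everything else, which is why rule-based systems scale so poorly against open-ended environments.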
Since it is difficult for each robot to carry this facility individually, Raputya places it on a shared server that any robot can access -- not really different from reading Wikipedia with a browser instead of keeping hundreds of data files on each personal desktop. It is also not so different from using a mainframe computer from a "dumb" terminal, where the terminal acquires the "smartness", or intelligence, of the program running on the central mainframe.
So what is so new about Raputya that it merits a post of its own?
Technically nothing, but conceptually the key difference is that each robot can share the data, or "experience", of all other robots and learn from them. Most contemporary approaches to artificial intelligence involve machine learning, where the program learns to connect a stimulus to a response by observing past behaviour and noting which responses were correct or appropriate. Isolated robots must each learn from scratch, and there is a limit to how much learning a robot can do before it becomes intelligently operational. Through a clever mechanism for sharing data across multiple robots, Raputya overcomes this limitation and allows each robot to access, learn from and utilise the data accumulated by the others.
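The sharing mechanism can be sketched in a few lines: each robot logs (stimulus, response, outcome) records to a shared store, and any robot can then pick a response based on the pooled experience of all the others. The `SharedExperienceStore` class and its methods are hypothetical illustrations, not Raputya's actual API.

```python
from collections import defaultdict

class SharedExperienceStore:
    """A toy stand-in for a shared cloud repository of robot experience."""

    def __init__(self):
        self.records = []  # pooled (robot_id, stimulus, response, success) tuples

    def upload(self, robot_id, stimulus, response, success):
        self.records.append((robot_id, stimulus, response, success))

    def best_response(self, stimulus):
        """Pick the response with the best success record, across ALL robots."""
        scores = defaultdict(int)
        for _, s, r, ok in self.records:
            if s == stimulus:
                scores[r] += 1 if ok else -1
        return max(scores, key=scores.get) if scores else None

store = SharedExperienceStore()
store.upload("robot_A", "door_closed", "push", False)
store.upload("robot_A", "door_closed", "pull", True)
store.upload("robot_B", "door_closed", "pull", True)
# A third robot that has never encountered a closed door still benefits
# from what robots A and B learned:
print(store.best_response("door_closed"))  # -> pull
```

The point of the sketch is the last line: the querying robot did none of the learning itself, which is exactly the leap over the learn-from-scratch limitation described above.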
In fact, the similarity with Wikipedia is striking. In Wikipedia, each individual can access information that has been acquired, uploaded and possibly validated by others, in the classic Web 2.0 style. Raputya could be identical, except that the information is put to a slightly different purpose -- in this case, to demonstrate intelligent behaviour.
Native human intelligence depends on people learning a huge number of facts about themselves and the world around them. Forget complex things like mathematics and music: even a child acquires a prodigious amount of information before he or she can balance a set of wooden blocks on top of each other or carry on a simple conversation with someone else. Over a lifetime, and particularly during the formative years, a person picks up a lot of data and processes it into the information, knowledge and wisdom used to handle daily chores. When computers try to mimic this with artificial intelligence, we get chess-playing machines like Deep Blue (which can beat human grandmasters), Watson (which can beat humans at the quiz show Jeopardy) or industrial robots that can recognise automobile parts and weld them together with uncanny precision. The trouble is that each is specialised in one activity: Deep Blue would lose at Jeopardy, and Watson is useless on an automobile assembly line! That is because the logic and data (the "intelligence") are both local and specific to one system and cannot be accessed by the others.
This is where Raputya, with its purported ability to share data, and the logic to process it, could play a very significant part. The World Wide Web is far more powerful than any individual computer system precisely because it allows access to a shared pool of information, and Raputya, if it lives up to its promise, could become a similar tool -- except that instead of merely sharing (and hence multiplying the value of) information, it would allow the sharing (and hence multiplying the efficiency of) "intelligence". Obviously, this assumes a common representation of the data that can be uniformly accessed by other systems, but that may not be too difficult with standards like XML.
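As a small illustration of that last point, one robot's observation can be encoded as XML so that any other system can parse it with a standard library. The element names (`observation`, `stimulus` and so on) are invented for this sketch; they are not part of any actual interchange standard.

```python
import xml.etree.ElementTree as ET

def encode_observation(robot_id, stimulus, response, success):
    """Serialise one experience record into a portable XML string."""
    obs = ET.Element("observation", robot=robot_id)
    ET.SubElement(obs, "stimulus").text = stimulus
    ET.SubElement(obs, "response").text = response
    ET.SubElement(obs, "success").text = str(success)
    return ET.tostring(obs, encoding="unicode")

xml_record = encode_observation("robot_A", "door_closed", "pull", True)
print(xml_record)

# Any other robot, on any platform, can decode it with a standard parser:
decoded = ET.fromstring(xml_record)
print(decoded.find("response").text)  # -> pull
```

The uploader and the downloader share nothing but the agreed element names, which is precisely what a common representation buys: the consuming system needs no knowledge of how the producing system stores its data internally.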
If you are with me on this so far, take a deep breath, because the next assertion just might take your breath away.
Let us go back to the Brahman of Advaita Vedanta and view it as a pool of intelligence, or consciousness, of which each individual person (or sentient entity) is a part. You would really need to read the
book or, if you do not wish to spend money on the paperback, visit the
website to appreciate this point of view. The Atman of the individual and the Brahman of the universal consciousness are tied to each other in a manner that is understood, or experienced, if and only if the individual reaches a state of intellectual maturity. Sages, seers and mystics across civilisations who have achieved this level of maturity can then see or experience "visions" that cut across space and time -- much as if an industrial robot that welds car components could also play Jeopardy as well as Watson does!
After Wikipedia, Raputya could become the next model in our attempt to understand intelligence and consciousness.
For more information on Raputya, you could visit the
RoboEarth Cloud Engine website, watch this
video or check out the
BBC or the
IEEE reports.