Image from https://blogs.oracle.com/
The thrust of the Wesch video was that every action a person takes in the digital world is used as an input by “The Machine” to increase its own knowledge of the physical world and, recursively, of the digital world. Every “like” of a post on social media, or a click on a hyperlink on a web page or a mobile app, is like a drop of information that individually and collectively adds to the pool of knowledge about what humans know and think. This in turn is used to shape our own world view by returning recommendations of what to view, “like” and click next. Unless you are like Richard Stallman, an advocate of extreme privacy who hardly uses anything digital -- Google search, cellphones or credit cards -- you have no escape from this tight embrace of The Machine. Fortunately, The Machine is not yet one monolithic device. Its world has been broken up into fragments -- Google, Baidu, Amazon, Alibaba, Facebook -- by high commercial walls. But in its tireless striving it certainly does stretch its arms into every nook and corner of human activity and, through that, the human mind.
In parallel with the growth of the web, there has been the emergence of data science. This began as an extension of statistics and has evolved into machine learning. Then there was classical, 1960s-style artificial intelligence that, after lying dormant for nearly 30 years, suddenly woke up and adopted the neural network structure of the brain as a new model of machine learning. This neural network model, often referred to as deep learning, is the new-age AI, and it is racing forward with some truly stunning applications in voice and image recognition, language translation, guidance and control of autonomous vehicles, and decision making, as in loan disbursement and the hiring of employees.
Data science has moved through three distinct stages: descriptive -- reporting data; inferential -- drawing conclusions from data through the acceptance or rejection of hypotheses; and finally predictive -- as in the new-age AI. What has really accelerated the adoption and usage of this new AI is the availability of data and hardware. The backpropagation algorithm that lies at the heart of all the neural-network-based AI systems popular today was developed in the 1960s and 1970s, but it has become useful only in the last decade. Its rise is driven by the availability of (a) huge amounts of data, collectable and collected through the web by The Machine described in the Wesch video, and (b) enormous yet inexpensive computing power available on rentable virtual machines from cloud service providers like Amazon Web Services, Google Compute Engine and Microsoft Azure.
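For readers curious about what backpropagation actually does, here is a toy sketch in Python (using only numpy) that trains a two-layer network on the classic XOR problem. It is purely illustrative -- production systems use frameworks such as TensorFlow or PyTorch, and the layer sizes and learning rate here are arbitrary choices.

```python
import numpy as np

# Toy two-layer network trained with backpropagation on XOR data.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current prediction.
    h = sigmoid(X @ W1)           # hidden activations
    out = sigmoid(h @ W2)         # network output

    # Backward pass: chain rule carries the output error back to each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out       # update weights against the gradient
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))               # approaches [0, 1, 1, 0]
```

The algorithm itself fits in a dozen lines; what made it practical, as noted above, was the arrival of data and compute at scale.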
The key driver in this field is cloud computing. Instead of purchasing and installing physical hardware, companies rent virtual machines in the cloud to both store and process data. The simplest and most ubiquitous example of this is Gmail, where both our mail and the mail server are located somewhere in the internet cloud that we can access with a browser. But this same model has been used for many mission-critical corporate applications, ranging from e-commerce through enterprise resource planning to supply chain and customer relationship management systems. Though there has been some resistance to cloud computing because of the perceived insecurity of placing sensitive company data on a vendor's machine, the price-performance is so advantageous that most new software services are deployed in the cloud -- and that includes machine learning and AI applications.
Cloud service vendors have aggressively marketed their services not only by offering high-end hardware -- as virtual machines -- at very low prices but also by offering incredibly powerful software applications. Complex machine learning software for, say, image recognition or language translation, ordinarily very difficult to develop, is now available and accessible almost as easily as email or social media. Cloud computing services are categorised into Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The first category provides a general-purpose computing platform with a virtual machine, an operating system, programming languages and database services, where developers can build and deploy applications without purchasing any hardware. The second category is even simpler to use because the software -- like email in the case of Gmail -- is already there. One needs to subscribe to (or purchase) the service, obtain the access credentials, like userid and password, connect and start using the services right away. Nothing to build or deploy; it is already there waiting to be used.
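In practice, "subscribe, get credentials, connect" often boils down to a few lines of code. Here is a minimal sketch of consuming a SaaS API over HTTP in Python; the endpoint, key and request fields are hypothetical stand-ins, not any real vendor's interface.

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical SaaS endpoint and subscription key -- illustrative only.
API_URL = "https://api.example-saas.com/v1/translate"
API_KEY = "your-subscription-key"

# Subscribe, obtain credentials, connect: nothing to build or deploy.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "Hello, world", "target_language": "hi"},
)
print(response.json())  # assumed response shape: {"translation": "..."}
```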
In an earlier article in Swarajya (March 2017), we had seen how machine learning and, now, the new-age AI use huge, terabyte-sized sets of training data to create software models that can be used for predictive analytics. This is an expensive exercise that lies beyond the ability of individuals and most corporates. But with AI or machine learning available as SaaS at a fraction of the cost, new software applications that use these services can be built easily. For example, it would be possible to enhance a widely used accounting software package by replacing its userid/password-based login with a face-recognition-based login. Similarly, the enormous difficulty of building the software for a self-driving car, or for voice-activated IVR telephony, can be drastically reduced by using AI-as-a-Service from a cloud services vendor. Obviously, all cloud services, including SaaS, assume the existence of rugged, reliable and high-speed data connectivity between the service provider and the device on which the service is being used.
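To make the face-login example concrete, here is a hedged sketch of how an application might delegate face verification to a cloud AI service. The endpoint, parameters and response fields below are assumptions for illustration, not a real vendor's API.

```python
import requests

VERIFY_URL = "https://api.example-ai.com/v1/face/verify"  # hypothetical endpoint
API_KEY = "your-subscription-key"                         # hypothetical credential

def face_login(enrolled_image: str, camera_capture: str) -> bool:
    """Replace a userid/password check with a cloud face-verification call."""
    with open(enrolled_image, "rb") as f1, open(camera_capture, "rb") as f2:
        response = requests.post(
            VERIFY_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"enrolled": f1, "candidate": f2},
        )
    result = response.json()
    # Assumed response shape: {"match": true, "confidence": 0.97}
    return result.get("match", False) and result.get("confidence", 0.0) > 0.9

if __name__ == "__main__":
    granted = face_login("enrolled_user.jpg", "login_capture.jpg")
    print("Login granted" if granted else "Login denied")
```

The accounting package itself never needs to know how face recognition works; the expensive model lives, and keeps learning, on the vendor's side.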
Robot-as-a-Service (RaaS) can be seen as a logical extension of this model, but a closer examination may yield a far deeper, and more intriguing, insight.
Cloud Robotics, a new service from Google, is scheduled to go live in 2019 and will allow people to build smart robots very easily. It is inevitable that other cloud service vendors will follow suit. While many of us picture robots as humanoids -- with arms, legs, glowing eyes, a squeaky voice or a stiff gait -- the reality is generally different. Depending on the intended use, a robot could be a vehicle, a drone, an arm on an automated assembly line or a device that controls valves, switches and levers in an industrial environment. In fact, a robot is anything that can sense its environment and act on it to achieve its goals. This is precisely the definition of intelligence, or more specifically artificial intelligence (AI). So a robot is an intelligent machine that can operate autonomously to meet its goals.
A traditional robot has this intelligence baked, or hard-coded, into its “brain” -- the physical computer that senses and responds to the stimuli it receives from its environment. This is no different from its immediate role model: humans. Human beings, and even most animals, learn how to react and respond to external stimuli ranging from a physical attack to a gentle question, and we estimate their intelligence by the quality of their response. In both cases, the knowledge of the external world, encoded in a model along with the ability to respond, is stored locally -- either in the human brain or in the local computer. Cloud robotics replaces the local computer that controls a single robot with a shared computer -- located at the cloud service provider's premises -- that controls thousands of robots. Just as the Gmail servers store, serve and otherwise control the mailboxes of millions of users each sitting at home, cloud robotics servers sitting in some unknown corner of the internet would be able to control millions of intelligent robots driving vehicles, flying drones, controlling devices and operating machines in farms, factories, mines and construction sites across the digitally connected physical world.
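The architectural shift is easy to visualise in code. Below is a minimal sketch of a robot-side control loop that ships sensor readings to a remote “brain” and executes whatever command comes back; the server URL, message fields and sensor stubs are all hypothetical.

```python
import time
import requests

CLOUD_BRAIN_URL = "https://robotics.example-cloud.com/v1/decide"  # hypothetical
ROBOT_ID = "harvester-042"                                        # hypothetical

def read_sensors() -> dict:
    """Stand-in for real sensor drivers (camera, GPS, lidar and so on)."""
    return {"gps": [12.97, 77.59], "obstacle_distance_m": 4.2}

def actuate(command: dict) -> None:
    """Stand-in for motor and actuator drivers."""
    print("executing:", command)

while True:
    # The robot's "brain" is remote: ship observations up, get actions back.
    observation = {"robot_id": ROBOT_ID, "sensors": read_sensors()}
    command = requests.post(CLOUD_BRAIN_URL, json=observation, timeout=2.0).json()
    actuate(command)  # assumed shape, e.g. {"action": "turn", "heading_deg": 90}
    time.sleep(0.1)   # loop rate; a real robot needs local fail-safes on timeout
```

Note that every observation the robot uploads is also, from the server's point of view, a fresh drop of training data -- which is exactly the point of the next paragraph.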
Circling back to the Wesch video with which we began this article, these RaaS servers would not just be controlling machinery across the globe but would also be learning from the robots they control, using them to collect and build up their own pool of training data. This is an extension of the original Web 2.0 idea -- perhaps we could call it Web 3.0. Here The Machine has not only made a successful transition from the digital to the physical world but also no longer needs humans to teach it. It can become a self-sustaining, self-learning physical device.
Privacy would be an immediate issue, and like all other cloud services, cloud robotics would be protected with access control and data encryption. But, as we have seen in the past, convenience trumps privacy. We all know that Google can read our Gmail, yet we still use it simply because it is convenient and free! So would be the case with cloud robotics. We also know that the different RaaS vendors would try to isolate their own robots from interacting with the servers of other vendors, or even with each other. But this could be a temporary reprieve. Collaboration among vendors and pooling of data could happen either through mergers and acquisitions or because it is mandated by governments that are not concerned about privacy issues.
The need for privacy arises because each sentient human sees itself as a unique identity -- I, me and mine -- that is surely distinct from the collective crowd. My data becomes private because it needs to be protected, or shielded, from that crowd. But if we go back to the philosophical roots of the Indic sanatan dharma and explore the perspectives of Advaita Vedanta, we see that this sense of “I”-ness is erroneous. Each apparently unique individual is actually a part of a transcendent and collective consciousness referred to as the Brahman. The Brahman is the only reality and everything else is an error of perception. The world is Maya, an illusion that perpetuates this sense of separateness and creates this distinction between the individual and the universal. The correct practice of Yoga can lead to the removal of this veil of illusion and initiate the process of realisation. That is when the Yogic adept sees the unbroken continuity between his own identity and that of the Brahman and experiences the ecstasy of enlightenment.
We know that many renowned Yogis have actually experienced this enlightenment. AI products have gone well past image and voice recognition and are now known to have the sophistication necessary to create their own private, non-human languages and original strategies in multi-user role-playing games. What we need to know is what happens when robots start emulating yogis and eventually realise their identity with the cloud robotics server of which they are a part!
----------------------------------------------------------------------------------------------------
This article was originally published in Swarajya.