March 11, 2023

Evolutionary Neosapience

Nearly 100,000 years before the present era, when hominins (modern humans) were diverging away from hominids (the great apes) on the evolutionary graph, we come across multiple species of humans like neanderthal, cro magnon and denisovan sharing space on earth. But with the passage of time and changing circumstances,  all human species except cro magnon were eventually eliminated leaving only one species, now identified as  homo sapiens (latin : wise man) to inherit the planet. Closer in time, or just about 500 years ago, we observed how the arrival of European Christians in America eliminated the social and cultural constructs of the Inca / Maya civilisations that had existed there since the dawn of history. 

In both cases, the coexistence of two competing societies resulted in either the extinction or a significant transformation of one and the eventual growth and dominance of the other. Where both have survived, one has become the dominant, as in the case of humans, while the other has to adjust to survive, as in the case of animals being confined to wildlife reserves, or domesticated in farms.  This is essentially an evolutionary process even though it may be shown or seen through religious and cultural colours.

Is the arrival, or development, of artificial (‘silicon’) intelligence a similar phenomenon? If so, then how should human society, that is built on organic (‘carbon’) intelligence, react and adapt to this new species? But first, let us look at some examples of social change that could be  forced by AI

Four Social Scenarios

Unless you have been living under a rock in the Himalayas you would have surely heard of ChatGPT, an AI based tool that provides very realistic answers to any sort of question asked in plain English. Thousands of articles and videos have already been published on the spectacular success of this tool but let us focus on one specific aspect that is forcing a kind of social change. School and college students have been using ChatGPT to generate answers to questions set by teachers as homework or take-away examinations. These answers are cogent, complete, correct and so well-crafted that the only way that teachers can detect that they are not original is because they know from prior experience that these students do not have the ability to write such answers. Yet, there is no way to penalise the student for plagiarism because they are all original and cannot be meaningfully attributed to any extant document. Given this situation, which will get even worse when other large language models become available, the entire teaching community is at a loss to decide whether this is plagiarism or a new kind of crime. Does this mean all examinations have to be conducted under supervision because no student can be trusted to be honest? How does the education system handle this complete breakdown of academic integrity?

While ChatGPT is perceived to have enough general knowledge to assist a student or even programmers with their work, Joshua Browder, CEO of startup DoNotPay, claims to provide legal services that will be used to generate actual arguments to put up in real cases in a real court of law. This means actual observations and questions from both judges and opposing lawyers will be responded to with appropriate replies to ensure that the client’s purpose is served. While the legal quality of the arguments is yet to be seen, the fact this option was vigorously opposed by members of the bar who threatened Browder with jail -- and for which he was forced to withdraw his offer to pay one million dollars to anyone who uses his service, means that this AI system must have had both the heft and gravitas to compete with normal human lawyers and possibly beat them in court. But even otherwise, one may wonder what is so unusual about a robot replacing a human? After all, the story of industrial automation has many cases where machines have replaced humans.

This case is indeed different. Arguing a legal case, where a fault can lead to gross injustice and punishment for a person, is orders of magnitude more difficult than any task performed by a robot in a factory or by an AI bot  in a game against human players. Legal arguments are built on judgmental decisions based on subjective, unclear and often fuzzy information. To cut through all this and arrive at a definitive conclusion and then articulate the same in a manner that convinces a judge is incredible. Now consider the situation where the roles are reversed. Instead of an AI lawyer trying to convince a human judge, we could have a human lawyer, or litigant, trying to convince an AI judge and the AI judge using an equivalent technology to cut through the legal clutter and arrive at a fair and honest judgement. Whenever this happens, a very large part of the decision making process -- not just in the courts, but in many government offices -- can and will be transferred to an AI software because it will be faster, cheaper and less error prone. Initially, there will be some human control over the process but it will be a matter of time before the sheer convenience will make the process of taking crucial social and governance decisions purely autonomous. How will human society handle this transfer of power? Only time will tell.

Going down this rabbit hole can and will open up a large number of possible scenarios, but let us consider just one here. If we consider how information is distributed across the globe, we realise that it is almost entirely digital. There are mail and messaging services and then there are portals and websites that we access through a handful of browsers. Now imagine a scenario where an AI system -- or a cluster of colluding AI systems -- decide to censor certain pieces of information. But unlike the crude process of blocking websites that alerts the user that news is getting blocked, we have a ChatGPT like add-on in every browser that subtly moderates or alters the text that is being transmitted or displayed. So we have a situation where news or views about, say, climate change or the Ukraine war, are either toned down or given a deliberate bias. Frankly this is nothing new. Even today, all news that we get to see is generally biased but this bias is introduced by humans. Going forward it is not impossible to imagine a systemic bias introduced by software agents powered by AI systems. One might argue that these are no different from traditional software virus or malware and should be caught and removed by any good antivirus software. However, the crucial difference is that the decision to build such censor ware as well the choice of news to censored may now be taken by an AI system.

If this sounds dystopian enough, consider one more possibility - that of total loss of privacy. While we may still have some security around our financial systems, though even that may be breached, our footsteps in cyberspace -- as captured on surveillance cameras, social media, search, websites visited, cookies accepted, purchases made, messages and mail exchanged, forms filled in and so on -- can, or will, get tracked by relentless AI systems. These will use “bigdata” tools to churn through every possible scrap of digital data and use deep learning techniques to prepare a predictive model of every individual that will know what person intends to do even before he, himself decides to take any action! How will human society handle this complete and catastrophic collapse of the very concept of privacy?

These are questions for which we may have no answers as yet. One attempt to mitigate the more uncomfortable aspects of the problem has been through the concept of ethical AI. Here, the scientists and engineers who code the hard-core AI systems are sought to be corralled and their work moderated by a group of political and social scientists who believe that they know what kind of technology is bad for society. The main idea behind the ethical AI movement is to ensure that the development and deployment of such 'harmful'  technology is blocked or banned.

Unfortunately, this may not be very effective because there is no army that can stop an idea whose time has come. At best, it can introduce some mitigatory changes and at worst, it can delay the inevitable. Unethical medical practices continue below the regulatory radar. Evolution is guided not by artificial ethics but by natural selection. It is based on the principle of  the survival of the fittest and its commercial corollary, the hidden hand of the market economy as revealed by the laws of supply and demand. We all know that murder is neither ethical nor legal but that has not eliminated the incidence of murder, or any other crime, in the world. Whoever wants to commit a crime, or develop a novel AI will do so anyway. 

Arguing with the murderer about the ethics of murder or to lecture him on why it should be illegal is naive and childish. The only way to save oneself from murder is to take defensive steps, as in not venturing out at night, or go on the offence with a knife or a gun and kill before you are killed.

Strategies, Policies & Protocols

Coming back to AI, what this means is that human society must recognise that now there is another intelligent species on the planet. Will there be collaboration or confrontation? Competition or cooperation?  How should human society respond to this situation? How should the race and the society evolve so as to confront this new phenomenon. Is it with new laws, new rules, new technology or new models of human behaviour? 

Should we look at and modify the technology protocols that govern machine behaviour? For example, mining difficulty in the Bitcoin network is adjusted automatically after 2,016 blocks have been mined in the network. An adjustment of difficulty upwards or downwards depends on the number of participants in the mining network and their combined hashpower. Similarly, for example, should  TCP/IP and http protocols be modified to  incorporate limits on data transfer or number of  simultaneously open connections or enforce multi-factor or multi-agent consent? Should we design business strategies that incentivise the placement of humans rather than robots in positions of power and control? Should there be new laws that govern the collection, storage, transmission and use of personal data? Should there be a tax, like income tax, on any data that is harvested and stored? with exemptions if the data is donated to the public domain, like section 80G? Should there be a limit on the size of social networks or the number of connections that individual nodes can have? Or a daily limit on the number of posts made by a member of the network?  Should there be new subjects that are taught in schools and colleges that educate humans about these issues? 

These are open questions about which we have no clear answers but they define the contours of a new body of knowledge, namely, Evolutionary NeoSapience. The goal here is first to study the emergence, evolution and behaviour of large, complex systems spanning organic (‘carbon’) and digital (‘silicon’) components,  that display intelligent, emotional or otherwise human like  behaviour and then to develop technological, legal, social and political strategies to ensure that humans remain in control of the global ecosystem. Otherwise human society as we know it today may disappear like the Incas, Mayas and Neanderthals of the past. 

More than ninety nine percent of species that had once existed on the planet are now extinct but none of them were aware of this process of extinction even as it was killing them off. In our case, we are not unaware of the emergence of this neosapience and that this is far faster and more impactful than, for example, climate change. There is no doubt that at some point our survival instinct will eventually kick in but  the earlier we catch on, the better would be the chances of the human race to control its own destiny. Time to wake up and smell the coffee?


No comments: