August 27, 2019

Fake News - relativity, uncertainty or maya?

Truth is out there, but unreachable


What is the rate of growth of India’s GDP? Some people will say it is close to 7% while others will claim that it is 4.5% with a 95% confidence interval of 3.5 - 5.5. Some will say that it depends on the base year that is chosen. A common man like me with no access to “real” data, whatever or wherever it may be, will have to believe someone or the other.
Picture credit NYTimes

How many people were killed in the recent communal conflagration at Sandeshkhali, on the Bangladesh border? Some newspaper reports say 7 or even more, while others quote the police and say 4. Who were killed? The BJP says that its supporters were murdered by Rohingya intruders while the Trinamul claims that their supporters were killed by Bihari goons. Sitting in Calcutta, or reading this somewhere in India, one would never know what the truth is. You would have to believe someone whom you trust. But can you ever know whom to really trust?

Most of us know that Narendra Modi and Donald Trump are, respectively, the Prime Minister of India and the President of the US, but how many people in India can recollect the name of the President of China? Of France? And if we were to write that Saye Zerbo is the President of Burkina Faso, it is more likely than not that most readers would go along with the ‘fact’ without checking on Wikipedia, where it is written otherwise. Change the question and ask for the name of the mayor of a specific town in Burkina Faso and it is most likely that you will never get the answer, even though someone with a name must have held that post at almost every point of time. That fact will never be available to most of us.

This lack of direct knowledge, this dependency on belief, or, to flip it around, this lack of certainty of information is something people seem to have suddenly woken up to. Fake news is big news. In the past, producing a photograph was considered good evidence of someone being somewhere or meeting someone. No more. Thanks to Photoshop -- both the software package and, more importantly, the concept of modifying images -- it is now possible to change any image to reflect any point of view. Audio and video recordings were once considered sacrosanct and admissible evidence of saying or doing something. But with artificial intelligence software, it is now possible to create a talking-head video of anyone saying anything that you want him or her to say, and it would be almost impossible for anyone other than a technology specialist to know the difference. What is worse is that the non-specialist, the common man, will still have to believe what the specialist has said without having any means to verify the truth of his observation.

From the mundane to the most profound, even the Nasadiya Sukta (RigVed 10:129), that great repository of knowledge, throws up its arms in despair:

"Who really knows? Who will here proclaim it?
Whence was it produced? Whence is this creation?
The gods came afterwards, with the creation of this universe.
Who then knows whence it has arisen?"

Western philosophers, in general, believe that it is important to find and prove the ‘truth’ -- as opposed to eastern philosophies that accept truth as given. They have tried to clear a path through this fog of uncertainty and ignorance by championing the cause and rise of rational and experimental science. The pinnacle of this endeavour was, of course, the mechanics of Isaac Newton -- even though there were many aspects of his life that were not really ‘scientific’, but that is another story. The opening years of the twentieth century saw two developments that put a spin on the nature of truth.

First was relativity, which wrecked the concept of an absolute space or frame of reference. Paradoxical as it may sound, who was moving and who was standing still was no longer something that could be decided upon with certainty. All frames of reference were deemed equivalent in their ability to account for physical phenomena. If the results did not make sense, then you would have to change your tools of measurement, the time-keeping watch and the measuring ruler, to ensure that they agreed with the respective measurements or perceptions. Crudely speaking, it was similar to changing the way GDP is measured so that the different answers are consistent with the respective perceptions. Wise men brought in non-uniform motion, or non-inertial frames of reference, to be able to distinguish between the fundamental perceptions of motion and rest, but then gravity appeared to make it even more difficult to distinguish between the two states. In fact, even though we are taught that the Earth revolves around the Sun, Einstein’s General Relativity would say that BOTH of the following statements are correct:

1. For an observer who is at rest on the surface of the Earth [that is us!], the Sun rotates around the Earth.
2. For an observer who is at rest on the surface of the Sun, the Earth rotates around the Sun.

However, the second statement is far easier to work with when we try to calculate the trajectories of spaceships, the change of seasons or the movement of a cyclone.

It is very difficult to distinguish the true from the false because the definition of true or false changes.

The next bouncer was the uncertainty introduced by quantum mechanics. Was light, or an electron, a particle or a wave? Is an object here, or is it there, or is it in both places? And, in a grotesque but crude analogy with the body count in Sandeshkhali, is Schrödinger’s cat dead or alive? The same uncertainty carries through even today, when we build a quantum computer and cannot say with certainty whether a particular switch is ON or OFF. The very basis of logic and determinism has been upended by the introduction of uncertainty in measurement and observation. Nothing is what it seems to be.

The third and far less well known joker in this circus was incompleteness. Kurt Gödel, an acquaintance and close companion of Einstein, came out with the astonishing conclusion that there will be facts that are true but not provable or verifiable, and, conversely, facts that are false but, again, not provable to be false. Gödel’s Incompleteness Theorem used the letter of a formal science -- mathematics, or rather arithmetic -- to demolish its spirit, showing that arithmetic is either incomplete or inconsistent. Provability, or verifiability, is a weaker notion than truth or falsehood.

A hundred years ago, the two demons of uncertainty in observable facts and the relativity of truth played havoc with what was perceived to be a logical and deterministic explanation of the physical world. Today, when we talk, and despair, about fake news in both mainstream and social media, we should remember what happened in the early years of the twentieth century and learn a lesson. Because, strangely enough, even with its two feet more or less chopped off, science did not collapse or stop running. Indeed, it has gone on to scale new heights in nuclear power, space travel, digital computers and artificial intelligence.

So what is the secret that we in the twenty-first century must learn from the twentieth as we get ready for the coming deluge of fake news? First, fake news is inevitable and we cannot wish it away. No amount of head banging could ever negate the uncertainty associated with relativity and quantum mechanics. Similarly, passing draconian laws on the distribution of fake news or building huge and complicated systems to detect and delete fake news may not succeed. Just as science has quietly accepted relativity and quantum mechanics, we must learn to live with fake, or erroneous, news. We must accept that facts can no more be treated as sacred because we may never know what the real fact was. It is lost forever as it travels through multiple transmission channels that have different degrees of reliability or fidelity.

Science handles this uncertainty by introducing probabilistic models of the physical world and has been quite successful with this. Media, communications or social science must look for equivalent paradigms if they wish to be as relevant as physical science has been till date. What could be the contours of such a strategy? Obviously there is no clear answer -- or solution to the problem of fake news -- as yet, but with the passage of time things must, and will, emerge.

But the first big step would be to accept the paradox of Fake News Being Real! It is the reality that we have to live with. Practically, what this means is that one must begin with the assumption that whatever we see or read in the media -- mainstream or social -- is false. The default is that it is fake, or false, and it will be considered true if and only if there is significant evidence to the contrary! Many years ago, I had the opportunity to dictate a piece of news that was favourable to my company to a reporter of India’s premier business newspaper, and then had the pleasure of seeing my words being reported as news. What diminished my pleasure was the realisation that I had no idea who had dictated, and paid for, the contents of all the other articles that were published along with ours. That is when I realised that whenever one reads a piece of news, one must try to guess why it was written and who has paid for it. What happens in the world and what you see of it is determined by a process -- Facebook calls it an algorithm -- that depends on many factors, very few of which are visible to the public. The world outside and the image in my head could be, and most probably are, two completely different things. Life could be much simpler if one were to understand this and behave appropriately.

Once one learns to accept that the news, information or facts that are presented are most probably false -- except in the case of obviously verifiable data like the weather, a share price or whether there is a fire in the corridor -- what should be the next course of action? One option, of course, is not to take any action, and in many cases this is quite a sensible thing to do. Other, more logical and rational, options could be to corroborate the news with other sources, but the difficulty is that there is no guarantee that any news source is correct or honest. This is where the process becomes, for lack of a better word, illogical or irrational. We depend on hunch or intuition!

“Intuition is the ability to acquire knowledge without proof, evidence, or conscious reasoning, or without understanding how the knowledge was acquired. Different writers give the word "intuition" a great variety of different meanings, ranging from direct access to unconscious knowledge, unconscious cognition, inner sensing, inner insight to unconscious pattern-recognition and the ability to understand something instinctively, without the need for conscious reasoning.”

Intuition has been studied by both Eastern and Western philosophers and, in India, intuition is strongly associated with spirituality and mysticism. Western philosophers like Descartes, Hume and Kant have all acknowledged the existence and importance of intuition and have established their own points of view about the subject. Without getting into details, what this means in practice is that once we have a reasonable pool of information, not necessarily true or consistent, it is best to leave the mind to cogitate on its own and wait for it to settle on a conclusion. The truth is not out there, waiting to be discovered. It is inside, in the depths of your mind, your psyche, waiting to be experienced.

In practice, this would work something like this: before casting your vote in an election, you could do intense research by looking up and exploring each and every aspect of each and every candidate. Or, as many people do, you could take what is known as a judgment call after getting a general feel, the ‘big picture’, of the candidate and his party’s policies as a whole. You may not be able to explain why you took that particular call or justify all the actions and aspects of the individual, but perhaps that is how it is, or should be. It is possible that your actions are coloured by a built-in bias, but even biases are built up over a period of time and through the accumulation of information, some of which could be self-selecting. But net-net, a subjective approach to decisions may be the better option when your objectivity is completely compromised by fake news and facts! This is analogous to quantum mechanics accepting the probabilistic nature of the universe instead of sticking to the absolute determinism of Newtonian mechanics.

Fake news, or information, is of course a well known subject for Indic philosophers, who prefer to refer to it as Maya, or illusion. Brahma Satya, Jagat Mithya is an ancient aphorism which states that the world we see around us is an illusion, or rather a distorted ‘image’, of Brahman, which is, in reality, without form or attributes. Maya is the veil, or layer of fake news, that stands between the object or event and the observer. There is really no way to remove this veil, this illusion, this Maya, as long as the object and the observer remain separate and distinct. It is only in and through Yog, perhaps in the deepest introspection of RajYog, that one can experience the union of the atman and the brahman, the observer and the object. That is when the veil of Maya will disappear and the true nature of the world, the cosmos, can be experienced. Till then, we have to learn to live with the Maya of fake news!

The ubiquity and inevitability of fake news is a corollary, or belated acceptance of the concept of Maya in Vedanta.



This article originally appeared in Swarajya

July 25, 2019

Carbon and Silicon



This slide deck gives a brief overview of what neural networks are and what kinds of problems they can address. It has links to tutorials and actual code so that the reader can start coding right away. However, the presentation goes beyond what is possible with machines alone and explores how man and machine can be hooked together to address bigger and more interesting challenges.

July 12, 2019

Panchatantra21

Five Big Ideas for India in the 21st Century


The hurly-burly of the elections is over and a new government is in place in Delhi. Ministers and their officers must be drawing up lists of the many urgent things that need to be done in the next couple of months. But there is a difference between what is urgent and what is important, and more often than not the urgent displaces the important from our schedules. In this article we explore five important issues that India needs to address, not just for the immediate gratification of urgent needs but to set the agenda for the 21st century. From garbage disposal, through the restructuring of our education infrastructure, to the transformation of India into an artificial intelligence based, space faring civilisation rooted in its Vedic past, let us explore the five new narratives of Panchatantra/21.



Charity begins at home. The journey of a thousand miles begins with a single step. Before we set out to change the course of human civilisation, let us first clean up the garbage from our doorstep. Literally. India is drowning in garbage. Other than the metropolitan cities, no town or village has any mechanism to dispose of household garbage in any meaningful way -- except to dump it beyond sight. Households dump it on the streets and the municipality, if such a thing exists, collects it and dumps it at the edge of the town. This creates huge and unsustainable garbage dumps that are not only an eyesore but also a health and ecological hazard. The situation is worse if the town has a lot of visitors, tourists or pilgrims. The devastation is compounded by the number of people creating garbage and the indifference of transient visitors to the outcome of their actions.

Disposal of garbage is not rocket science and communities across the world have managed to solve the problem to a large extent. In fact, systematic disposal of garbage is a leitmotif of civilisation, and the Sindhu-Saraswati civilisation was perhaps the first to realise this and implement it in its system of drains. But in the twenty-first century we must do it better and differently. Swachchha Bharat and its focus on building toilets must be enlarged to encompass the creation of industrial grade garbage disposal units in each and every town and village in India. Methods and systems to collect, segregate, transport, recycle and dispose of garbage exist in many countries; India must adopt and adapt what is most appropriate for it and then create a mechanism to replicate it in a scalable manner across the land.

After garbage, the next area to be cleaned up -- this time, metaphorically -- is the education sector, where the infamous license-control raj is still alive and choking honest intent. Regulatory bodies like the AICTE and the UGC have all the authority to control educational institutes but none of the responsibility for delivering anything useful. They have throttled and choked well meaning individuals seeking to create educational institutes of excellence. Instead, what we have are either taxpayer funded white elephants run by politically connected, government appointed directors or dubious private institutions run by crony capitalists. A majority of the former are handicapped by the lack of three things: funds, efficiency and imagination. A majority of the latter aim to fool students and deliver dubious value in terms of employability.

What we need instead is a regulator modelled on the Registrar of Companies. It should not dictate which colleges are allowed to operate or what, or how, colleges teach. Instead it must ensure transparency -- in intent, execution and delivery -- by mandating disclosure norms that are based on principles similar to GAAP, the generally accepted accounting principles. More importantly, these disclosures must be validated through a statutory audit process. What this means is that students and their parents, who fund their education, would have a reasonably clear idea about the merits of an educational institute -- just as equity investors have of the financials of companies listed on the stock exchange -- and could take an informed decision on where to invest time and money to get a decent education. In short, let the market, and not government babus, decide the fate of educational institutes. This is explained in detail in an earlier article, “Auditors, Not Regulators” (https://bit.ly/2YNDWvk).

Continuing with this basic philosophy of minimum government, maximum governance, and extrapolating from the gig economy -- Uber, Ola, AirBnB, Oyo -- that allows individuals to sell their services directly to the customer, we could easily have good, but independent, teachers reaching out to students through platforms like Youtube. However, distance learning has its challenges. First, students want the sanctity of an accredited university degree, which is a necessary condition for jobs. Second, they want the security of a placement service as the outcome of any education. Finally, they want all of it to be either free or available at a very low cost. Meeting all three conditions through a simple elearning platform is impossible. However, a recognised and accredited university, say IGNOU, can create a platform where individual, non-employee teachers can teach, or deliver educational services, and be paid on a metric based on student (“customer”) engagement and feedback. This metric should be multidimensional and should consist of (a) the number of actual student-hours of viewing that the videos attract and (b) the ratings given by students. Such a platform, supported by a distributed evaluation service, as in online exams in designated premises, and a Naukri.com style job-portal to facilitate placements, can be put together as an alternative to the traditional campus oriented educational institute. This is explained in detail in “Distance Learning Reloaded” (https://bit.ly/2VRPjk7).
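
As a thought experiment, here is a minimal sketch of how such a two-dimensional payout metric might be computed. The function name, the per-hour rate and the way the rating scales the payment are all illustrative assumptions, not a specification:

```python
# Hypothetical payout metric for an 'indie' teacher on the platform
# described above: paid per verified student-hour of viewing, scaled
# by the average student rating (on a 1-5 scale). All numbers are
# illustrative assumptions.
def teacher_payout(student_hours, avg_rating, rate_per_hour=50.0):
    rating_factor = avg_rating / 5.0       # 1.0 for a perfect rating
    return student_hours * rate_per_hour * rating_factor

# A teacher whose videos attracted 1,200 student-hours at a 4.2 rating:
print(teacher_payout(1200, 4.2))           # 50400.0
```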

These two new age educational initiatives -- auditing rather than regulating educational institutes and the ‘indie’ teacher based distance learning program -- form the second narrative for Panchatantra/21.

Education continues as the third of our five narratives, but now the focus moves from platform to content. What is it that our students must learn? To begin with, a large part of the curriculum will be drawn from the traditional BA, BSc, BCom and even BTech, BBA, MA, MTech and MBA courses. These are tried and tested curricula that have stood the test of time, but of course each university or college should have the freedom to offer anything that it wants, without centralised bodies like the AICTE or UGC trying to impose their point of view. In a market economy, “customers” -- that means both students and recruiters -- will signal their requirements in a manner that the education market must understand and respond to. This would mean that there will be more students -- and hence more colleges or independent, “indie” teachers -- for courses and subjects that are preferred by recruiters. However, as Steve Jobs put it, “customers do not really know what they want”, and so it is up to policy makers to determine what would be important and useful for India in the current century and even the next.

Moving into the 21st century, mankind -- the carbon-based biologicals -- will have to share this planet with what is virtually a new species: silicon-based, artificial intelligence powered systems, as explained in “Intelligent Machines: Humans May Have To Contend With A Superior ‘Species’ Soon; Are We Prepared?” (https://bit.ly/2I2kMLk). This may sound like science fiction, but automated and, more importantly, autonomous systems -- from self driving vehicles, through medical diagnostic tools, right up to public safety and even juridical services -- will take over and run our society with little or virtually no human interference. We may not like it -- just as colonised people never really liked their new masters -- but we will have to learn to live with them. To begin with, we need to learn their language. Indians learned English when they were colonised -- say, after the Battle of Plassey -- and then made the best use of this skill to chart their own destiny in the 19th and 20th centuries. So should it be with artificial intelligence in the 21st.

Every child in every school and college must learn the basics of artificial intelligence, of neural networks and, of course, computer programming. Python, for example, would be the new English, the skill as eagerly sought after as the one parents chase when they send their children to English-medium schools. One could learn history, (human) languages, sociology, medicine, fine arts, engineering or whatever, but one must also know AI and computer programming. Just as a knowledge of English equips a person to work his way through the world irrespective of the location or nature of work, so would a knowledge of robotics and AI be essential, not just to live but to build new products and services for the coming age. Just as you would learn Chinese if you wanted to make money from a big economy like China, so should there be no hesitation or embarrassment in learning Python and AI right from primary school all the way to college.

Machines and robots are “learning” about human languages, behaviours, attitudes and choices at an unprecedented rate, and if we, the biological sentients, have to keep up with them, we need to know their language and behaviour as well. It is not just that computer programming should become “The Fourth Language” (https://bit.ly/2JFYKl4) in our schools; AI and robotics should be built into the warp and woof of our school and college curriculum. Products like Arduino and Raspberry Pi should be used not just for science projects but for every kind of project that is mandated in the curriculum. This is one area where a visionary government and its policy makers must take the initiative to fund the design of innovative curricula, the development of appropriate course material and the appointment of competent teachers.

But if AI and robotics are the path, what is the destination?

This takes us to the fourth narrative of our Panchatantra/21 and to the next level of human endeavour -- Space! Europe was too small for the European civilisation, which flowered only when the bold and the brave among them spread out, first to colonise America and then across Africa and Asia. Similarly, the world is not enough for humanity as biological and machine intelligence grow by leaps and bounds. The composite carbon-silicon civilisation must, and will, physically expand, first to the Moon and Mars, then to Titan, Enceladus and Europa, and eventually to other solar systems in this galaxy.

In this grand adventure, it would be tragic indeed if India were to be relegated to the position of a passive observer, clapping and cheering from the sidelines while the US, Europe and China establish colonies in these distant realms. As described in the article “A whole new space age” (https://bit.ly/2MaeqPw), India must take a definitive stand to become a space faring nation and invest the power of money, of mind and of machines to stake her claim beyond the frontier. Mangalyaan and Chandrayaan are two excellent initiatives and they must be viewed as being as important as, say, GST or Ayushman Bharat. This may not translate into votes in the 2024 elections, but that is where the current government must demonstrate its passion for the big idea. “Go West, young man” was once a part of the lexicon -- and the manifest destiny -- of the American civilisation that grew and prospered. Similarly, every young man and woman in tomorrow’s India must be encouraged to dream of going to space -- along with robots and other intelligent machines -- to set up homesteads, establish colonies and create opportunities for economic and social expansion.

Beyond the raw technology of ISRO, a national level Space Commission must be established to address the socio-economic issues of a space faring nation. Such a commission will encourage, and equip, schools and colleges with the tools and ideas that will allow their students to think beyond that next job in TCS or Hindustan Lever, and will incentivise private enterprise to invest in space related ventures. Space travel and colonisation should be the fourth big idea that India focuses on in the next decade.

But in parallel with the exploration of outer space, India also needs to look within. We must use the rear-view mirror of history to look into the Indic heritage that stretches back in time longer than that of any other living civilisation. Egypt and Mesopotamia have been swallowed up in the march of time, and even China, which still preserves links to its past, is in the process of forgetting them. Only we in India can still trace our civilisation back through Chandragupt’s Mauryan era to the tenuous links of the Harappan civilisation and even beyond. The evidence is tantalising. Remains of chariots from the Mahabharat era have been found in Sanauli, Baghpat, in Uttar Pradesh. The sunken city of Dwaraka, Gujarat, is still visible under the ocean. Harappan era relics have been found along the dry bed of the Ghaggar river, through which the Saraswati once flowed, and this, as Michel Danino has shown in his book “The Lost River”, connects the Harappan civilisation with the Dwapar Yug of the Mahabharat.

It is time that the government put together a formal mechanism to initiate primary research into these areas, search for evidence, connect the dots and present a comprehensive picture that will transform our understanding of the past. Legends must be codified as verifiable and documented history. Without compromising on academic rigour -- without claiming that ancient India had nuclear bombs and spaceships, for example -- honest historiography must be applied to lift our myths out of the fog of antiquity and bring them into the domain of history. This has happened in the past. The Asiatic Society of Bengal, led by William Jones, identified Chandragupt Maurya, named in ancient and mythical lists of Indian kings, as Sandrakottus, a contemporary of Alexander named in Greek records of his invasion of India. This monumental discovery significantly extended the antiquity of Indian history. India as a nation owes it to itself to do this once again and reassert the continuity of the Indic civilisation right from the Vedic Age to the Age of Space and Intelligent Robots. This is the fifth and final narrative.

Man does not live by bread alone, and nations too must look beyond the mere mechanics of administration and economics. Irrespective of who runs the government, the Indian economy will surely grow as issues related to labour, land and capital markets are sorted out and technology is used to enhance the efficiency of governance. But there is a far larger story that awaits India, and it is time for us to gear up for a whole new role that India will play in the comity of nations. India must prepare itself to be the tip of the arrow, and not a mere lance bearer, as humanity reaches out -- literally and metaphorically -- to touch the stars. Panchatantra/21 would be our road map in this quest.


this article first appeared in Swarajya magazine

May 22, 2019

Is Inequality Inevitable ?

Maximilien Robespierre might have fired the imagination of the world with his call for Liberty, Equality and Fraternity during the French Revolution, but the concept has been rather elusive in practice. Equality, in particular, has proved to be a bridge too far.

Let us begin with the Internet and the world wide web, which were supposed to liberate the individual from the clutches of the powerful media barons. As one of the early pioneers and evangelists of this new technology, this author had created a portal -- Yantrajaal, a Bengali word that he had coined to mean a network of devices -- in the same year that Google was born and seven years before Facebook. The general idea was to serve as a platform to share information on technology from an Indian perspective. In principle it could reach out to every corner of the world -- something that his earlier journalistic efforts in school and college had failed to do. But of course that was never to be. Hardly anyone, other than the author’s immediate circle of physical friends, visits Yantrajaal. Unfortunately, that is true not just of Yantrajaal but of many other websites as well. While millions of websites exist and continue to be built, the really popular websites -- at least, the ones that make some money -- are still the ones owned and operated by big media houses like the Times of India and the New York Times. But even this is an illusion, because the real flow of news and views across the globe is actually governed, not by these media houses, but by technology firms like Google, Facebook and Twitter or, where they are banned, as in China, by their local equivalents. In fact, these so-called tech companies have actually morphed into full fledged, advertisement driven media companies and it is they who rule the roost when it comes to the dissemination of information.

This concentration of power is an outcome of the network effect. The value of a network is proportional to the square of the number of its participants. As more and more people join a network, its value as perceived by its members goes up very fast, compelling others to join it in a virtuous, or vicious, cycle. You may build a search engine or a social media platform that is better -- say, more private and secure -- but you would never be able to catch up with the leaders. It is not quite the first mover advantage, but anyone who breaks away from the pack becomes unreachable and hence unbeatable, irrespective of quality.
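
To see why the gap is unbridgeable, consider Metcalfe’s law, the usual statement of this square relationship. A toy calculation, with user counts that are made-up illustrations:

```python
# Metcalfe's law: the value of a network grows as the square of its
# user count. The user numbers below are illustrative assumptions.
def metcalfe_value(users):
    return users ** 2

challenger, leader = 1_000_000, 10_000_000   # 10x more users...
print(metcalfe_value(leader) / metcalfe_value(challenger))  # ...100x the value
```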

Very similar is the case with cryptocurrency. Just as the internet was designed to democratise the distribution of information, Bitcoin was designed by the mysterious libertarian Satoshi Nakamoto as a way to create and distribute monetary value in a decentralised manner. Thanks to the magic of mathematics, it was now possible for a private citizen to do what was earlier the prerogative of central banks, namely, create a cross-border, tradeable currency. It is a different matter that many governments and central banks have gone at it hammer and tongs to break the backbone of this remarkable technology. The methods being used are obviously inappropriate -- similar to the censors in Iran and China blocking the internet -- but the fact remains that “Tiger Zinda Hai”. Cryptocurrencies have weathered most regulatory storms and, despite some current setbacks, will certainly come back when the establishment finally gives up and falls in line. But the dream of citizen empowerment is a myth. Even though anyone can, in principle, create Bitcoins and similar cryptocurrencies, the ownership of such wealth is hugely skewed. 87% of all Bitcoins that have been mined are owned by 0.5% of owners, or wallets, and 61% are owned by just 0.07% of wallets. Of the 23 million Bitcoin wallets, 13 million own less than one Bitcoin each. 1,500 wallets hold between 1,000 and 10,000 coins, while 111 wallets own more than 10,000 [March 2018 data]. For a system that is barely a decade old, that is a huge inequality to have emerged from what was supposed to be a purely technology based egalitarian platform.
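
Translating those quoted percentages into absolute numbers makes the skew vivid; a simple check against the March 2018 figures cited above:

```python
# Converting the quoted ownership percentages into wallet counts,
# using the ~23 million total wallets cited above (March 2018).
total_wallets = 23_000_000
print(f"{0.005 * total_wallets:,.0f} wallets (0.5%) hold ~87% of all Bitcoin")
print(f"{0.0007 * total_wallets:,.0f} wallets (0.07%) hold ~61% of all Bitcoin")
# -> 115,000 and 16,100 wallets respectively
```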

This inequality of wealth in Bitcoin is very similar to the inequality of wealth seen across the world, and more so in the mature economies. In the United States, which is seen by many as the leitmotif of the modern economy, wealth inequality is enormous. Those who view this as an inevitable fallout of ‘evil’ capitalism should look back at what Communist Russia once was and read Orwell’s Animal Farm, where all animals were equal but some animals, the pigs, were more equal than others. Today there is no visible and viable alternative, because what happens in China is not clearly visible and socialist countries like Venezuela have either collapsed or, like Italy and France, are tottering as they try to redistribute their shrinking wealth. Some diehards like Bernie Sanders may hark back to a Scandinavian utopia, but that story may not really be as attractive as it is portrayed to be and, moreover, it is certainly not scalable beyond the socio-economic demographics of Northern Europe.

Which begs the question, is inequality inevitable?

One way to look at it is to note the difference between opportunity and outcome. Equality of opportunity is essential, but it may or may not lead to an equality of outcome. Google may have bowed to political correctness by sacking James Damore for daring to suggest that women may be different from men, but the facts do bear him out. As Jordan Peterson, that arch nemesis of page three feminists and their ilk, has pointed out, the eventual outcome is a function of a complex series of inputs, not just the obvious differences of, say, gender or race. It is well known that women are underrepresented in STEM -- science, technology, engineering and mathematics. What is less well known is that societies that have a higher degree of gender equality have, paradoxically, a smaller percentage of women in STEM than those societies that offer fewer opportunities to women [The Atlantic, Olga Khazan, April 2018]. This has been explained by arguing that women in societies that marginalise them see STEM as a way to climb out into better opportunities in life, something that their sisters in more equitable societies do not have to struggle so hard for. In a similar vein, while women may be underrepresented in engineering colleges and in IT companies, the percentage of seats occupied by women in medical colleges is more than 51% in India, while in neighbouring Pakistan and Bangladesh the numbers are as high as 70% and 60% respectively. Clearly, inequality is not something that can be explained very easily.

In India, social inequality is frequently equated with caste. This has led us to build a huge edifice of largely ineffective and useless hubris around caste based reservations, to pander to our politicians’ craving for votebanks. While caste is something that has existed in India, our politically tainted educational system has been instrumental in making us believe that it is the ONLY cause of economic inequality and misery in India. In the process we have only succeeded in sharpening the inequality with indiscriminate reservations. Caste is the unfortunate fall guy, where the real reason could be something far different.

In fact, in a study reported in Vox, researchers Barone and Mocetti were the first to establish that in Florence, Italy -- where there is no caste system -- the highest-paying taxpayers from the 15th century to the present have come from the same set of families. Similar studies have shown remarkably similar results in a wide range of cultures -- “This is true in Sweden, a social welfare state; England, where industrial capitalism was born; the United States, one of the most heterogeneous societies in history; and India, a fairly new democracy hobbled by the legacy of caste.” [New York Times, Your Ancestors, Your Fate, Gregory Clark, Feb 2014] What could explain this phenomenon? The authors suggest that “the compulsion to strive, the talent to prosper and the ability to overcome failure”, which are all strongly correlated with success and hence eventual wealth, are inherited qualities. Hence, heretical as it may seem, genetics plays a role and “Alternative explanations that are in vogue — cultural traits, family economic resources, social networks — don’t hold up to scrutiny.”

A more politically palatable, or charitable, explanation could be the network effect that we have seen in the case of the world wide web. On the web, companies like Google and Facebook use their pre-eminence in the number of clients and customers to prolong and propagate that pre-eminence. Similarly, in human society, those who are, let us say, eminent -- in money, power, education, intelligence, contacts -- will use their eminence to make sure that their progeny get access to all that is required to become eminent in the next generation. There may be individual exceptions but, by and large, that will be the desire in almost all cases. In the long run, this desire will translate into a self propagating mechanism that will ensure that inequalities inherent to society cannot be erased simply by desire or diktat.

If the inequality is somehow removed, it will always find ways to creep back in. In India, we recognise this in the replacement of an exploitative foreign coloniser by an equally exploitative but local political class that has taken over the trappings of the foreigner. Subhayan Mukerjee, a researcher in computational social science at the University of Pennsylvania, explains this by saying that equality is an unsustainable, unstable equilibrium and cannot last very long. Sooner or later, this equilibrium will be broken and the collective will move towards a stable but hierarchical structure. The resultant inequality will pave the way for even more inequality. If we view the collective as a set of marbles lying on an elastic membrane that is stretched flat, where each marble makes an identical depression, then a single, slightly larger marble will create a slightly larger depression. This will draw in other marbles, making the depression deeper. Now the unstable equilibrium is broken, as the deeper depression pulls in more and more marbles, making it deeper and deeper until most of the marbles have moved into it.
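
The marble picture is easy to simulate. Here is a minimal sketch, under the assumption that a depression attracts new marbles with a probability that grows faster than its current size -- a rich-get-richer rule; a perfectly flat start almost always collapses into one dominant heap:

```python
# A toy simulation of the 'marbles on a membrane' analogy above.
# Ten depressions start perfectly equal; each new marble joins an
# existing depression with probability proportional to the square of
# its size (deeper depressions pull harder). The rule itself is an
# assumption made for illustration.
import random

heaps = [1] * 10                          # perfect equality to start
for _ in range(10_000):
    weights = [h ** 2 for h in heaps]     # super-linear attraction
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            heaps[i] += 1
            break

print(sorted(heaps, reverse=True))        # typically one heap dwarfs the rest
```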

Both the world wide web and the world of cryptocurrency began as flat worlds of equals, without any hierarchy, but it did not take long for inequality to creep in and a hierarchy to establish itself. Even in the animal world, for example among apes, there is inequality in the form of size, strength, potency and skill. Even if humans inherited these inequalities, the world might still have been more egalitarian and equitable simply because there were too few opportunities for enrichment. But the potential was always there, and the moment economic opportunities presented themselves -- with the advent first of agriculture, then of industry and currently of the digital age -- the natural slide towards the equilibrium of a new and sharper inequality began, until we have what we have today. At best, humans may recognise the inequality and, unlike a pack of animals, may try to mitigate it with, say, mechanisms like social security. But in the long run, the outcome is rarely what we desire -- the rich remain rich or become richer while the common man stays where he was, at the bottom of a hierarchy.

We may ardently believe and proclaim that all men (and women) are equal, but that is simply not enough. Egalitarians, if not actual practising socialists, may work out myriad rules and regulations that seek to curb inequality, but nature is such that people and organisations will always find loopholes and ways to beat the system. In the end we will always tend towards the stability of an unequal world!

Perhaps that is how it is meant to be. Could it be that equality is against nature? Take a look at the palm of your hand -- are all the fingers identical or equal? And would we be where we are if they all were the same?



This article first appeared in Swarajya magazine

April 10, 2019

AI://games.wargames:war

In popular perception, skill in cerebral board games, like chess, is assumed to be a proxy for, or a measure of, intelligence. This perception has motivated people to build computer programs that can play these games, in an effort to make programs look intelligent or behave intelligently. But the reality is different. The complexity of decision making that a child, or even a dog, demonstrates while crossing a busy road is several orders of magnitude higher than what a grandmaster uses to win a game of chess. But because the act of crossing a road is something that we do every day, we feel that it is somewhat trivial when compared to playing chess. Nevertheless, Artificial Intelligence (AI) research has enthusiastically used games as a testbed to try out more and more complex tasks that it would like computers to perform.

Programming a computer to play chess has been an obsession with many of the key personalities of theoretical computer science. Norbert Wiener, who coined the word cybernetics, the science of information and control in both humans and machines, was the first to design a program that should be able to play chess, way back in 1948. Claude Shannon and Alan Turing, the so-called ‘fathers’ of information theory and computer science respectively, were both very actively involved in designing and building chess programs, though their initial efforts were not very successful. With the passage of time, these programs became more sophisticated until, in 1997, the Deep Blue program from IBM managed to beat Garry Kasparov, the World (Human) Champion. Since then these programs have improved constantly, and the current World (Computer) Champion is a program called Komodo with an Elo rating of ~3300, more than 450 points higher than that of the current World (Human) Champion, Magnus Carlsen.
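
To put that 450-point gap in perspective, the standard Elo formula converts a rating difference into an expected score. A quick back-of-the-envelope check, where the ~2850 figure for the human champion is an approximation:

```python
# Expected score under the standard Elo formula:
#   E = 1 / (1 + 10 ** ((Rb - Ra) / 400))
def expected_score(ra, rb):
    return 1 / (1 + 10 ** ((rb - ra) / 400))

# Human (~2850) against Komodo (~3300), a ~450-point gap:
print(round(expected_score(2850, 3300), 3))   # ~0.07: the human scores ~7%
```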

Bridge, on the other hand, is a game where computer programs have had rather limited success. While bridge playing programs do exist and can indeed defeat most amateur players, they are yet to win against the world champions. The reason for this lack of success vis-a-vis chess can be explained in two ways. First, unlike chess, where the entire board is visible, bridge is played with incomplete information, because any player can see the cards in only two hands, his own and the dummy’s. But the second, and perhaps more important, reason is that, unlike the rigorous, near brute force approach of “looking ahead” into many possible moves, bridge involves a lot of intangible psychology -- of feinting, of deception, and of the need to anticipate the same in an intuitive manner. To put it bluntly, it is possible to determine nearly all possible moves and countermoves in chess and choose the one that works best, but there is no easy way to enumerate, and hence evaluate, all possible future moves in bridge.

Game playing programs were stuck in this rut for a very long time until the arrival of artificial neural network (ANN) based programs like those of Google DeepMind. ANNs are not really programmed in the classical sense of computer programming, where a human programmer specifies what needs to be done and when. Instead, ANNs are designed to “learn” on their own by taking certain actions and then detecting whether each action leads to the success or failure of the overall goal. The “learning” process involves changing -- either increasing or decreasing -- the numerical values of certain parameters, called weights and biases, over successive iterations, in a way that gradually increases the possibility of success. What is interesting in all such cases is that there is no clear explanation of why a particular value of a specific parameter results in success. It is just that a certain set of values leads to success while any deviation leads to failure. This is very similar to the intuitive approach that many humans take while making decisions that they otherwise cannot explain. Success, of course, can be defined in many ways -- recognising an image of a person, navigating a car around an obstacle or, in our case, winning a game of chess. This technology -- based on the backpropagation algorithm and the gradient descent method -- was an astonishing force multiplier that propelled computer programs to new heights. While bridge was somehow never quite on the radar of the ANN enthusiasts, games like Go, which are orders of magnitude more complex than chess, have been cracked. Google DeepMind’s AlphaGo program beats the best human challengers on a consistent basis. But what is really startling is that the best players have admitted that AlphaGo’s style of play is so unusual that it seems to have developed strategies that no human has ever used or even thought of before. So it is not that it has been taught by, or has learnt from, any human being. It is in fact creating new knowledge, or strategies, on its own, without human assistance.
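
For readers who want to see the “weights and biases” idea in miniature, here is a minimal sketch, assuming nothing more than NumPy: a one-neuron “network” discovers the rule y = 2x + 1 purely by nudging its two parameters in the direction that reduces its error -- gradient descent in its simplest form:

```python
# A toy illustration of learning by adjusting weights and biases.
# One neuron (one weight w, one bias b) learns y = 2x + 1 from data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1                      # the "truth" the network must discover

w, b = 0.0, 0.0                    # weight and bias, initially ignorant
lr = 0.1                           # learning rate

for epoch in range(500):
    y_hat = w * x + b              # forward pass: the network's guess
    err = y_hat - y
    # Backpropagation (here, just the chain rule on the squared error):
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w               # gradient descent step
    b -= lr * grad_b

print(w, b)                        # converges towards 2.0 and 1.0
```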

Traditional ANN technology is based on feeding in past data and expecting the computer to learn from successes and failures. This is possible when you have lots of past data and the computing power to crunch through it. A variation on this strategy is to make the machine observe actual human behaviour -- as in driving a car or playing a game -- and from there ‘magically’ learn how to behave in a manner that leads to success. But AlphaGo and similar programs use self-play, an adversarial approach similar in spirit to the Generative Adversarial Network (GAN), where the program fights, or competes, with its own “brother”, a clone or copy of itself, and, by recording the success or failure, learns by itself when it has succeeded or failed. By challenging its own copy, it builds up its own training data and learns from it at a speed that is humanly impossible. This allows the program to venture into more and more interesting tasks. Just as AlphaGo found new styles of playing Go, GAN based machines have trained themselves to create artificial images of human faces that are indistinguishable from actual photographs of real people.
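
The adversarial idea is easiest to see in a GAN proper: one network (the generator) tries to fake data while another (the discriminator) tries to catch it, each becoming the other’s training signal. Below is a minimal toy sketch, assuming PyTorch is available; it only learns to mimic a one-dimensional bell curve, not faces, but the contest is the same:

```python
# A toy GAN: the generator learns to mimic samples from N(4.0, 1.5)
# while the discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0       # "real" data: N(4.0, 1.5)
    fake = G(torch.randn(64, 8))                # generator's forgeries

    # Train the discriminator: real -> 1, fake -> 0
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_D.backward()
    opt_D.step()

    # Train the generator: fool the discriminator into answering 1
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

print(fake.mean().item(), fake.std().item())    # drifts towards 4.0 and 1.5
```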

As we have noted earlier, the ability to play a board game, however complex, may be viewed as a proxy for human intelligence, but in reality true intelligence lies in doing apparently simple daily tasks like crossing a busy street or haggling with a roadside hawker. Here success lies in reaching the other side of the road within a reasonable time without being hit by a car, or in convincing another person to part with a product at a lower price. Here again, we can have ANNs and GANs being trained for success, but the difficulty lies in creating a learning environment where the system can train itself. Unlike board games, we cannot have half-trained machines running amok on real roads or bargaining with real shopkeepers. But if we cannot have real environments, can we look for virtual ones?

Enter strategy games! Arcade style video games have been around since the 1970s, when the first such game, Computer Space, was launched in 1971. This was followed by the commercial release of Atari’s Pong. From these humble beginnings, the computer gaming industry has grown into a US$ 140 billion business that is far larger than the global movie business. In fact, the top selling game Call of Duty: Black Ops collected an incredible US$ 650 million in the five days after its launch, a record for any book, movie or game. In all these games, the human player has a quest, or challenge, that increases in complexity -- acquiring resources like firewood, gold, weapons, land, territory, soldiers -- and this quest is obstructed, or sought to be foiled, by other human players or by specifically designed computer programs that put hurdles in the way. Since these games are generally played online, hundreds of players log in and participate through avatars -- realistic or fantasy representations of themselves -- that they control. These Massively Multiplayer Online Role Playing Games (MMORPG) could be very realistic representations of actual reality, as in World War II battlefields or well known urban locations, or can be played out in fantasy sci-fi locations in deep space or in ancient Jurassic parks. But irrespective of where these games are set, what really matters is that the players, or combatants, are humans who use every possible trick, guile and deception to beat other players and achieve their respective goals. In fact, most games allow players to create teams, build coalitions and join guilds to improve their chances of success. Playing these games brings one quite close to operating within a community of intelligent individuals because, as James Vincent explains in the Verge, “Although video games lack the intellectual reputation of Go and chess, they’re actually much harder for computers to play. They withhold information from players; take place in complex, ever-changing environments; and require the sort of strategic thinking that can’t be easily simulated. In other words, they’re closer to the sorts of problems we want AI to tackle in real life.”

After chewing through human champions in chess and Go, and ignoring bridge for the time being, computer AI has now set its sights on these online games, and the results are just as startling. Dota 2 is an online multiplayer battle arena game, a derivative of the wildly successful Warcraft, where players form five member teams to occupy and defend territory on a map. Elon Musk’s OpenAI team has been working to build AI systems that can play -- or rather, simulate a human player -- against the usual human teams, and on more than one occasion they have successfully beaten the humans. In January 2019, AI crossed another milestone when Google’s DeepMind learnt how to play Starcraft II and beat two very highly ranked professional human gamers. Starcraft II is a science fiction, real-time strategy game where three species -- humans and two others with many kinds of natural and supernatural powers -- battle it out to create bases, build teams, invade each other’s territory and eventually establish their dominion across the galaxy. AlphaStar, the AI program that ‘plays’ Starcraft, is based on the principles of reinforcement learning, where it begins as a novice playing randomly and then builds up expertise by observing the outcomes. Since no human would ever have the physical stamina to keep playing against an AI, AlphaStar generally plays against multiple copies of itself and hence can learn and build expertise at a speed that is impossible for humans.
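
Reinforcement learning can be sketched in a few lines. The toy below -- a tabular Q-learning agent on a made-up six-cell track, an illustrative assumption far simpler than anything AlphaStar uses -- starts by moving randomly and learns, purely from a reward at the goal, that “move right” is the winning policy:

```python
# Tabular Q-learning on a toy 1-D track: start at cell 0, reward only
# on reaching cell 5. The agent learns the policy from trial and error.
import random

N, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit, sometimes explore
        a = random.randint(0, 1) if random.random() < eps \
            else max((0, 1), key=lambda a: Q[s][a])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Learned policy for each non-goal cell: all 1s, i.e. always go right
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N - 1)])
```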

Games like Dota 2, Starcraft II, Call of Duty and the current favourite shooter game, PUBG, are, at the end of the day, harmless. You can shoot, maim and kill your enemy as much as you like in the game, but when you shut down the game you are back in your mundane world of home and work, which was temporarily suspended while you battled hostile adversaries. There is no physical touch point between your actions on the keyboard and the world of brick and mortar. But what if there were? What if you were a drone operator sitting in front of a computer screen and controlling, not a fantastical beast in a fantasy terrain, but an armed drone flying across Afghanistan, or even along the LoC in Kashmir? Or what if you were controlling a team of armed, semi-autonomous vehicles tasked to cross the LoC and perform a surgical strike on terrorist camps in PoK? Operating an armed drone or an autonomous vehicle on the ground is not really different, conceptually and qualitatively, from what human players do, both tactically and strategically, in combat games like Dota and StarCraft. So after AlphaGo and AlphaStar -- the AI programs that beat humans at Go and StarCraft -- will there be an AlphaWar that can beat any human operator in a real war, one that spills real blood and blows up real buildings? Why not? We could have a fleet of AI controlled armed drones relentlessly flying over the LoC in Kashmir, observing the terrain and remorselessly killing anyone who looks or behaves like a terrorist while ignoring civilians -- just as is done in so many popular first person shooter games like Doom or PUBG. And of course we could live with some collateral damage!

Much as society would like it to, science and technology never wait for ethics to catch up with their progress -- as in the case of He Jiankui, the Chinese scientist who broke state laws to create genetically edited human babies with CRISPR. Asimov’s First Law of Robotics, which mandates that robots shall not harm humans, is good science fiction but will not stand the test of ground reality. Given the proliferation of war and conflict themed games and the ability of AI systems to learn from them, it is a matter of time before they acquire the capability to wage real war. We may argue that the actual instruments of war, for example armed drones, are not connected to the AI systems, but that is obviously a transient problem. Access to, and connectivity across, multiple computers -- from gaming consoles to armed drones -- is a matter of policy and behaviour, not just technology. Given time, patience and computing power, any computer can be hacked. In principle, one could hack into the military computers that control the war machines and, since hacking is essentially a matter of strategy and guile, there is no reason why an AI system cannot be taught to do so or, as is more likely, learn on its own -- if that is a goal that “it” is directed to achieve.

Once AI systems have the ability to wage war, who will direct them to actually do so? And against whom? Or, and this could be ominous, will “they” decide whether they will do so or not? That is where the programmer stops and the psychologist, the philosopher and the politician must take over, first to understand this technology and then to divine, or to define, the trajectory that it will follow in the near future.


this article first appeared in Swarajya magazine

March 08, 2019

Kailash Manasarovar 2018 Slide Show


January 16, 2019

The Vedantin looks at Cloud Robotics

In 2006-2007, in the early years of the Web 2.0 that emerged phoenix-like from the ashes of the dotcom bust of 2000, Michael Wesch, Professor of Digital Ethnography at Kansas State University, produced a video called “The Machine is Us/ing Us”. Prior to the emergence of Web 2.0, the world wide web was primarily a read-only medium for publishing news and information to a passive audience. Web 2.0, with its focus on user generated content and a personal network of trust, created a read-write platform that allowed individuals to feed information easily into the system, or “The Machine”. In the process, The Machine learnt things it never knew before. Wikipedia, one of the first Web 2.0 platforms, became the biggest repository of information, if not knowledge. This in turn allowed it to influence a whole generation of students, journalists and web users, and shape the way they view the world. For example, this author, who studied in a Catholic missionary school, had great regard for Francis Xavier. But this was completely reversed after he read the Wikipedia article about the murderous Inquisition that Xavier had unleashed on the Hindus of Goa. Obviously, no one at school had ever talked about such unsavoury matters.

image from https://blogs.oracle.com/

The thrust of the Wesch video was that every action a person takes in the digital world is used as an input by "The Machine" to increase its own knowledge of both the physical world and, recursively, of the digital world. Every "like" of a post on social media, or a click on a hyperlink on a web page or a mobile app, is like a drop of information that individually and collectively adds to the pool of knowledge about what humans know and think. This in turn is used to shape our own world view by returning recommendations of what to view, "like" and click next. Unless you are like Richard Stallman, an advocate of extreme privacy who hardly uses anything digital -- like Google search, cellphones or credit cards -- you have no escape from this tight embrace of The Machine. Fortunately, The Machine is not yet one monolithic device. Its world has been broken up into fragments -- Google, Baidu, Amazon, Alibaba, Facebook -- by high commercial walls. But in its tireless striving it certainly does stretch its arms into every nook and corner of human activity and, through that, the human mind.
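
This click-learn-recommend loop can be caricatured in a few lines of Python. The sketch below is purely illustrative -- no real platform is this simple -- but it shows the essential feedback: every click teaches the system a little more, and what it has learnt decides what it shows next.

    # A toy caricature of the feedback loop: clicks feed the pool of
    # knowledge, and the pool shapes the next recommendations.
    # (Illustrative only; not any real recommendation system.)
    from collections import Counter

    clicks = Counter()

    def record_click(item):
        clicks[item] += 1          # each click adds a drop to the pool

    def recommend(n=3):
        # what got clicked most is what gets surfaced next
        return [item for item, _ in clicks.most_common(n)]

    for item in ["cricket", "cricket", "elections", "cricket", "films"]:
        record_click(item)
    print(recommend())             # ['cricket', 'elections', 'films']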

In parallel with the growth of the web, there has been the emergence of data science. This began as an extension of statistics and evolved into machine learning. Then again, there was classical, 1960s-style artificial intelligence which, after lying dormant for nearly 30 years, suddenly woke up and adopted the neural network structure of the brain as a new model of machine learning. This neural network model, often referred to as deep learning, is the new-age AI, and it is racing forward with some truly stunning applications in voice and image recognition, language translation, guidance and control of autonomous vehicles, and decision making, as in loan disbursement and the hiring of employees.

Data science has moved through three distinct stages: descriptive -- reporting data; inferential -- drawing conclusions from data through the acceptance or rejection of hypotheses; and finally predictive -- as in the new-age AI. What has really accelerated the adoption and usage of this new AI is the availability of data and hardware. The backpropagation algorithm that lies at the heart of the neural network based AI systems popular today was developed in the 1960s and 1970s, but it has become useful only in the last decade. It is driven by the availability of (a) huge amounts of data, collectable and collected through the web by The Machine described in the Wesch video, and (b) enormous yet inexpensive computing power that is available on rentable virtual machines from cloud service providers like Amazon Web Services, Google Compute Engine and Microsoft Azure.
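
For readers curious about what backpropagation actually does, here is a minimal, self-contained sketch using only the numpy library: a toy network with one hidden layer learns the XOR function by repeatedly pushing the prediction error backwards through its layers. This is an illustrative toy, not production code, and the layer sizes and learning rate are arbitrary choices.

    # A minimal backpropagation sketch: a tiny network learns XOR.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # forward pass: compute the prediction
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: propagate the error gradient layer by layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient descent update of weights and biases
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))   # should approach [[0], [1], [1], [0]]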

The key driver in this field is cloud computing. Instead of purchasing and installing physical hardware, companies rent virtual machines in the cloud to both store and process data. The simplest and most ubiquitous example of this is Gmail, where both our mail and the mail server are located somewhere in the internet cloud that we can access with a browser. But this same model has been used for many mission-critical corporate applications, ranging from e-commerce through enterprise resource planning to supply chain and customer relationship management systems. Though there has been some resistance to cloud computing because of the perceived insecurity of placing sensitive company data on a vendor machine, the price-performance is so advantageous that most new software services are deployed in the cloud -- and that includes machine learning and AI applications.

Cloud service vendors have aggressively marketed their services by not only offering high-end hardware -- as virtual machines -- at very low prices but also by offering incredibly powerful software applications. Complex machine learning software for, say, image recognition or language translation, ordinarily very difficult to develop, is now available and accessible almost as easily as email or social media. Cloud computing services are categorised as Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS). The first category provides a general purpose computing platform with a virtual machine, an operating system, programming languages and database services, where developers can build and deploy applications without purchasing any hardware. The second category is even simpler to use because the software -- like email in the case of Gmail -- is already there. One needs to subscribe to (or purchase) the service, obtain the access credentials, like userid and password, connect and start using the services right away. Nothing to build or deploy. It is already there waiting to be used, as the small sketch below shows.
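
The whole SaaS pattern -- subscribe, obtain credentials, connect, use -- fits in a dozen lines of Python. The endpoint URL, the parameter names and the response shape below are hypothetical stand-ins, not any real vendor's API; the point is only that no hardware or deployment is involved.

    # A hedged sketch of the SaaS pattern: credentials plus one HTTP call.
    # The endpoint and fields are assumed for illustration only.
    import requests

    API_KEY = "my-subscription-key"     # obtained when subscribing
    ENDPOINT = "https://translate.example-cloud.com/v1/translate"   # hypothetical

    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": "Where is the railway station?", "target": "hi"},
    )
    print(response.json()["translation"])   # assumed response shape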

In an earlier article in Swarajya (March 2017), we had seen how machine learning, and now the new-age AI, uses huge, terabyte-sized sets of training data to create software models that can be used for predictive analytics. This is an expensive exercise that lies beyond the ability of individuals and most corporates. But with AI or machine learning available as SaaS at a fraction of the cost, new software applications that use these services can be built easily. For example, it would be possible to enhance a widely used accounting package by replacing the userid/password login with a face recognition based login. Similarly, the enormous difficulty of building the software for a self-driving car, or for voice-activated IVR telephony, can be drastically reduced by using AI-as-a-Service from a cloud services vendor. Obviously, all cloud services, including SaaS, assume the existence of rugged, reliable and high-speed data connectivity between the service provider and the device on which the service is being used.
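
To make the face-login example concrete, here is a sketch of what that replacement might look like. The face_match endpoint, the request fields and the confidence threshold are all assumptions made for illustration; a real vendor's face-verification API would differ in detail but not in spirit.

    # A sketch of swapping a password login for a cloud face-match call.
    # The endpoint, fields and threshold are hypothetical.
    import requests

    def face_login(captured_image_bytes, enrolled_face_id):
        """Return True if the camera image matches the enrolled user."""
        resp = requests.post(
            "https://vision.example-cloud.com/v1/face_match",   # hypothetical
            headers={"Authorization": "Bearer my-subscription-key"},
            files={"image": captured_image_bytes},
            data={"face_id": enrolled_face_id},
        )
        # accept the login only above an assumed confidence threshold
        return resp.json().get("confidence", 0.0) > 0.9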

Robot-as-a-Service (RaaS) can be seen as a logical extension of this model, but a closer examination may yield a far deeper, or more intriguing, insight.

Cloud Robotics, a new service from Google, is scheduled to go live in 2019 and will allow people to build smart robots very easily. It is inevitable that other cloud service vendors will follow suit. While many of us view robots as humanoids -- with arms, legs, glowing eyes, a squeaky voice or a stiff gait -- the reality is generally different. Depending on the intended use, a robot could be a vehicle, a drone, an arm on an automated assembly line or a device that controls valves, switches and levers in an industrial environment. In fact, a robot is anything that can sense its environment and take steps to do whatever it takes to achieve its goals. This is precisely the definition of intelligence, or more specifically of artificial intelligence (AI). So a robot is an intelligent machine that can operate autonomously to meet its goals.

Traditional robots have this intelligence baked, or hard-coded, into their "brain" -- the physical computer that senses and responds to the stimuli received from the environment. This is no different from their immediate role model -- humans. Human beings, and even most animals, learn how to react and respond to external stimuli ranging from a physical attack to a gentle question, and we estimate their intelligence by the quality of their response. In both cases, the knowledge of the external world, encoded in a model along with the ability to respond, is stored locally -- either in the human brain or in the local computer. Cloud robotics replaces the local computer that controls a single robot with a shared computer -- located on the cloud service provider's premises -- that controls thousands of robots. Just as the Gmail servers store, serve and otherwise control the mailboxes of millions of users, each sitting at home, the cloud robotics servers sitting in some unknown corner of the internet would be able to control millions of intelligent robots that are driving vehicles, flying drones, controlling devices and operating machines in farms, factories, mines and construction sites across the digitally connected physical world.
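
The architecture is easy to caricature in code. In the sketch below, the robot keeps only its sensors and actuators locally, while the "brain" lives on a shared server: each cycle, it ships a sensor snapshot upwards and acts on whatever decision comes back. The URL and message format are assumptions, not any real cloud robotics API.

    # A minimal sketch of the cloud-robotics control loop: local sensing
    # and actuation, remote thinking. Endpoint and fields are hypothetical.
    import time
    import requests

    ROBOT_ID = "robot-42"
    BRAIN_URL = "https://robotics.example-cloud.com/v1/act"   # hypothetical

    def read_sensors():
        # placeholder: a real robot would read cameras, lidar, encoders
        return {"obstacle_cm": 55.0, "speed": 1.2}

    def apply(action):
        # placeholder: a real robot would drive motors and servos here
        print("executing", action)

    while True:
        # ship the local sensor snapshot to the shared cloud brain ...
        resp = requests.post(BRAIN_URL, json={"id": ROBOT_ID, "sensors": read_sensors()})
        # ... and act on whatever decision comes back
        apply(resp.json().get("action", "stop"))
        time.sleep(0.1)   # the loop assumes reliable, low-latency connectivity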

Circling back to the Wesch video with which we began this article, these RaaS servers would not just be controlling machinery across the globe but would also be learning from the robots they control, using those robots to collect and build up their own pool of training data. This is an extension of the original Web 2.0 idea -- perhaps we could call it Web 3.0. Here The Machine has not only made a successful transition from the digital to the physical world but also no longer needs humans to teach it. It can become a self-sustaining, self-learning physical device.

Privacy would be an immediate issue and, like all other cloud services, cloud robotics would be protected with access control and data encryption. But as we have seen in the past, convenience trumps privacy. We all know that Google can read our Gmail, but we still use Gmail simply because it is convenient and free! So it would be with cloud robotics. We also know that different RaaS vendors would try to isolate their own robots from interacting with the servers of other vendors, or even from each other. But this could be a temporary reprieve. Collaboration among vendors and pooling of data could happen either through mergers and acquisitions or because it is mandated by governments that are not concerned about privacy issues.

The need for privacy arises because each sentient human sees itself as a unique identity -- I, me and mine -- that is surely distinct from the collective crowd. My data becomes private because it needs to be protected, or shielded, from the collective crowd. But if we go back to the philosophical roots of the Indic sanatan dharma and explore the perspectives of Advaita Vedanta, we see that this sense of "I"-ness is erroneous. Each apparently unique individual is actually a part of a transcendent and collective consciousness referred to as the Brahman. The Brahman is the only reality and everything else is an error of perception. The world is Maya, an illusion that perpetuates this sense of separateness and creates this distinction between the individual and the universal. The correct practice of Yoga can lead to the removal of this veil of illusion and initiate the process of realisation. That is when the Yogic adept sees the unbroken continuity between his own identity and that of the Brahman and experiences the ecstasy of enlightenment.

We know that many renowned Yogis have actually experienced this enlightenment. AI products have now gone well past image and voice recognition and are known to have the sophistication necessary to create their own private, non-human languages and original strategies in multi-user, role-playing games. What we need to know is what happens when robots start emulating yogis and eventually realise their identity with the cloud robotics server of which they are a part!

----------------------------------------------------------------------------------------------------
this article was originally published in Swarajya
