November 14, 2020

Badshah Alam Shah 1204

 

Located this coin among others in a locker that was being cleaned out. Charanpreet Singh helped me decode the text, which reads "19 Zarb Murshidabad" and "Badshah Alam Shah 1204". Used this information to find out more from an auction site.


Deepawali acquisition!

November 06, 2020

Ethics of AI

Unless you have been living under a rock for the past couple of years, you would know for sure that things are happening in the area of Artificial Intelligence. Rapid developments in the area of artificial neural networks have spawned a brood of useful architectures - CNN, RNN, GAN - that have been used to solve a range of very interesting problems. These include, among others:

  • control of autonomous or self driving vehicles
  • identifying visual elements in a scenery
  • recognising faces or connecting bio-metrics to individual identities
  • automatic translation from one language to another
  • generating text and visual content that is indistinguishable from that generated by human intellect.

While these applications have created considerable excitement in both the technical and the commercial community, there has been an undercurrent of resentment among certain people against what they view as ethical issues that are yet to be resolved.

To understand what is at stake let us consider two specific issues from the area of autonomous vehicles. 

First, who is liable in the case of an accident? In some countries, the liability lies with the owner of the vehicle while in others, it lies with the driver who was at the wheel when the accident occurred. But in the case of autonomous vehicles there is a point of view that says that the liability should lie with the manufacturer. If it was the fault of the autonomous vehicle, and not of the other party to the accident, then the fault lies with the autonomous system - hardware sensors and controlling software - that has been supplied by the manufacturer. This is similar to a brake failure, except that the owner or driver has no way to check the equipment before setting out to drive.

Second, and this is more interesting, is the question of whose life is more important? Suppose a pedestrian comes in the way of a moving vehicle whose speed is such that an application of brakes will not be able to stop the car from hitting the pedestrian. The only maneuver that is possible is for the car to turn away and hit a wall. In either case the injury or death will happen either to the pedestrian or to the driver. For the sake of this argument, we can simplify the situation by ignoring issues like estimating the expected quantum of injury in the two cases and the subsequent possibility of death or extent of disfigurement and come out with a binary situation - whose life is more valuable? The driver or the pedestrian?
image from berkeley.edu


These may look like very profound questions and are very often portrayed as such but frankly they are not. 

In the first case, there is no need to split hairs on the liability. Lawyers may love the possibility of litigation and accountants may salivate at the thought of extracting money from car manufacturers but for the technologist, this is a no-brainer. Most car accidents are because of driver error, except of course when a pedestrian behaves randomly, and with the advent of autonomous vehicles the possibility of driver error virtually disappears. So if the vehicle software has been adequately tested - like vaccines! -- before it is released in the 'wild', the number of accidents will, in any case, go down dramatically. So the overall cost of accidents will go down, but individual cases will be paid out of the general corpus of funds created by collecting premiums from all vehicle owners, calculated by the usual statistical (or actuarial) analysis. In fact, this is no different from a mechanical failure, which in any case is factored into the economics of insurance. Net-net, there is no issue at all. It is just another unfortunate accident that has to be factored into the premium calculation process, perhaps with an additional line item.

The second issue can also be dealt with quite easily. Who should die? The pedestrian or the driver? In the case of a human driver both situations are possible. Some drivers will slam on the brakes and hope that the car will stop before hitting the pedestrian while other drivers will turn the car and hit the wall. There is no hard and fast logic, nor is there time for a thorough analysis, ethical or otherwise, of the various options. It is a gut-feel reaction that is best modeled by a random probability. So the simple way to break the tie is to toss a coin -- or simulate the coin toss with a random number generator -- and take a decision on whether the coin shows heads or tails.

If it is a fair coin, there is a 50% chance of either outcome and so the software can be programmed to take one decision or the other on the basis of this probability. This would reflect the regular, or underlying, reality of a human driver. So the behaviour of the autonomous vehicle would in no way be different from the behaviour of a vehicle driven by a human being. If we have learnt to live with human drivers we can continue to live with autonomous vehicles.

The 50% rule is a kind of a default starting point. If it is observed that most drivers are altruistic and prefer to save the pedestrian at the cost of their own health then the probability of hitting the wall can be raised from 50% to 60%. On the other hand, if it is observed that most drivers are selfish and prefer to kill the pedestrian and save themselves, then the probability of hitting the wall can be lowered to 40%. These probability numbers mean that the coin being tossed is not a fair coin but a biased one and reflects the inherent bias of society at large.
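The biased-coin rule described above is trivial to implement. Here is a minimal sketch in Python; the function name `swerve_decision` and the bias parameter `p_swerve` are my own illustrative inventions, not part of any real autonomous-driving stack:

```python
import random

def swerve_decision(p_swerve=0.5, rng=None):
    """Return the simulated outcome of the unavoidable-collision dilemma.

    p_swerve is the societal-bias parameter discussed above: 0.5 models
    a fair coin, a higher value a more altruistic driving population
    (swerve into the wall, sparing the pedestrian), a lower value a more
    self-preserving one.
    """
    rng = rng or random.Random()
    return "hit_wall" if rng.random() < p_swerve else "hit_pedestrian"

# Over many simulated incidents the programmed bias emerges in aggregate:
rng = random.Random(42)            # fixed seed for reproducibility
outcomes = [swerve_decision(0.6, rng) for _ in range(10_000)]
frac_wall = outcomes.count("hit_wall") / len(outcomes)   # close to 0.6
```

Note that any single decision is still unpredictable; only the long-run statistics reflect the bias, which is exactly how the behaviour of a population of human drivers would look.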

This solves the problem of the autonomous car but opens up another Pandora's Box.

Should AI (or Deep Learning) systems have a bias at all? Or should they always be fair? This is important because Deep Learning systems are trained on the basis of a history of past behaviour of human systems. This training is done by collecting data on how decisions have been taken in the past and using this data to set the parameters. In simple systems, these parameters could be probability values but in neural networks they are the weights that are assigned to different connections between nodes. The exact technology is not important here. What is important is whether the training data has bias and whether this bias is carried through from the non-computer system to the computer system.

For example, it has been observed that in the US, both parole applications and loan applications are more likely to be rejected if the applicant is a black person because of a historical bias against this particular demographic segment. When this data is used to train an AI / DL system, this bias is carried through and, once again, blacks will be discriminated against. [Of course, there is another point of view that states that automated, machine-based decisions have less bias -- see this (paywalled) link -- but that is another story and another debate.]
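Such a bias can be detected in the training data even before any model is fitted. The records below are entirely synthetic, and the names `approval_rate` and `disparate_impact` are illustrative; the 0.8 threshold is the "four-fifths rule", a common rule of thumb in US adverse-impact analysis:

```python
# Entirely synthetic approval records: group "A" is approved 80% of the
# time, group "B" only 50% -- a stand-in for historically biased data.
records = ([{"group": "A", "approved": a} for a in [1] * 80 + [0] * 20] +
           [{"group": "B", "approved": a} for a in [1] * 50 + [0] * 50])

def approval_rate(rows, group):
    """Fraction of applications from `group` that were approved."""
    subset = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(records, "A")      # 0.8
rate_b = approval_rate(records, "B")      # 0.5
disparate_impact = rate_b / rate_a        # 0.625, below the 0.8 threshold
```

A model trained to mimic these records will, by construction, reproduce the same disparity, which is the mechanism by which historical bias is carried into the computer system.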

Obviously this is patently unfair and should not be allowed and hence there is a strong move to ensure that AI systems do not suffer from bias. There is no question about that ...

But does that mean that AI / DL should not be built until we have resolved the issue of bias? This is where the debate takes on an ugly turn between the proponents of ethics in AI and those who would rather stick to the technology of AI. For the former, the question of ethics is paramount and they would rather not have AI unless it is certified to be bias free. For the latter, the matter of ethics is secondary. They would rather focus on creating innovative technology and leave the matter of ethics for another day.

Faced with this choice, my sympathies clearly lie with the latter, the technologists, and the reason is very simple. The world is not fair and can only be so in the dreams of the Utopian idealist. Since we do not have the luxury of living in an ideal utopia, expecting AI to be ethical and bias free is an impossible dream. The world has learnt to live with bias and will continue to do so. If ethics were really as essential for the survival of the human race then we should have shut down the armaments business (and possibly a large part of the pharmaceutical and hospital business as well). But we have not done so, because of an irresistible, or inevitable, convergence of economic, political and social power.

Any country, or society, that shuts down its armaments business or disbands its armed forces will be overrun and taken over by another country that does not subscribe to this Gandhian policy of pointless non-violence. This was brutally demonstrated during the 1962 China War, where India's idealistic Principles of Panchsheel were shoved aside by the rampaging Chinese PLA. While a measure of ethics is certainly good, making it an absolute framework that is at odds with the ambient reality is neither possible, nor desirable. So is the case with AI. There are many people who feel that the so-called 'liberal' countries like the United States should not use technology like facial recognition at all because it is an unethical violation of privacy. Little do they realise that 'non-liberal' countries like China are already using it in a big way to enhance their own security, and letting that imbalance continue would be as foolish as shutting down the armaments industry.

Any technology - from nuclear through genetics and space to artificial intelligence -- can be weaponised. That does not mean that development must stop. Let us go in with our eyes wide open, be aware of the dangers but also be aware of what is happening elsewhere and make sure that we do not vacate or step back from the leading, or bleeding, edge.

To sum up, let us understand that bias is inevitable in any human society. We should try to minimise it but hoping to eliminate it is impossible. So is the case with non-human, silicon based intelligence or for that matter for any non-human sentience that will eventually arise from this technology.

September 12, 2020

Turtles upon Turtles


Abstract : Information, or rather information technology, is the basis of the digital economy that we live in, but is there something more fundamental to information that goes beyond the thousands of digital computers that we come in touch with in our daily lives? This article explores how information could be the basis of the material world itself, which in turn is merely a simulation generated by the processing of information.


To do so we note that in social media and in Massively Multiuser Online Role Playing Games (MMORPG) users live in a world that is not what it seems to be. This leads to the question whether the world that we see around us is really real, or, as described in Sankar's Vedanta and the movie The Matrix, is a simulation. This simulation hypothesis is explored further on the basis of Brian Whitworth's paper on the feasibility of the world being an illusion. Finally we demonstrate how this illusory world can be created purely on the basis of information through the equivalence between Boltzmann entropy and Shannon entropy and a practical implementation of Maxwell's demon in Szilard's engine.

The latest version of this paper is available at this link.



We believe that we have the ability to discern the real from the illusory or the virtual. We know it because in our own life we play out multiple roles. Your behaviour is different when you are at home, at work, or when you are with your school friends or office colleagues at a resort. At home you could be an altruistic parent or a housing society officer or a poet. At work you could be a hard taskmaster, a glib salesman or an ace opportunist. With your friends in the resort you could be rolling on the floor. So which one is you? Which is the REAL you? Would you know? Would you care? Or would you say that you are all of them and some more, and that the difference between these personas is blurred?

Now let us extend this to the world of social media, where you could be a 'bhakt' or a 'psecular' and be in a violent confrontation with the other. Even if you are not a political person you could be crafting an identity for yourself as a geek, or a sage, and if you succeed that is how you would be seen by your 'friends', followers or connections in social media. It is not unlikely that your identity in social media is a magnification of only one of your 'real' identities, possibly your professional identity, or then again an identity that is defined by whom you hang out with. Or you could be crafting a totally artificial identity with a hidden agenda in mind. Depending on the amount of time you spend, or invest, in social media and the number of connections that you build up there, it is not impossible that this social identity overrides what your original identity was, or what you thought it was. In fact, going forward, your digital identity, which has a far greater reach than your physical identity, will increasingly become your dominant identity. More people might know you as you appear in social media than the fewer who know you in real life. But then, what is really your real life?

Now that you know that your original identity could very well be hidden or masked behind other more visible layers -- and frankly, masks have been around since much before the Wuhan virus  -- what about people who are around you? It is almost certain that they  too -- in social media and in the real world --  would be wearing masks as well, just like you.

When we look around we see ourselves enmeshed in a network of relationships -- personal or professional, commercial or otherwise -- that defines who we are with respect to the world around us. But if every member of this network is wearing a mask and is not who they seem to be then the network loses its structural rigidity, its deterministic nature and its discriminatory potential. It becomes instead an amorphous and shape-shifting cloud of illusions that is as impossible to pin down  as the ephemeral Maria in the movie Sound of Music -- how do you catch a cloud and pin it down?

So what was known and deterministic becomes uncertain, unreliable and illusory. What you see is not what it seems to be but something else. Perceptions take precedence over the primacy of facts. Wise men say that opinions (or perceptions) are free but facts are sacred. In this case, the wise men are not so wise after all because while facts may be sacred, these facts are not accessible anymore. They are hidden behind layers and layers of illusions.

This gets even more complicated, and interesting, when we move from the flat, text based world of social media and into simulated three dimensional worlds. These virtual worlds are available in, or accessible through, Massively Multiuser Online Role Playing Games (MMORPG) like World of Warcraft, Final Fantasy, PUBG, CounterStrike. Non-violent, non-combative but equally enchanting are the simulated virtual worlds like Second Life -- that happens to be the author's favourite -- that are based on similar technology but have different goals and narratives.

What are the common features of all these virtual worlds? (i) A 2D image of a 3D landscape that is visible on the computer screen. (ii) The presence of humanoid figures, or avatars, in this landscape that are controlled either by users or by artificial intelligence software, in which case they are called NPCs or non-player characters. (iii) The ability of the avatars and NPCs to interact with each other and with other elements of the landscape through sound, visual cues and physical contact like a push or a 'fight'. (iv) The ability of users, through their avatars, to build, demolish or operate specific elements of the landscape like buildings, cars and other inert or active artifacts. (v) The existence of quests or challenges that each user, through their avatar, is expected to accomplish, either alone or in collaboration with other users/avatars. This could include creating buildings, occupying territory, locating and exploiting hidden resources or acquiring skills to perform one or more of these tasks.

A social media handle and an MMORPG avatar are essentially the same, in the sense that they allow an individual to interact with others through  a common, intermediate platform. On Facebook, you can build a page and your handle can argue with others, while in MMORPG, you can build a castle or have a fight with other avatars. What is different is the extent of realism or similarity with real life where an MMORPG avatar is far more realistic than a social media handle. With the advent of virtual reality or augmented reality gadgets, like helmets, spectacles and gloves, the level of realism can be increased till it is almost impossible to differentiate between the virtual and the real.

In fact, the illusory nature of both the MMORPG avatar and the social media handle can be extended into the illusory nature of the multiple personalities that we carry in real life. This is where the border between the real and the virtual world becomes increasingly blurred. What is real and what is illusory becomes increasingly difficult to distinguish. For your own self, it may still be possible to switch between alternate realities and hence distinguish one from the other, but for people around you it becomes increasingly difficult to detect the real you, especially if the digital channel is the only channel of communication. Similarly, it becomes impossible for you to detect and distinguish between the alternate realities of the people around you and the worlds that they inhabit. Each of us lives in our own cocoon of perceptions that shields us from the reality of the external world. We live in ..

Maya or The Matrix, The World of Illusions


Maya is an idea that was first articulated by Sankaracharya, the 8th century Hindu savant, who distilled the concept from the primordial Upanishadic insights. Much later, in the 20th century, it was introduced into western popular culture through The Matrix, a movie set in a not too distant dystopian future.

Sankar’s philosophy of Vedanta posits that Brahman is the only real entity in the sentient universe. The Brahman -- which is different from the Brahmin jati, or caste, as in Brahmin, Kayastha, Bania (or Beney) etc. -- is the embodiment of Truth, Consciousness and Bliss, or Sat-Chit-Ananda, that is without form, qualities or attributes. It is pure knowledge or information that has no equivalent in the world that we are familiar with. This Brahman, for no reason but out of its own desire, creates, or dreams up, a physical world where objects have form, qualities and attributes. This is Maya, which, for the lack of a better word, is described as an illusion or a dream. Within this Maya, and because of it, the physical world exists as a multitude of objects that exhibit a wide range of forms and qualities. Some of these objects are conscious and sentient in the sense that they have the ability to observe and interact with other objects within this illusory world. These conscious entities are called Atman; they are an extension of the formless Brahman but, because of the shroud of Maya, they see themselves as an imperfect reflection of their true nature, the Brahman -- the ultimate reality. However, some of these conscious and sentient objects acquire the ability to understand the illusory nature of the world around them. These are the mystics, the Yogis, for whom Maya dissolves and they see, realise, or experience the continuity of the seer and the seen, the subject and the object, and of themselves -- with form, shape, qualities -- with the formless and shapeless Brahman. This is the Monistic philosophy of Vedanta, significantly different from the monotheistic religions that see the duality of a creator God distinct from his creation, the world and its people.

For a person who is a product of Maya and is immersed in it, the fact that the world around them is illusory is almost impossible to accept. The Matrix movie demonstrated a hypothetical, sci-fi framework where this could be implemented. In the movie, every human body is, right from birth, deprived of all sensory information from the real world -- of real mountains, real machines and the few real people who exist in it -- and is instead fed an alternate set of information that is sent directly to the sensory part of the brain. This means that the brain is only aware of this alternate information and hence constructs its own alternate world -- complete with its illusory mountains, machines and people. This alternate world is created with a software program called the Matrix. The story, which is too well known to be retold here, is all about how some real people detach one such body -- that of the hero, Neo -- from the Matrix and open his eyes, literally and metaphorically. Now that he can see for himself that there is a real world that is different from the alternate illusions that his brain and body have grown up with, he can take either a red pill or a blue pill and choose for himself the world that he wants to live in. Unfortunately, the choice between the red pill and the blue pill is not available to most people, or body-brain combinations, so their brains continue to live in the alternate reality created by the Matrix for as long as the body is in a state to function.

The Matrix was released in 1999 and since then, technology has advanced by leaps and bounds. While all that is described in The Matrix is far from being a reality today, there has nevertheless been substantial progress. The ability to create virtual worlds is very well established with the MMORPG products that we have discussed earlier, and the usage of advanced display devices like virtual reality and augmented reality helmets, gloves, etc. allows for an extreme level of immersion. Moreover, it is now possible to connect the human brain directly to external digital devices and have a bidirectional movement of information. Signals from the brain are routinely being used to control external devices, giving rise to thought-controlled devices like wheelchairs and MMORPG game objects. The reverse process of sending external digital signals back to the brain to create an artificial illusion is also possible but is not yet as effective as the outward process.

So the Matrix is not totally sci-fi as it seemed to be when it was released in 1999. We now have the bits and pieces of technology that were referred to and  it is a matter of putting it all together to replicate what The Matrix talked about and make the transition from science fiction to science reality. However there is one aspect of the movie that is still far from being replicated in reality and that is the role of intelligent computers in building the physical infrastructure for the Matrix to operate. In the movie, it is the computer -- software and robots -- who do all this whereas today, the MMORPG and brain-computer interfaces are still designed and built by humans. Hence there exists a fairly well delineated boundary between the virtual reality of MMORPG and the real reality of the external world. So it is always possible for anyone to exercise the choice of the blue pill or the red pill -- to continue to live in virtual reality or to switch off the display device and come back to the “real” world.

But what if this choice is withdrawn? Either voluntarily or as compulsion. What if it is mandated that going forward every child will have an implant on their skull that will allow an external digital feed to send signals directly to the brain and in the process drown out the natural signals from the eyes, ears, nose, touch and tongue? Assuming bodily functions are taken care of by someone else, the child will grow up -- just as in The Matrix -- in an alternate reality. One challenge could be the ability to procreate through the act of sex. This could be overcome in the alternate reality by simulating the feeling of sex, of ejaculation, of orgasm and eventually of the labour pain leading to the sensation of touching and feeling the child. In the physical reality, procreation is simpler because of artificial insemination and subsequent childbirth. Which is why we say that the premise of The Matrix is theoretically not impossible, though it would require a dramatic change in the socio-cultural structure of human society.

Which makes us wonder if this has already happened as a part of biological evolution. What if we are already a part of, and surrounded by, an illusory world where our five modes of sensing the external world are nothing more than digital signals sent into our brains? In fact, in the previous section we have seen that, in a sense, we have already isolated ourselves in a cocoon of perception -- created with our multiple personalities, our social media personas and MMORPG avatars -- that shields us from the reality of the external world. Have we already taken the blue pill that allows us to live in an altered reality? But perhaps there is no real choice between the red pill and the blue pill because what we think of as the physical reality does not exist at all. If we can liberate ourselves from the technology or theology of the Matrix, and rid ourselves of our dependence on biology, then we can think of ourselves as non-biological artifacts, or avatars, that are being operated by a higher level of sentient beings. Which leads us to echo Sankar and ask whether we are living amidst an illusory Maya and ...

Are we a simulation ?


The simulation hypothesis is not new. It has been around for quite some time but was articulated in its current form by Nick Bostrom [2003] and was made into a movie, Are You Real [YouTube, 2006], by the author. Of late, many people including Elon Musk have enthusiastically supported this proposition, but the most comprehensive articulation of this point of view is Whitworth's paper, "The emergence of the physical world from information processing". See Brian Whitworth, Quantum Biosystems 2010, 2 (1) 221-249, [https://arxiv.org/abs/1011.3436] [alternate: http://bit.ly/BrianWhitworth].


The fundamental premise of Whitworth's paper is that there are two competing hypotheses, namely:

  • The objective reality hypothesis: That our reality is an objective reality that exists in and of itself and being self-contained needs nothing beside itself.
  • The virtual reality hypothesis: That our reality is a virtual reality that only exists by information processing beyond itself, upon which it depends.


Obviously, Whitworth is a strong proponent of the second, the virtual reality, hypothesis and has put together an impressive collection of conjectures, arguments and facts to support his case. There is little point in repeating the same arguments here except to point out that he uses the logic of Occam's Razor very elegantly to demonstrate twelve facts that are far simpler to explain with virtual reality than with a physical universe.  However, in his conclusions and discussion Whitworth introduces the concept of the physical reality being an interface and explains it as follows :

Figure 4 gives the reality model options.

The first is a simple objective reality that observes itself (Figure 4a). This gives the illogicality of a thing creating itself and doesn't explain the strangeness of modern physics, but it is accepted by most people.

The second option argues that since all human perceptions arise from neural information signals, our reality could be a virtual one, which in fiction stories is created by gods, aliens or machines, for study, amusement or profit (Figure 4b). This is not in fact illogical and explains some inexplicable physics, but few people believe that the world is an illusion created by our minds. Rather they believe that there is a real world out there, that exists whether we see it or not.

The third option, of a reality that uses a virtual reality to know itself, is this model (Figure 4c). As this paper asserts and later papers expand, it is logically consistent, supports realism and fits the facts of modern physics. In it, the observer exists as a source of consciousness, the observed also exists as a source of realism, but the observer-observed interactions are equivalent to virtual images that are only locally real. This is not a virtuality created by a reality apart, but by a reality to and from itself. If the physical world is an interface to allow an existence to interact with itself, then it is like no information interface that we know.





This third option is in fact nothing more than a restatement of the concept of Maya, the illusion, or what we refer to as virtual reality. This is where the Atman, the individual observer, sees itself as different from the Brahman through the prism, or illusion, or Maya, of virtual reality. When Maya that creates the illusion of reality is removed, the Atman sees itself as it really is, an extension of the Brahman -- the fundamental unity of a Monistic universe.

While we may be veering around to the idea that we are indeed a simulation and that the physical reality we see around us is actually a virtual reality created by the processing of information, there remains a nagging doubt. How can the world around me, the world that I can touch and feel, not be real? Even if the world around us is a simulation, there must be something physical on which the simulation must execute. In The Matrix, this was the biological body of the humans who were trapped in the Matrix from their birth to their death. In the case of MMORPG, it is the 'hardware' of physical computers on which the information to simulate the world must be processed. Where is this hardware? One could argue that this hardware is also a simulation, as we have in the case of VMs, or virtual machines, on platforms like VMware or Docker, but that merely postpones the problem without addressing it. VMs may be virtual but they must still execute on underlying physical machines.

This issue has been addressed in the concept of "Turtles all the way down". This is an expression of the problem of infinite regress that  alludes to the mythological idea of a World Turtle that supports the flat earth on its back. It suggests that this turtle rests on the back of an even larger turtle, which itself is part of a column of increasingly large world turtles that continues indefinitely (i.e., "turtles all the way down"). This idea has been expressed in the mythology of many cultures including that of India but once again, this postpones the problem without addressing it.

Which brings us to the next important question. What is more fundamental -- matter or information? Does information depend on the existence of matter or does matter depend on the existence of information? In the first case, we would need a physical computer to process and display information; in the second case, information itself is adequate to create the illusion of matter. Common sense would say that matter is primary and information is something that emerges if and only if there is a material mechanism to process it. However, quantum mechanics has repeatedly shown that common sense is not a very reliable mechanism and many of its cherished principles are extremely counterintuitive -- as in the same particle taking two different paths, or in the instantaneous correlation between entangled particles. Once we set aside this so-called common sense, many things fall into place, including what John Wheeler referred to as "it from bit", or now "it from qubit". This suggests that material bodies can emerge from a bit of information or, as the case now stands, a quantum bit.

But if we look a little deeper, the concept is not as counterintuitive as it first seems.
That "information is power" is a statement often made both figuratively and loosely, but can it be literally true? Is it possible to find links between information and the physical quantities that appear in physics textbooks? Obviously the information that you read in the newspaper cannot be easily related to the power that causes a light bulb to glow. So let us simplify both sides of the equivalence, or analogy, and see if we can find a real link between the two ...

This part of the article needs mathematical symbols that are not possible to represent easily in a blog. To read this section, please visit this page. Then come back here and continue ...

So now we have a direct example of the conversion of information into energy. This is no longer a thought experiment: it has been demonstrated in a real, physical experiment.
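The post does not name the experiment here, but the scale of any such information-to-energy conversion is set by Landauer's principle: erasing one bit of information at temperature T dissipates at least kT ln 2 of heat. A minimal numerical sketch of that bound (the room-temperature value of 300 K is an illustrative assumption, not a figure from the post):

```python
import math

# Landauer's bound: minimum heat dissipated when one bit of information is erased.
k_B = 1.380649e-23          # Boltzmann constant in J/K (exact SI value)
T = 300.0                   # assumed room temperature in kelvin

energy_per_bit = k_B * T * math.log(2)   # joules per erased bit
print(f"Minimum energy to erase one bit at {T} K: {energy_per_bit:.3e} J")
```

The answer is of the order of 3 × 10⁻²¹ joules per bit, which is why the effect is measurable only in delicate, carefully controlled single-particle experiments and not in everyday life.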

Connecting the Dots


While there is still some residual scepticism about the equivalence of information and entropy, because the two seem so different in nature, we have managed to establish with reasonable comfort that information and thermodynamic entropy are fundamentally similar. Next we note that entropy and energy (or at least heat energy) are very closely related to each other, linked by the equation dS = dQ/T. Energy can exist in many forms -- electrical, kinetic, potential and so on -- all of which are interchangeable with each other, but the form of maximum interest is matter. Yes, as we all know, matter and energy are two aspects of the same fundamental property, to which we can now add information. Hence information and matter are tied to each other. We always knew that matter can give rise to information, but now we can claim that information can also give rise to matter. Hence information matters!
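The chain of the argument -- information to entropy (k ln 2 per bit), entropy to heat (via dS = dQ/T), and energy to mass (via E = mc²) -- can be followed numerically. This is a sketch under assumed values (a temperature of 300 K and a hypothetical hoard of 10²³ bits), not figures from the post:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K (exact SI value)
c = 299792458.0       # speed of light, m/s (exact SI value)
T = 300.0             # assumed temperature, K

n_bits = 1e23                        # hypothetical number of bits erased
dS = n_bits * k_B * math.log(2)      # entropy change, J/K  (information -> entropy)
Q = T * dS                           # heat, from dS = dQ/T (entropy -> energy)
m = Q / c**2                         # mass equivalent, E = m c^2 (energy -> matter)

print(f"Entropy: {dS:.3f} J/K, heat: {Q:.1f} J, mass equivalent: {m:.2e} kg")
```

Even 10²³ bits -- of the order of Avogadro's number -- correspond to only a few picograms of mass equivalent, which is why the material face of information is invisible in daily experience even if the equivalence holds in principle.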

Finally, once we agree that matter can emerge from information, the entire edifice of the simulation hypothesis gets a firm foundation to stand on. There is no need to talk about an endless series of turtles standing on each other's backs. We begin with information -- the genotype, which philosophers in India refer to as the Brahman -- and with this we can recreate the simulated world of illusory Maya.

Maya, Matrix, Shiv, Shakti, Information, Energy, Genotype, Phenotype - the possibilities are endless. And then you have fake news on social media which connects the sublime to the mundane!


The latest version of this paper is available at this link.

September 09, 2020

Information Matters

That "information is power" is a statement often made both figuratively and loosely, but can it be literally true? Is it possible to find links between information and the physical quantities that appear in physics textbooks? Obviously the information that you read in the newspaper cannot be easily related to the power that causes a light bulb to glow. To simplify both sides of the equivalence, or analogy, and see if we can find a real link between the two, we begin with .... (read on)

July 29, 2020

Carbon and Silicon

We are all familiar with carbon intelligence -- the natural human intelligence that has given us everything from fire and the wheel, through the Ved, the Upanishad and the Mahabharat, the laws of Mechanics, Electrodynamics and Thermodynamics, all the way through to Relativity and Quantum Mechanics. Near the end of this journey we have run into silicon intelligence -- the artificial intelligence demonstrated by machine learning and neural networks -- which has given us autonomous vehicles and software that learns to play very realistic games.

But somewhere along the line these two forms of intelligence -- carbon and silicon -- are coming together to create what sci-fi has been talking about for many years: the cyborg, part human and part machine. Where are we with this technology? Is it still science fiction? Or is fiction becoming fact? I explore this idea in this lecture that I delivered to the incoming (July 2020) batch of Data Science students at the Praxis Business School.




The slide deck is available at http://bit.ly/carsil2020

June 25, 2020

Python for Business Managers

Managing a business enterprise is impossible if the manager is not at ease with data. While soft skills and EQ are important, when push comes to shove it is the data on the table that really matters. Data-driven decisions are the backbone of any efficient enterprise.

It is said that data is the new oil because of its intrinsic value. This is why the most powerful companies on the planet -- Google, Facebook, Netflix, Amazon -- owe their immense clout to the huge amount of data that they have accumulated about people and their behaviour. Gathering, storing and managing these multi-terabytes (or more) of data is loosely referred to as Big Data. But using this data to draw inferences about the past and, more importantly, to make predictions about the future is Data Science.

Managers in the past were not unaware of, or indifferent to, the importance of data. Many of them have been using spreadsheets like Excel to assist them in their daily work. However, the volume of data in the current business ecosystem is so large that spreadsheets are no longer adequate. The spreadsheet is a legacy technology, almost a relic, from an era that businesses have left behind. It simply cannot scale up to handle the kind of Big Data that today's internet-based businesses generate on a daily basis.

Data Science uses many next generation tools to handle Big Data and Python is one such tool that is very widely used today. This book will help managers who do not have a background in computer programming to learn Python to the extent that they will be able to use it in their daily work. Readers will also walk through two detailed exercises that will demonstrate how these tools can be used in retail sales and multinational eCommerce scenarios.



Buy the paperback from the Pothi bookstore.

May 02, 2020

Strange Coincidence?


-------------------------------------------------------------------------------------

April 17, 2020

Lockdown lectures - DIGITALICS




In an earlier post, we had introduced the idea of D I G I T A L I C S
Here is a video that explains it further



January 08, 2020

CBSE to ZBSE - The Innovation Nation

image from MIT Review
When resources are limited, it is creativity and its first cousin, innovation, that allow us to get ahead by achieving more with less. In practical terms this translates into the importance of R&D in corporates, or its precursor, research in academia. This is why publish or perish has been the guiding mantra for those seeking tenure -- or permanent employment -- in US academic institutions. Since the US is the fountainhead of the most innovative ideas in STEM and related disciplines, there must be a positive correlation between innovation and publications. This is the logic used in China, where it is mandatory for all academicians to be prolific in publishing papers.

This policy has resulted in interesting developments. First, China -- and Chinese researchers embedded in US academia --  lead the world in terms of the sheer number of papers published. Second, a large number of these papers have been found to be of poor quality if not actually fraudulent. Third, and most interestingly, China still needs to employ thousands of hackers to break into and steal industrial and scientific knowledge from US companies and institutions. This weakens the linkage between published research and real innovation. Perhaps China is doing phenomenally well in fundamental research but until we have greater transparency through the Bamboo Curtain, we remain sceptical.

This lack of correlation between publication and innovation is an outcome of Goodhart’s Law, first enunciated by the British economist Charles Goodhart in 1975: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." Marilyn Strathern puts it more succinctly: "When a measure becomes a target, it ceases to be a good measure." This means that while publications may be a good estimator of innovative thinking, when people are tasked to publish for the sake of employment it ceases to be an estimator of anything at all. Anybody who has been in the vicinity of academic publishing would know that acceptance of a paper depends on (a) the choice of an ‘acceptable’ subject, (b) the ‘methodology’ of research and the ‘style’ of presenting it, and (c) a ‘literature review’ and ‘references’ that weave a delicate but readily perceptible network cycling through a self-sustaining ‘citation index’. The originality of the idea, or the elegance of its implementation, has little impact on the acceptance of a paper in a scholarly journal. As long as it looks, walks and quacks like a duck -- oops, like an academic paper -- it must be an academic paper worth publishing. [Public disclosure: this author has only two papers published in non-Indian academic journals and so could have an issue with sour grapes!]

This obsession with publications masquerading as research has now infected academia in India as well. So much so that the Director of one IIM, as holy a cow as one may find in Indian academia, has decided that actual teaching should be outsourced to contract teachers while tenured faculty, freed from such mundane distractions, should focus on publishing papers. This is actually a joke because, at least in the area of management, the ability to architect a complex solution and execute a commercially viable project is far better evidence of innovation than publishing a paper based on dodgy data collection and p-value based testing of pointless but statistically significant hypotheses. But unfortunately, university ranking mechanisms and regulators like the UGC and the AICTE have latched on to publications as measures of excellence. Hence we are back to this concept of publish or perish without any thought to its correlation with genuine innovation.

In fact, such borrowed measures of academic prowess have their roots -- at least in India -- in the larger story of the lack of innovation in the economy. Who decides on what is meant to be a good student in India? First, academicians and then, more importantly, corporate executives, mostly engineering and management graduates, who decide on whom to hire and from which colleges. What is common to all such decision makers is not innovative or original thinking but a history of having cracked entrance examinations in their student days. That is why they like examination crackers -- people like themselves. The entire edifice of corporate and academic India is brimming, not with innovators, but with those who have been able to game the entrance examination system.

Entrance examinations like CAT and JEE were once designed as estimators of intellectual ability. But again, in a perverse reaffirmation of Goodhart’s Law, cracking these tests has become the end goal for all students. The JEE rank that was once a good measure has now, after becoming a target, become worthless for evaluation. Kids with original ideas will never be able to game the system -- with its coaching classes and thought conditioning -- that is necessary to reach the colleges that lead to the companies that could, in turn, come up with original ideas. Hence Flipkart will always be a copy of Amazon (without its cloud technology), and Ola and Oyo will be copies of Uber and Airbnb. Even when wildly successful, there is nothing original in their products and services. Nothing like Skype or WhatsApp, let alone molten-salt nuclear reactors or CRISPR, will originate from them.

So is there an alternative? Is there anything else that could seek out people with raw, native talent? Is there a way to eliminate artificially difficult entrance examinations, like the JEE, that only the best coached and best prepared can crack? Once upon a time, long long ago, Class X and XII marks were good estimators of talent, but with state boards competing to give 90% to all, that option has been ruined.

What if the percentile rank, instead of the absolute marks, in the normal Class XII examination became the yardstick for college entrance? The immediate objection would be that different boards with widely different numbers of students are not really comparable. The top 5 percentile in a small board like Tripura's may not be comparable to the top 5 percentile in a large state like Maharashtra. What if we mandate that everyone take the one common, national CBSE Class XII examination, either in addition to the state board or as an alternative? This may sound good, but there is the danger of rigid centralisation and the concomitant spectre of a single point of failure.
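Mechanically, a percentile rank is simple to compute and is automatically normalised for the size of the cohort. A minimal sketch (the boards, sizes and marks below are hypothetical illustrations, not real data):

```python
def percentile_rank(marks, score):
    """Percentage of candidates in `marks` scoring strictly below `score`."""
    below = sum(1 for m in marks if m < score)
    return 100.0 * below / len(marks)

# Hypothetical boards of very different sizes
small_board = [62, 71, 85, 90, 93]          # 5 candidates
large_board = list(range(1, 101)) * 100     # 10,000 candidates, uniform marks

print(percentile_rank(small_board, 90))     # 60.0
print(percentile_rank(large_board, 96))     # 95.0
```

Because the rank is a fraction of the cohort rather than an absolute mark, a 95th-percentile candidate in a board of 5,000 and one in a board of 500,000 can be compared directly -- which is the property that a percentile-based entrance criterion relies on.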

What we could do instead is to redefine the country in terms of education zones and create a Zonal Board of Secondary Examination (ZBSE) for each zone. This would be analogous to the Indian Railway network being managed through sixteen railway zones like Western, South Eastern, East Central and so on. Each such education zone would cover more than one state and may even span state boundaries depending on linguistic and cultural similarities. Each ZBSE would conduct its own Class X and XII board examinations based on a syllabus that takes into account both national perspectives and regional diversity, and on a schedule that reflects local convenience. State boards would become irrelevant, but even if retained, students should be allowed to take ZBSE examinations in their respective zones of domicile irrespective of the schools that they physically attend.

With educational zones in place, the percentile marks in both ZBSE Class X and XII examinations should be used as the primary selection criterion for admission to all Central Universities and all UGC funded institutions. In addition to the percentile on the aggregate, different disciplines like engineering or liberal arts could use the percentiles on specific subjects or groups of subjects. This would free students from the need to sit for any artificially constructed aptitude test for college entrance and let them focus, instead, on a traditional broad-based, multi-subject ZBSE school curriculum.

Recruiters from the author's generation have always used Class X and XII marks as an effective measure to discriminate between candidates, and ZBSE marks, which ensure better parity across the country, would reinforce this method. Colleges that look for good placements would now have to run after students with good ZBSE percentiles instead of the other way around. So truly good students would enter good colleges, and then get good jobs or go on to research.

Instead of a top-down philosophy of publish or perish, a bottom-up approach using ZBSE X, XII percentiles for college entrance would be a superior mechanism to recognise and reward true talent in the innovation nation.