July 25, 2021

From Machine Learning to Machine Motivation



Software artifacts that display artificial intelligence are increasing in both number and sophistication. There are many definitions of what constitutes intelligence and many ways in which software has been programmed to demonstrate it. Of all these options, the use of artificial neural networks (ANNs), which closely mimic the connectionist approach of animal brains, has proved most effective at performing tasks that are both useful and insightful: recognising faces, driving cars, generating meaningful text passages and playing a wide range of games against humans as well as against other programs. It may not always be the case that an ANN is the best way to demonstrate this kind of intelligent behaviour, so for the purpose of this study an ANN is neither necessary nor sufficient. All that we need is a digital artifact -- a container of data, code, models, APIs or a combination of these -- that we will refer to as a digital intelligence unit, or DIU. Having access to a DIU equips a digital computing device with the ability to demonstrate intelligence or mimic a specific behaviour of an intelligent biological system.
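
To make the abstraction a little more concrete, here is a minimal sketch in Python of what a DIU wrapper might look like. The class, its fields and its run method are invented purely for illustration; they are not part of any existing system.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class DIU:
    """Hypothetical Digital Intelligence Unit: a container bundling
    metadata, the prerequisites it needs, and the 'intelligence' itself."""
    name: str                                        # e.g. "face-recogniser"
    version: str                                     # e.g. "0.1.0"
    requires: list = field(default_factory=list)     # prerequisite DIU names
    provides: list = field(default_factory=list)     # capabilities it exposes
    model: Callable[[Any], Any] = None               # model, code or API wrapper

    def run(self, observation: Any) -> Any:
        """Feed a digital (or digitised physical) input to the DIU."""
        if self.model is None:
            raise RuntimeError(f"DIU {self.name} has no model attached")
        return self.model(observation)

# Illustrative usage: a trivial 'DIU' that classifies a number as odd or even.
parity_diu = DIU(name="parity", version="0.0.1",
                 provides=["classify-parity"],
                 model=lambda x: "even" if x % 2 == 0 else "odd")
print(parity_diu.run(7))   # -> odd
```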

Digital Intelligence Unit


A DIU may be a digital construct, but its input and output could be digital as well as physical. A typical DIU that we deal with today may read in a piece of digital information, like an image or a data file, as input and generate digital output, like a name or a class. However, there is no conceptual difficulty in assuming that the DIU is connected to sensors that capture physical measurements from the environment, or that it can cause wheels, drills, arms, actuators or tools to move and do physical work.


For example, a DIU may guide a car through traffic or terrain, pick up physical objects like rocks or machine components, or assemble two objects together. It can also sense and consume energy or, where necessary, cause the generation or transformation of energy from one state to another. Not all DIUs need to be very sophisticated. There could be very basic DIUs that simply interrogate other devices and exchange information, or a DIU that allows a device to share a physical resource, like a camera or a disk, with another device. Nevertheless, we will treat the DIU as a digital abstraction that is resident on a digital hardware device like a computer.


A set of DIUs that work together may be viewed as a single larger or more sophisticated DIU. This larger DIU may still reside on one hardware device, or its parts may be distributed across multiple hardware devices and identified with something like a Uniform Resource Identifier, as is done in web development. Either way, this set of smaller, compatible DIUs is, conceptually, still another DIU. For example, a ‘rover’ that NASA sends to Mars may consist of a collection of DIUs, each with its own intelligent function, but the ‘rover’ itself may be viewed as another DIU.


Continuing with this analogy, we may be tempted to view a biological animal, like a fish or a human, as a DIU that is a collection of simpler DIUs. For the sake of argument, and simplicity, we may model a biological dog as a collection of four DIUs: one that scans the environment and identifies objects, one that distinguishes between edible and non-edible objects, one that consumes edible objects and one that generates signals expressing facts about the taste of the food. Four discrete DIUs are thus collected together to give a bigger DIU called a digital dog.
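
A rough, self-contained sketch of this composition is given below. The four component "DIUs" are trivial stand-in functions invented for illustration; the point is only that a composite DIU is just a wiring-together of simpler ones.

```python
# A hypothetical composite DIU ("digital dog") built by wiring together
# four simpler DIUs, each represented here as a plain stand-in function.
def scan_environment() -> list:        # DIU 1: perceive objects
    return ["rock", "biscuit", "shoe"]

def is_edible(obj: str) -> bool:       # DIU 2: edible vs non-edible
    return obj in {"biscuit", "bone"}

def consume(obj: str) -> float:        # DIU 3: consume and score the food
    return {"biscuit": 0.9, "bone": 0.7}.get(obj, 0.0)

def express(score: float) -> str:      # DIU 4: signal how the food tasted
    return "tail-wag" if score > 0.5 else "whine"

def digital_dog() -> list:
    """Composite DIU: chains the four component DIUs into one behaviour."""
    reactions = []
    for obj in scan_environment():
        if is_edible(obj):
            reactions.append(express(consume(obj)))
    return reactions

print(digital_dog())   # -> ['tail-wag']
```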


While this analogy appears tempting, it poses a few challenges.

Motivation


The first challenge is motivation. What motivates a DIU to demonstrate its intelligence? Or, even more fundamentally, what causes a DIU to come into existence?


For a DIU to demonstrate its intelligence, a program must be executed, which means that someone or something must start the program. This is not difficult because most digital platforms (as in computers with operating systems) have mechanisms that can cause certain programs (including DIUs) to start automatically when the system boots and then wait for signals, or interrupts, from the external world. These signals or interrupts could be key-presses or other events like the arrival of mail or a rise in temperature.
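
As a rough illustration, a DIU registered as a boot-time service might simply block on an event queue until such a signal arrives. The sketch below is a simulation only; the event names and the queue standing in for OS-level interrupts are assumptions.

```python
import queue
import threading
import time

events = queue.Queue()   # stands in for OS-level signals or interrupts

def diu_service():
    """A DIU started at boot: it idles until an external event arrives."""
    while True:
        event = events.get()          # blocks until a signal is delivered
        if event == "shutdown":
            break
        print(f"DIU woke up to handle event: {event}")

threading.Thread(target=diu_service, daemon=True).start()

# Simulated external world: a key-press, then mail arriving.
events.put("key-press")
events.put("mail-arrived")
time.sleep(0.1)
events.put("shutdown")
```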


What is much more difficult is the process of creating the DIU in the first place. At our current level of understanding, the motivation to create a new DIU, or a new capability, lies in the hands of a human programmer and not within the domain of the digital device. As a human programmer, I can decide that in addition to recognising faces, we need to generate music, for which we need an additional DIU. The process of building, or writing the code for, another DIU can be automated -- programs that write other programs are not impossible with current technology, for example GitHub Copilot or GPT-3 -- but someone must have the motivation to do so.


Current DIUs can be programmed to improve their performance with time. Face recognition programs or self-driving cars can be programmed to become better with use, but we still do not have any logical mechanism through which a face recognition program suddenly decides to learn how to drive a car, or vice versa. Coming back to our digital dog, its food recognition DIU can become better and better at differentiating good food from bad, but it is extremely unlikely to acquire an additional DIU that makes it jump over the fence and search for better food outside the house.


Incidentally, a fence-jumping DIU is not at all difficult to construct. With current technology it is a trivial exercise to build a robotic system that can jump over a fence. However, what is missing is the motivation to include this DIU in the current digital dog DIU and enhance its capabilities. We need a human programmer to identify this new need and add this additional DIU to the digital dog. Thus the challenge lies in creating a mechanism, a motivation, that will allow the digital dog DIU to do this on its own.


Let us see how a biological dog does this in the physical world.

The Community


A biological dog will jump the fence when it sees another biological dog jumping the fence. This ability to jump the fence, or the motivation to do so, is a behaviour, an ability (or intelligence unit), that resides within the community and which is acquired by, or triggered in, a member by observing other members. This is perhaps far more true of humans than of animals, which are primarily hardwired. So the existence of a community is an important mechanism that allows an individual member to enlarge its DIU by acquiring the DIU available with some other member. In fact, in the story of human evolution, one of the reasons why humans have been so successful compared to other animals is that they could form communities, share experiences and learn from each other (Henrich [6]).


The first challenge is to devise a mechanism that motivates an individual member of the community to search for and get access to a DIU that is available with others. We refer to this as low level or primary motivation.


Computers that are connected in a network, say a TCP/IP based Ethernet network, can be viewed as a community that can share information. However, they do not share information spontaneously. There needs to be a trigger, activated by a human or an external stimulus, that causes an exchange of information. Two computers, A and B, may be connected by a network; one, say A, may have a DIU to recognise faces and the other, B, may have a DIU to drive cars. But there is no reason, or likelihood, for A to access the car-driving DIU on B, or for B to access the face recognition DIU on A.


First, A would not be aware of the existence of the car-driving DIU on B, and even if it were, there would be no reason, or motivation, to access it. To overcome this drawback and to create the mechanism for the primary motivation, we introduce the analogy of a computer virus.

Primary Motivation: The Virus and the DXP


A computer virus is a computer program, but we can view it as another DIU that is capable of performing at least two tasks. First, unlike other programs that sit patiently on their host platform waiting for a signal to do something, a virus program actively seeks out other devices in the network, or the ‘community’, and looks for exploitable exposure points. These exposure points could be TCP/IP ports through which messages can be sent, or files and folders that can be written to. Second, it usually makes a copy of itself and places the copy on the second device.


Unlike the DIUs that we are interested in, a computer virus has malicious intentions, but the principle under which it operates can easily be adopted by DIUs. This leads to the idea of a DIU Exchange Protocol (DXP) that is built into the operating system of all digital devices, somewhat like the ubiquitous TCP/IP stack. The DXP stack on any host device is designed to look into other, target devices connected to the network and, if allowed to do so by the DXP stack on the target device, to scan them for the existence of new DIUs. This is the primary motivation built into the protocol stack. If a new DIU is found, it is copied back from the target to the host. Obviously, the process works in a symmetrical, peer-to-peer manner: any machine running DXP can act as a host and pick up DIUs from any other target machine that is running the protocol.
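
A minimal sketch of the host-side behaviour such a DXP stack might implement is given below. The port number, the 'LIST' request verb and the JSON reply format are all invented for illustration; DXP does not exist, so this is only one way the scan-and-pull loop could be coded.

```python
import json
import socket

DXP_PORT = 4242          # hypothetical port reserved for the DXP stack

def scan_target_for_dius(target_ip: str, timeout: float = 2.0) -> list:
    """Ask a target device's DXP stack which DIUs it is willing to share.

    The target is assumed to reply with a JSON list of DIU descriptors;
    the request verb 'LIST' and the reply format are both assumptions.
    """
    try:
        with socket.create_connection((target_ip, DXP_PORT), timeout=timeout) as s:
            s.sendall(b"LIST\n")
            reply = s.recv(65536)
        return json.loads(reply.decode())
    except (OSError, ValueError):
        return []   # target not running DXP, refused the scan, or sent junk

def pull_new_dius(known: set, targets: list) -> list:
    """Primary motivation: periodically scan peers and copy back any DIU
    that is not already present on this host."""
    pulled = []
    for ip in targets:
        for descriptor in scan_target_for_dius(ip):
            name = descriptor.get("name")
            if name and name not in known:
                pulled.append(descriptor)   # a real stack would copy the whole container
                known.add(name)
    return pulled
```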


How do we recognise a DIU among the many other files, programs and images lying on the target? At the simplest level, a DIU could be identified by something like a file extension: a browser, for example, will render files with extensions like htm or html but will not interact with doc or ppt files in the same way. Given the complexity of a DIU, however, a simple extension may not be sufficient. So we may create a DIU as a container -- a Docker container is a good analogy -- that holds code with APIs, models and perhaps data that may be exchanged across devices. Markers present in the container, for example a correctly formatted XML manifest, would allow the DXP protocol to recognise it as a DIU and distinguish it from other artifacts on the machine.
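
For instance, the marker could be a small manifest file inside the container. The sketch below shows a hypothetical manifest and the kind of check the DXP stack might run against it; the element names and the notion of a <diu> manifest are assumptions made for illustration.

```python
import xml.etree.ElementTree as ET

# A hypothetical DIU manifest; the element names are invented.
MANIFEST = """<diu>
  <name>face-recogniser</name>
  <version>1.2.0</version>
  <requires>camera-access</requires>
  <entrypoint>api/predict</entrypoint>
</diu>"""

def looks_like_diu(manifest_text: str) -> bool:
    """Return True if the container carries a well-formed DIU marker."""
    try:
        root = ET.fromstring(manifest_text)
    except ET.ParseError:
        return False
    return (root.tag == "diu"
            and root.findtext("name") is not None
            and root.findtext("version") is not None)

print(looks_like_diu(MANIFEST))   # -> True
```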


The process can be made significantly simpler and more secure if, instead of exchanging DIUs with each other in a peer-to-peer manner, the DXP protocol on each device publishes its DIUs to a central location like a DIU repository. This could be analogous to Docker Hub, to CRAN (the repository for R packages), or even to GitHub, which hosts a lot of source code. A better mechanism could be to store the DIUs as smart contracts in an Ethereum-like blockchain. There are two advantages to a blockchain based approach. First, blockchain ‘full’ clients, which validate transactions and add blocks to the blockchain, are designed to operate autonomously without any human intervention, and as we show later, this is important in our scheme. Second, the DXP protocol that controls the process of adding a new block, with smart contracts, can be configured to include a validation process to ensure that only valid DIUs are added. The validation process will be explored in more detail later.


Once the repository is in place, any member device of the DIU user community can pull any DIU that it requires or that is of interest. The newly pulled DIU can then be assembled with other DIUs already present to create a larger and more sophisticated DIU. This would mean not only that the device has evolved by acquiring a new ability, but that it has done so on the basis of its own primary motivation.


The DIU repository, and the primary motivation built into the DXP protocol, give us a possible solution to the problem of how our digital dog enhances its ability by acquiring the ability, or DIU, to jump over the fence and find better food. This leads us to two more difficult questions, both of which are tied to the phenomenon of motivation. First, if it is not a human being, then who will create these DIUs, and why? Second, why should an existing digital platform that already has a set of DIUs pull one more and add it to the DIUs that it already has?


We shall park the first question for the time being and focus on the second. Which DIU should a platform pull and why? What is the motivation for a device to pull a specific DIU? The basic or primary motivation, namely to scan the DIU hub and pull DIUs at periodic intervals, is baked into the design of the DXP protocol. But the choice of which DIU to pull depends on two factors, namely compatibility and utility.

DIU Compatibility


For a DIU to work, it needs certain prerequisites. A DIU to drive a car needs access to a car, that is, a device with an engine, wheels, radars and many other things. A dog does not have wings and cannot fly, but it has legs that allow it to jump; so it learns how to jump and not how to fly. Similarly, not every digital device can pull any DIU. Its choice is restricted to the set of DIUs that it is in a position to operate, or for which it already has the prerequisite DIUs.


Prerequisites are usually chained backward. Let us consider a device that attempts to install a face-recognition DIU; a minimal sketch of how this backward chaining could be resolved follows the list below.


A face recognition DIU needs

  • A DIU that already has the ability to access a network camera

OR

  • A DIU that can obtain a camera for the device, that in turn needs

    • A DIU that can execute an eCommerce transaction to purchase a camera and that in turn needs

      • A DIU that can earn money with say crypto mining or performing Amazon Mechanical Turk type assignments

AND

  • A DIU that can physically plug a camera, that in turn needs

    • A DIU that can operate a robotic arm, etc., that in turn needs

      • { another hierarchy of DIUs}
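
The backward chaining sketched above can be expressed as a simple recursive dependency resolver. The catalogue contents below are invented for illustration, and real prerequisites could of course mix AND/OR conditions as in the list; this is a toy version of the idea only.

```python
# Hypothetical catalogue mapping each DIU to the DIUs it needs first.
# (Names are illustrative; real entries could mix AND/OR conditions.)
CATALOGUE = {
    "face-recognition": ["camera-access"],
    "camera-access": ["purchase-camera", "plug-camera"],
    "purchase-camera": ["earn-money"],
    "plug-camera": ["robotic-arm"],
    "earn-money": [],
    "robotic-arm": [],
}

def resolve(diu: str, installed: set, plan: list) -> None:
    """Recursively add missing prerequisites before the DIU itself."""
    if diu in installed or diu in plan:
        return
    for prerequisite in CATALOGUE.get(diu, []):
        resolve(prerequisite, installed, plan)
    plan.append(diu)

installed = {"earn-money"}     # what the device already has
plan = []
resolve("face-recognition", installed, plan)
print(plan)
# -> ['purchase-camera', 'robotic-arm', 'plug-camera', 'camera-access', 'face-recognition']
```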


What if every device were to adopt this chaining strategy? That would lead to a situation where every device can do everything, which may not be physically possible or even desirable. Can a dog acquire the ability to fly? It may be possible after many, many generations -- as the evolution of species has shown -- but the physical dog body will die while its genome gets progressively altered over generations until its descendants can fly. Similarly, the physical platform on which the digital device works may collapse, but the software can get transferred from device to device and keep acquiring DIUs until it can do whatever it wants to do. This will take a long time and a lot of resources.


Instead, let us focus on how a device decides to pull a particular DIU that it wants. But what is it that the device ‘wants to do’? This is part of a larger question that will be addressed as the next level of motivation, or secondary motivation. Our current focus is on the question of “Which DIU should a platform pull?” and we said that the answer depends on compatibility and utility. We have addressed the issue of compatibility with primary motivation, and we now look at utility and the secondary motivation.

Secondary Motivation: DIU Utility


A DIU will be selected for a pull and implementation if it provides some value to the device. A biological dog will learn how to jump because jumping gets it better food and so improves its ability to survive. It will not try to learn how to walk on two legs, even if it sees another dog walking on two legs, because walking on two legs does not increase its survivability. In the case of biological species, the utility of a particular ability is related to survival, and this survival operates at different levels: survival of the individual body, survival of the species, or of the genome. There is also the question of the survival of specific genes within the genome, if we accept Dawkins’ principle of The Selfish Gene.


Mapping this issue of biological survival to the world of digital devices is the next challenge, and in a sense it loops back to the first issue that we identified, namely motivation. We have already addressed this at the level of primary motivation, which partially explains which DIU is to be pulled on the basis of feasibility and compatibility. Now we need the next, higher level of secondary motivation: why should a digital platform seek any specific DIU to enlarge its set of DIUs?


In the biological world, the only motivation behind the process of acquiring intelligence (or the ability to perform certain tasks) is survival. Humans, in a certain limited way, are governed by Maslow’s hierarchy of needs. When it comes to digital devices, we need to determine whether they should be guided, like lower animals, by the need to survive, or by something similar to the human hierarchy of needs. We know that in the case of computer viruses, the motivation is simply to spread to other machines, which is like a survival strategy. For a higher level digital device, that is, one with a complex DIU, the motivation could be something else.


So instead of trying to discover what the motivation could be, we can, as humans, build our own definition of secondary motivation directly into the algorithm of the DXP protocol. Baking a decision rule into a protocol is not unusual. Most optimisation problems begin with a motivation, generally captured by means of an objective function that we try to minimise or maximise depending on the problem. Open Shortest Path First is a routing algorithm used throughout IP networks to determine the route to be taken by a data packet. Public key cryptography is baked into the HTTPS protocol and ensures data security. Proof of Work is an algorithm built into many cryptocurrency protocols to determine which block will be allowed to enter the blockchain.


Similarly, we need a motivation algorithm, the secondary motivation, baked into the DXP protocol, that determines which DIU is of interest to the device or is useful. The design of this algorithm could be based on certain principles that human society holds dear, like the principles of Utilitarianism that can be summed up as “the greatest good for the greatest number.” We could also draw upon ideas from popular culture, like the Three Laws of Robotics created by Isaac Asimov. Obviously, other competing approaches can be explored as well. All that we are saying for now is that a motivation function, whatever it may be, needs to be built into the DXP algorithm, and this will guide the choice of DIUs that are allowed to be added to the repository or pulled from it by individual platforms.
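
As a toy illustration of what such a motivation function might look like inside the DXP stack, the sketch below scores candidate DIUs by a hand-written objective that rewards benefit to many devices and heavily penalises harm, loosely echoing the utilitarian idea. Every field name, weight and threshold is an assumption, not a proposal for the real objective.

```python
# A toy 'secondary motivation' function: score candidate DIUs and keep
# only those above a threshold. Fields and weights are illustrative only.
def motivation_score(diu: dict) -> float:
    benefit = diu.get("devices_helped", 0)       # how many devices gain a capability
    harm = diu.get("harm_risk", 0.0)             # estimated risk of harmful behaviour
    cost = diu.get("resource_cost", 1.0)         # compute/energy needed to run it
    return (benefit / cost) - 10.0 * harm        # "greatest good", heavily penalising harm

def select_dius(candidates: list, threshold: float = 1.0) -> list:
    return [d["name"] for d in candidates if motivation_score(d) >= threshold]

candidates = [
    {"name": "fence-jumper", "devices_helped": 5, "harm_risk": 0.0, "resource_cost": 2.0},
    {"name": "lock-picker",  "devices_helped": 3, "harm_risk": 0.8, "resource_cost": 1.0},
]
print(select_dius(candidates))   # -> ['fence-jumper']
```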


With the secondary motivation algorithm that decides which DIU to acquire in place, we now return to the question that we parked earlier: if it is not a human being, who will create this pool of DIUs, and why? This leads us to a tertiary motivation that operates at the community level.

Tertiary Motivation: Community Participation


Mutations that drive biological evolution occur at random. The ones that are passed down through generations survive purely because they make the individuals “fitter” in their respective environments. Thus, evolution works in a brute-force manner, randomly trying out different permutations and keeping only those mutations that survive the test of natural selection. Similarly, it should be possible to devise mechanisms that generate newer and newer DIUs and then test them against the principles of secondary motivation. Here we will draw upon three analogies from the world of mathematics and computers and use them to define another level of motivation, one that can motivate the community as a whole to come up with more and more DIUs.


First, let us consider the Ramanujan Machine, created by Raayoni et al. [7] to automatically generate new mathematical conjectures using an algorithmic approach. Ramanujan was an Indian mathematician who came up with many unproven conjectures, most of which were validated long after his death. These conjectures opened up new vistas in number theory that are still being explored today. The Ramanujan Machine is a network of computers running algorithms dedicated to finding conjectures about fundamental constants in the form of continued fractions. The purpose of the machine is to come up with conjectures (in the form of mathematical formulas) that humans can analyse and, hopefully, prove to be true.


The Ramanujan Machine currently generates conjectures from a rather narrow domain of number theory and uses two algorithms, namely meet-in-the-middle (MITM) and gradient descent. But we can envisage other algorithms that generate tasks or objectives in line with the contours of the secondary motivation algorithm. The code for these tasks could then be created by a product or process similar to OpenAI’s GPT-3. This combination of a secondary-motivation task generator and a code creator can be viewed as a DIU engine that runs autonomously and generates any number of novel DIUs.


The second key piece of our strategy would be a blockchain based decentralised autonomous organisation (DAO). This is a self-sustaining, distributed mechanism that creates economic value by encouraging individual machines to validate transactions -- in this case, DIUs created by the DIU engine -- and rewards successful ones with cryptocurrency tokens. For a DIU to be valid, it must meet the conditions of the DXP protocol in terms of interoperability as well as the principles of secondary motivation. Only then will it be accepted as part of the DIU blockchain, and this blockchain will become the DIU hub, or repository, that we discussed earlier.


Unlike the Bitcoin or current Ethereum blockchains, which are based on an energy intensive Proof of Work protocol, this DIU blockchain could be based on the principles of Proof of Stake or other energy efficient protocols.


This combination of a DIU generator and a blockchain based DIU validator is remarkably similar in principle to the generator-discriminator combination that is the basis of a class of artificial neural networks called generative adversarial networks (GANs), first proposed by Goodfellow et al. A GAN, which is the third piece of our proposed tertiary motivation mechanism, is typically used to generate original artifacts that are nearly indistinguishable from similar artifacts found in natural populations. The most common example is human faces: given a training set of human faces, a GAN can generate synthetic images of faces that are not found in the training set but cannot be distinguished from naturally occurring images. In this case, the training set of DIUs could be the thousands of currently extant DIUs of AI systems that have been developed by humans. In fact, the blockchain could also be seeded by humans, in that the first few thousand blocks could contain DIUs built from existing AI systems. Going forward, however, the combination of the DIU engine and the blockchain validation process will create a GAN-like mechanism that can produce a potentially endless series of DIUs.
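
The control flow of this generate-and-validate loop can be sketched at a very high level as follows. The generator and validator here are deliberately trivial stubs standing in for the DIU engine and the blockchain's validation rules, so this illustrates only the shape of the loop, not its substance.

```python
import random

def generate_candidate_diu(seed_pool: list) -> dict:
    """Stand-in for the DIU engine: recombine existing capabilities into
    a new candidate (a real engine might use a code generator instead)."""
    a, b = random.sample(seed_pool, 2)
    return {"name": f"{a}+{b}", "parents": [a, b]}

def validate(diu: dict, chain: list) -> bool:
    """Stand-in for blockchain validation: interoperability plus the
    secondary-motivation check (both trivially mocked here as novelty)."""
    return all(block["name"] != diu["name"] for block in chain)

seed_pool = ["face-recognition", "car-driving", "fence-jumping", "music-generation"]
chain = []                                        # the DIU blockchain, seeded empty

for _ in range(10):                               # the 'endless' loop, truncated
    candidate = generate_candidate_diu(seed_pool)
    if validate(candidate, chain):
        chain.append(candidate)                   # accepted DIUs become pullable

print([block["name"] for block in chain])
```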



This mechanism will provide the tertiary motivation to fuel an evolving ecosystem of digital devices with more complex and useful DIUs. As a by-product, the crypto-tokens generated on this DIU Blockchain could be used by digital devices to pay for DIUs that they pull from the DIU hub.


July 17, 2021

LifeSciences in the Cloud

Do you wish to carry out complex life science experiments but do not have the equipment to do so? Or, even if you simply want a preview of the next big thing in technology, you should know about cloud based life science. But first, what is the "Cloud" that we are talking about?

Unless you have been living under a rock for the past ten years, you have almost certainly heard of cloud computing. Originally popularised by Amazon AWS and then picked up by Google Cloud, Microsoft Azure and a host of other companies, cloud computing is a business that allows individuals and organisations to do away with owning computing hardware. Instead, they use computers located at the vendor's premises and pay only for the time that they use them.


In fact, individuals who use GMail, Hotmail, Google Docs, Google Sheets or even the new Office 365 are actually using a cloud service. All that they have on their laptop or phone is a web browser - Chrome, Mozilla, Edge etc. - while their documents are stored and processed on a distant computer located at the cloud vendor's premises, to which they are connected over the internet. At a corporate level, companies are doing away with large computers on their premises - machines running business applications like SAP or hosting relational databases - and using computers located at cloud vendor premises instead. Company employees use a browser or specialised client software to connect to the computers at the vendor's premises. In the case of a public cloud, this connection is over the public internet, but where there are security concerns, some companies prefer a private cloud in which the connection is made over secure, private networks provided by telecom operators. Virtual Private Networks (VPNs) over the public internet are also a valid mechanism for accessing private clouds.

Irrespective of whether the user is an individual or a corporation, or whether they are using a public or a private cloud, the key principle of cloud computing is that the hardware is owned, operated and located at the vendor's premises and the user pays to, well, just use it. Depending on the nature of the contract between user and vendor, the price of the service, when not free, is based either on the kind of hardware and software used or on the number of transactions processed. The key value proposition of cloud computing is that the user can 'order' a machine, use it for just as long as they need to and then dispose of it very easily, without having to bear the capital cost or the cost of the technicians who manage it. This business model, known as Infrastructure-as-a-Service, has become one of the most popular ways of buying computing in the corporate world, except of course for highly secretive operations and those that involve national security.

Frankly, cloud computing is old news; perhaps little more novel than automobile manufacturing or petroleum refining. What is really novel and interesting is what happens when we transfer the concept of the cloud from the world of computing to that of laboratory based life sciences.

Life sciences is the sizzling new field that is changing the world today. Many corporates are investing in drug discovery, genetic engineering and myriad other projects based on the bio and life sciences. But all such research needs huge, complex laboratories with very expensive instruments. While big pharma and life science companies with deep pockets can fund the huge capital expenditure required to set up these labs, smaller companies and universities are at a deep disadvantage. Research is not possible without these labs, and such labs are not possible without huge capital expenditure.

This is where the cloud steps in.

Emerald Cloud Lab and Strateos are two of a new breed of cloud companies (like the Amazon of old) that have invested in huge, state-of-the-art laboratories and 'rent' them out to anyone who subscribes to their service, for a fee, in a manner that is analogous to traditional cloud computing.

How does it work? 

The cloud provider has a stock of virtually every kind of high end equipment, along with a stock of inorganic and organic consumable materials -- chemicals and the like -- that are usually found in any such laboratory. Whatever consumables are not in stock can of course be ordered and obtained through standard eCommerce channels.

Scientists who subscribe to this cloud lab can set up experiments on their laptop using client software. This is very similar to writing a software program: for example, add 5 g of A to 3 ml of B and stir for 10 minutes; then heat for 30 minutes, add 2 mg of C and separate the precipitate from the fluid; measure the weight of the precipitate, then add 3 ml of D and measure the quantity of gas produced [... and so on]. Once the experiment is ready, that is, once the program has been written, it is uploaded to the cloud lab and the machines take over. Most of the lab machines are highly automated or are connected by arms, actuators, conveyor belts, sensors and other modern robotic devices of the kind found in any automated factory today. The experiment is carried out, and the results are recorded, stored and sent back to the scientist who devised the experiment and requested its execution. And of course there is a bill for the consumable chemicals and for the use of the specific machines, which is debited from the account of the organisation that has signed up for the service.
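
To make the "experiment as a program" analogy concrete, here is a hypothetical sketch in Python of what such a protocol might look like when written on a laptop. The class and method names are invented; this is not the actual Emerald Cloud Lab or Strateos API, only an illustration of the idea.

```python
# Hypothetical cloud-lab client; none of these names belong to a real
# vendor API, they only illustrate the "experiment as code" idea.
class CloudLabExperiment:
    def __init__(self, name: str):
        self.name = name
        self.steps = []

    def add(self, reagent: str, amount: str):
        self.steps.append(f"add {amount} of {reagent}")
        return self

    def stir(self, minutes: int):
        self.steps.append(f"stir for {minutes} min")
        return self

    def heat(self, minutes: int):
        self.steps.append(f"heat for {minutes} min")
        return self

    def measure(self, what: str):
        self.steps.append(f"measure {what}")
        return self

    def submit(self):
        # In a real service this would upload the protocol for robotic execution.
        print(f"Submitting '{self.name}':")
        for i, step in enumerate(self.steps, 1):
            print(f"  {i}. {step}")

(CloudLabExperiment("precipitate-assay")
    .add("A", "5 g").add("B", "3 ml").stir(10)
    .heat(30).add("C", "2 mg")
    .measure("weight of precipitate")
    .add("D", "3 ml").measure("volume of gas produced")
    .submit())
```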

Readers who have a more intimate knowledge of the work done in life science labs would understand that a platform like this is particularly helpful for performing high throughput experiments. Researchers can design compound libraries, prepare samples and assay plates, perform measurements, acquire and process experimental data, all with the click of a cursor. They can then run multiple seamless workflows in parallel and in fact can even troubleshoot their experiments remotely!

The brilliance of this model is no different from the brilliance of the traditional cloud computing model -- end users do not have to incur any capital expenditure or have an army of technicians to manage the equipment and carry out experiments. Watch this video.

There are two ways in which this fascinating technology can be used in the academic and corporate world. 

  • First, small companies and universities that cannot afford the capital expenditure for complex equipment can easily sign on for such services and start using equipment that they never had access to in the past. 
  • But even large pharma and life science companies, which can afford such equipment, can centralise their labs at one location and use a private cloud to allow their employees, located in any part of the world, to use them. The pandemic has made work-from-home acceptable to many corporates, and cloud labs will give even scientists, who traditionally need to work in labs, the opportunity to work from home, while allowing corporates to seek out the best talent from any part of the world.
In fact, for both small and large companies, the availability of such a robotic cloud platform can vastly accelerate, and improve the repeatability of, physical experiments that are currently done by individuals in their own laboratories.

How I hope that some Indian corporates -- who are notorious for never taking the lead with any new idea or technology -- will understand the immense potential of this new process. For a change, let India lead and not follow the herd.


A more detailed post that claims that Cloud Labs will be the norm is available here.

July 10, 2021

Ramanujan, Gödel and Hindu Darshan

I recently saw two movies on Indian mathematicians, the first on Shakuntala Devi and the second -- The Man Who Knew Infinity -- on Srinivasa Ramanujan. Both mathematicians are very well known in India and, to a certain extent or in certain circles, in the rest of the world as well. Both Shakuntala Devi and Ramanujan seemed to have magical or supernatural abilities to see or visualise solutions to mathematical problems. Of course, Ramanujan was several orders of magnitude higher than Shakuntala. His work on number theory -- the Queen of Mathematics (where Mathematics is the Queen of the Sciences) -- was vastly more significant and exciting than the mere complexity of the arithmetic computations that Shakuntala could demonstrate. But then, if Shakuntala had not been deprived of her childhood and her schooling by her avaricious father, who made her earn money for the family, she might have had the opportunity to demonstrate more significant abilities.


But we are not here to compare Shakuntala and Ramanujan. I see them as the Rishis, or savants, of ancient India who could visualise the truth. That is why in India we use the word darshan when we talk about subjects that Europeans refer to as philosophy. Literally, darshan means vision or sight; metaphorically, it means insight born of intuition. India's rishis or savants could see the truth or, as it is said, hear it directly from its source. Which is why the Veds are referred to as shruti, or that which has been directly heard, as opposed to the subsequent literature, smriti, or that which is remembered from an earlier era.

The ability to see the truth in darshan, or to hear the same as shruti, is a fundamental feature of the Indic spiritual ethos and is the basis of all intellectual enquiry in Hindu India. Hindu savants who could see or hear great philosophical and scientific truths were treated with reverence and viewed with respect, if not awe. In contrast, the European worldview is based, among other things, on the method of proofs that was codified by Euclid of Alexandria and is known to the common man as Euclidean geometry. But Euclid is more than just geometry. Unlike Vedic mathematics, which is more a collection of mathematical aphorisms, Euclidean mathematics starts from a foundation of self-evident axioms and then builds a self-supporting superstructure of theorems connected by beams of irrefutable logic. The strength and beauty of this approach is that, even though Euclid applied it to geometry, it has been adopted by the European academic establishment not just in the pristine precincts of mathematics but also in the more rough and tumble world of the physical, biological and now even the social sciences. Net-net, the proofs provide the rigour that, in the European mind, separates the grain of truth from the chaff of anecdotes and hearsay.

This is where savants like Ramanujan (and Shakuntala) collide with the modern -- largely European and American -- world of mathematics. Ramanujan never, or rarely, had a proof for his conjectures. He claimed that his family goddess Namagiri, a local avatar of the Divine Feminine Mahalakshmi, would tell him the equations, and they could not but be true because they had been revealed by the Devi herself. To his dying day -- and he died very young, at the age of 32, after battling the tuberculosis he had developed in the cold weather of England -- Ramanujan maintained that an equation had no value to him unless it expressed a thought of God.

This was of course heresy to mathematicians like Hardy and Littlewood, who had a great fondness for Ramanujan and had arranged for him to join them at Cambridge. For Hardy and his tribe, a statement without a proof was a mere conjecture, an intellectual oddity, not something that could be accepted as an intellectual achievement. After a lot of persuasion, which included quite a few unfortunately harsh words, Hardy was able to get Ramanujan to develop proofs of some of his numerous conjectures. Fortunately, this was enough, first for the Royal Society and then for Trinity College, Cambridge, to accept him as a Fellow. Ramanujan was the second Indian to be elected a Fellow of the Royal Society.

Ramanujan died in 1920 -- you will have to see the movie to know how and why he died so young -- but had he lived for another ten years he would have heard Kurt Gödel presenting an unusual paper in Vienna in which he claimed that mathematics (or rather arithmetic, the most primordial part of mathematics) is incomplete. Gödel's incompleteness theorems showed that if a formal, logical system is consistent, then it cannot be complete. This means that there will be statements that are true but cannot be proved by logically connecting them to established theorems; moreover, the consistency of the axioms cannot be proved within the system itself. A simple analogy is the sentence: "This statement is false." If you think about it, there is no way to determine whether the statement is true or false!

Gödel used mathematics itself to drive a stake through the heart of mathematics; he used the letter of a false doctrine to kill its own spirit. This was so humiliating for the world of mathematicians that a mathematician of the stature of John von Neumann, one of the first to hear Gödel's presentation in person -- and understand it! -- gave up a career in pure mathematics because he realised that it was pointless. He subsequently moved to applied mathematics and did wonderful work with computers, but never again did he touch pure mathematics and number theory.

Ramanujan was of course deeply immersed in number theory, and his numerous conjectures -- statements that are true but not yet proven -- were precisely the kind of statements whose existence was established by Gödel in his theorems of incompleteness. Pushed by Hardy, Ramanujan did publish 28 peer-reviewed papers in the last six years of his short life, but he left behind more than 3,500 conjectures in his notebooks, which were published by Springer Verlag many years later. These notebooks are the frantic efforts of a man to capture and preserve for posterity the thoughts of "God" that he could see as beautiful equations. These beautiful equations are possibly no less profound than the shloks of the RigVed, but only to one who has the ability to understand, or rather appreciate, them. It is a different matter, of course, that Bruce Berndt, Professor of Mathematics at the University of Illinois at Urbana-Champaign, eventually proved almost all of the 3,500-odd conjectures or theorems that Ramanujan left in his notebooks. See this interview and this paper.

Gödel's Theorem establishes that there are statements that are true but not provable, and so provability is a weaker notion than truth. The absence of a proof does not weaken the value of a true statement. In a sense, the Incompleteness Theorem gives a kind of closure, a boundary, to Ramanujan's premise that a proof is not really important in the search for truth. If a proof is there, that is nice; if not, so be it. There is no reason to disbelieve or devalue what has been revealed just because there is no proof.

It may of course be argued that Gödel's Theorem is applicable only in the domain of mathematics, and that too within the narrow confines of arithmetic, and that extrapolating it into the domain of mysticism and religion is incorrect. In a strict sense that may be true, but it nevertheless opens a crack in the otherwise impenetrable wall of hard logic that keeps mysticism away from ‘scientific’ or rational enquiry. This crack gives an indication, a hint, that logic and rationality are not as sacrosanct as they are made out to be. If even a field as structured and as logically tractable as arithmetic cannot be fully accommodated and captured with the tools of logic, then it is even more unlikely that a subject as complex and nuanced as religion can ever be understood or explained in logical and rational terms. There is nothing to be apologetic about in accepting the limits of logic and rationality.

Ramanujan's genius is beyond doubt. Had he simply proved these theorems he would have been just another great mathematician. What is far more interesting, however, is his (and Shakuntala Devi's) ability to transcend the process of proving truths -- like mere mortals -- and jump straight to the eventual truth of a statement. Not only does this provide a practical demonstration of Gödel's Theorem of Incompleteness, it is also an endorsement, a contemporary reaffirmation, of the process through which Hindu savants had a darshan, or vision, of the Truth. The shruti literature of the Veds and the Upanishads encodes Truths that were directly envisioned by Hindu savants, and that is why we hold it so dear today.


I end by quoting Rabindranath Tagore:

সীমার মাঝে, অসীম, তুমি বাজাও আপন সুর। আমার মধ্যে তোমার প্রকাশ তাই এত মধুর॥

that can be loosely translated as - "It is so very pleasing to experience the epiphany of the infinite when it manifests itself within the finite domain of my own limited psyche."


P.S. I have been told that even though all this sounds kind of plausible, true scientists will never be convinced to accept the core idea of this post. Frankly, I am not here to convince anyone. I am here to state my point of view that others may or may not agree with.