November 29, 2020

Building CRUD Applications on the Blockchain - Part II

Once the basic architecture of the blockchain ecosystem -- the big picture explained in Part I of this article -- is understood, we can move on to the nuts and bolts of actually building a software application. To do so, we need the following four pieces:

  1. An Ether wallet - to store Ether and pay it out when necessary
  2. A Solidity development environment or IDE to build contracts
  3. A connection service provider that allows client software to connect to the Ethereum network
  4. A platform or IDE to build a client software that can execute transactions against Solidity contracts residing on the blockchain

1. The Ether Wallet - MetaMask

An ether wallet is a Solidity contract that resides on the blockchain, with an address (think bank account number) and a private-key (think specimen signature in a bank account). What is commonly understood as an ether wallet is actually a client application that can execute payment transactions from the Solidity wallet-contract, so we will refer to this client as a wallet-client.

A wallet-client can be installed on your local laptop or be hosted with a crypto-exchange like Coinbase, and is protected with a password that is different from the private-key of the actual wallet-contract. For our purpose we will use the MetaMask wallet-client, which sits as an extension in the Chrome browser. A Mozilla Firefox extension is also available but I have not tried it. Android / iPhone versions are irrelevant for our exercise.

After installing the extension, select the network that you want to connect to, which could be the MainNet or, in our case, a TestNet called Rinkeby. MetaMask also allows connections to other public TestNets, or to a local blockchain that you might install on your own machine. This last option may be used by experienced users, but my advice is to stay away from such adventurism until you get your basics right.

Once connected to the network, MetaMask will allow you to create a number of wallet-contracts. You need to create at least one wallet-contract and note down its address and private-key. The next step is to fund the wallet-contract with Ether. If you were using the MainNet, you would have to purchase Ether at a crypto-currency exchange like Coinbase and have it sent to the address of your wallet-contract. In the case of the Rinkeby network, you can get free Ether -- which of course has no value on the real MainNet -- by visiting the Rinkeby faucet and following the instructions. The simplest way is to send out a tweet with your wallet-contract address, paste the URL of the tweet into their form and request the Ether. After a minute or two -- remember, nothing happens instantaneously on the blockchain because of synchronization requirements -- you will suddenly find that your wallet-contract has Ether that you can use.

2. The Remix Ethereum IDE

There are many Ethereum IDEs available, but you can start with Remix, a hosted IDE that runs in a browser. Contracts developed in the Remix IDE can be deployed on the MainNet or on any of the TestNets supported by the MetaMask wallet-client installed in the previous step, after paying the required fees -- in Ether -- from the wallet-contracts defined in that wallet-client. But since deploying and testing contracts on the network is a little slow, Remix also gives you a local Ethereum node -- complete with wallets etc. -- where you can rapidly test the Solidity contracts that you are developing. Remix also provides a small, generic client with which you can call (or invoke) functions (or methods), pass parameters and see the results returned.

The code that you write in Remix can be saved either on your local disk or directly as a Gist on GitHub!

A CRUD application

The application that we will build is inspired by this post from Rob Hitchens, who has shared not only the architecture of his application but also his entire code. The basic idea of our application, which differs from Rob's, is as follows:
  • We have an employee database consisting of EmpID, EmpName and EmpSalary, stored in a simple array called database9 that is indexed by the EmpID. The EmpID is the primary key, and is stored separately in another index array called db9index.
  • Six functions are defined, namely:
    • insertEmp - create new record in the database
    • updateEmpSalary - update the salary field
    • deleteEmp - delete a record
    • getEmpCount - get total number of records
    • getEmpData - get record of employee specified by EmpID
    • isEmp - to check whether an employee ID exists or not
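Before opening the IDE, it helps to see the bookkeeping these six functions perform. Here is a minimal sketch of the same mapping-plus-index-array pattern in plain Python (the names database9 and db9index mirror the article, but this is an illustration, not the actual Solidity code):

```python
# Illustrative Python sketch of the mapping + index-array CRUD pattern
# used by the Solidity contract. Not the contract code itself.

class EmpDatabase:
    def __init__(self):
        self.database9 = {}   # EmpID -> {"name", "salary", "pointer"}
        self.db9index = []    # dense list of EmpIDs (the index array)

    def is_emp(self, emp_id):
        return emp_id in self.database9

    def insert_emp(self, emp_id, name, salary):
        if self.is_emp(emp_id):
            raise ValueError("EmpID already exists")
        # store the record together with its slot in the index array
        self.database9[emp_id] = {"name": name, "salary": salary,
                                  "pointer": len(self.db9index)}
        self.db9index.append(emp_id)

    def update_emp_salary(self, emp_id, salary):
        if not self.is_emp(emp_id):
            raise ValueError("no such EmpID")
        self.database9[emp_id]["salary"] = salary

    def delete_emp(self, emp_id):
        if not self.is_emp(emp_id):
            raise ValueError("no such EmpID")
        # swap-and-pop: move the last indexed EmpID into the deleted slot
        # so the index array stays dense (Solidity cannot cheaply shrink
        # an array from the middle)
        slot = self.database9[emp_id]["pointer"]
        last_id = self.db9index[-1]
        self.db9index[slot] = last_id
        self.database9[last_id]["pointer"] = slot
        self.db9index.pop()
        del self.database9[emp_id]

    def get_emp_count(self):
        return len(self.db9index)

    def get_emp_data(self, emp_id):
        rec = self.database9[emp_id]
        return rec["name"], rec["salary"]
```

The swap-and-pop trick in delete_emp is what keeps the index array dense, which is exactly why the Solidity version stores a pointer alongside each record.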
Open the Remix IDE, delete any sample applications and start coding. To get started, create a new .sol file called pmCrudCon.sol and simply copy and paste into it the code available here. You are free to modify this application and save it in your own Gist, but for the time being let us deploy and execute the sample code.

The IDE has three important tabs on the left
  • The coding tab where you write or edit your Solidity program
  • The compile tab where you choose your compiler and compile your code. On successful compilation, go to the bottom and look for a field called ABI. Copy the contents of this field and store it separately; we will need it later for the client application. You can of course come back and copy it later as well.
  • The deployment tab that we will study in some detail
In the deployment tab
  • The first key field is Environment, where you have three options. You can deploy the contract in the JavaScript VM in your local browser, or you can choose the Injected Web3 option that will connect to the TestNet that your MetaMask wallet-client is already connected to! Since both the Remix IDE and the MetaMask wallet-client are loaded in the same browser, the connection is delightfully easy. The third option, Web3 Provider, is more complicated and can be safely ignored for the time being.
  • The Gas limit and the wei value can be left at their defaults.
  • Assuming that the compile is successful, you can now press the Deploy button. Now a couple of things will happen
    • Since deployment costs Ether, MetaMask will seek your confirmation to debit the required amount of Ether (do not panic, it is a minuscule fraction of an Ether) and send the transaction to the network.
    • Once you confirm, the deployment transaction will be submitted and may take a few seconds to complete. You will get a confirmation from MetaMask that your transaction has been confirmed.

    • If you scroll down you will see details of the contract (in this case PMCRUDCON) that has been deployed. Copy the contract address -- that is more important than the name.
    • There is a button to execute each of the functions, along with a way to pass arguments to them. Go ahead and insert an employee with name, salary and emp id as 'Narendra Modi', 10000, 100 or something like that.
    • Note that the blue boxes represent read-only functions that do not require Ether payment. The brown boxes represent state-changing transactions; for each of these, MetaMask will ask you to confirm a payment and only then will the transaction be submitted to the network.

3. The Connection Service

When MetaMask or Remix connects to the network -- MainNet or TestNets -- they know the URL to use, but how will the client that you and I write connect? To do so, head over to the service, create a login for yourself and then an Ethereum project. Navigate to the dashboard settings, choose the network that you wish to connect to and get the URL of the corresponding endpoint, which will look like this:
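Under the hood, such an endpoint speaks JSON-RPC over HTTPS, and every web3 library is ultimately building requests of this shape. A small sketch (the endpoint URL below is a placeholder, not a real project key):

```python
import json

# Placeholder endpoint -- substitute the URL from your own project dashboard
endpoint = "https://rinkeby.example/v3/YOUR-PROJECT-ID"

# A minimal JSON-RPC 2.0 request asking the node for the latest block
# number. Libraries like web3.py build and send payloads like this for you.
payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1,
}
body = json.dumps(payload)
# POSTing `body` to `endpoint` would return something like
# {"jsonrpc": "2.0", "id": 1, "result": "0x..."}
```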

4. The Python Client

The contract-client that will allow us to execute transactions built into a deployed Solidity contract can be built with any language that has a web3 library -- a set of functions that allow us to interact with the Ethereum network. Both Javascript and Python libraries are available, but because I am more comfortable with Python we will use Python in this example. Any Python IDE -- like Anaconda with Spyder, or Jupyter -- is fine, but we prefer to use Google Colab, and the corresponding notebook is available here. Like Remix, Colab is another browser-based IDE, so again there is nothing to install. If you are not familiar with Google Colab, I suggest that you learn it by reading the tutorials.

The notebook is pretty straightforward and goes through the following steps
  • Install the web3 client with pip. Note that you need to restart the VM after the installation
  • Define the contract address, the wallet address and the wallet private key 
  • Unhide the cell containing the ABI variable and paste your own ABI from Remix there
  • Define the contract in terms of its address, and ABI value
  • Execute the read-only functions and see the results. Data that you have entered in the Remix client should be visible here. The only way you can get an error is if you have not transferred the following variables correctly:
    • Endpoint URL from
    • Contract Address & ABI value from Remix
    • Wallet Address and Wallet Private Key from MetaMask
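It helps to know that the ABI you paste in is nothing magical: it is just a JSON description of the contract's functions. A hand-written fragment for illustration (not the full pmCrudCon ABI):

```python
import json

# A hand-written ABI fragment for illustration only; the real ABI copied
# from Remix lists every function of pmCrudCon in this same format.
abi_json = """
[
  {"name": "getEmpCount", "type": "function",
   "stateMutability": "view", "inputs": [],
   "outputs": [{"name": "count", "type": "uint256"}]},
  {"name": "insertEmp", "type": "function",
   "stateMutability": "nonpayable",
   "inputs": [{"name": "empId", "type": "uint256"},
              {"name": "empName", "type": "string"},
              {"name": "empSalary", "type": "uint256"}],
   "outputs": []}
]
"""
abi = json.loads(abi_json)

# read-only (view) functions can be called without paying Ether;
# nonpayable ones change state and need a signed, paid transaction
view_fns = [f["name"] for f in abi if f.get("stateMutability") == "view"]
```

This is how the client library knows which arguments each function takes and whether calling it costs Ether.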
For the state-altering (insert, update, delete) transactions, the process is a little more complex because we have to send the function call along with a way to make an Ether payment. Look at the pyinsertEmp function that acts as a wrapper to the Solidity insertEmp function. Before calling the Solidity function we need to specify the maximum quantity of gas that we will expend on executing it. Gas comes at a price, and quantity x price gives us the total that we will spend. To spend, we need to specify the address of the wallet-contract as well as its private-key. We also need to specify the chainID (for Rinkeby this is 4).
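The gas arithmetic itself is simple unit conversion: 1 Ether = 10^18 wei and 1 gwei = 10^9 wei. A quick sketch with assumed, purely illustrative figures (not measured from this contract):

```python
# Illustrative gas arithmetic: quantity x price = fee.
# 1 ether = 10**18 wei, 1 gwei = 10**9 wei.
WEI_PER_ETHER = 10**18
WEI_PER_GWEI = 10**9

gas_limit = 100_000     # max gas we are willing to spend (assumed figure)
gas_price_gwei = 20     # price per unit of gas (assumed figure)

max_fee_wei = gas_limit * gas_price_gwei * WEI_PER_GWEI
max_fee_ether = max_fee_wei / WEI_PER_ETHER
# at most 0.002 ether here; unused gas is refunded to the wallet
```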

Now we call the function, or rather submit the transaction. If all variables have been set correctly and the gas quantity is sufficient, the transaction will be enqueued for the network, but it will take time to execute. Hence the Python code must keep waiting until a non-null transaction receipt comes back or there is a timeout. Typically this takes about 20-30 seconds on the testnet. Both the transaction receipt and a broadcast log (if set) can be read to understand whether the transaction has succeeded or failed.
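That wait-for-receipt step is just a polling loop. A sketch with a stand-in get_receipt function (in the real notebook, web3's receipt-waiting helper does this for you; the stub node below is purely for illustration):

```python
import time

def wait_for_receipt(get_receipt, tx_hash, timeout=120, poll_interval=0.01):
    """Poll until a non-null receipt arrives or we time out.
    `get_receipt` stands in for the node query a web3 library performs."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        receipt = get_receipt(tx_hash)
        if receipt is not None:
            return receipt
        time.sleep(poll_interval)
    raise TimeoutError("transaction not mined within timeout")

# Stub node: pretends the transaction is mined on the third query,
# mimicking the 20-30 second confirmation delay on a testnet.
calls = {"n": 0}
def fake_get_receipt(tx_hash):
    calls["n"] += 1
    return {"status": 1, "transactionHash": tx_hash} if calls["n"] >= 3 else None
```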

This application has demonstrated how to write a basic application that allows us to create, read, update and delete data records on the blockchain. This is primarily what any commercial application does, and we have shown how to do this with a Solidity contract and a Python client.
The next step would be to write contracts that will accept payment in Ether and deliver a digital artifact or service to the initiator of a transaction. To see how that is done look at
  • This Solidity code, which allows you to deposit and withdraw Ether into a contract and also send it to a third party
  • This Colab Notebook that shows the Python Client

November 28, 2020

Building CRUD Applications on the Blockchain - Part I

With Bitcoin and cryptofinance in the news, blockchain programming has suddenly become very important, but most people - including many programmers - do not have a clear idea of how to write a program on the blockchain. This article is not about theoretical concepts like Merkle Trees and SHA256 hashes. It is about how to write an application that creates, reads, updates and deletes a data record from a 'database' on the blockchain. A CRUD application. If you are impressed by PowerPoint slides in Zoom webinars, you may skip this article, but if you are a programmer -- professional or hobbyist -- do read on. You will end up writing your first blockchain application without too much sweat.

I first came across Bitcoin as currency in the virtual world of Second Life and wrote about "Bitcoins - my first look at a new currency" in 2013, and then struggled to wrap my head around this fantastic concept for almost a year until I reached a point where I could write "Bitcoins - as I understood it." While Bitcoin itself was a fascinating concept - an esoteric cocktail of mathematics, programming and economic concepts worthy of a Nobel Prize in Economics - the underlying technology of blockchain was even more fascinating. The incredible potential of this technology became evident with the advent of the Ethereum Virtual Machine, and intrigued me enough to explore the concept of the CryptoCorporation, a strange but successful automatic, manager-less organisation that could create value out of thin air. Finally, in 2016, I was able to build my own cryptocoin using smart contracts running on the Ethereum blockchain.

Back in 2016, writing blockchain programs was not for the faint-hearted, because there were two big stumbling blocks.

  • First, one had to install a whole suite of very complex software -- including wallets -- on the laptop and 
  • Second, one had to buy Ether, with credit cards, and use it each time you wanted to run, or even test a program that you had written.
This was frustrating, to say the least. First, if you had to replace or change your computer, there was a great likelihood of losing not just your software stack but also your Ether wallet, unless you had the foresight and the ability (which very few had) to keep proper backups. The other huge challenge was the sudden decision of the RBI / Government of India not to ban cryptocurrency, but to bar banks from allowing people to purchase cryptocurrency through normal, legitimate banking channels like credit cards. Hence, while the whole world, including China, raced ahead with blockchain technology, programmers in India were left with no option but to watch from the sidelines. Fortunately, the Supreme Court lifted this ban in March 2020, and blockchain programmers can start work again after nearly two years of sitting it out in the wilderness.


But blockchain programming is so very different from 'normal' programming that most programmers have difficulty wrapping their heads around the concept. Forget about smart contracts; even basic stuff like writing, reading and updating a piece of data becomes esoteric because, apparently, there is no one machine where the program resides or executes. It took me quite a while to understand this, and once I did, the elegance and beauty of it all took my breath away. Which is why I thought of writing it down, not just for my own clarity but for other programmers who could possibly be in the same boat. While I have tried to keep this post as simple as possible, please note that reading beyond this point assumes that you have some programming experience that goes beyond Excel. An exhaustive manual would be far too long, but if you are like me, a hobbyist with a penchant for coding, sit back and enjoy the ride.

This tutorial is based on two important premises:
  1. Only the bare minimum software will be installed on the local laptop, we will use hosted software as much as possible.
  2. We will not spend any real money -- as in using credit cards.
Without any further ado, let us start with 

The Big Picture / Architecture

A theoretical and mathematical description of the blockchain is given in this article, but for all practical purposes it consists of a network of computers (the MainNet) that run the full Ethereum node software. All full nodes contain identical copies of two artifacts: (a) a node software application and (b) a huge database file that is called the blockchain. Both the node application and the blockchain are kept in continuous sync by a constant exchange of messages between nodes, all of which have a peer-to-peer relationship with each other. In addition to the MainNet, there are other private and public chains called TestNets, with names like Ropsten, Rinkeby, Goerli etc.

In the traditional three-tier client-server architecture we have three components.
  • Client software written in, say Python, Javascript, VisualBasic, or in many cases running inside an Internet Browser like Chrome, Mozilla or Edge. This provides the user interface.
  • Application software, written in say, PHP, Ruby or Python, that resides within an application server or web server like Apache. This provides the business logic.
  • A persistent data store like MySQL. Obviously the application server (Apache) and the database server (MySQL) may or may not reside in the same physical machine.
Developers build applications by writing both client-side programs (in Javascript, Python) that are hosted on the client machine and server-side programs (in PHP, Python etc) that are hosted on the web-server. There is also the case of client-side software being stored on the server and downloaded and used inside a browser, but we shall park that aside for the time being. The database engine, MySQL (or equivalent), is not written but purchased or downloaded and installed on an appropriate machine. If you are a programmer, all this will be very easy for you to understand.

In the blockchain architecture we have a two tier setup
  • Client application written in Javascript, Python (there could be others, but I have not used other languages) or running in an Internet Browser. As in traditional client-server architecture, the client software provides the user interface which may or may not be GUI.
  • 'Contracts' -- a term specific to the blockchain -- written in a language called Solidity -- again a language specific to blockchains -- that hold BOTH business logic AND persistent data.
Since contracts are the new kids in the block, let us look at them a little carefully
  • Think of a contract as an instance of an object from the world of Object Oriented programming. This means that the contract has both logic (as in methods, or functions) and data. But unlike OO objects, which can have multiple instances, there is only one instance of a contract. Like OO objects, contracts can inherit properties from other contracts.
  • The state of a contract -- as defined by the data that it holds -- can be changed through a transaction. A transaction can be initiated from a client application either by a human being or by some automated process.
  • A contract is deployed into and resides in the blockchain. Hence it is available on every node of the network, and all copies are in the same state. Each contract is identified by its own unique address, generated when the contract is deployed on the blockchain through any node.
    • A wallet is a special kind of contract that holds a defined quantity of Ether. A transaction can deposit any amount of Ether into a wallet contract, but to withdraw Ether from a wallet contract one must provide the password (aka 'the private key') of this specific wallet. So a wallet is a contract defined by its address (similar to a bank account number) and its private key (similar to the specimen signature in a bank account).
  • When a client application connects to any node (through a standard URL) it requests, or calls, a contract, which is then loaded from the blockchain into the Ethereum Virtual Machine running on the same node. On the EVM, the contract performs the task -- in the process, it may or may not change its state -- and is then put back into the blockchain in its new state.
    • It is actually a misnomer to say that the state of the contract is changed. The blockchain is immutable; it can be added to, but earlier data cannot be changed. When we say that the state of the contract is changed, what we really mean is that the original contract, along with all the transactions that were supposed to change it, are ALL stored in the blockchain. Hence the current state of the contract can be -- and is -- recreated. This may sound incredibly inefficient when compared to, say, a relational database, but it is the only option when there is no central authority (like the MySQL database administrator) who is trusted by everyone in the network. Which is why it does not make any sense to port an application to the blockchain from a traditional three-tier platform where there exists a clear central authority, like a statutory body and its database administrator, to own and manage a central database. Blockchain applications are meant to handle situations where there are multiple operators -- all of whom are peers, without a central hierarchy -- and there is no reason to trust any or all of them.
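The replay idea can be sketched in a few lines of Python: the only storage is an append-only log (standing in for the blockchain), and the current state is recomputed by folding over every transaction in order. The operations here are invented for illustration, not taken from any contract.

```python
# Sketch of state-by-replay: the log is append-only (like the blockchain);
# the current state is a pure function of the whole log.
log = []  # append-only; existing entries are never modified or deleted

def apply_tx(state, tx):
    """Apply one transaction to a state, returning a new state."""
    if tx["op"] == "set":
        state = dict(state)
        state[tx["key"]] = tx["value"]
    elif tx["op"] == "delete":
        state = {k: v for k, v in state.items() if k != tx["key"]}
    return state

def current_state(log):
    """Recreate the current state by replaying every transaction in order."""
    state = {}
    for tx in log:
        state = apply_tx(state, tx)
    return state

# an 'update' is just another appended transaction, not an edit in place
log.append({"op": "set", "key": "salary", "value": 10000})
log.append({"op": "set", "key": "salary", "value": 20000})
```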
Finally, in addition to the client application and the Solidity contract that a developer needs to build, there is a third 'joker' that the developer needs to deal with -- a payment. Since the transaction is executed on a network node, the owner of the node needs to be incentivised, or compensated, for their investment in hardware and electricity. In the Bitcoin network this incentive is in the form of new coin that is mined (that is another long story -- see here, and here), but in the Ethereum ecosystem it is a combination of mining and an actual fee that must be paid by the client application to the node owner in the form of Ether. On the MainNet this is real Ether that has to be mined or purchased, but on the TestNets there is a lot of 'fake' or 'toy' Ether that is freely available. Think of this as the monopoly money with which we buy properties in the Monopoly game. This Ether -- real or fake, depending on the network being used -- must be sent along with the transaction request.

The second part of this tutorial will show you how to build
  • a Solidity contract, pmCrudCon.sol, to manage employee data (empID, empName, empSalary)
  • a Python Application to Insert, Update, Display and Delete this Data

November 14, 2020

Badshah Alam Shah 1204


Located this coin among others in a locker being cleaned out. Charanpreet Singh helped me decode the text that reads 19 Zarb Murshidabad and Badshah Alam Shah 1204. Used this information to get some more information from an auction site.

Deepawali acquisition!

November 06, 2020

Ethics of AI

Unless you have been living under a rock for the past couple of years, you would know for sure that things are happening in the area of Artificial Intelligence. Rapid developments in the area of artificial neural networks have spawned a brood of useful architectures - CNN, RNN, GAN - that have been used to solve a range of very interesting problems. These include, among others:

  • control of autonomous or self driving vehicles
  • identifying visual elements in a scenery
  • recognising faces or connecting bio-metrics to individual identities
  • automatic translation from one language to another
  • generating text and visual content that is indistinguishable from that generated by human intellect.

While these applications have created considerable excitement in both the technical and the commercial community, there has been an undercurrent of resentment among certain people against what they view as ethical issues that are yet to be resolved.

To understand what is at stake let us consider two specific issues from the area of autonomous vehicles. 

First, who is liable in the case of an accident? In some countries the liability lies with the owner of the vehicle, while in others it lies with the driver who was at the wheel when the accident occurred. But in the case of autonomous vehicles there is a point of view that says the liability should lie with the manufacturer. If the fault lay with the autonomous vehicle, and not with the other party to the accident, then it lies with the autonomous system - hardware sensors and controlling software - that was supplied by the manufacturer. This is similar to a brake failure, except that the owner or driver has no way to check the equipment before starting out to drive.

Second, and this is more interesting, is the question of whose life is more important? Suppose a pedestrian comes in the way of a moving vehicle whose speed is such that an application of brakes will not be able to stop the car from hitting the pedestrian. The only maneuver that is possible is for the car to turn away and hit a wall. In either case the injury or death will happen either to the pedestrian or to the driver. For the sake of this argument, we can simplify the situation by ignoring issues like estimating the expected quantum of injury in the two cases and the subsequent possibility of death or extent of disfigurement and come out with a binary situation - whose life is more valuable? The driver or the pedestrian?

These may look like very profound questions and are very often portrayed as such but frankly they are not. 

In the first case, there is no need to split hairs over liability. Lawyers may love the possibility of litigation and accountants may salivate at the thought of extracting money from car manufacturers, but for the technologist this is a no-brainer. Most car accidents are caused by driver error -- except of course when a pedestrian behaves randomly -- and with the advent of autonomous vehicles the possibility of driver error virtually disappears. So if the vehicle software has been adequately tested -- like vaccines! -- before being released into the 'wild', the number of accidents will in any case go down dramatically. The overall cost of accidents will go down, while individual claims will be paid out of the general corpus of funds created by collecting premiums from all vehicle owners, calculated by the usual statistical (or actuarial) analysis. In fact, this is no different from a mechanical failure, which in any case is factored into the economics of insurance. Net-net, there is no issue at all. It is just another unfortunate accident that has to be factored into the premium calculation process, perhaps with an additional line item.

The second issue can also be dealt with quite easily. Who should die? The pedestrian or the driver? In the case of a human driver both situations are possible. Some drivers will slam on the brakes and hope that the car will stop before hitting the pedestrian, while other drivers will turn the car and hit the wall. There is no hard and fast logic, nor is there time for a thorough analysis, ethical or otherwise, of the various options. It is a gut-feel reaction that is best modeled by a random probability. So the simple way to break the tie is to toss a coin -- or simulate the coin toss with a random number generator -- and take a decision based on whether the coin shows heads or tails.

If it is a fair coin, there is a 50% chance of either outcome and so the software can be programmed to take one decision or the other on the basis of this probability. This would reflect the regular, or underlying, reality of a human driver. So the behaviour of the autonomous vehicle would in no way be different from the behaviour of a vehicle driven by a human being. If we have learnt to live with human drivers we can continue to live with autonomous vehicles.

The 50% rule is a kind of a default starting point. If it is observed that most drivers are altruistic and prefer to save the pedestrian at the cost of their own health then the probability of hitting the wall can be raised from 50% to 60%. On the other hand, if it is observed that most drivers are selfish and prefer to kill the pedestrian and save themselves, then the probability of hitting the wall can be lowered to 40%. These probability numbers mean that the coin being tossed is not a fair coin but a biased one and reflects the inherent bias of society at large.
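That biased coin is a one-liner in code. A sketch, using the 60% figure from the preceding paragraph purely as an example:

```python
import random

def swerve_into_wall(p_altruistic=0.5):
    """Return True if the car should hit the wall, sparing the pedestrian.
    p_altruistic is the society-calibrated bias of the 'coin'."""
    return random.random() < p_altruistic

# With a biased coin (60% altruistic), roughly 60% of simulated
# drivers swerve into the wall
random.seed(42)
trials = 10_000
swerves = sum(swerve_into_wall(0.6) for _ in range(trials))
```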

This solves the problem of the autonomous car but opens up another Pandora's Box.

Should AI (or Deep Learning) systems have a bias at all? Or should they always be fair? This is important because Deep Learning systems are trained on a history of past behaviour of human systems. This training is done by collecting data on how decisions have been taken in the past and using this data to set the parameters. In simple systems these parameters could be probability values, but in neural networks they are the weights assigned to the different connections between nodes. The exact technology is not important here. What is important is whether the training data has bias and whether this bias is carried through from the non-computer system to the computer system.

For example, it has been observed that in the US, both parole applications and loan applications are more likely to be rejected if the applicant is a black person, because of a historical bias against this particular demographic segment. When this data is used to train an AI / DL system, this bias is carried through and, once again, blacks will be discriminated against. [Of course, there is another point of view that states that automated, machine-based decisions have less bias -- see this (paywalled) link -- but that is another story and another debate.]

Obviously this is patently unfair and should not be allowed and hence there is a strong move to ensure that AI systems do not suffer from bias. There is no question about that ...

But does that mean that AI / DL should not be built until we have resolved the issue of bias? This is where the debate takes on an ugly turn between the proponents of ethics in AI and those who would rather stick to the technology of AI. For the former, the question of ethics is paramount and they would rather not have AI unless it is certified to be bias free. For the latter, the matter of ethics is secondary. They would rather focus on creating innovative technology and leave the matter of ethics for another day.

Faced with this choice, my sympathies clearly lie with the latter, the technologists, and the reason is very simple. The world is not fair and can only be so in the dreams of the Utopian idealist. Since we do not have the luxury of living in an ideal utopia, expecting AI to be ethical and bias-free is an impossible dream. The world has learnt to live with bias and will continue to do so. If ethics were really as essential for the survival of the human race, then we should have shut down the armaments business (and possibly a large part of the pharmaceutical and hospital business as well). But we have not done so, because of an irresistible, or inevitable, convergence of economic, political and social power.

Any country, or society, that shuts down its armaments business or disbands its armed forces will be overrun and taken over by another country that does not subscribe to this Gandhian policy of pointless non-violence. This was demonstrated during the 1962 China War, where India's idealistic principles of Panchsheel were brutally shoved aside by the rampaging Chinese PLA. While a measure of ethics is certainly good, making it an absolute framework that is at odds with the ambient reality is neither possible nor desirable. So it is with AI. There are many people who feel that the so-called 'liberal' countries like the United States should not use technology like facial recognition at all because it is an unethical violation of privacy. Little do they realise that 'non-liberal' countries like China are already using it in a big way to enhance their own security, and if the imbalance continues it will be as stupid as shutting down the armaments industry.

Any technology - from nuclear through genetics and space to artificial intelligence -- can be weaponised. That does not mean that development must stop. Let us go in with our eyes wide open, be aware of the dangers but also be aware of what is happening elsewhere and make sure that we do not vacate or step back from the leading, or bleeding, edge.

To sum up, let us understand that bias is inevitable in any human society. We should try to minimise it but hoping to eliminate it is impossible. So is the case with non-human, silicon based intelligence or for that matter for any non-human sentience that will eventually arise from this technology.

September 12, 2020

Turtles upon Turtles

Abstract : Information, or rather information technology, is the basis of the digital economy that we live in, but is there something more fundamental to information that goes beyond the thousands of digital computers that we come in touch with in our daily lives? This article explores how information could be the basis of the material world itself, which in turn is merely a simulation generated by the processing of information.

To do so we note that in social media and in Massively Multiuser Online Role Playing Games (MMORPG) users live in a world that is not what it seems to be. This leads to the question whether the world that we see around us is really real or, as described in Sankar's Vedanta and the movie The Matrix, a simulation. This simulation hypothesis is explored further on the basis of Brian Whitworth's paper on the feasibility of the world being an illusion. Finally we demonstrate how this illusory world can be created purely on the basis of information, through the equivalence between Boltzmann entropy and Shannon entropy and a practical implementation of Maxwell's demon in Szilard's engine.

The latest version of this paper is available at this link.

We believe that we have the ability to discern the real from the illusory or the virtual. We know it because in our own life we play out multiple roles. Your behaviour is different when you are at home, at work, or when you are with your school friends or office colleagues at a resort. At home you could be an altruistic parent, a housing society officer or a poet. At work you could be a hard taskmaster, a glib salesman or an ace opportunist. With your friends at the resort you could be rolling on the floor. So which one is you? Which is the REAL you? Would you know? Would you care? Or would you say that you are all of them and some more, and the difference between these personas is blurred?

Now let us extend this to the world of social media, where you could be a 'bhakt' or a 'psecular' and be in violent confrontation with the other. Even if you are not a political person, you could be crafting an identity for yourself as a geek or a sage, and if you succeed that is how you would be seen by your 'friends', followers or connections in social media. It is not unlikely that your identity in social media is a magnification of only one of your 'real' identities, possibly your professional identity, or then again an identity that is defined by whom you hang out with. Or you could be crafting a totally artificial identity with a hidden agenda in mind. Depending on the amount of time you spend, or invest, in social media and the number of connections that you build up there, it is not impossible that this social identity overrides what your original identity was, or what you thought it was. In fact, going forward, your digital identity, which has a far greater reach than your physical identity, will increasingly become your dominant identity. More people might know you as you appear in social media than the fewer who know you in real life. But then, what is really your real life?

Now that you know that your original identity could very well be hidden or masked behind other more visible layers -- and frankly, masks have been around since much before the Wuhan virus  -- what about people who are around you? It is almost certain that they  too -- in social media and in the real world --  would be wearing masks as well, just like you.

When we look around we see ourselves enmeshed in a network of relationships -- personal or professional, commercial or otherwise -- that defines who we are with respect to the world around us. But if every member of this network is wearing a mask and is not who they seem to be, then the network loses its structural rigidity, its deterministic nature and its discriminatory potential. It becomes instead an amorphous and shape-shifting cloud of illusions that is as impossible to pin down as the ephemeral Maria in The Sound of Music -- how do you catch a cloud and pin it down?

So what was known and deterministic becomes uncertain, unreliable and illusory. What you see is not what it seems to be but something else. Perceptions take precedence over the primacy of facts. Wise men say that opinions (or perceptions) are free but facts are sacred. In this case, the wise men are not so wise after all because, while facts may be sacred, these facts are not accessible anymore. They are hidden behind layers and layers of illusions.

This gets even more complicated, and interesting, when we move from the flat, text based world of social media and into simulated three dimensional worlds. These virtual worlds are available in, or accessible through, Massively Multiuser Online Role Playing Games (MMORPG) like World of Warcraft, Final Fantasy, PUBG, CounterStrike. Non-violent, non-combative but equally enchanting are the simulated virtual worlds like Second Life -- that happens to be the author's favourite -- that are based on similar technology but have different goals and narratives.

What are the common features of all these virtual worlds? (i) A 2D image of a 3D landscape that is visible on the computer screen. (ii) The presence of humanoid figures, or avatars, in this landscape that are controlled either by users or by artificial intelligence software, in which case they are called NPCs or non-player characters. (iii) The ability of the avatars and NPCs to interact with each other and with other elements of the landscape through sound, visual cues and physical contact like a push or a 'fight'. (iv) The ability of users, through their avatars, to build, demolish or operate specific elements of the landscape like buildings, cars and other inert or active artifacts. (v) The existence of quests or challenges that each user, through their avatar, is expected to accomplish, either alone or in collaboration with other users/avatars. This could include creating buildings, occupying territory, locating and exploiting hidden resources or acquiring skills to perform one or more of these tasks.

A social media handle and an MMORPG avatar are essentially the same, in the sense that they allow an individual to interact with others through  a common, intermediate platform. On Facebook, you can build a page and your handle can argue with others, while in MMORPG, you can build a castle or have a fight with other avatars. What is different is the extent of realism or similarity with real life where an MMORPG avatar is far more realistic than a social media handle. With the advent of virtual reality or augmented reality gadgets, like helmets, spectacles and gloves, the level of realism can be increased till it is almost impossible to differentiate between the virtual and the real.

In fact, the illusory nature of both the MMORPG avatar and the social media handle can be extended into the illusory nature of the multiple personalities that we carry in real life. This is where the border between the real and the virtual world becomes increasingly blurred. What is real and what is illusory becomes increasingly difficult to distinguish. For your own self, it may still be possible to switch between alternate realities and hence distinguish one from the other, but for people around you it becomes increasingly difficult to detect the real you, especially if the digital channel is the only channel of communication. Similarly, it becomes impossible for you to detect and distinguish between the alternate realities of the people around you and the worlds that they inhabit. Each of us lives in our own cocoon of perceptions that shields us from the reality of the external world. We live in ...

Maya or The Matrix, The World of Illusions

Maya is an idea that was first articulated by Sankaracharya, the 8th century Hindu savant, who distilled the concept from the primordial Upanishadic insights. Much later, in the 20th century, it was introduced into western popular culture through The Matrix, a movie set in a not too distant dystopian future.

Sankar's philosophy of Vedanta posits that Brahman is the only real entity in the sentient universe. The Brahman -- which is different from the Brahmin jati, as in Brahmin, Kayastha, Bania (or Beney) etc. -- is the embodiment of Truth, Consciousness and Bliss, or Sat-Chit-Ananda, that is without form, qualities or attributes. It is pure knowledge or information that has no equivalent in the world that we are familiar with. This Brahman, for no reason but out of its own desire, creates, or dreams up, a physical world where objects have form, qualities and attributes. This is Maya, which, for the lack of a better word, is described as an illusion or a dream. Within this Maya, and because of it, the physical world exists as a multitude of objects that exhibit a wide range of forms and qualities. Some of these objects are conscious and sentient in the sense that they have the ability to observe and interact with other objects within this illusory world. These conscious entities are called Atman; they are an extension of the formless Brahman but, because of the shroud of Maya, they see themselves as an imperfect reflection of their true nature, the Brahman -- the ultimate reality. However, some of these conscious and sentient objects acquire the ability to understand the illusory nature of the world around them. These are the mystics, the Yogis, for whom Maya dissolves and who see, realise, or experience the continuity of the seer and the seen, the subject and the object, and of themselves -- with form, shape, qualities -- with the formless and shapeless Brahman. This is the Monistic philosophy of Vedanta, significantly different from the monotheistic religions that see the duality of a creator God distinct from his creation, the world and its people.

For a person who is a product of Maya and is immersed in it, the fact that the world around them is illusory is almost impossible to accept. The Matrix movie demonstrated a hypothetical, sci-fi framework where this could be implemented. In the movie, every human body is, right from birth, deprived of all sensory information from the real world -- of real mountains, real machines and the few real people who exist in it -- and is instead fed an alternate set of information that is sent directly to the sensory part of the brain. This means that the brain is only aware of this alternate information and hence constructs its own alternate world -- complete with its illusory mountains, machines and people. This alternate world is created with a software program called the Matrix. The story, which is too well known to be retold here, is all about how some real people detach one such body -- that of the hero, Neo -- from the Matrix and open his eyes, literally and metaphorically. Now that he can see for himself that there is a real world different from the alternate illusions that his brain and body have grown up with, he can take either a red pill or a blue pill and choose for himself the world that he wants to live in. Unfortunately, the choice between the red pill and the blue pill is not available to most people, or body-brain combinations, so their brains continue to live in the alternate reality created by the Matrix as long as the body is in a state to function.

The Matrix was released in 1999 and since then technology has moved by leaps and bounds. While all that is described in the Matrix is far from being a reality today, there has nevertheless been substantial progress. The ability to create virtual worlds is well established with the MMORPG products discussed earlier, and the use of advanced display devices like virtual reality and augmented reality helmets, gloves, etc. allows for an extreme level of immersion. Moreover, it is now possible to connect the human brain directly to external digital devices and move information in both directions. Signals from the brain are routinely used to control external devices, giving rise to thought-controlled devices like wheelchairs and MMORPG game objects. The reverse process of sending external digital signals back to the brain to create an artificial illusion is also possible, but is not yet as effective as the outward process.

So the Matrix is not totally sci-fi as it seemed to be when it was released in 1999. We now have the bits and pieces of technology that were referred to and  it is a matter of putting it all together to replicate what The Matrix talked about and make the transition from science fiction to science reality. However there is one aspect of the movie that is still far from being replicated in reality and that is the role of intelligent computers in building the physical infrastructure for the Matrix to operate. In the movie, it is the computer -- software and robots -- who do all this whereas today, the MMORPG and brain-computer interfaces are still designed and built by humans. Hence there exists a fairly well delineated boundary between the virtual reality of MMORPG and the real reality of the external world. So it is always possible for anyone to exercise the choice of the blue pill or the red pill -- to continue to live in virtual reality or to switch off the display device and come back to the “real” world.

But what if this choice is withdrawn, either voluntarily or under compulsion? What if it is mandated that, going forward, every child will have an implant on their skull that will allow an external digital feed to send signals directly to the brain and in the process drown out the natural signals from the eyes, ears, nose, touch and tongue? Assuming bodily functions are taken care of by someone else, the child will grow up -- just as in The Matrix -- in an alternate reality. One challenge could be the ability to procreate through the act of sex. This could be overcome in the alternate reality by simulating the feeling of sex, of ejaculation, of orgasm and eventually of the labour pain leading to the sensation of touching and feeling the child. In the physical reality, procreation is simpler because of artificial insemination and subsequent childbirth. Which is why we say that the premise of The Matrix is theoretically not impossible, though it would require a dramatic change in the socio-cultural structure of human society.

Which makes us wonder: has this already happened as a part of biological evolution? What if we are already a part of, and surrounded by, an illusory world where our five modes of sensing the external world are nothing more than digital signals sent into our brains? In fact, in the previous section we have seen that, in a sense, we have already isolated ourselves in a cocoon of perception -- created with our multiple personalities, our social media personas and MMORPG avatars -- that shields us from the reality of the external world. Have we already taken the blue pill that allows us to live in an altered reality? But perhaps there is no real choice between the red pill and the blue pill, because what we think of as physical reality does not exist at all. If we can liberate ourselves from the technology or theology of the Matrix, rid ourselves of our dependence on biology, then we can think of ourselves as non-biological artifacts, or avatars, that are being operated by a higher level of sentient beings. Which leads us to echo Sankar and ask whether we are living amidst an illusory Maya and ...

Are we a simulation ?

The simulation hypothesis is not new. It has been around for quite some time but was articulated in its current form by Nick Bostrom [2003] and was made into a movie, Are You Real [YouTube, 2006], by the author. Of late, many people including Elon Musk have enthusiastically supported this proposition, but the most comprehensive articulation of this point of view is Whitworth's paper, "The emergence of the physical world from information processing". See Brian Whitworth, Quantum Biosystems 2010, 2(1), 221-249.

The fundamental premise of Whitworth's paper is that there are two hypotheses, namely:

  • The objective reality hypothesis: That our reality is an objective reality that exists in and of itself and being self-contained needs nothing beside itself.
  • The virtual reality hypothesis: That our reality is a virtual reality that only exists by information processing beyond itself, upon which it depends.

Obviously, Whitworth is a strong proponent of the second, the virtual reality hypothesis, and has put together an impressive collection of conjectures, arguments and facts to support his case. There is little point in repeating the same arguments here, except to point out that he uses the logic of Occam's Razor very elegantly to identify twelve facts that are far simpler to explain with virtual reality than with a physical universe. However, in his conclusions and discussion Whitworth introduces the concept of the physical reality being an interface and explains it as follows:

Figure 4 gives the reality model options.

The first is a simple objective reality that observes itself (Figure 4a). This gives the illogicality of a thing creating itself and doesn't explain the strangeness of modern physics, but it is accepted by most people.

The second option argues that since all human perceptions arise from neural information signals, our reality could be a virtual one, which in fiction stories is created by gods, aliens or machines, for study, amusement or profit (Figure 4b). This is not in fact illogical and explains some inexplicable physics, but few people believe that the world is an illusion created by our minds. Rather they believe that there is a real world out there, that exists whether we see it or not.

The third option, of a reality that uses a virtual reality to know itself, is this model (Figure 4c). As this paper asserts and later papers expand, it is logically consistent, supports realism and fits the facts of modern physics. In it, the observer exists as a source of consciousness, the observed also exists as a source of realism, but the observer-observed interactions are equivalent to virtual images that are only locally real. This is not a virtuality created by a reality apart, but by a reality to and from itself. If the physical world is an interface to allow an existence to interact with itself, then it is like no information interface that we know.

This third option is in fact nothing more than a restatement of the concept of Maya, the illusion, or what we refer to as virtual reality. This is where the Atman, the individual observer, sees itself as different from the Brahman through the prism, or illusion, or Maya, of virtual reality. When Maya that creates the illusion of reality is removed, the Atman sees itself as it really is, an extension of the Brahman -- the fundamental unity of a Monistic universe.

While we may be veering around to the idea that we are indeed a simulation and that the physical reality we see around us is actually a virtual reality created by the processing of information, there remains a nagging doubt. How can the world around me, the world that I can touch and feel, not be real? Even if the world around us is a simulation, there must be something physical on which the simulation executes. In the Matrix, this was the biological body of the humans who were trapped in the Matrix from their birth to their death. In the case of MMORPG, it is the 'hardware' of physical computers on which the information to simulate the world must be processed. Where is this hardware? One could argue that this hardware is also a simulation, as with the virtual machines (VMs) offered by platforms like VMware or VirtualBox, but that merely postpones the problem without addressing it. VMs may be virtual, but they must execute on underlying physical machines.

This issue has been addressed in the concept of "Turtles all the way down". This is an expression of the problem of infinite regress that  alludes to the mythological idea of a World Turtle that supports the flat earth on its back. It suggests that this turtle rests on the back of an even larger turtle, which itself is part of a column of increasingly large world turtles that continues indefinitely (i.e., "turtles all the way down"). This idea has been expressed in the mythology of many cultures including that of India but once again, this postpones the problem without addressing it.

Which brings us to the next important question. What is more fundamental -- matter or information? Does information depend on the existence of matter, or does matter depend on the existence of information? In the first case, we would need a physical computer to process and display information; in the second case, information itself is adequate to create the illusion of matter. Common sense would say that matter is primary and information is something that emerges if and only if there is a material mechanism to process it. However, quantum mechanics has repeatedly shown that common sense is not a very reliable guide and many of its cherished principles are extremely counterintuitive -- as in the same particle taking two different paths, or in the instantaneous correlation of distant particles through quantum entanglement. Once we set aside this so-called common sense, many things fall into place, including what John Wheeler referred to as 'it from bit', or now 'it from qubit'. This suggests that material bodies can emerge from a bit of information or, as is the case now, a quantum bit.

But if we look a little deeper, the concept is not as counterintuitive as it first seems. That "information is power" is a statement that is often made both figuratively and loosely, but can it be literally true? Is it possible to find links between information and the stuff that physics textbooks refer to? Obviously the information that you read in the newspaper cannot be easily related to the power that causes a light bulb to glow. So let us simplify both sides of the equivalence, or analogy, and see if we can find a real link between the two .....

This part of the article needs mathematical symbols that are not possible to represent easily in a blog. To read this section, please visit this page. Then come back here and continue ...

So now we have a direct example of the conversion of information into energy. This is not a thought experiment because it has been demonstrated in a real experiment.

Connecting the Dots

While there is still some residual scepticism about the equivalence of information and entropy, because the two are so different in nature, we have managed to establish with reasonable comfort that information and thermodynamic entropy are fundamentally similar. Next we note that entropy and energy (or at least heat energy) are very closely related to each other, linked by the equation dS = dQ/T. Energy can exist in many forms -- electrical, kinetic, potential etc. -- all of which are interchangeable with each other, but the one that is of maximum interest is the form of matter. Yes, as we all know, matter and energy are two aspects of the same fundamental property, to which we can now add information. Hence information and matter are tied to each other. We always knew that matter can give rise to information, but now we can claim that information can also give rise to matter. Hence information matters!
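This chain from bits to joules can be made concrete with Landauer's principle, which puts the minimum heat cost of erasing one bit of information at kT ln 2, while Shannon entropy counts the bits. Here is a minimal Python sketch; the fair-coin distribution and the 300 K temperature are illustrative choices, not anything from the experiments cited above:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in joules per kelvin

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def landauer_cost(bits, temperature=300.0):
    """Minimum heat in joules dissipated to erase `bits` of information
    at `temperature` kelvin (Landauer's bound: kT ln 2 per bit)."""
    return bits * K_B * temperature * math.log(2)

h = shannon_entropy([0.5, 0.5])  # a fair coin toss carries exactly 1 bit
print(h)                         # 1.0
print(landauer_cost(h))          # ~2.87e-21 J at room temperature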

Finally, once we agree that matter can emerge from information, then the entire edifice of the simulation hypothesis gets a firm foundation to stand on. There is no need to talk about an endless series of turtles that stand on each other's back.  We begin with information, the genotype, that philosophers in India refer to as the Brahman and with this we can recreate the simulated world of illusory Maya.

Maya, Matrix, Shiv, Shakti, Information, Energy, Genotype, Phenotype - the possibilities are endless. And then you have fake news on social media which connects the sublime to the mundane!

The latest version of this paper is available at this link.

September 09, 2020

Information Matters

That "information is power" is a statement that is often made both figuratively and loosely, but can it be literally true? Is it possible to find links between information and the stuff that physics textbooks refer to? Obviously the information that you read in the newspaper cannot be easily related to the power that causes a light bulb to glow. To simplify both sides of the equivalence, or analogy, and see if we can find a real link between the two, we begin with .... (read on)

July 29, 2020

Carbon and Silicon

We are all familiar with carbon intelligence - the natural human intelligence that has given us everything from the fire and the wheel,  through the Ved, Upanishad, the Mahabharat, the Laws of Mechanics, Electrodynamics, Thermodynamics all the way through to Relativity and Quantum Mechanics. Near the end of this journey we have run into silicon intelligence or the artificial intelligence that is demonstrated by machine learning and neural networks that has given us autonomous vehicles and software that learns to play very realistic games.

But somewhere along the line these two forms of intelligence -- carbon and silicon -- are coming together to create what sci-fi has been talking about for many years: the cyborg -- part human and part machine. Where are we with this technology? Is it still science fiction, or is fiction becoming fact? I explore this idea in this lecture that I delivered to the incoming (July 2020) batch of Data Science students at the Praxis Business School.

The slide deck is available at

June 25, 2020

Python for Business Managers

Managing a business enterprise is impossible if the manager is not at ease dealing with data. While soft skills and EQ are important, when push comes to shove it is the data on the table that really matters. Data-driven decisions are the backbone of any efficient enterprise.

It is said that data is the new oil because of its intrinsic value. This is why the most powerful companies on the planet -- Google, Facebook, Netflix, Amazon -- owe their immense clout to the huge amount of data that they have accumulated about people and their behaviour. Gathering, storing and managing these multi-terabytes (or more) of data is loosely referred to as Big Data. But using this data to draw inferences about the past and, more importantly, to make predictions about the future is Data Science.

Managers in the past were not unaware of, or indifferent to, the importance of data. Many of them have been using spreadsheets like Excel to assist them in their daily work. However, the volume of data in the current business ecosystem is so large that spreadsheets are no longer adequate. The spreadsheet is a legacy technology, almost a relic, from an era that businesses have left behind. It simply cannot scale up to handle the kind of Big Data that today's internet-based businesses generate on a daily basis.

Data Science uses many next generation tools to handle Big Data and Python is one such tool that is very widely used today. This book will help managers who do not have a background in computer programming to learn Python to the extent that they will be able to use it in their daily work. Readers will also walk through two detailed exercises that will demonstrate how these tools can be used in retail sales and multinational eCommerce scenarios.
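As a flavour of what this looks like in practice, here is a minimal sketch -- using only the Python standard library, with made-up retail transactions -- of the kind of pivot-table aggregation that outgrows a spreadsheet once the rows run into the millions:

```python
from collections import defaultdict

# Hypothetical retail transactions: (region, product, revenue)
sales = [
    ("East", "widget", 1200.0),
    ("East", "gadget", 800.0),
    ("West", "widget", 950.0),
    ("West", "gadget", 400.0),
    ("East", "widget", 300.0),
]

# Total revenue per region -- a short loop that scales to
# millions of rows streamed from a file or a database
revenue_by_region = defaultdict(float)
for region, _product, revenue in sales:
    revenue_by_region[region] += revenue

print(dict(revenue_by_region))  # {'East': 2300.0, 'West': 1350.0}
```

In real work the same pattern is usually one line of pandas, `df.groupby("region")["revenue"].sum()`, but the underlying idea is identical.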

Buy the paperback from the Pothi bookstore.

May 02, 2020

Strange Coincidence?


April 17, 2020

Lockdown lectures - DIGITALICS

In an earlier post, we had introduced the idea of D I G I T A L I C S
Here is a video that explains it further

January 08, 2020

CBSE to ZBSE - The Innovation Nation

image from MIT Review
When resources are limited it is creativity and its first cousin, innovation, that allows us to get ahead by achieving more with less. In practical terms it translates into the importance of R&D in corporates, or its precursor, research in academia. Which is why publish or perish has been the guiding mantra for those seeking tenure -- or permanent employment -- in US academic institutions. Since the US is the fountainhead of the most innovative ideas in STEM and related disciplines there must be a positive correlation between innovation and publications. This is the logic used in China where it is mandatory for all academicians to be prolific in publishing papers.

This policy has resulted in interesting developments. First, China -- and Chinese researchers embedded in US academia --  lead the world in terms of the sheer number of papers published. Second, a large number of these papers have been found to be of poor quality if not actually fraudulent. Third, and most interestingly, China still needs to employ thousands of hackers to break into and steal industrial and scientific knowledge from US companies and institutions. This weakens the linkage between published research and real innovation. Perhaps China is doing phenomenally well in fundamental research but until we have greater transparency through the Bamboo Curtain, we remain sceptical.

This lack of correlation between publication and innovation is an outcome of Goodhart's Law, first enunciated by the British economist Charles Goodhart in 1975: Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes. Marilyn Strathern puts it more succinctly: When a measure becomes a target, it ceases to be a good measure. This means that while publications may be a good estimator of innovative thinking, when people are tasked to publish for the sake of employment it ceases to be an estimator of anything at all. Anybody who has been in the vicinity of academic publishing would know that acceptance of a paper for publication depends on (a) the choice of an 'acceptable' subject, (b) the 'methodology' of research and the 'style' of representing it, and (c) the 'literature review' and 'references' that weave a delicate but readily perceptible network that cycles through a self-sustaining 'citation index'. The originality of the idea or the elegance of its implementation has little impact on the acceptance of a paper in a scholarly journal. As long as it looks, walks and quacks like a duck -- oops, like an academic paper -- it must be an academic paper worth publishing. [Public disclosure: this author has only two papers published in non-Indian academic journals and so could have an issue with sour grapes!]

This obsession with publications masquerading as research has now infected academia in India as well. So much so that the Director of one IIM, as holy a cow as one may find in Indian academics, has decided that actual teaching should be outsourced to contract teachers while tenured faculty, freed from such mundane distractions, should focus on publishing papers. Which is actually a joke because - at least in the area of management - the ability to architect a complex solution and execute a commercially viable project is a far better evidence of innovation than publishing a paper based on dodgy data collection and p-value based testing of pointless but statistically significant hypotheses. But unfortunately, university ranking mechanisms and  regulators like the UGC and the AICTE have latched on to publications  as measures of excellence. Hence we are back to this concept of publish or perish without any thought to its correlation with genuine innovation.

In fact, such borrowed measures of academic prowess have their roots -- at least in India -- in the larger story of the lack of innovation in the economy. Who decides what is meant by a good student in India? First, academicians and then, more importantly, corporate executives, mostly engineering and management graduates, who decide whom to hire and from which colleges. What is common to all such decision makers is not innovative or original thinking but a history of having cracked entrance examinations in their student days. That is why they like examination crackers, or people like them. The entire edifice of corporate and academic India is brimming, not with innovators, but with those who have been able to game the entrance examination system.

Entrance examinations like CAT and JEE were once designed as estimators of intellectual ability. But again, in a perverse reaffirmation of Goodhart’s Law, cracking these tests has become the end goal for all students. The JEE rank that was once a good measure has, after becoming a target, become worthless for evaluation. Kids with original ideas will never game a system built on coaching classes and thought conditioning, and so will never reach the colleges that lead to the companies that could, in turn, come up with original ideas. Hence Flipkart will always be a copy of Amazon (without its cloud technology), and Ola and Oyo will be copies of Uber and Airbnb. Even when wildly successful, there is nothing original in their products and services. Nothing like Skype or WhatsApp, let alone molten-salt nuclear reactors or CRISPR, will originate from them.

So is there an alternative? Is there anything else that could seek out people with raw and native talent? Is there a way to eliminate artificially difficult entrance examinations, like the JEE, that only the best coached and best prepared can crack? Once upon a time, long, long ago, Class X and XII marks were good estimators of talent, but with state boards competing to give 90% to all, that option has been ruined.

What if the percentile rank, instead of the absolute marks, in the normal Class XII examination became the yardstick for college entrance? The immediate objection would be that different boards with widely different numbers of students are not really comparable: the top 5 percentile in a small board like Tripura may not be comparable to the top 5 percentile in a large state like Maharashtra. What if we mandate that everyone take the one common, national CBSE Class XII examination, either in addition to the state board or as an alternative? This may sound good, but there is the danger of rigid centralisation and the concomitant spectre of a single point of failure.
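To make the comparability argument concrete, here is a minimal Python sketch, using entirely hypothetical marks data, of how the same absolute mark can translate into very different percentile ranks in two boards with different marking distributions:

```python
def percentile_rank(mark, board_marks):
    """Percentage of candidates in a board scoring strictly below `mark`."""
    below = sum(1 for m in board_marks if m < mark)
    return 100.0 * below / len(board_marks)

# Hypothetical marks from a leniently marking board and a strict one.
lenient_board = [95, 92, 91, 90, 88, 85, 80, 75, 70, 60]
strict_board  = [85, 80, 78, 75, 72, 70, 65, 60, 55, 50]

# The same absolute mark of 90 is middling in one board
# but tops the other, which is why absolute marks mislead.
print(percentile_rank(90, lenient_board))  # 60.0
print(percentile_rank(90, strict_board))   # 100.0
```

This is why percentile ranks only become comparable across boards when the candidate pools are large and drawn from similar populations, which is the motivation for consolidating small state boards into larger zones.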

What we could do instead is redefine the country in terms of education zones and create a Zonal Board of Secondary Examination (ZBSE) for each zone. This would be analogous to the Indian Railway network being managed through sixteen railway zones like Western, South Eastern, East Central and so on. Each education zone would cover more than one state, grouped by linguistic and cultural similarities. Each ZBSE would conduct its own Class X and XII board examinations based on a syllabus that takes into account both national perspectives and regional diversity, and on a schedule that reflects local convenience. State boards would become irrelevant, but even if retained, students should be allowed to take ZBSE examinations in their respective zones of domicile irrespective of the schools they physically attend.

With education zones in place, the percentile marks in both the ZBSE Class X and XII examinations should be used as the primary selection criteria for admission to all Central Universities and all UGC-funded institutions. In addition to the percentile on the aggregate, different disciplines like engineering or liberal arts could use the percentiles on specific subjects or groups of subjects. This would free students from the need to sit for any artificially constructed aptitude test for college entrance and let them focus instead on a traditional, broad-based, multi-subject ZBSE school curriculum.

Recruiters from the author's generation have always used Class X and XII marks as an effective way to discriminate between candidates, and ZBSE marks, which ensure better parity across the country, will reinforce this method. Colleges that look for good placements will now have to run after students with good ZBSE percentiles instead of the other way around. So truly good students will enter good colleges, and then get good jobs or go on to research.

Instead of a top-down philosophy of publish-or-perish, a bottom-up approach that uses ZBSE Class X and XII percentiles for college entrance would be a superior mechanism to recognise and reward true talent in the innovation nation.