September 03, 2017

The IT Professional & the Zone of Comfort

Rapid advances in machine learning and the spectacular success of artificial intelligence software in, say, self-driving cars, voice recognition and chatbots for customer service, are sending shivers of anxiety through IT employees. The havoc that robots and automation technology have wreaked on the jobs of blue-collar workers on the shop floor is now travelling upward into white-collar offices, and not a day passes without a new report about automation eliminating jobs. In India, the IT sector -- which includes actual software developers, application maintenance staff, tech-support personnel and BPO call centre operators -- seems to be particularly vulnerable, and it is no secret that a sense of doom and gloom hangs over the cubicles and around the coffee machines in large and small IT companies. To make matters worse, some companies have started to shed mid-level people managers, who stopped writing code years ago, and even senior managers whose bloated salaries give poor returns in billability. The last straw for the vanishing optimism is the reduction in campus hiring of bus-loads of low-quality engineers from the hundreds of engineering colleges that have mushroomed on the promise of the Y2K-inspired IT revolution. How much of this gloomy scenario is true and what can be done to bring the sunshine back? Obviously there is no quick fix, but let us explore the terrain to seek a way out of these difficult times.

image borrowed from Financial Express
TCS, the biggest IT company in India, was founded in 1968 and its 3.75 lakh employees generate a revenue of $18 billion, while Microsoft, founded in 1975, pulls in $86 billion with 1.2 lakh employees. These, and similar, statistics have been used, ad nauseam, to pontificate that India must move up the value chain from TCS-style services to Microsoft-style products. But why has this not happened despite being talked about for years? One reason of course is the distance from the customer. Before the advent of the world wide web, a software builder in India was so far removed from both the technology and the customer for the technology that it would have been impossible for him to create anything relevant. Hence the great divide between the smaller, H1B-fuelled, onsite gang and their poor cousins in the offshore team. But even this resulted in more services, not products. However, with the internet bridging the gap, this should no longer have been an issue -- except that it still is!

It is said that the eclectic ecosystem of Silicon Valley, with its simmering cauldron of technology evangelists, dreamers, brilliant programmers and venture capitalists, along with legal and infrastructural support, is a fertile bed where innovative products sprout like weeds. Then how come Skype, which defines web-based video conferencing, was developed in Estonia, a country of 1.3 million people that most of us may not be able to locate on a map? Similarly, AVG, one of the most popular anti-virus products, was developed in Czechoslovakia, a country with only 16 million people. But India, with over a billion people of whom 3 million are software professionals, is yet to come out with any such software product that has global acceptance and recognition. Do Indians not know how to write programs? That is highly unlikely, given the size of the IT sector in India, but what is surely missing is the ability to complete the full cycle of identifying requirements, architecting the design, securing funding, coding, building the product and eventually managing and monetising intellectual property rights. Instead, what our professionals know and do best is to receive instructions from an overseas client and code to their specifications.

This inability to go beyond meticulously following instructions, and that too at a price point so beloved of our overseas clients, is the root cause of the insecurity created by the arrival of AI. This technology is best geared to target tasks that are reasonably well defined and need to be done repetitively. This means that in the spectrum of IT services, call centre operations and tech-support jobs are the most vulnerable. Nor is application maintenance any safer, because fault diagnosis and repair is something that AI can do pretty well. The safest area is new application, or product, development -- though even here, there are rapid application development tools that reduce the effort, and the people, required -- and that is where Indian IT is on its weakest wicket.

One reason why we in India are unable to come up with new products is that, as a people, we are perhaps very comfortable in our respective Zones of Comfort. Our reverence for what is old, established and running is phenomenal and we are very reluctant to try out anything new. Consider the transition from MS Office, with which all of us are comfortable, to cloud-based free products like Google Docs, Sheets and Slides. Despite the fact that web connectivity is as ubiquitous as electricity for our IT folks, and that the Google products meet the requirements of 95% of IT professionals, they will almost inevitably begin with MS Office whenever they want to create a new document. Why? Zone of Comfort! This inability to try out something new inhibits our mid-level managers from “dirtying their hands” with any new technology. In fact, for many of our managers, trying out technology is considered infra-dig! Most of them prefer “management” tasks like allocation of people, attending client conference calls, preparing schedules, recording and tracking issues in minutes of meetings and so on, because all that this needs is comfort with email and MS Office. In fact many managers overtly claim that it is beneath their dignity to touch code -- something left for the new hires -- when the covert reality is that it is beyond their ability to do so.

In fact, this reluctance to actually “do something new” is part of a larger tendency to be involved with consumption while avoiding creation. We would rather read a webpage than actually write a blog. It is an even greater effort for us to write a book, even when print-on-demand services are available for anyone who wants to publish on his own. The genesis of this mindset of consumption can perhaps be traced back to the path that our kids take from the classrooms in schools to the desk at the IT company. Given the historical scarcity of jobs and the lure of campus placements, there is a mad rush for engineering entrance examinations, because only those who can crack exams get selected for engineering and then placed in IT and even non-IT companies. The creative types, who are misfits in the rigid constraints of coaching classes, are automatically excluded not just from our engineering colleges but subsequently from the corporate sector. But the exam-crackers, most of whom have been successfully hammered by coaching institutes to abandon their originality and conform to the patterns required by entrance examinations, enter the sector, rise through the ranks and, in a pernicious cycle, recruit more and more conformists like themselves, thus perpetuating the scarcity of creativity and innovation in our IT companies.

But all that is history. It is easy to say that we must change the system but that is neither something that will happen very soon nor will it benefit anyone in the IT industry today. What should one do to stay employable and relevant?

First, stop blaming the system, the nation or your company and take charge of your life. Light a fire under your seat and move out of your Zone of Comfort. Install an RSS reader in your browser and, instead of reading client mail and following company gossip, keep an eye on RSS feeds from Slashdot, TechCrunch and Wired for the latest technology trends -- say, machine learning or cybersecurity. Create blogs and contribute to discussion forums. Google and locate technology tutorials. Invest time and money -- lots of time and a little money, because not everything is free -- to acquire new skills. Skip that latest smartphone and instead buy a personal laptop to install new, experimental software that your employer’s security policy bars on company machines. Write code, build proofs-of-concept, purchase hosting services to make these applications public and highlight them in your LinkedIn profile. Go beyond the laptop: get a Raspberry Pi or an Arduino, connect it to a smartphone or even a drone -- available online in India -- to create something that people can touch and play with. Obviously things will not work as easily as they do in office projects, but Stackoverflow is always there to help one go past the bleeding edge. Finally, get your kids out of coaching classes and encourage them to join you in exploring new technology! Move from the cool comfort of consumption to the caustic crucible of creation.

AI is certainly a threat to all those who stay within their Zone of Comfort. But technology offers infinite possibilities for those who choose to stay relevant through this fourth revolution -- agricultural, industrial, digital and now cognitive -- in human society.

This article originally appeared in Swarajya, the magazine that reads India right

August 10, 2017

Facebook : How it meddles with your mind

Facebook is the mythical 800-lb gorilla in the media world that, as the original joke goes, “sits down wherever it wants to”. With 1.2 billion pairs of eyeballs eyeing it every day, it has an audience greater than any American, European or Asian TV news network, newspaper or online news portal. This immense reach also makes it the most effective medium of entertainment. In societies where it has crossed a critical threshold of penetration, it has become the most potent mobilising force in politics, and all this eventually translates into Facebook being one of the most valuable companies in the world.
image borrowed from https://mymuddledmind.blog/

We know that information is power. We also know that power corrupts and absolute power corrupts absolutely. Should we be wary of Facebook? Consider the following ...

In the Foundation series of iconic science fiction novels by Isaac Asimov, we have the villain, a mutant psychopath called the Mule, using popular musical concerts as a mechanism, a medium, to transmit subliminal messages to an unsuspecting audience -- messages that demoralize the population and break its resistance to the Mule’s political hegemony. On December 17, 1997, in a chilling realisation of this fictional scenario, many news outlets, including the New York Times and CNN, reported from Tokyo that “The bright flashing lights of a popular TV cartoon became a serious matter Tuesday evening, when they triggered seizures in hundreds of Japanese children. In a national survey, the Tokyo fire department found that at least 618 children had suffered convulsions, vomiting, irritated eyes, and other symptoms after watching "Pokemon."”

Can a mass media platform be used to meddle with or influence, human minds, en masse?

As an early adopter and ardent evangelist of social media, I had always thought that platforms like Facebook and Twitter were an excellent replacement for television and newspapers as channels for current news and diverse views. But after getting drawn into a series of unintentional and inconclusive spats and flame wars with strangers with whom I have little in common -- spats which left both sides as unconvinced about the other’s point of view as ever -- I am sceptical. Was the price I was paying for using these “free” channels far too high in terms of the collateral irritation and anger generated in an otherwise placid and cheerful person like me? Was this my fault? Was I not savvy enough to handle this new medium, just as an earlier generation is psychologically uncomfortable with shopping at Flipkart or using an Android smartphone? How did the evangelist in me morph into a social media luddite, ranting against a technology? Was it just me? Or is this feeling universal?

In an article published in the Harvard Business Review in April 2017, based on their peer-reviewed research, Holly Shakya and Nicholas Christakis have established what I had recently come to believe, namely, that “The More You Use Facebook, the Worse You Feel”! This is paradoxical because social interaction is a necessary and healthy part of human existence and many studies have shown that people thrive when they have strong, positive relationships with others. But when real-world, physical relationships are replaced by digital and virtual relationships, the situation changes. The authors measured well-being -- through self-reported life satisfaction, mental and physical health and body-mass index -- and Facebook usage -- through the number of likes, posts and clicks on links -- from three waves of data on 5,208 users over two years, and came to the conclusion that overall well-being was negatively associated with Facebook usage, with the results being particularly strong for mental health. Moreover, the study also showed that the decline in well-being is strongly tied to the quantity of Facebook usage and not just the quality of interactions, as was believed in the past.

While the authors offer no explanation for this negative association of well-being with Facebook usage, it is not difficult to see why this is so if we consider what shows up on your newsfeed. Depending on the number of posts that your friends, and the pages that you have liked, have shared, there could be 2,000 or more items that Facebook could show you; but since this leads to an uncomfortable information overload, the actual number shown is possibly as low as 200. This selection, or curation, is not performed by any human editor but by an artificial intelligence (AI) program that is designed to maximise benefits for Facebook. Since it is in Facebook’s interest to stimulate conversations, its AI will obviously select items that would provoke a user to react -- just as, in a zoo, visitors throw stones at the animals instead of allowing them to rest in peace. Hence, while placid and informative items will not be totally ignored, there will always be a slight bias towards items that will provoke a reaction. For example, a Hindutva follower -- and Facebook knows our preferences to the last detail -- will be shown more items on minority appeasement, knowing full well that these are more likely to trigger a torrid response, and a subsequent equally torrid counter-response, than pictures of flowers and birds. Of course this bias is neither obvious nor in-your-face. You will still see the usual quota of bland, feel-good quotes and pictures of friends holidaying in Goa or Singapore. Which is fine, except that you just might feel a tad disappointed that you are stuck in messy Mumbai instead of being in Goa, which is another reason for feeling a bit sore with yourself! Since nobody posts about their problems, this too leads to the depressing belief that everyone except you is happy.

In fact, playing and tampering with Facebook users’ emotions and deliberately trying to modify them is the subject of a very controversial paper -- “Experimental evidence of massive-scale emotional contagion through social networks”, published in the June 2014 issue of the Proceedings of the National Academy of Sciences USA by members of Facebook’s data science team. For the purpose of this paper, the Facebook team deliberately introduced a certain bias in the nature of items included in Facebook users’ newsfeeds and observed the impact on their subsequent behaviour. To quote the authors, “In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.”

This paper was criticised for violating basic ethical principles of psychology research because no consent was sought from the subjects whose emotions were being tampered with. That does not detract from the fundamental premise that Facebook has the ability to modify the emotions of its users and has done so in the past. In fact, what is even more disturbing is that Facebook now has the technology to use webcams and smartphone cameras to track emotions in real time by detecting and decoding facial expressions as we read posts! While there is no evidence of any deliberate evil intent as yet, the fact that its AI-based news selection service can detect and tamper with the emotions of users is a big red flag because, as noted earlier, Facebook touches more people than any newspaper, television channel or news portal and so has the ability to mould the emotions of a significant part of the global population.

While Facebook has been targeted for being a channel, or firehose, for fake and unsubstantiated news, the real danger lies in its ability to tamper with our emotions and, as reported in the HBR article, make all of us feel angry, frustrated, jealous and upset with the world around us. Can we do anything to mitigate this unfortunate state of affairs? At a personal level, one could reduce the amount of time spent on the platform, but since Facebook is an addiction like tobacco or alcohol, with similar withdrawal symptoms, this may not be a feasible solution for everyone.

What users could ask for instead is greater transparency in the algorithm, the procedure, used to determine what they see or do not. If I want to see posts about birds and flowers, I must not be shown pictures of stone-pelters in Kashmir. In fact, such a process does exist, because you can indicate the kinds of posts that you want to see less of, but a more direct method would go a long way towards restoring the sense of choice that we have with newspapers and TV to read or ignore specific items of news and views.

Social media is here to stay and Facebook, with its unassailable reach and immense clout, is something that -- like the monsoon rain -- we have to learn to live with. However knowing the danger that it poses and working on ways to reduce its impact is something that needs urgent action.


This article originally appeared in Swarajya, the magazine that reads India right.

July 27, 2017

OLAP Data Cube with SQL

As an erstwhile DBA, a long-time user and a great admirer of the SQL language -- which has stood the test of time for the last 30 years -- I have always sought to use SQL in many useful ways. In an earlier post, I had shown how SQL can be used to solve a classic data science problem, namely clustering with the K-Means algorithm, and today I demonstrate how SQL can be used to process OLAP data cubes and generate the popular cross-tab table.

Data cubes, or OLAP cubes, are a way to store historic data using the dimensional model, as opposed to the relational or 3rd normal form model. These data cubes can be "sliced" and "diced" to reveal data relevant to particular dimensions. Because of the immense popularity and ubiquity of relational databases, like Oracle and MySQL, data in the dimensional model is routinely stored in relational tables and retrieved -- by slicing and dicing the cube -- using standard SQL constructs like the WHERE clause. This is called Relational OLAP or ROLAP.

Data cubes are very popular because they allow multidimensional data to be collapsed to any two dimensions and shown as a "CrossTab" -- and human beings can comfortably visualise only two dimensions on a page or a screen. Unfortunately, creating CrossTabs is not very easy with normal SQL and that is why there exists a genre of specialist products -- Multidimensional OLAP or MOLAP -- that allow users to create CrossTabs by "rotating" the data cube as necessary.

Microsoft SQL-Server, an RDBMS product, has a construct called CUBE that supports this feature, but it is not available in most RDBMS products and certainly not in MySQL, the free and open-source product that is the most widely used RDBMS on the planet.

The following slide deck shows how MySQL can be used to "rotate" an OLAP data cube and generate CrossTabs for any cube of dimension 3 or higher


(please view the slide deck in full screen mode)
We also show how a "pivot" table, so beloved of Excel users, can be generated using MySQL and hence, by extension, in any RDBMS.
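For readers who cannot view the deck, here is a minimal sketch of the core idea -- pivoting one dimension of the cube into columns with conditional aggregation. The cube, its dimensions and the sample rows below are hypothetical, and Python's built-in sqlite3 is used only to keep the example self-contained; the SQL itself is plain and runs unchanged on MySQL.

import sqlite3

# A hypothetical three-dimensional "sales" cube: (year, region, product) -> amount
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (year INT, region TEXT, product TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?,?,?,?)", [
    (2016, 'East', 'Soap', 100), (2016, 'West', 'Soap', 150),
    (2016, 'East', 'Shampoo', 80), (2017, 'West', 'Shampoo', 120),
    (2017, 'East', 'Soap', 90), (2017, 'West', 'Soap', 160)])

# "Slice" the cube with an ordinary WHERE clause (ROLAP), then "rotate" it into a
# region-by-year CrossTab by folding the year dimension into columns
crosstab = """
SELECT region,
       SUM(CASE WHEN year = 2016 THEN amount ELSE 0 END) AS y2016,
       SUM(CASE WHEN year = 2017 THEN amount ELSE 0 END) AS y2017,
       SUM(amount) AS total
FROM sales
WHERE product = 'Soap'      -- the slice
GROUP BY region             -- the row dimension of the CrossTab
"""
for row in cur.execute(crosstab):
    print(row)               # e.g. ('East', 100.0, 90.0, 190.0)

In practice, the list of CASE branches is generated dynamically from the distinct values of the pivoted dimension, which is broadly what the technique referenced in the acknowledgement below automates for MySQL.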

But why would anyone wish to use SQL or MySQL to build and work with data cubes when MOLAP tools are available?
  1. First, SQL is easily understood and widely used by a vast majority of IT professionals
  2. Second, MySQL is a free and open-source product that is used in almost every web application
  3. Third, SQL is supported in multi-machine, clustered environments like Hadoop/Hive and Spark, and so this technique can be used -- at least in principle -- to support data cubes built with ultra-large data sets.
Unless one wants the bells and whistles that come along with most MOLAP products, MySQL is good enough for almost any OLAP activity and can be scaled up with Hive / Spark for very large data.

Acknowledgement: The technique demonstrated in this post has been adapted from information provided at http://www.artfulsoftware.com/infotree/qrytip.php?id=78

June 30, 2017

Quantum Computers

Quantum mechanics is a subject that has the strange property of being simultaneously logically rigorous and yet completely counterintuitive. So much so that even a towering intellect like Einstein could never bring himself to accept its principles, even though products based on those very principles exist all around us. The earliest oddity, identified by Schrodinger, one of the founders of quantum mechanics, is that of a hypothetical cat that is neither dead nor alive until someone actually observes it. A similar oddity is that of quantum entanglement, where the behaviour of one particle is instantly affected by the behaviour of another particle, however distant it may be -- an example of “spooky” action-at-a-distance. Explaining these phenomena is beyond the scope and temerity of this article, so the reader would have to accept them here in good, almost religious, faith and carry on with the belief that such phenomena have been observed and explained by scientists under the most rigorous experimental circumstances.

Image borrowed from Quanta Magazine
Any programmable digital computer that we use, whether the desktop, the smartphone or the ones at Google, is based on a finite state machine (FSM). It can, at any instant of time, be in one of a large, but finite, number of well defined states. The state of an FSM is defined by the value stored in each of its memory locations, and we know that these can be either 0 or 1. So an FSM with, say, 16 bits of memory could in principle be in any one of 2^16 states. Any instruction to the FSM changes the value of one or more bits and the FSM moves to a different state. An FSM, along with the ability to read binary input from an infinite tape and write back on the same tape, is the Turing machine that is the theoretical basis of any modern computer.

The fundamental principle of computer science is that the world is computable, meaning that any logically decidable problem can be represented and solved on a Turing machine and hence, by extension, on some, possibly very powerful, digital computer. This is the basis of our immense belief in the computer technology that powers everything from smartphones to artificial intelligence. But as far back as 1982, Richard Feynman had questioned this principle because he realised that Turing/FSM based computers could not efficiently simulate the behaviour of multiple quantum particles, whereas nature was doing it all the time! Did the quantum mechanical behaviour of nature mean that nature had a computing device that was inherently superior to the Turing machines built with classical computer technology? This is where the concept of a quantum computer was born.

A computer is a state machine whose state is defined by the collective states of each of its memory locations. In a classical computer, each memory location, or bit, can be either 0 or 1, certainly not both, but in a quantum computer it can be both 0 and 1 simultaneously -- very much like Schrodinger’s cat that was dead and alive at the same time! This is where the going gets really rough for anyone who has spent a lifetime in classical computer science, because this is something that is completely counter-intuitive. A memory location, a bit, is a transistor, or switch, made of silicon that is either ON or OFF. How can it be both? It turns out that if you keep aside computer science and open your books on quantum mechanics, it is indeed possible for a body to be in two states at the same time, based on the well established principle of quantum superposition. Now if we go back to our 16-bit classical computer with its 2^16 states and replace it with a quantum computer with 16 quantum bits, or qubits, of memory, we have a machine that can be in 2^16 states simultaneously. If that is not mind-bending enough, all these 2^16 states will collapse into any one of them as soon as we try to observe the machine. It is almost as if nature is playing a game with us, pretending to be classical whereas it is actually quantum.

But why are we obsessed with this counter-intuitive phenomenon? Will it drastically improve existing digital computer technology? Not really. Your spreadsheet, email, YouTube, eCommerce and smartphone will hardly change, but two things could. First, current cybersecurity systems, which are based on our inability to decompose integers into their prime factors in a reasonable amount of time, could be ripped apart by quantum computers, leaving all passwords vulnerable to hackers. Second, artificial intelligence could be taken to altogether unbelievable levels of sophistication. So quantum computers will soon have a very important role to play -- but how far away are we from real, practical systems?

The biggest challenge is the construction of the physical memory locations, and the complexity of the engineering problem is evident from the following: a modern IBM classical computer chip has anything between 2 and 7 billion transistors, each of which can be ON or OFF. The corresponding IBM quantum computer chip, which powers the IBM Quantum Experience machine, has only 5, yes just 5, qubits of memory that can be in a quantum superposition of ON and OFF. Why so? First, the memory locations have to be cooled to near zero kelvin to exhibit their quantum superposition behaviour, and if the cryogenic challenge was not enough, the second challenge is even bigger. Unlike the memory locations of classical computers, whose state can be determined by sensing the presence or absence of an electrical voltage, the multiple superimposed quantum states collapse as soon as any effort is made to observe them. This is as if a room had a house of cards that collapses as soon as the door is opened by the observer, and the observer has to figure out what the house looked like by observing the disposition of the cards on the floor! Since the qubits can never be accessed directly, as in a classical computer with read and write statements, they can only be “influenced” indirectly.

To put things in perspective, ENIAC, one of the world’s first, 1st generation, vacuum tube based classical computers, had 20 memory units, or accumulators, in 1945, and a 2nd generation, transistor-based computer from the University of Manchester had only 200 transistors in 1955. Since then we have moved through 3rd generation integrated chips, and the current 4th generation of microprocessors has scaled up to billions of transistors thanks to the inexorable pressure of Moore’s Law. If we remember that even with its 20 memory units ENIAC was used to solve problems in weather forecasting, atomic energy calculations and wind tunnel design, the current 5-qubit IBM machine does not look as hopeless, or helpless, as it seems to be.

But actually things are a little better than that. D-Wave, a Canadian company that has been building quantum computers since 1999, has come out with a 128-qubit machine in 2010, a 512-qubit machine in 2012 and a 1000-qubit machine in 2015. Initially there were some doubts about whether these were quantum machines at all, but after these machines were actually installed and used, first by Lockheed Martin at the University of Southern California and later at the Quantum AI Lab of the NASA Ames Research Centre by a team from Google, these doubts have receded to a large extent. Even where some doubts persist, there is enough evidence of quantum behaviour, or at least great promise that these doubts will be removed soon. In early 2017, D-Wave announced the sale of their first commercially available $15 million 2000-qubit machine to the cyber-security firm Temporal Defence Systems.

IBM’s 5-qubit Quantum Experience is positioned as a general-purpose computer. It could be used for any computational task but would be efficient only if the program was designed to use quantum properties -- a colour TV is useful only if the broadcast is in colour. Very few programs can do this today, but Shor’s algorithm, which can be used to crack the encryption behind passwords, is definitely one such. D-Wave systems, on the other hand, are designed to solve one class of problems: minimising the weighted sum of a large number of interrelated, or entangled, variables. This may sound restrictive, but the reason why everyone from Google to Temporal is interested is that this class of problems is similar to the ones that occur in the artificial neural networks that lie at the heart of systems based on machine learning.

Spectacular progress in machine learning with artificial neural networks, using classical computers alone, is rapidly closing the gap between biological and non-biological intelligence, or even between carbon and silicon “life-forms”. With the advent of quantum computers, one more crucial barrier between the natural world and its man-made, artificial model could break down -- as could the increasingly thin line that delineates man from machine. Will this drag man down to the level of machines? Or will these machines push man up towards his eventual union, or Yoga, with the transcendent omniscience that some refer to as God or Brahman?


This article originally appeared in Swarajya -- The magazine that reads India right!

June 03, 2017

Order, Stability or Chaos?

Global, national and local societies face many threats. We are threatened by enemies -- internal and external -- who want to destroy our way of life. We are plagued by environmental degradation as we hurriedly try to ramp up the economy and improve our living standards. Finally, our own social systems are in tatters because efforts to mitigate the first two threats are stymied by venal corruption and a cynical disregard for the rule of law. In fact the last is perhaps the most overarching threat, because it leads to the other two.

image from 5rhythms
We have solutions to most of our problems. Technology solutions are available to grow more food, generate more energy, combat disease and check crime. There are public structures like hospitals, schools, municipal, state and central governments and the legislature, each having its own set of rules and procedures to guide and govern matters. There are commercial structures, like corporates, cooperatives and professional networks, that transform natural and human resources into disposable surplus that can be used for material pleasure. Then there are clubs, non-profits and political parties that lubricate the gears and facilitate the work of the public and private structures. Finally, we have a whole set of checks and balances, like the police and the courts of law, and institutions that recursively keep checks on the checks and balances, like the Vigilance Department, the CBI and the Lokpal, to ensure that everyone does what they should. So in principle, if everything were to work like clockwork, there should not be any unresolved problems on the planet.

But obviously this is absurd. Unlike the precise determinism of classical mechanics, the social mechanism that governs society is based on the non-deterministic behaviour of human beings. No two persons are alike and so no two will respond to a situation in an identical manner. One may be afraid to break the law even if there is a benefit, while another may be willing to do so. So there is an element of randomness that permeates society, and it is this randomness that is the key determinant of social outcomes.

Randomness leads the environment from order to disorder. Physics equates disorder with entropy, and the Second Law of Thermodynamics states that the entropy of a closed system can only increase over time. In fact, the direction of the “arrow of time” is often determined by the difference in entropy between two states of the system. Information theory also associates entropy with randomness. Uncertain, random events are associated with high information content and hence high entropy. Certain events, like the daily sunrise, that have a probability of 1, are associated with zero entropy, as are impossible events, like a horse giving birth to a dog, that have a probability of 0. But entropy is high when there is uncertainty and unpredictability, as in the outcome of a toss of a fair coin, the results of an election or a war.
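To make the coin-toss point concrete, here is a back-of-the-envelope calculation of Shannon entropy, H = -sum(p * log2(p)), sketched in a few lines of Python; the probabilities are purely illustrative.

from math import log2

def entropy(probabilities):
    # Shannon entropy in bits: H = -sum(p * log2(p)), skipping impossible outcomes
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([1.0]))        # a certain event, like the daily sunrise  -> 0.0 bits
print(entropy([0.5, 0.5]))   # a fair coin toss                         -> 1.0 bit
print(entropy([0.9, 0.1]))   # a predictable, lopsided contest          -> about 0.47 bits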

An increase in entropy, in randomness, in unpredictability, leads to chaos that can be analysed in terms of Chaos Theory. Chaos is the inevitable outcome of any adaptive, dynamic and complex system, which is exactly what human society is. Chaos is unpredictability in the face of apparent determinism -- and as Edward Lorenz puts it so elegantly, chaos is when the present determines the future, but the approximate present does not approximately determine the future. What this means is that a slight change in initial conditions -- a crow flapping its wings in Calcutta -- can cause a major upheaval far away -- a tornado in Texas. Mapped to human society, it means that social uncertainty caused by the erratic, unpredictable behaviour of even a small group of people can cause ripples and upheavals across the world.

Chaos theory allows for strange attractors, or periodic repetitions of somewhat predictable outcomes, which is why human society settles into equilibria that give us a sense of stability. But given its colossal complexity, even one incident, like 9/11, can tip it into a new, possibly more uncomfortable and anarchic equilibrium. Complexity is in fact impossible to manage in large organisations, which is why we have the eventual collapse of centrally governed empires -- the Kaurava, the Pharaonic, the Roman, the Mauryan, the Holy Roman, the Ottoman, the Mughal, the British, the Soviet and finally the European Union. We can only hope that India will not join this list. Well-governed human societies are based on the rule of law and order, and it is this order that is under threat from the Second Law of Thermodynamics and Chaos Theory. While we all crave order, the reason why we rarely attain it is that the laws of the universe inexorably push us towards disorder and anarchy.

But will entropy always increase? Not really. In a small closed system -- a school, a company, a factory, a state like Singapore, or perhaps a human colony on Mars -- it is possible to reduce the local entropy within the system and impose perfect order, but this needs one of two prerequisites. Either we need an external agency imposing order from outside -- a non-popular dictatorship -- or there has to exist a mechanism of self-organisation that resolves contradictions and guides the system towards greater order. A small school or factory is an example of the first, while well-governed US cities that are cleaner and more habitable than anarchic municipalities in India are an example of the second.

But even in a small society that is somehow isolated from the random anarchy of the global environment, the ability to self-regulate is not guaranteed. Self-regulation is actually an outcome of enlightened self-interest that seeks to create the proverbial win-win situation that benefits all at the cost of none. But this is not easy. To understand why, consider the Prisoner’s Dilemma, a classic illustration of a mathematical oddity called the Nash Equilibrium that is part of Game Theory.

Consider two persons who have been arrested for a murder, but the police do not have any clinching evidence with which they can ensure a conviction. So both prisoners are offered a plea bargain. If one turns approver and betrays the other, then the betrayer will be let off but the other will serve twenty years in jail. If both turn approver, then both serve ten years in jail. But if both cooperate and neither betrays the other, then the police will imprison them for a year on a lesser crime. Unfortunately, the prisoners neither know what the other will do nor trust each other. Ideally neither should betray the other, because this will ensure light punishment for both, which is the best solution. But in reality, given the uncertainty, neither will trust the other, both will betray each other and so ensure ten years of hardship for both. A classic lose-lose scenario.
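The arithmetic behind this lose-lose outcome is easy to verify; here is a minimal sketch using the jail terms quoted above (lower is better).

# Years in jail for "me", given my choice and the other prisoner's choice
years = {
    ("cooperate", "cooperate"): 1,    # neither betrays: minor charge for both
    ("cooperate", "betray"):    20,   # I stay silent, the other turns approver
    ("betray",    "cooperate"): 0,    # I turn approver, the other stays silent
    ("betray",    "betray"):    10,   # both turn approver
}

# Whatever the other prisoner does, betrayal leaves me better off ...
for other in ("cooperate", "betray"):
    best = min(("cooperate", "betray"), key=lambda me: years[(me, other)])
    print("if the other will", other + ", my best response is to", best)

# ... so two rational prisoners betray each other and serve ten years apiece,
# even though mutual cooperation would have cost just one year each.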

This scenario is reflected in many real-life situations: women wearing makeup to look more elegant, athletes using steroids to enhance performance, over-exploitation of resources like fish or minerals, countries spending money on arms and ammunition, countries refusing restrictions on environmental pollutants that hamper economic growth, advertisers spending money to push competing products, or bidders at an auction being afflicted with the winner’s curse. In India, aggressive drivers break traffic rules to squeeze past others and in the process create a massive traffic jam, whereas everyone could reach home earlier by waiting and obeying traffic rules.

If only people would cooperate with each other, the world would be a better place, but the inexorable logic of Game Theory says that this will never happen. If all political parties were to cooperate on matters of national interest, like implementing labour reforms or fighting Islamic terror, many of the social and economic problems that bedevil India could be quickly eliminated, but as in the case of the Prisoner’s Dilemma, each political party thinks that cooperating with the other means sealing one’s own electoral fate and facilitating a landslide victory for the other.

Human society is in a bind. The Second Law and Chaos Theory push us towards anarchy while Game Theory prevents us from self-organising. So we are forced to reconcile ourselves to a chaotic future. Given the inevitability of chaos in complex systems, our only hope for stability and order would be to have smaller, simpler systems that are easier to manage. Small states, municipalities, panchayats and even gated communities, where the number of players, or variables, is small and where complexity is manageable, have a far better chance of avoiding anarchy. Going forward, as complex social and security challenges -- both international and, now more often, intra-national -- overwhelm the world, a loosely-coupled federation of small, self-sustainable, technology enabled, well-managed, elitist communities or “smart-cities”, spread across the Earth and nearby planets, may be the only way towards a reasonably stable future.

The Prisoner’s Dilemma and the inability of people to collaborate for the common good may be a persistent roadblock on the path to global peace with prosperity.


This article first appeared in Swarajya - the magazine that reads India right

May 05, 2017

Biohackers & Biohackspaces

The ubiquity of computers and smartphones and the pervasive presence of digital technology mean that everybody who is reading this article is familiar with hackers. Hackers, as we all believe, are evil people who either create viruses that ruin our machines or access our computers to steal confidential information with the intention of causing harm. We also have ethical, or white-hat, hackers, the guards and policemen, who with the same level of skill try to beat the evil black-hat hackers at their game and keep digital assets secure. But the original meaning of hacker was someone who is so intensely immersed in computer technology that he knows much more than a normal, non-hacker user would ever know about what can be done with computers. The hacker was the uber-geek, in whose hands a computer could be stretched to perform tasks that it was never meant for and deliver unexpected results. The hacker was a genius, not necessarily the evil genius that he -- and it is generally a he -- is portrayed to be. He was someone who could, in a sense, disassemble and reassemble hardware and software in ways that no one else could even think of, to create new functionality. This same kind of behaviour, when seen in the world of biosciences, is called biohacking.
image borrowed from
 https://www.meetup.com/en-AU/BioHack-Melbourne/events/237997248/


Given the very wide range of possibilities within biosciences, biohacking means different things to different people, but there is one common thread. Just like his better known computer cousin, the biohacker generally works alone or in small groups and usually outside the regulated confines of a university or corporate laboratory. So his -- or her, biosciences is more gender diverse -- activities are usually unsupervised and unregulated, and more often than not border on the unsafe if not the almost illegal. But if we leave aside the legal and ethical issues, then biohacking falls into two broad and sometimes overlapping categories, namely grinding, body hacks or body modification on one hand and DIYbio or synthetic biology on the other.

Grinders are people who modify, or upgrade, their biological bodies with non-biological components. A very simple body-hack is to have a bio-safe RFID or magnetic chip -- similar to what we have in our credit cards -- implanted under the skin of the wrist. This chip contains digital information that can be used to “magically” open doors secured with access control devices or unlock smartphones or computers without using passwords. Such implants are not much different from pacemakers but obviously serve a different purpose.

Human beings have five sensory organs, but this number can be increased or their capabilities enhanced. People have embedded rice-grain sized neodymium magnets, coated with bio-safe materials like titanium nitride or teflon that are commonly used in orthopaedic equipment, inside their arms. In the presence of a magnetic field, say near an electric motor, these vibrate and alert the user to the existence of the field. Bottlenose, an off-the-shelf customisable product from Grindhouse Wetware, extends this basic capability to ultraviolet, WiFi, sonar or thermal signals, so that, for example, people can estimate distances in the dark using sound waves -- like dolphins or bats.

Pushing into even more dangerous territory, grinders have laced their eyes with a chlorophyll derivative found in the eyes of the deep sea dragonfish that lives in the mile-deep darkness of the ocean. This causes a dramatic improvement in night vision, allowing them to recognise people in near darkness. It is also possible to implant a magnet in the tragus -- the small protuberance in front of the ear, used to carry ear-piercing jewellery -- that allows one to listen for vibrations generated by a phone and works like an inexpensive ear-piece. An even more amazing example is that of the colour-blind musician Neil Harbisson, who persuaded an anonymous surgeon to perform an illegal operation to implant a camera on his skull and connect it to a vibrating chip placed near his inner ear. Now the colour-blind artist can distinguish colours -- say the red or green of a traffic signal -- by noting the frequency or pitch of the sound that he hears when his face, and hence the camera, turns towards coloured objects.

The most shocking body-hack, pun intended, is to improve the performance of the brain with a 2.5 mA, 15 V electric shock -- transcranial direct current stimulation (tDCS). Available off the shelf as the ThinkingCap, this device alters the electrical potential of stimulated neurons, making them fire differently and leading to better, or at least perceptibly different, abilities of the brain. Anecdotal evidence suggests that the US armed forces are experimenting with this technology to keep soldiers calm under stress and improve their marksmanship in simulated battlefield situations!

Obviously none of these body-hacks are approved by any medical regulator and experimenters proceed at their peril, but who knows -- out of such intrepid experiments may one day emerge a new kind of human being who is able to breathe underwater or live happily in the methane atmosphere of Titan.

Less dramatic but perhaps more profound is the kind of work that is done in synthetic biology or Do-It-Yourself biology. Most of the projects in this area are focussed on altering the genetic sequence of existing lifeforms to create modified organisms -- for example, microbes that generate copious quantities of the insulin needed by diabetics. “Editing” the genetic code is not easy -- it requires big laboratories, lots of equipment and highly trained staff. But thanks to an amazing new technology called CRISPR/Cas9 that was developed in 2012, gene editing has now become faster and inexpensive. Key CRISPR tools, the plasmids -- a genetic structure, typically a small circular DNA strand, that is widely used in the laboratory manipulation of genes -- can be ordered online from companies and non-profit repositories like AddGene, much like books from Flipkart, at prices as low as US$60.

While CRISPR promises garage-level DIYbio, in reality there is some basic level of equipment that is necessary to use these tools. This is where gene clubs and collectives have started to appear. From garages and kitchens, we now have biohacker spaces that offer shared access to fairly sophisticated equipment that members can use either for a monthly fee or on a pay-per-use basis, very similar to the way large computers are available on a shared basis from cloud hosting services like Amazon or Google. BioCurious, located in Sunnyvale, in the heart of Silicon Valley in California, is one such biohackspace that offers much of the same equipment found in professional labs. Similarly, the London Biohackspace is located within the London Hackspace that is wildly popular with computer hackers working with the latest in digital technology.

Just as software programmers work collaboratively on open-source software like Linux, volunteers are collaborating at BioCurious to create vegetarian cheese, without using any animals, by modifying the DNA of baker’s yeast. Similarly, a community-driven project at the London Hackspace is trying to create plants that glow in the dark when exposed to mechanical movement or in the presence of toxic chemicals. Community projects at both these labs are also directed towards building new kinds of equipment, like a bioprinter, a 3D printer that can be used to actually “print out” body parts like skin or even kidneys! Not all projects are community driven. Some individual hackers experiment with their own DNA, looking for genes that may cause diseases, or even to find out what percentage of their genes comes from Neanderthals!

These biohackspaces are driven by a chaotic combination of ideas and motivation -- that is very reminiscent of the original computer hackers who laid the foundations of the digital revolution that we see today. In fact Wired Magazine has quoted Bill Gates who says that if he were a teenager today, he would be hacking biology -- “Creating artificial life with DNA synthesis. That’s sort of the equivalent of machine-language programming .. If you want to change the world in some big way, that’s where you should start — biological molecules.”

But while biohacking might change the world, there are risks involved, and this risk gets magnified when we have unsupervised people playing with dangerous tools. Hence most of these biohackspaces have basic bio-safety protocols in place to prevent any kind of dangerous experiment and are under the informal surveillance of security agencies, including the FBI’s Biological Countermeasures Unit. Every technology has the potential to both help and hurt society, and the biosciences are no different in this regard from either computers or atomic energy.

India never quite had the hacker culture that created the computer revolution. Perhaps that is why Indian IT  was born, not in the crucible of innovation but in the peat bogs of modifying COBOL programs to address Y2K bugs -- and continues with the unfortunate legacy of being a maintenance service industry. Given that a new and far more potent revolution in the biosciences is breaking out all around us, it is important that we quickly create an ecosystem, these biohackspaces, so that our biohackers can lead, not just follow, the herd into the future.


This article originally appeared in Swarajya, the magazine that reads India right!

April 23, 2017

Beautiful and unusual gift from PMI West Bengal

Yesterday, I had the good fortune of being invited to speak at the PMI Regional Conference where, instead of the regular, and pointless, bouquet of flowers that is traditionally given to the keynote speaker, I was presented with the following certificate.


What this means is that PMI has paid Sankalptaru.org some money to plant 10 trees on my behalf, and "my tree" is visible at the URL indicated by the QR code.

Thank you PMI for this unusual gift

April 22, 2017

DB2 to Lotus : Accessing Mainframe Data from PC in the pre-Windows age


April 16, 2017

Spark with Python in Jupyter Notebook on Amazon EMR Cluster

In the previous post, we saw how to run a Spark-Python program in a Jupyter Notebook on a standalone EC2 instance on Amazon AWS, but the really interesting part is to run the same program on a genuine Spark cluster consisting of one master and multiple slave machines.

The process is explained pretty well in Tom Zeng's blog post and we follow the same strategy here.

1. Install AWS Command Line services by following these instructions.
2. Configure the AWS CLI with your AWS credentials using these instructions.

In particular, the following is necessary:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE 
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: ENTER

You will have to use your own AWS Access Key ID and AWS Secret Access Key, of course!

3. Execute the following command :

aws emr create-cluster --release-label emr-5.2.0 \
  --name 'Praxis - emr-5.2.0 sparklyr + jupyter cli example' \
  --applications Name=Hadoop Name=Spark Name=Tez Name=Ganglia Name=Presto \
  --ec2-attributes KeyName=pmapril2017,InstanceProfile=EMR_EC2_DefaultRole \
  --service-role EMR_DefaultRole \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=1,InstanceType=c3.4xlarge \
    InstanceGroupType=CORE,InstanceCount=2,InstanceType=c3.4xlarge \
  --region us-east-1 \
  --log-uri s3://yj01/emr-logs/ \
  --bootstrap-actions \
    Name='Install Jupyter notebook',Path="s3://aws-bigdata-blog/artifacts/aws-blog-emr-jupyter/install-jupyter-emr5.sh",Args=[--r,--julia,--toree,--torch,--ruby,--ds-packages,--ml-packages,--python-packages,'ggplot nilearn',--port,8880,--password,praxis,--jupyterhub,--jupyterhub-port,8001,--cached-install,--copy-samples]

Note that the options have been modified a little:
a) the number of machines is 1+2
b) the S3 bucket used is yj01 in s3://yj01/emr-logs/
c) the password is set to "praxis"
d) the directive to store notebooks on S3 has been removed as it was causing problems. The notebooks will now be stored in the home directory of the hadoop user on the master node

This command returns (or something similar):
{
    "ClusterId": "j-2LW0S8SAX5OC4"
}
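Optionally, the state transitions described in the next step can also be polled from Python using boto3, the AWS SDK. A small sketch, assuming boto3 is installed and picks up the same credentials configured with aws configure; substitute the ClusterId returned above.

import time
import boto3

emr = boto3.client("emr", region_name="us-east-1")
cluster_id = "j-2LW0S8SAX5OC4"    # use the ClusterId returned by create-cluster

while True:
    state = emr.describe_cluster(ClusterId=cluster_id)["Cluster"]["Status"]["State"]
    print(state)                  # STARTING -> BOOTSTRAPPING -> WAITING
    if state in ("WAITING", "TERMINATED", "TERMINATED_WITH_ERRORS"):
        break
    time.sleep(60)                # check once a minute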

4. Log in to the AWS console and go to the EMR section.

The cluster will show up as starting

and will then move into Bootstrapping mode

and after about 22 minutes will move into Waiting mode. If that happens earlier, there could have been an error in the bootstrap process. Otherwise you will see this:

5. Login to Jupyter hub
Note the URL of the Master Public DNS : ec2-54-82-207-124.compute-1.amazonaws.com
and point your browser to : http://ec2-54-82-207-124.compute-1.amazonaws.com:8001



Log in with user = hadoop and password = praxis (supplied in the command) and you will get the familiar Notebook interface.


There will be a samples directory containing sample programs covering a wide range of technologies and data science applications -- extremely useful to cut-and-paste from!

Create a work directory and upload the WordCount notebook and the hobbit.txt file used in the original Spark+Python blog post.

Notice the changes necessary for cluster operations


Cells 1-3 reflect the fact that we are now using a cluster, not a local machine.
Cells 4 and 12 show that the program is NOT accessing the local file storage on the master node but the HDFS file system on the cluster, as in the sketch below.
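For reference, a minimal sketch of what those cells amount to is given below; the exact contents of the uploaded notebook may differ, and depending on the kernel installed by the bootstrap script a SparkContext named sc may already be available, in which case the first three lines are not needed.

import findspark
findspark.init()                          # locate the Spark installation on the master node
from pyspark import SparkContext
sc = SparkContext(appName="WordCount")    # on EMR this picks up the YARN cluster configuration

# Note the hdfs:// paths -- the data lives on the cluster's HDFS, not the master's local disk
counts = (sc.textFile("hdfs:///user/hadoop/hobbit.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

counts.saveAsTextFile("hdfs:///user/hadoop/hobbit-out")
print(counts.take(10))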

To explore the HDFS file system, go back to this screen

and then press "View All" ... Click on the HDFS link and take your browser to
http://ec2-54-82-207-124.compute-1.amazonaws.com:50070
and see

and you can browse to the hadoop user's home HDFS directory, where the "hobbit.txt" file was stored and where the "hobbit-out" directory has been created by the Spark program. In fact, all HDFS operations can be carried out from the Notebook cells like this:

!hdfs dfs -put hobbit.txt /user/hadoop/
!hdfs dfs -get /user/hadoop/hobbit-out/part* .
!hdfs dfs -ls hobbit-out/
!hdfs dfs -rm hobbit-out/*
!hdfs dfs -rmr hobbit-out
!hdfs dfs -rm hobbit.txt

You can also see the various Hadoop resources -- including the two active nodes -- through this interface.
After JupyterHub is started, the notebooks can also be accessed by going directly to port 8880 and using the password praxis.

Finally it is time to
6. Terminate the cluster!


Go to the cluster console, choose the active cluster and press the terminate button. If termination protection is in place, you would need to turn it off.



Notes :
1. The same task can be done through the EMR console, without having to use the AWS CLI command line, because most of the parameters used in this command can be passed through the console GUI. For example, look at this page.
2. Because of the error with S3, we are storing our programs and data on the master node, where they get deleted when the cluster is terminated. Ideally these should be placed in an S3 bucket using the --s3fs option.
3. The default security group created by the create-cluster command does not allow SSH into port 22. However, if this is added, then standard SSH commands can be used to access the master node and transfer files to it.
4. Tom Zeng's post says that SSH tunnelling is required. However, I did not need to use this process, nor follow any of the complex FoxyProxy business, to get access. Not sure why. Simple access to ports 8001 and 8880 worked fine -- a mystery?

Spark with Python in Jupyter Notebook on a single Amazon EC2 instance

In an earlier post I explained how to run a Python+Spark program with Jupyter on a local machine, and in a subsequent post I will explain how the same can be done on an AWS EMR cluster of multiple machines.
In this post, I explain how this can be done on a single EC2 machine instance running Ubuntu on Amazon AWS.

The strategy described in this blog post is based on strategies described in posts written by Jose Marcial Portilla and Chris Albon. We assume that you have a basic familiarity with AWS services like EC2 machines, S3 data storage and the concept of keypairs, and that you have an account with Amazon AWS. You may use your Amazon eCommerce account, but you may also create one on the AWS login page. This tutorial is based on Ubuntu and assumes that you have a basic familiarity with the SSH command and other general Linux file operation commands.

1. Login to AWS

Go to the AWS console, log in with your userID and password, then go to the page with EC2 services. Unless you have used AWS before, you should have 0 Instances, 0 keypairs and 0 security groups.

2. Create (or Launch) an EC2 instance and use default options except for
a. Choose Ubuntu Server 16.04 LTS
b. Instance type t2.small
c. Configure a security group - unless you already have a security group, create a new one. Call it pyspju00. Make sure that it has at least these three rules (see the sketch after this list).
d. Review and Launch the instance. At this point you will be asked to use an existing keypair or create a new one. If you create a new one, you will have to download a .pem file to your local machine and use it for all subsequent operations.
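The three rules themselves are visible only in the screenshot; my working assumption is that they open SSH (port 22), HTTPS (port 443) and the Jupyter port used later in this post (8892). Purely as an illustration, and under that assumption, the same security group could also be created with boto3:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")     # region is an assumption

sg = ec2.create_security_group(GroupName="pyspju00",
                               Description="Jupyter plus Spark on a single EC2 instance")

# assumed rules: SSH for the terminal, HTTPS, and the Jupyter notebook port
for port in (22, 443, 8892):
    ec2.authorize_security_group_ingress(GroupId=sg["GroupId"],
                                         IpProtocol="tcp",
                                         FromPort=port,
                                         ToPort=port,
                                         CidrIp="0.0.0.0/0")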

Go back to the EC2 instance console and you should see your instance running :


Press the button marked Connect and you will get the instructions on how to connect to the instance using SSH.

3. Connect to your instance

Open a terminal on Ubuntu, move to the directory where the pem file is stored and connect with

ssh -i "xxxxxxx.pem" ubuntu@ec2-54-89-196-90.compute-1.amazonaws.com
you will have a different URL for your instance

From now on you will be issuing commands to the remote EC2 machine

4. Install Python / Anaconda software on remote machine

sudo apt-get update
sudo apt-get install default-jre

wget https://repo.continuum.io/archive/Anaconda3-4.3.1-Linux-x86_64.sh

get the exact URL of the Anaconda download by visiting the download site and copying the download URL

bash Anaconda3-4.3.1-Linux-x86_64.sh
Accept all the default options except on this one, say YES here
Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /home/ubuntu/.bashrc ? [yes|no]
[no] >>> yes

logout of the remote machine and login back again with
ssh -i "xxxxxxx.pem" ubuntu@ec2-54-89-196-90.compute-1.amazonaws.com

5. Install Jupyter Notebook on remote machine

a. Create certificates in directory called certs

mkdir certs
cd certs
sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout pmcert.pem -out pmcert.pem

This creates a certificate file pmcert.pem (not to be confused with the keypair .pem file downloaded to your local machine) and stores it on the remote machine.

b. Jupyter configuration file

go back to home directory and execute
jupyter notebook --generate-config

now move to the .jupyter directory and edit the config file

vi jupyter_notebook_config.py
if you are not familiar with the editor, either learn how to use it or use anything else that you may be familiar with

notice that everything is commented out and rather than un-commenting specific lines, just add the following lines at the top of the file
#--------------------------------------------------------------------------------
c = get_config()

# Notebook config this is where you saved your pem cert
c.NotebookApp.certfile = u'/home/ubuntu/certs/pmcert.pem' 
# Run on all IP addresses of your instance
c.NotebookApp.ip = '*'
# Don't open browser by default
c.NotebookApp.open_browser = False  
# Fix the port to 8892, the port used throughout this post
c.NotebookApp.port = 8892
#--------------------------------------------------------------------------------
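Optionally, instead of pasting the token on every login, a hashed password can be set in the same config file. A minimal sketch, assuming the classic notebook package that ships with Anaconda:

# run this once in a Python prompt on the remote machine to generate a password hash
from notebook.auth import passwd
print(passwd())     # prompts for a password and prints a hash like 'sha1:...'

# then add the printed hash to jupyter_notebook_config.py as
# c.NotebookApp.password = u'sha1:...'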

c. Start Jupyter without browser and on port 8892

move to new working directory
mkdir myWork
cd myWork
jupyter notebook

you will get >
Copy/paste this URL into your browser when you connect for the first time,    to login with a token:
        https://localhost:8892/?token=70b8623ec5ecf7d7d2f8b38b45112a92ec036ad3f5ed8a1d

but instead of going to local host, we will go to the EC2 machine URL in a separate browser window
https://ec2-54-89-196-90.compute-1.amazonaws.com:8892
This will throw warnings about security, but ignore them and keep going until you reach this screen


in the password area, enter the value of the token that you have got in the previous step and you will see your familiar notebook screen


6. Installation of Spark

Go back to the home directory and get the download URL of the latest version of Spark from this page.

wget http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0-bin-hadoop2.7.tgz
tar -xvf spark-2.1.0-bin-hadoop2.7.tgz 
mv spark-2.1.0-bin-hadoop2.7 spark210

edit the file .profile and add the following lines at the bottom
-----------------------------------------------------
export SPARK_HOME=/home/ubuntu/spark210
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH
export SPARK_LOCAL_IP=LOCALHOST
------------------------------------------------------
Make sure that you have the correct version of the py4j-n.nn.n-src.zip file by looking into the $SPARK_HOME/python/lib directory where it is stored.

logout from the remote machine and then login back again

7. Running Spark 2.1 with Python

[The following step may not be necessary if your versions of Spark and Python are compatible. Please see the April 13 update on this blog for an explanation of this]

cd myWork
conda create -n py35 python=3.5 anaconda
logout / login (SSH) back
cd myWork
source activate py35

Now run pyspark and note that it is working with Python 3.5.2, so we are all set to start Jupyter again.
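As a quick sanity check inside that pyspark shell (the shell creates the SparkContext sc for you), something like the following should run without errors; the exact numbers are only an illustration.

# inside the pyspark shell, sc already exists
print(sc.version)                           # should report 2.1.0
print(sc.parallelize(range(10)).sum())      # should print 45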

jupyter notebook
note the new token=12e55cacf8cdcad2f8c77f7959047034b698f4b8f67b679a that you get

The Jupyter Notebook should now be clearly visible again at
https://ec2-54-89-196-90.compute-1.amazonaws.com:8892

Now we upload the notebook containing the WordCount program and the hobbit.txt input file, from the previous blog post.


That we can execute
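For readers who do not have the earlier notebook handy, here is a minimal sketch of such a WordCount cell. It assumes that hobbit.txt has been uploaded into the myWork directory where the notebook was started; the takeOrdered display at the end is my own illustration rather than a transcription of the original program.

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# on a single EC2 instance the file sits on the machine's local disk, in myWork
text = sc.textFile("hobbit.txt")
counts = (text.flatMap(lambda line: line.split())
              .map(lambda w: (w, 1))
              .reduceByKey(lambda a, b: a + b))

# show the ten most frequent words
print(counts.takeOrdered(10, key=lambda kv: -kv[1]))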


This completes the exercise, but before you go, do remember to shut down the notebook, logout of the remote machine and, most importantly, terminate the instance.

8. Terminate the instance

Go to the EC2 Instance console and Terminate the instance. If  you do not do this, you will continue to be billed!



April 10, 2017

Raja Shashanka and the Calendar in Bengal

The origin of the Bengali calendar


On Saturday, 15th April 2017 Common Era, Bengalis in India, especially in West Bengal, Assam and Tripura, will celebrate the 1st Baisakh, or “Poila Boisakh”, 1424 Bengal Era (BE) -- the start of the Bengali New Year. Most of us are aware that the globally used Common Era, or Christian Era, starts with the birth of Jesus Christ in 1 CE, but what exactly is commemorated by the start of the Bengal Era? What happened in 1 BE?

There are two points of view.

Raja Shashanka is the first universally accepted ruler of a major part of the land mass that is associated with Bengal -- West Bengal & East Bengal / Bangladesh -- today. His capital was at Gaud (current Murshidabad) and he was a contemporary of Raja Harshavardhana of Kannauj (near Lucknow) in the West and of Raja Bhaskar Varman of Kamarupa (Assam) in the East. These three were the principal rulers of North India. While exact dates are not available, it is strongly believed that Raja Shashanka ruled in Bengal between 590 CE and 625 CE. If we assume that Raja Shashanka ascended the throne in 594 CE and that Bengal celebrates this event as the start of the Bengali Era, then in 2017 CE the Bengali Era year should be 1 + (2017 - 594) = 1424 BE, which is exactly what it is on "Poila Baisakh" 2017. Hence the Bengal Era begins with the ascendance of Raja Shashanka to the throne of Gauda-Bengal.

Long after Raja Shashanka and the Hindu rulers of Bengal were dead and gone, Bengal came under Islamic rule when Bakhtiar Khilji evicted Lakshman Sen in 1206 CE. Subsequently Bengal became a province under the Mughal (Mongol) empire, which followed the Islamic Hijri calendar, and because this calendar was based on lunar months it caused major administrative problems.

Agricultural revenue is tied to the harvest and it is most easily collected at the end of the harvest season when the farmer has money in his purse. Seasons in turn are tied to the position of the sun as defined by solar months that commence with the entry of the sun into the signs of the zodiac. So there is a one-to-one fixed connection between a solar month, the position of the sun and the seasons. For example, the spring or vernal equinox happens on 21 March of the Gregorian Calendar or on 1st Chaitra of the Saka Calendar, the official Government of India calendar, because both of these are solar calendars.

The Islamic Hijri calendar is based on lunar months where the start of each year varies widely across seasons -- in some years, the year starts in summer, in other years during the monsoon or in winter. So tax collection based on the Islamic year was a nightmare because the tax collector might arrive when the seeds had just been sown and the farmer would not have the money to pay his taxes. This would lead to endless arguments.

Akbar realised that the Hindu calendars, which were based on the solar months, were more useful for tax-collection purposes, because the year started on a fixed seasonal date. So he adopted the solar calendar, under which the year of his coronation, 1556 CE, was 1 + (1556 - 594) = 963 BE in the Bengali Era. Coincidentally -- and this was a huge coincidence -- 1556 CE was also 963 in the Islamic Calendar. So, in order not to lose face by having to replace the unstable Islamic lunar calendar with the stable Hindu solar calendar, he adopted the Bengali solar calendar in 1556 CE, the year of his coronation, but instead of defining it as Bengal Era year 1 BE, he declared it Bengal Year 963 BE, so as to maintain the illusion that he was continuing with the Islamic calendar. But going forward, the administrative year was aligned to the traditional Bengali solar year so that the seasons would begin on fixed dates.
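A rough back-of-the-envelope calculation shows why the numbers lined up: the Hijri era begins in 622 CE and the lunar year is about 354.37 days long, so by 1556 CE roughly (1556 - 622) x 365.24 / 354.37 ≈ 963 lunar years had elapsed, which is why the Islamic year then current was indeed about 963.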

So, the first, simple, explanation for the Bengali Era is that it starts with the ascendancy of Raja Shashanka to the throne of Gaud in 594 CE with the first year being defined as 1 BE. The alternate explanation is that it starts with the coronation of Akbar in Delhi in 1556 CE but with the first year being numbered 963 BE to maintain an artificial equivalence with 963 Islamic Era year that was prevailing at that time.

Now it is up to the reader to decide whether he or she wants to start Bengali Era with the coronation of Raja Shashanka at Gaud, in 594 CE or Akbar at Delhi in 1556 CE.

Actually Akbar would have got away with this sleight of hand of passing off the Bengali Era as an extension of the Islamic Era, but for the start date. Akbar had his coronation on 14th Feb 1556 CE, and if the Bengali Era were based on this event then the first day of the year would have been 14th February. But all Bengalis celebrate the new year, 1st Baisakh, in mid-April (14th/15th April), when the Sun enters the constellation of Aries or Mesha. This clearly shows that the Bengali Era is actually rooted in the Hindu tradition of solar years dating back to Raja Shashanka and antiquity.

But how do we know which point in the sky is the start of Aries? Where does the zodiac start?


The two zodiacs : Tropical and Sidereal


In Bengal, 1st Baisakh usually coincides with 15th April, when the Sun enters the constellation of Mesha (Aries). The Government of India-approved Indian National Calendar, which is based on the Saka Era, defines the start of the year as 21st March, or 1st Chaitra, when the Sun enters the constellation of Meen (Pisces). Now this leads to a strange inconsistency. If the Sun enters Mesha (Aries) on 15th April as per the Bengali calendar, then it must enter Meena (Pisces) on 15/16th March; but as per the Saka calendar, it enters Meena (Pisces) on 21st March. Why this gap?

To explain this anomaly, we need to know that there are TWO zodiacs, the tropical (sayana) zodiac and the sidereal (nirayana) zodiac, and the implications of this are explored in the rest of this post. [Warning: the rest of the post has a little mathematics, which you may like to read only if you are not scared of the devil in the detail.]

Consider a spherical coordinate system that is embedded in the Earth and rotates along with it every day. In this coordinate system, every heavenly body is defined by three numbers -- the azimuthal angle, which shows the position along the equatorial circle or on a longitude; the declination angle, which shows the position above or below the equatorial plane; and a distance from the centre of the Earth. In our assumption, all heavenly bodies are at the same uniform distance, fixed on "the sphere of the heavens", and so the distance from the centre is immaterial. The only real variables are the azimuthal and declination angles, and together they specify the position of every heavenly body.

There are two classes of heavenly bodies -- the "fixed" stars and the "wanderers" or "planets". The "fixed" stars do not change their position in our spherical coordinate system, but the "planets", which also include the Sun and the Moon, move around among the "fixed" stars as their azimuth and declination angles change with the passage of time.

For the purpose of the solar calendar, we will only consider the movement of the Sun as it travels around the Earth. Do note that there is nothing mathematically wrong in considering the Sun to be travelling round the Earth, as frames of reference can be changed without affecting the description of physical reality. As the Sun moves round the Earth, its azimuth angle, or longitude, changes from 0 through 359 and back to 0 in one year, and in the same time its declination angle changes from -23 to +23 degrees as the seasons change from winter through spring, summer and autumn and back to winter. The declination is 0 at the two equinoxes, when day and night are of equal length. So the Sun moves in a band around the Earth, and this band is divided into twelve sectors of 30 degrees each. Each of these twelve sectors is occupied by, or related to, one of the 12 constellations consisting of "fixed" stars arranged in certain imaginary patterns -- Aries, Taurus, Gemini and so on.
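As a small aside, and only as an approximation added here for clarity, the two angles are related roughly as declination ≈ 23.44 degrees x sin(solar longitude), which is why the declination swings between about -23 and +23 degrees as the longitude sweeps from 0 to 359.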

A circle has neither a beginning nor an end and so while the Sun takes a year to complete this circle, there is no unambiguous way to define where exactly the circle -- and hence, by extension,  the year -- starts. However, this starting point can be defined in two ways leading to the existence of  two zodiacs - the tropical and the sidereal.

In the tropical zodiac, the point on the circle when the Sun is at the vernal equinox, and its declination is 0, is defined as the starting point of the year and the azimuth angle is defined as 0. This means that the tropical year starts at the vernal equinox and this is traditionally associated with the entry of the Sun into the sign of tropical Aries -- that is Aries as shown in the tropical zodiac.

In the sidereal zodiac, the point on the circle that is diametrically opposite to the “fixed” star Spica -- also known as Chitra in India -- is considered to be the starting point of the year, where the azimuthal angle is defined as 0. This means that the sidereal year starts when the Sun is opposite Spica and this is traditionally associated with the entry of the Sun into the sign of the sidereal Aries -- that is Aries as shown in the sidereal zodiac ( we will refer to this as sidereal Mesha, to avoid confusion with the tropical Aries, even though Aries and Mesha refer to the same physical constellation)

So now we have two circles, or two zodiacs, with two starting points, and these two starting points are separated from each other by approximately 23 degrees! This gap is known as the ayanamsa, and it keeps increasing with each passing year.
Open Link in New Tab to see the full diagram

The sidereal year, the time taken by the Sun to start from the "fixed" star Spica and return to it, is 365.25636 days, or rotations of the Earth. The tropical year, the time taken by the Sun to move from its position at one vernal equinox to its position at the subsequent vernal equinox, is 365.242189 days, or about 20 mins 24 secs less. The difference arises because the axis of the Earth is not invariant and "wobbles" slowly. This means that the tropical year is shorter than the sidereal year by about 20 mins 24 secs.

At some point in the past, in 285 AD, the position of the Sun at the vernal equinox was directly opposite the "fixed" star Spica. This means that the entry point of the tropical Aries was coincident with the entry point of the sidereal Mesha. In that year, the tropical and sidereal zodiacs were identical. But since the tropical year was shorter than the sidereal year, the next tropical year started 20 min 24 sec earlier than the next sidereal year. With each passing year, the tropical year commenced an additional 20 min 24 sec earlier, until the cumulative gap between the respective starts of the tropical and sidereal years stands at almost 24 days today, in 2017.
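As a quick check on that figure: the annual difference is 365.25636 - 365.242189 ≈ 0.01417 days, which is indeed about 20 min 24 sec, and over the (2017 - 285) = 1732 years since the two zodiacs coincided, the accumulated gap works out to roughly 1732 x 0.01417 ≈ 24.5 days. Since the Sun covers roughly one degree of longitude per day, this is also consistent with an ayanamsa of about 24 degrees.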

But since all official solar calendars, including the Gregorian calendar used in the West and the Saka calendar officially used by the Government of India, are tied to the tropical calendar, the vernal equinox is fixed compulsorily on 21st March / 1st (tropical) Chaitra, when the Sun enters the tropical Aries. But the Bengali calendar, which starts on 1st (sidereal) Baisakh when the Sun enters the sidereal Mesha, begins on 15th April of the Gregorian calendar, about 24 days later. The existence of two zodiacs, the tropical and the sidereal, is the reason for the gap of 5 days that was the starting point for this discussion.

In 285 AD, when the tropical and sidereal zodiacs were coincident, the vernal equinox, the entry of Sun into tropical Aries and its entry into sidereal Mesha -- all three events -- would all have happened on 21st March  which would also have coincided with 1st (sidereal) Baisakh.

If we keep the date of the vernal equinox compulsorily fixed at 21st March, then with the passage of time the start of the sidereal year will occur at a later date every year. Conversely, if the start of the sidereal year is considered to be fixed by the arrival of the Sun opposite Spica, and its entry into sidereal Mesha, then the vernal equinox will occur 20 mins 24 secs "earlier" each year, when the Sun has not yet reached sidereal Mesha but is still in sidereal Meena. From this sidereal perspective, the vernal equinox, which signals the start of the tropical year with the entry of the Sun into tropical Aries, has now been pushed "back" from sidereal Mesha into sidereal Meena (or Pisces). Hence, as per western astrological practice, this is the Age of Pisces, and after some more time we will move even further backward into the Age of Aquarius.

In Hindu astrology, the analysis of the horoscope is based on the positions of the planets in the sidereal zodiac. However, all astronomical calculations that are used to generate the ephemeris -- the azimuthal or longitudinal positions of the planets -- are based on the tropical zodiac. Since the sidereal zodiac is about 23 degrees ahead of the tropical zodiac at the moment, all planetary longitudes need to be reduced by this amount -- known as the ayanamsa -- before being shown on the horoscope. Western astrologers, on the other hand, work with the tropical zodiac and do not need this correction.
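A minimal sketch of this correction in code, where the ayanamsa value of 24 degrees is only an illustrative assumption (the exact figure depends on the system used, e.g. Lahiri):

# convert a tropical longitude (in degrees) to a sidereal one by subtracting the ayanamsa
def tropical_to_sidereal(tropical_longitude, ayanamsa=24.0):
    return (tropical_longitude - ayanamsa) % 360

# example: a planet at 10 degrees of tropical Aries lands about 16 degrees into sidereal Meena (Pisces)
print(tropical_to_sidereal(10.0))     # prints 346.0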

Finally, the identification of 285 AD as the year when the vernal equinox coincided with Spica and the tropical and sidereal zodiacs were identical has been challenged. While this date and the ayanamsa of 23 degrees have been defined by N C Lahiri, other astrologers claim that according to the Surya Siddhanta, the definitive classical text on astronomy, the year of coincidence should be 499 AD and the ayanamsa should be reduced accordingly. This is a big debate with no clear resolution in sight.
