January 29, 2022

Storing Horoscope Charts as High Dimensional Data Documents in MongoDB

 Introduction

Astrology is based on the analysis and detection of correlated patterns. Causation and correlation are two different ways of studying related phenomena. Switching on an oven is the obvious cause of a cake being baked, but the consumption of flour, sugar, eggs, butter and electricity in the kitchen is strongly correlated with the appearance of a cake. These six events are part of a pattern, and the presence of the first five will be correlated with, or lead to a prediction of, the presence of the sixth. Viewed from this perspective of pure correlation, astrology is no different from data science, where a pattern of data – in this case the positions of planets at birth – is used to estimate the likelihood of certain outcomes.

The astronomical observations that were made with the naked eye and recorded in a vast database by Tycho Brahe and his sister Sophia were the basis for the formulation of Kepler's laws of planetary motion, which in turn led to Newton's law of gravitation and the emergence of European science in the eighteenth century. Today, the same laws can be deduced very easily and quickly from NASA data [6].

Similarly, a database built on the principles described in this article would lead to a resurgence of Hindu astrology in the twenty-first century. That is what we attempt in this article.

[ The actual Python code is available in the Parashar21 repository on GitHub, and the two Colab notebooks, namely P21_45_10_MultiChart_Analysis and P21_45_09_Single_AshtakVarga_Gochar, demonstrate the theoretical constructs discussed in this article ]

Regular data science uses a set of digital data, preferably numbers, to predict a certain outcome. For example, data regarding age, income, gender, ownership of vehicles, bank accounts and other personal information can lead to a prediction of whether or not a person will repay a loan (or purchase a car). This is a straightforward data science or, more precisely, a machine learning problem. There is a lot of mathematics that can justify these predictions, even though there is no direct cause-and-effect relationship between any of these data points and the event that is being predicted. Past data is used to establish significant correlations between these events, and this forms the basis of successful predictions.

When it comes to astrological data, there is a major difference and a significant difficulty. The data used in machine learning, say the value of age or income, generally has the same significance across all samples used to identify the correlations. But in the case of astrology, the position of a graha, say Mars in Mesh rashi, has a different significance depending on (but not limited to) (a) the bhav number of that same Mesh rashi for this person, (b) whether the same bhav is aspected by another graha, (c) whether the same bhav is aspected by the lord of another bhav, (d) whether the graha Mars is conjunct with another graha or (e) with another lord, and so on. So the location of a graha in one rashi is not just a single point of data (like the value of age or income in data science) but just one component of a piece of data of higher dimensionality that needs to be factored in while establishing correlations. [ We assume that astrologers reading this article will understand the meaning of graha, rashi, bhav, lord, etc., while others may read these simply as additional dimensions of data ] In machine learning and data science this problem is usually addressed with dummy variables that are not present in the original data set. Our approach uses a similar strategy by employing dummy variables, as explained later.

Traditionally, this complex, multi-dimensional data is represented as a visual artefact, the horoscope chart, that astrologers use to spot patterns. Given the very large number of dimensions involved, this is not easy. Though the process is totally algorithmic, or rule based, it is quite laborious, and only an experienced astrologer can determine and identify the higher dimensional data with speed and accuracy. Those who cannot do so fail to spot all the patterns and end up with erroneous predictions. This is where we are naively tempted to use the processing power of modern computers, but as we know, the accuracy of the output of a computer program depends almost entirely on the quality of the data with which it works. A human astrologer works with visual data available in the form of a horoscope chart, but a computer cannot use the image of a chart to look for patterns. To be of any use, the visual chart of a horoscope must be converted into a set of numbers that can be stored and processed on a machine. The data that describes a horoscope can be defined at multiple levels of complexity. This article describes a mechanism for handling and processing this high-dimensional data using a combination of Python programs in Jupyter notebooks and the MongoDB database.

Data Structures

At the lowest level, L0, a horoscope is uniquely identified by just five pieces of information: the date and time of birth, the latitude and longitude of the place of birth, and the time zone offset of the local time from UTC (Coordinated Universal Time).
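For illustration, a minimal sketch of how such L0 data could be represented in Python is shown below; the field names and values are hypothetical and not necessarily those used in the Parashar21 code.

# Hypothetical level L0 record for one individual (field names are illustrative).
l0_data = {
    "date": "1990-05-17",    # date of birth
    "time": "14:35:00",      # local time of birth
    "latitude": 22.57,       # place of birth, degrees north
    "longitude": 88.36,      # place of birth, degrees east
    "tz_offset": 5.5         # offset of local time from UTC, in hours
}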

Based on this L0 data, we can use an ephemeris to calculate the next level, L1, of astronomical data, which consists of the longitudes of 10 grahas (5 true planets, the sun, the moon and three special virtual objects, namely lagna, rahu and ketu). 5 of the 10 grahas can have temporary retrograde motion, so in addition to their longitudes we need to record this fact. So level L1 data consists of fifteen variables, where 10 are numeric values of longitude and 5 are Boolean variables (that take a value of true if the graha is retrograde and false if it is not).
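A corresponding sketch of level L1 data is given below. The two-letter graha abbreviations Su, Mo, Me, Ju, Sa and La follow the convention visible in the queries later in this article; the remaining abbreviations and all numeric values are illustrative assumptions.

# Hypothetical level L1 record: 10 longitudes (degrees, 0-360) and 5 retrograde flags.
l1_data = {
    "longitude": {
        "Su": 33.2, "Mo": 127.8, "Ma": 245.1, "Me": 18.9, "Ju": 310.4,
        "Ve": 55.0, "Sa": 276.6, "Ra": 102.3, "Ke": 282.3, "La": 144.7
    },
    "retrograde": {
        "Ma": False, "Me": True, "Ju": False, "Ve": False, "Sa": True
    }
}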

Based on this purely astronomical L1 data, astrologers calculate their own astrological L2 data, based on the principles of the kaal-chakra (or zodiac), where the longitudes in level L1 data are replaced by corresponding rashi numbers (or names) that show the rashi in which each graha resides.
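Since each rashi spans exactly 30 degrees of the zodiac, the L1 to L2 conversion for a single graha reduces to a simple calculation; a minimal sketch (assuming the longitude is already the sidereal value produced by the ephemeris step) is shown below.

# Sketch: map a longitude in degrees to a rashi number from 1 to 12,
# where rashi 1 (Mesh) covers 0-30 degrees, rashi 2 (Vrishabh) covers 30-60, and so on.
def rashi_of(longitude_deg):
    return int(longitude_deg % 360 // 30) + 1

rashi_of(127.8)   # -> 5, i.e. a graha at 127.8 degrees falls in the 5th rashi (Simha)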

The conversion of level L0 data to level L1 and then level L2 is quite straightforward, and there are computer programs and websites that will do this calculation without any difficulty. Computerised horoscope charts usually present this level L2 data in a variety of styles, namely the Bengal style, the North Indian style (where in both cases the rashis are arranged counterclockwise) and the South Indian style (where the rashis are arranged clockwise).

Newspaper columns and websites that publish monthly and weekly predictions do so on the basis of just one single component of L2 data – the rashi position of either the Sun (in 'western' astrology) or the Moon (in 'Indian' astrology) – and hence are grossly inaccurate.

This is where we introduce the dummy variables in the form of level L3 data.

High Dimension Data

From level L2, a series of increasingly complicated sets of level L3 data points needs to be calculated for positions, aspects and conjuncts:

Basic:

L3.1 : 12 bhav numbers - Based on the position of the lagna, each rashi gets a bhav number

L3.2 : 12 bhav lords - Each bhav gets a graha identified as its lord based on its rashi

L3.3 : 9 locations of grahas in terms of the bhav where each graha resides

L3.4 : 12 locations of lords in terms of the bhav where each lord resides

Positions:

L3.5 : 9 Booleans that specify whether a graha is exalted

L3.6 : 9 Booleans that specify whether a graha is debilitated

L3.7 : 9 Booleans that specify whether a graha is at MoolTrikana

L3.8 : 9 Booleans that specify whether a graha is in Friendly rashi

L3.9 : 9 Booleans that specify whether a graha is in Hostile rashi

L3.10 : 9 Booleans that specify whether a graha is in Neutral rashi

L3.11 : 12 Booleans that specify whether a lord is exalted

L3.12 : 12 Booleans that specify whether a lord is debilitated

L3.13 : 12 Booleans that specify whether a lord is at MoolTrikana

L3.14 : 12 Booleans that specify whether a lord is in Friendly rashi

L3.15 : 12 Booleans that specify whether a lord is in Hostile rashi

L3.16 : 12 Booleans that specify whether a lord is in Neutral rashi

Aspects: 

L3.17 : 10 sets of graha Aspects where each set consists of grahas aspecting one graha

L3.18 : 10 sets of graha AspectedBY where each set consists of grahas that are aspected by one graha. Note that A aspecting B does not necessarily imply that B aspects A. There are rules that govern aspects.

L3.19 : 12 sets of bhav Aspects where each set consists of grahas aspecting a bhav

Conjunct:

L3.20 : 10 sets of graha-graha Conjuncts where each set consists of grahas that conjunct each graha

L3.21 : 12 sets of lord-graha Conjuncts where each set consists of grahas that conjunct each lord

L3.22 : 12 sets of lord-lord Conjuncts where each set consists of lords that conjunct each lord

Beginning with only 5 pieces of data in level L0, we move to 15 pieces of data in levels L1 and L2, and then 22 additional pieces of data in level L3. This transformation from L0 to L3 is done as per the rules given in Rao & Rao [1]. The critical issue here is that unlike L0, L1 and L2 data, L3 data structures are not elemental (that is, consisting of just a single number or Boolean) but are often collections, or sets, of associated data. So, in effect, L3 data has more than the 22 components that are listed here. This means that a horoscope is not merely a pattern drawn on a piece of two-dimensional paper but is an instance of an object of very high dimensionality.
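To make the shape of this data concrete, a fragment of what one such L3 document could look like is sketched below as a Python dictionary (which maps directly to a MongoDB/JSON document). The field names GrahaBhava, LordBhav, exaltG, GAspectedBy2 and GConjunctsG2 are those that appear in the queries later in this article; the exact structure and the values shown are illustrative assumptions, not the full Parashar21 schema.

# Fragment of a hypothetical L3 document; only a few of the 22 components are shown.
l3_doc = {
    "GrahaBhava":   {"Su": 10, "Mo": 1, "Ma": 7},         # L3.3: bhav in which each graha resides
    "LordBhav":     {"1": 1, "2": 3, "4": 5},             # L3.4: bhav in which each bhav lord resides
    "exaltG":       {"Su": False, "Ju": True},            # L3.5: whether each graha is exalted
    "GAspectedBy2": {"La": ["Sa", "Ma"], "Mo": ["Ju"]},   # aspect sets: grahas aspecting the lagna, the moon, ...
    "GConjunctsG2": {"Su": ["Me"], "Mo": []}              # conjunct sets: grahas conjunct with each graha
}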

Storing and processing this kind of high-dimensional object needs programs that are an order of magnitude more complicated than most generally available computer horoscope programs.

Python and MongoDB

Parashar21 is a collection of Python programs, developed on the Google Colab platform, that convert L0 data into L3 data. But the derived level L3 data needs to be stored in a manner that allows easy and intuitive access. Storing even level L2 data in a computer system poses significant problems.

Horoscope charts, which are a visual representation of level L2 data, can of course be stored as image files in png or jpg format. However, this conversion to an image format leads to an immediate loss of information, because an image file cannot be identified, and retrieved, on the basis of the position of a graha in a particular rashi. For example, if we want to retrieve horoscopes that have the moon in Meen rashi and the sun in Makar rashi, this cannot be done if the chart is stored as a simple image file. This is because the image file only stores spatial information about the proximity of the labels 'Mo' and 'Su' to certain squares and triangles on the diagram, without the semantic information that connects the labels 'Mo' and 'Su' to the grahas moon and sun, or the squares to the Meen or Makar rashis.


Level L2 data can of course be stored in a traditional flat file or a relational database like Oracle or MySQL without loss of information, but when we move to level L3, storage and retrieval become extremely complicated. This is because relational databases store only elemental values, not sets of data, in a field; sets have to be modelled with additional tables and joins. Requirements to retrieve horoscopes that have, for example, the moon in the first bhav and the lagna aspected by Saturn would be extremely cumbersome, if not impossible, to fulfil. But this is precisely the kind of pattern that we are looking for.

This is where we introduce MongoDB, a very popular and widely used document storage platform, as the database of choice. MongoDB stores information as documents in the JSON format, which allows sets (arrays) to be defined. MongoDB and the JSON format are very widely used in the modern software industry, and we will now demonstrate their utility in astrology through a representative case study.

Case study

The Astro-Databank Wiki [2] has a database of horoscope-related information, collected by Lois Rodden, that is available in the public domain. This database consists of an html page for every individual that contains level L0 information along with six additional data fields containing data about their vocation (or profession) and a quality tag that indicates the estimated level of accuracy of the date and time of birth. From these html pages we extracted 39,663 pieces of (the most accurate) AA rated horoscope data, along with three of the six vocation parameters, and converted them into a CSV text file.

The Parashar21 Python programs were used to convert this level L0 data into level L3 information, which was stored in a MongoDB database as 39,662 JSON documents. [ Data for 1 individual could not be processed ]
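As a rough illustration of how such a document can be stored, a minimal pymongo sketch is given below; the connection string, database and collection names are placeholders and not the actual Parashar21 configuration.

# Minimal sketch: storing one L3 document in MongoDB using pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://user:password@host:27017/")   # placeholder connection string
charts = client["parashar21_db"]["charts"]                    # placeholder database and collection names

# A tiny stand-in for a full L3 document; note the array-valued field.
charts.insert_one({"GrahaBhava": {"Mo": 1}, "GAspectedBy2": {"La": ["Sa"]}})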

On this database, two kinds of retrieval tests were performed:

1. Random horoscopes were selected, based on arbitrary parameters and computed values of the 22-odd level L3 variables, and printed for manual comparison against the corresponding visual charts. No errors were detected in the samples that were chosen for review. Since no errors were detected, we assume that the computation is correct, even though, as in the case of all software, absence of evidence (of errors) is not evidence of absence (of errors).

2. The query facilities of MongoDB were used to locate horoscopes that meet certain requirements. Samples from the retrieved horoscopes were printed, and it was found that the queries were indeed giving correct results, as shown below:

We first give the English version of the query followed by the same query using the MongoDB query language:

------------------------------------------------------------------------------

Retrieve charts that have - 

Lagna aspected by Saturn

{"GAspectedBy2.La": {"$in": ["Sa"]}}

9991 Charts selected from 39667. Random 5 charts printed.

------------------------------------------------------------------------------

Retrieve charts that have - 

Lagna aspected by Saturn AND

Exalted Jupiter

{"$and": [{"exaltG.Ju": {"$eq": true}}, {"GAspectedBy2.La": {"$in": ["Sa"]}}]}

874 Charts selected from 39667. Random 5 charts printed.

------------------------------------------------------------------------------

Retrieve charts that have - 

Lagna aspected by Saturn AND

Exalted Jupiter AND

Sun conjunct with Mercury

{"$and": [{"exaltG.Ju": {"$eq": true}}, {"GAspectedBy2.La": {"$in": ["Sa"]}}, {"GConjunctsG2.Su": {"$in": ["Me"]}}]}

440 Charts selected from 39667. Random 5 charts printed.

------------------------------------------------------------------------------

Retrieve charts that have - 

Lagna aspected by Saturn AND

Exalted Jupiter AND

Sun conjunct with Mercury AND

Moon in Bhav 1

{"$and": [{"exaltG.Ju": {"$eq": true}}, {"GAspectedBy2.La": {"$in": ["Sa"]}}, {"GConjunctsG2.Su": {"$in": ["Me"]}}, {"GrahaBhava.Mo": {"$eq": 1}}]}

41 Charts selected from 39667. Random 5 charts printed.

------------------------------------------------------------------------------

Retrieve charts that have - 

Lagna aspected by Saturn AND

Exalted Jupiter AND

Sun conjunct with Mercury AND

Moon in bhav 1 AND

4th lord in bhav 5

{"$and": [{"exaltG.Ju": {"$eq": true}}, {"GAspectedBy2.La": {"$in": ["Sa"]}}, {"GConjunctsG2.Su": {"$in": ["Me"]}}, {"GrahaBhava.Mo": {"$eq": 1}}, {"LordBhav.4": {"$eq": 5}}]}

2 Charts selected from 39667. Random 2 charts printed.

------------------------------------------------------------------------------
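For readers who want to reproduce such retrievals, a minimal pymongo sketch of running the last, five-condition query is shown below; the connection string and collection name are placeholders.

# Sketch: executing the five-condition query with pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://user:password@host:27017/")   # placeholder connection string
charts = client["parashar21_db"]["charts"]                    # placeholder database and collection names

query = {"$and": [
    {"exaltG.Ju": {"$eq": True}},
    {"GAspectedBy2.La": {"$in": ["Sa"]}},
    {"GConjunctsG2.Su": {"$in": ["Me"]}},
    {"GrahaBhava.Mo": {"$eq": 1}},
    {"LordBhav.4": {"$eq": 5}}
]}

print(charts.count_documents(query))    # number of charts meeting all five conditions
for chart in charts.find(query).limit(5):
    print(chart)                        # print (up to) five matching documents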

The North Indian and South Indian style charts generated by the program for the two horoscopes retrieved from the MongoDB database by the last query, with 5 conditions, are shown here. The full reports generated by all 5 queries in three styles (Bengal, North Indian, South Indian) are available in the GitHub repository.

Discussion


The total number of horoscopes stored in the database was 39,667. In the five queries that were tested, the number of conditions was increased from 1 to 5. Based on these queries, the following numbers of horoscopes were retrieved. However, to save space, only 5 charts – chosen at random – were printed where the number retrieved was more than 5.

Number of Conditions | Number of Charts Retrieved | Number of Charts Printed
1 | 9991 | 5
2 | 874 | 5
3 | 440 | 5
4 | 41 | 5
5 | 2 | 2


The 2 charts retrieved by the last query were examined visually and checked manually, and it was noted that they do indeed meet all 5 conditions. Hence, we conclude that the scheme and the associated programs meet the current requirements and expectations of the project.

Data and Tools Used


The primary data was sourced from the Astro-Databank Wiki Project [2]. This has been collected, cleaned and reformatted into level L0 data and is stored as a CSV file in a publicly available Google Drive. The Python programs are available at the GitHub repository under the open-source GNU GPL 3 license. The sample reports referred to in this paper are also available in the same repository. The MongoDB database instance is hosted at Clever Cloud [3], but any other service provider can be used with its own user id and password.

Scope of Future Work

Now that we have a way to store and retrieve complex chart information in a format that allows for logical retrieval, there is scope for two kinds of future work.

First, after level L3, we can calculate, codify and store higher levels of data. For example, in level L4 we could calculate and store information about kendras, trikons, panapharas and apoklimas, and in level L5 we could store information on different types of yogs, like dhanyog and rajyog. Extensions to the current Python code and an increase in the number of fields in the MongoDB database would easily cater to these requirements of higher-level data, as the sketch below indicates.
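As an indication of how little extra code such an extension might need, here is a hedged sketch of one possible L4 calculation that flags grahas occupying kendra bhavs. The kendra bhavs (1, 4, 7, 10) follow the standard definition, the input format follows the GrahaBhava field used earlier, and the function name is hypothetical.

# Sketch of a possible L4 computation: which grahas occupy kendra bhavs (1, 4, 7, 10)?
KENDRAS = {1, 4, 7, 10}

def grahas_in_kendra(graha_bhava):
    # graha_bhava maps each graha to the bhav it resides in, as in the GrahaBhava field
    return {graha: bhav in KENDRAS for graha, bhav in graha_bhava.items()}

grahas_in_kendra({"Su": 10, "Mo": 1, "Ma": 7, "Ju": 5})
# -> {"Su": True, "Mo": True, "Ma": True, "Ju": False}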

But more important would be to build a database of 'tagged' horoscopes. The current Astro-Databank database has six descriptive fields with information about vocation and vocation details for up to three vocations. We would need to generalise this by tagging horoscopes with additional information like education, wealth, fame, health, etc. Creating such a database would be the first step towards discovering and validating patterns of data that are correlated with life outcomes.

As noted in the introduction, the naked-eye astronomical observations recorded in a vast database by Tycho Brahe and his sister Sophia were the basis for the formulation of Kepler's laws of planetary motion, which in turn led to Newton's law of gravitation and the emergence of European science in the eighteenth century. Today, the same laws can be deduced very easily and quickly from NASA data [6].

Similarly, a database built on the principles described in this article would lead to a resurgence of Hindu astrology in the twenty-first century. That is the goal of Parashar21.

Supplementary Information 


In the GitHub repository there are three main programs (or Colab notebooks). The Cast_Load program reads basic L0 data from a CSV file, uses the pyswisseph [4] Python package and generates the L3 JSON objects that are stored in a MongoDB database as well as in a JSON file. The Pull_Print program retrieves selective data from the MongoDB database and generates reports. For those who do not have access to a MongoDB instance, an alternative Pull_Print_StandAlone program is provided. This installs a local, non-persistent MongoDB instance within the Google Colab VM, reads in the JSON data, loads it into the local MongoDB database and demonstrates the execution of the queries described in this paper. Running the Pull_Print_StandAlone program requires a modern browser and a knowledge of executing Python programs in the Jupyter Notebook format in a Google Colab VM environment. [5] The user should navigate to the notebook file, open it and then press the <open with Colab> button that is visible.

References: 


[1] Rao, K.N. and Rao, Ashu, "Learn Hindu Astrology Easily"
[6] Springsteen, P., Keith, J., "Rediscovering Kepler's laws using Newton's gravitation law and NASA data", American Physical Society, Joint Spring 2010 Meeting of the Texas Sections of the APS, AAPT, and SPS, March 18-20, 2010, abstract id. J3.005, March 2010

DOI: 10.13140/RG.2.2.19476.58240



January 01, 2022

Penta Tantra

Five Drivers of the emerging civilisation

It is always fascinating to speculate on the contours of the future, but perhaps it is easier to do so in terms of what might actually shape it. Rather than trying to define where we want to go or determine how to get there, let us instead try to determine who, or what, our drivers are and let them lead us into whatever it is that awaits us in the years, and the road, ahead. So sit back and take a ride through a thoughtscape delineated by nuclear energy, space travel, biohacks, the metaverse and artificial intelligence. A ride that ends with an unusual Indic twist in the tail, or tale.

Energy

The first driver is energy which is what gives dynamism to an otherwise static universe. It is obviously a prime driver for the emergence of civilisation, which is why the discovery – and control – of fire was such an epochal event in the evolution of human society. Without access to substantial sources of energy, human society can only regress to a primitive state.

The Kardashev scale is a method of measuring a civilization's level of technological advancement based on the amount of energy it is able to use. The measure was proposed by Soviet astronomer Nikolai Kardashev in 1964. The Kardashev scale has three designated categories: a Type I civilization, also called a planetary civilization, is one that can control and use all of the energy available on its planet. A Type II civilization, also called a stellar civilization, can use and control energy at the scale of its planetary system, and a Type III civilization, also called a galactic civilization, can control energy at the scale of its entire host galaxy. Human society is obviously stuck at the low end of the Type I category with its current dependence on the fortuitous discovery of carbon-based fossil fuels.


Since fossil fuels will get exhausted or, if used recklessly, will cause unpleasant environmental side effects, there is a great demand for clean energy from sources that are renewable. Solar and wind energy have been touted as possible solutions, but both have serious drawbacks. First, they are too widely dispersed and need lots of land area to be collected in usable quantities. This land acquisition, along with the management and maintenance of dispersed collection devices, is a severe managerial and technical challenge. What is worse is the irregularity and uncertainty in generation, and this calls for significant investment in storage devices. We cannot have a city shut down because of cloud cover. Hence wind is a non-starter, but solar can surely play an important role as an auxiliary, or secondary, source of energy, and of course in remote locations. But the primary burden of keeping civilization alive must lie with a new generation of small, modular nuclear power plants.

Yes, nuclear energy is the key. The general public, who do not understand the nuances of technology, are led by motivated, professional agitators to believe that nuclear energy is no different from the atom bombs at Nagasaki and Hiroshima. Those who are a little more clued in would like the rest of the world to believe that Three Mile Island, Chernobyl and Fukushima represent the horrific face of nuclear devastation. This is patently false. The number of people who have died in these three locations is minuscule compared to the number of deaths caused in industrial accidents all across the world. In Chernobyl, the number of immediate and direct deaths was about 60, but estimates put the total death toll over a 20 year period at between 4,000 and 16,000. In Fukushima, there were no direct deaths from the accident itself, one later death attributed to radiation exposure and 573 stress-induced deaths among elderly people. The huge difference in the fatality of these two accidents represents, first, the progress of technology and second, the efficiency of Japanese professional management vis-a-vis the ignorance and irresponsibility of the communist regime in the USSR. Net-net, nuclear power is nowhere near as dangerous as its techno-phobic opponents would like us to believe.

In fact, the small modular reactors (SMRs) that are under development at NuScale (US) and Rolls-Royce (UK) are generations ahead in safety because they are designed on the premise of being able to survive an indefinite period of electrical shutdown without overheating. Being small, they have fewer moving parts or points of failure, and being modular they can be manufactured in factories and then transported to the final location for installation and commissioning. This reduces cost and makes the technology affordable and easy to deploy.

Another new development in this area is the molten salt reactor, or the accidentally anagrammatical MSR. MSRs are even more secure because the fuel is used in a liquid state, and any accidental power shutdown causes the fuel to drain out, solidify and become inactive. What is even more interesting is that MSRs were originally designed to operate with thorium, an element that is more abundant than the traditional uranium. Moreover, India has the world's largest deposits of thorium; it is found in abundance on the beaches of peninsular India. But China is already ahead in the game with its TMSR-LF1 reactor, a thorium-based MSR with its fuel possibly sourced from mining operations in Sri Lanka, which shares the same oceanfront as India.

In fact, despite thorium being identified by Dr Homi Bhabha as the key to India's self-sufficiency in nuclear energy, political opposition fuelled by hostile foreign powers has hampered India's ability to capitalise on this technology while China has surged ahead. It is time for the political establishment in India to rein in anti-national opposition and give our engineers a free hand to build on this immense potential.

Space Colonisation

After energy, the second driver is space. To even dream of moving up to a Kardashev Type II civilisation, we first need to leave Earth and set up colonies not just on the Moon, but also on Mars, on the moons of Jupiter and Saturn and on the mineral-rich asteroids that lie between Mars and Jupiter. But beyond the lure of the minerals that lie in abundance in solar orbit, the drive to space has another interesting imperative. It epitomises the spirit of endeavour and enterprise, to go where no one has gone before, a civilisational goal that any progressive society must strive for so that it does not end up being stuck in, or even sliding back into, a moribund and desultory past. One can of course go in other directions, for example into the human mind, where no one has gone before either, but we will explore that later. For now, let us focus on space habitats as the second driver, because only when we leave Earth can we think of harnessing the energy and other resources of the solar system.

Space travel and the colonisation of other planets – moons of other planets are also referred to as planets in space jargon – call for innovation and investment in a spectrum of new and creative technologies. From propulsion systems to human habitats and everything in between, there are scientific and technological challenges that will stress each and every body of knowledge that mankind has built up and has access to. While technology is a very big challenge, an even bigger challenge is the investment needed to develop it and – perhaps this is the biggest challenge of all – to build the social and political consensus to fund these massively expensive missions. There will always be oddballs who see more advantage in providing drinking water in slums than in going to Mars. One can of course say that the technology developed for space has applications that can improve the quality of life on Earth, but this is trivial and facetious. At best, these reasons can only be used up to a point, but then we, as a society, must face up to the fact that expansion into space is the manifest destiny of the human race. Humans are not cockroaches whose only claim to fame is to have survived millions of years without becoming extinct. We, as a race, are destined for greater things and one of them is to explore and expand into realms beyond our own. The European colonisation of America and Africa is an example of such an expansion even though it caused the decimation of certain cultures. That is Darwin in action, and one must be careful that in going to space, we do not repeat the mistakes of the past.

Coming to more prosaic matters, space travel should be led by autonomous robots. This will dramatically reduce the cost and complexity necessary to support human life on crewed missions and initial settlements. From this perspective, Gaganyaan is an expensive luxury. We would have been better off landing an autonomous mining robot on the Moon or on Mars. We already have an enormous variety of industrial robots and autonomous vehicles that we can send to other planets, possibly Mars and then Titan, and use them to build the basic infrastructure for human habitats. Similarly, mining robots can be sent to land on the asteroids, like the metal-rich Psyche, and excavate minerals that can be used both for construction as well as to meet the energy requirements of distant worlds. These robots should be the vanguard, the tip of the arrow, with which we break the frontiers of space. Humans can follow in the second wave, once the initial teething problems have been overcome.

One of the major problems with colonising space is the habitability of the worlds that we desire to colonise. Our physical bodies have evolved to survive in comfort on Earth but are of little use on the utterly cold, airless and perchlorate-filled surface of Mars, or in the methane and nitrogen atmosphere of Titan. To survive under such different conditions, the initial response is to create Earth-like oases, bubble-cities and underground caves where we can regulate the environment and make it mimic Earth. Subsequently, with the advent of more energy and materials, we can look at terraformation -- converting alien planets to look like Earth by gradually changing the atmosphere to include more oxygen.

BioHacks

Many of these terraforming techniques call for the creation of new plants and insects with a different kind of metabolism. This is where a whole new world of creative biology, or the third driver, biohacking, comes in. Thanks to exciting new technologies like CRISPR, it should now be possible to stitch together the genetic templates that can impart new capabilities to biological, or carbon-based, life forms. Modifications are possible not just at the genetic level but also at higher levels, to allow organisms, for example, to see in the dark, sense electromagnetic radiation, generate energy through anaerobic means and have the crucial ability to repair damage and heal themselves. In fact, the quantum of change induced in carbon-based life forms can also be augmented by implants made of inorganic, synthetic materials, as in the relatively simple case of prosthetic limbs and even artificial internal organs.

Going forward, we see the immense potential of new carbon-silicon hybrids, often referred to in sci-fi literature as cyborgs. While popular representations of cyborgs depict them as clunky, malevolent, zombie-like creatures, the reality could be very different. In fact, our current, flawed perception of a cyborg could be similar to the way a cow or a dog perceives a teenager cruising on a motorcycle while being connected to the internet through a cell phone and bluetooth-enabled Google Glass spectacles. We ourselves need to evolve significantly to understand the potential of hybrid carbon-silicon life forms.

Biohacking is a generic name for a variety of processes that can cause significant changes in both the genotype and the phenotype and lead to the evolution of new and hybrid forms of life. A key component of this evolution would be the arrival of self-healing mechanisms. Medical and surgical procedures will evolve to the point where -- and of course, we are being very optimistic here -- disease is history and death is only by desire. However, some of these processes could be fraught with danger and initiate debates about the ethics of such technology. Unfortunately, debates do not stop the arrival of new ideas but only delay them. So eventually, and the sooner the better, biohacking will become mainstream and pave the way for new classes of life forms that blend the best of man and machine. Paving the way for this integration would be the seamless flow of information between carbon processing units, or organic brains, and silicon processing units, or digital computers.

Metaverse

This brain-machine interface, which has its genesis in the technology that allows thought-controlled devices, like wheelchairs and now 'video' games, is a natural stepping stone for the fourth driver - the virtual reality of the metaverse. The metaverse was a concept that emerged from a 1992 science fiction novel, Snow Crash, and was given shape in Massively Multiplayer Online Role-Playing Games (MMORPGs) like World of Warcraft, Final Fantasy and Call of Duty. But the real contours of the metaverse were first evident, not in games, but on platforms like Second Life that allowed users not just to play with, but actually create, their own existence and experiences in a virtual world of avatars -- 3D, animated representations of themselves. The immersive 3D experience that companies like Facebook, Microsoft and NVidia are encouraging with virtual reality devices like Oculus and Hololens is where humans will increasingly migrate to -- for social and commercial purposes.

Users or inhabitants of the metaverse have the ability to free themselves from the physical constraints of the 'real' world and create their own fantasy world. In such a world, avatars can take different shapes, such as an octopus, and fly around or teleport themselves either alone or in the company of the avatars of their social friends. This break from reality, or rather the immersive experience of an alternate reality, is today somewhat constrained by being tethered to a keyboard-and-screen interface or a cumbersome virtual reality headset. But as brain-machine interfaces become simpler to use, this constraint will loosen and eventually vanish. Then the level of interactivity and the ease with which one interacts with other objects in the metaverse, including the avatars of other users, will actually blur the borders between the real and the virtual worlds.

In fact, these virtual worlds will evolve into different planes of existence that human minds can enter and pass through in pursuit of creative objectives that are impossible in the real world. Interactions will expand beyond sight and sound to include touch, taste and smell and even the seamless exchange of thoughts. This will be possible since sensory signals will be delivered to the brain even in the absence of the biological sensory organs, as in bionic eyes. As avatars, one could experience sports, concerts and other events where the players and actors are represented by their avatars. Movies and plays can give way to interactive 3D experiences where the audience, the observer, can influence the script. This may sound way too futuristic, but it is not so. Thought-controlled devices are a reality today and it is just a matter of time before they get integrated with the metaverse. This will create multiple layers and types of existence, or perception, that are accessible to human minds through avatars of different types and capabilities.

Driving, or rather facilitating, these four drivers - energy management, space colonisation, biohacks and the metaverse - is the fifth driver: artificial intelligence. The One Ring to Rule Them All.

Artificial Intelligence

Artificial intelligence comes in many shapes and sizes, but the consensus at the moment is that artificial neural networks (ANNs) with millions and billions (and soon trillions) of parameters display the astonishing ability to perform tasks normally associated with intelligent human beings. This includes diagnosing diseases, recognising faces, recognising and decoding the human voice, driving cars on public roads, running across difficult terrain, generating original text and images that are indistinguishable from those created by humans and even carrying out reasonably coherent and meaningful conversations. All these capabilities come together when ANN-powered systems interact with flesh-and-blood humans through their avatars in a metaverse-like environment of MMORPG games. These games create situations that lead to conflict and competition between man and machine. In such adversarial scenarios, the machines generally outwit and outmaneuver humans, which can give rise to fears of a takeover of the planet by hostile machines. This is staple science fiction of the dystopian kind!

But if we can anticipate such behaviour and preempt it by putting in safeguards, like Asimov's Three Laws of Robotics, then the potential to leverage this technology is nearly infinite. Linked to the other four drivers, AI is a force multiplier that will increase the power, the potency and the potential of everything that we can envisage or execute. This will mean that, in principle, the robots that colonise other planets would be smarter, our power systems would be safer and more reliable, our biohacking would yield more useful results and the metaverse would become more magical and, paradoxically, more realistic. But the real benefit of AI could accrue when multiple AI systems, with different and diverse capabilities, come together -- in a centralised, cloud-computing-like scenario -- to create an immense engine of cognition and consciousness that can reach out and touch each and every sentient artifact across the metaverse.

But can machines become conscious? Many learned people are of the opinion that despite all advances in AI, consciousness is something that is possible only in biological systems through their contact with a divine transcendence that lies beyond the logical approach that forms the basis of AI. This is an endless debate that we will dodge for the time being. Instead, we will move ahead on the assumption that if an AI displays behaviour that is indistinguishable from the behaviour of conscious sentients then it does not matter whether it is indeed conscious or not. In fact, consciousness is an emergent phenomenon that becomes apparent at the confluence of multiple cognitive and behavioural traits. Incidentally, much of our very successful AI is currently as inexplicable as the intuitive behaviour of a mystic. From this perspective, our fifth and final driver, AI, is the true foundation of the civilization that awaits us in the future.

So we have five key technologies that will define the contours of the future: Nuclear Energy, Space Exploration, BioHacks, the Metaverse and Artificial Intelligence. This is of course a very high level view, and the devil in the details will emerge when we break this down into smaller, more manageable tasks. There will be many challenges, mostly technological but some very social, that will emerge around questions of ethics, morality, privacy and the need to strike a balance between private enterprise and public good. As a society we need to acknowledge and address these challenges in a manner that advances the goals of our civilisation as a whole.

Leela

But there is one perspective that could be very interesting for anyone who has an interest in the Hindu view of the world. Here we are looking at a five-column spectrum of technologies - the Pancha Tantra. At one end we have AI -- the embodiment of pure knowledge, cognition and consciousness -- as a manifestation of Shiv. At the other end we have the Shakti of nuclear energy, which animates and gives life to the potential that lies dormant in pure knowledge. Between this Shiv and his Shakti, we have the Maya, or the illusion, of the three worlds -- the physical world of space, the world of life and metabolism, and the metaverse that transcends the other two. Could this be the vision that is revealed to adepts who sit on the PanchMundi Asana - the seat of five skulls - and meditate on the Leela, the divine play, of Shiv-Shakti?

-----------

If these ideas seem like Science Fiction, you may consider checking out my sci-fi novels Chronotantra and Chronoyantra :-)