September 27, 2007

Beyond Virtual Worlds : Patterns of the Future

We have been looking at what is possible with Virtual Worlds, but for the next few minutes let us step out of the domain of what is possible now and explore what could become possible in the next couple of years. From engineering, let us step into the domain of imagineering -- which is what this blog is all about!

Why are we constrained to 2D displays? We are inherently 3D animals ... and the world that we are simulating virtually is supposed to be 3D. So why should we stick to traditional computer displays that render 2D images? There are display technologies available that create 3D hologram-style images ... that one can 'almost' walk around and see (though not quite touch as yet). Go to Google and search for 3D displays and you will see images like the ones above and below.

Just imagine what your SecondLife, or whatever 3D world you prefer to live in, will look like when you view it on a display screen like this. And interestingly enough, the cost of these display devices is not astronomical. They are obviously more expensive than standard flat-screen monitors ... but certainly very affordable.

Yet the technology is not quite rocket science. We have had 3D movies using polarised light for decades. Please see the diagram below. Modern technology has used the same principles of physics and made the devices both affordable and convenient to use.

But our next question is: do you need a display at all?

Recent advances in medical science, exploring ways to make the blind see again, have created what are known as Bionic eyes. If you look at the structure and mechanics of the human eye, you will notice that optical signals (or, more accurately, electromagnetic energy) are sensed on a photosensitive surface -- the human retina -- and converted to electrochemical signals at one end of the optic nerve. The other end of the optic nerve is connected to the brain, which senses the electrochemical signals transmitted down the nerve.

Then the cognitive process of the human brain interprets these electrochemical signals and causes the person to perceive a vision of what lies in front of the retina.

Can this not be replicated using known technology?

Of course it can. All you need is a camera that captures the image, converts it into a series of electrical impulses, and sends them down the optic nerve. Simple?

Not quite?

There is a huge amount of image and signal processing involved! Each kind of shape, colour and texture creates a different pattern of signals -- but what causes what? This is not quite known as yet ... and so when we do it for the first time, the brain cannot make sense of the signals it is being fed. But this is a matter of time. Currently we have systems that allow the brain to recognise the presence and absence of light, and vague fuzzy shapes. Even this is of great benefit to those who are completely blind. I am sure it is only a matter of time before the image processing software becomes sophisticated enough that the signals are parsed and formed in a manner the brain can make sense of, and hence recognise a range of shapes, sizes and colours.
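
To get a feel for the kind of image processing involved, here is a toy sketch: it collapses a camera image down to the handful of stimulation levels that a coarse electrode grid could deliver -- which is roughly why today's systems convey only light, dark and fuzzy shapes. The grid size and grey levels are invented for illustration, not taken from any real implant.

```python
# Toy sketch: reduce a camera image to the coarse grid of stimulation
# levels a present-day retinal implant might deliver. The 'image' is a
# plain list-of-lists of grey levels (0-255); the 4x4 electrode grid
# is a made-up parameter, not a real device specification.

def to_phosphene_grid(image, grid_rows=4, grid_cols=4):
    """Average blocks of pixels down to one stimulation level each."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid_rows, w // grid_cols
    grid = []
    for r in range(grid_rows):
        row = []
        for c in range(grid_cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) // len(block))  # mean grey level
        grid.append(row)
    return grid

# A bright 5x5 square on a dark 8x8 background survives only as a fuzzy blob:
img = [[255 if 1 <= x < 6 and 1 <= y < 6 else 0 for x in range(8)]
       for y in range(8)]
print(to_phosphene_grid(img, 4, 4))
# → [[63, 127, 127, 0], [127, 255, 255, 0], [127, 255, 255, 0], [0, 0, 0, 0]]
```

The sharp square is gone; only a bright patch with blurred edges remains -- exactly the "vague fuzzy shapes" that current systems produce.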

The problem is difficult but not intractable. No known laws of physics are being violated nor are astronomical amounts of energy required. It will happen ... and it will happen soon.

Which leads us to the first level of convergence ... that is between 3D display technology and bionic eyes.

When a 3D monitor displays an image, what is it that it actually does? The computer program generates a pattern of signals that is converted to a pattern of light (electromagnetic radiation) that travels across the distance between the screen and the user. This light is then converted back to electrochemical signals in the optic nerve, either (a) through the human retina or (b) through the camera of the bionic eye. So there are two conversions: electrical signals => optical signals => electrochemical signals. Question: do we need the intermediate optical signal at all? What value is it adding to the process? Can we do away with it totally? See the figure below ..
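
The argument can be sketched in a few lines of Python: if the screen's light is only a faithful carrier between the renderer's electrical output and the optic nerve's electrochemical input, then the optical hop is a round trip that can be cut out. All three signal formats here are made-up stand-ins, not real device interfaces.

```python
# Toy model of the two-conversion chain versus the direct feed.

def to_light(electrical):
    """Display: electrical signal -> pattern of light."""
    return [("photon", v) for v in electrical]

def to_nerve(optical):
    """Retina or bionic-eye camera: light -> nerve impulses."""
    return [("impulse", v) for (_, v) in optical]

def direct_feed(electrical):
    """Bionic-eye shortcut: electrical signal -> nerve impulses."""
    return [("impulse", v) for v in electrical]

rendered = [0.1, 0.7, 0.3]                  # output of the rendering engine
via_display = to_nerve(to_light(rendered))  # electrical => optical => electrochemical
via_bionic = direct_feed(rendered)          # electrical => electrochemical

print(via_display == via_bionic)            # → True
```

The nerve receives identical impulses either way -- which is exactly why the intermediate light signal adds no value.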

What we suggest is that the 3D display can be done away with entirely! But not the technology that 'renders' the scene in 3D. That is still required, to create the set of electrical patterns that represents the virtual world in all its exquisite detail ... it is just not converted to (or 'shown' as) a light signal. Instead, the electrical signals are fed into the processing unit of the bionic eye, which is led to believe that the signal has come from the camera of the bionic eye!

So it processes these signals (and this is no easy processing, mind you ... this is heavy duty stuff) and passes them on to the optic nerve ... which in turn is led to believe that the signals have originated from the living retina!! This is layers and layers of deception ... but all for a good and noble cause ...

And what is that cause? Total Immersion ...

Total immersion means that the human brain has lost the ability to distinguish between electrical signals that originate from a computer and optical signals that originate from the environment. Just as the Turing test says Artificial Intelligence has been attained when you cannot distinguish the responses of a human from those of a machine ... Total Immersion is attained when you cannot distinguish stimuli from machines from stimuli from the real environment.

The line between the real and the virtual is becoming increasingly blurred !!


But why should signals move in only one direction? Why not the reverse? Why can signals originating from the brain not be used to control the environment? It is thought control we are talking about!!! Remember the novel / movie Firefox ... not the browser, but the thought-controlled fighter aircraft that was developed by the USSR and stolen by the US? That was science fiction in 1982 ... but it can become fact in 2012 ...

Consider the following ..

This is again a piece of technology from the domain of medicine ... designed to allow paralysed people or quadriplegics to move ... by letting them control their wheelchairs with their thoughts. First thought-controlled wheelchairs, then we will have thought-controlled fighter aircraft!

Again the principles are astonishingly simple, though the implementation can be fiendishly difficult. When you want to move an arm or a finger, a signal is generated in the brain that travels down a specific nerve as an electrochemical impulse and causes a movement of the limb.

All that we are trying to do is sense the same signal and cause an electro-mechanical device to do the same thing as a limb would do ... for example, move a joystick! And if you can do that, you have a thought-controlled device.

But again there are implementation issues. The signal has to be picked up by a probe inserted into the brain -- which can be uncomfortable -- and then heavy duty signal processing software has to be used to distinguish irrelevant signals (or noise) from the actual signal. If this does not happen ... then the intention to move a finger can be misinterpreted as an intention to move a leg ... or perhaps not understood at all.
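
The noise problem can be sketched very simply: average the probe samples over a window (a crude low-pass filter) before deciding what the user intended, so that a single stray spike does not trigger the wrong movement. The signal levels and the threshold below are invented for illustration; real neural decoding is vastly more involved.

```python
# Sketch of separating intent from noise in a raw probe reading.
# A single sample can easily be misread; averaging a window of
# samples (a crude low-pass filter) recovers the intended command.
# Signal levels and the 0.5 threshold are made-up illustration values.

def classify(samples, threshold=0.5):
    """Decide 'move' vs 'rest' from a window of probe samples."""
    mean = sum(samples) / len(samples)
    return "move" if mean > threshold else "rest"

# Intended 'rest' (true level ~0.2) corrupted by noise spikes:
noisy_rest = [0.2, 0.9, 0.1, 0.3, 0.2, 0.1, 0.8, 0.2]
# Intended 'move' (true level ~0.8) corrupted by dropouts:
noisy_move = [0.8, 0.7, 0.2, 0.9, 0.8, 0.1, 0.9, 0.8]

print(classify(noisy_rest))   # → rest  (a lone 0.9 spike no longer fools it)
print(classify(noisy_move))   # → move
```

Acting on any single raw sample -- say the 0.9 spike in the 'rest' stream -- would trigger a movement the user never intended; the windowed average rejects it.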

Obviously more research is needed, but again the principles we are dealing with do not violate any known laws of physics, though they could be computationally intensive. So it is indeed a matter of time before we have ...

As technology moves forward, the intrusive, painful brain probes can be replaced with simpler and more comfortable cap-based sensors of the kind shown ( and demonstrated ) above.

So now we have four pieces of technology ... namely
  • Virtual Worlds like SecondLife
  • 3D Display technology -- both hardware and software -- that can create a near perfect illusion of solid objects
  • Bionic Eyes that allow the display to be replaced with technology that allows total immersion of the user inside the Virtual World
  • Thought sensors that can "read" thoughts and make things happen in the Virtual World
And mind you, all this with technology that is "almost" available today! At the risk of sounding repetitive, I need to point out once again that the technology to do all this does not violate the laws of physics or need huge amounts of energy. Nor does it require any deep and difficult-to-understand model of human cognition -- as is the case with Artificial Intelligence. All it needs is some powerful image and signal processing algorithms and some powerful hardware to crunch through all that data -- both of which lie well within the domain of feasibility.

So what do you get when you assemble all these technologies? Why, The Matrix of course!

September 15, 2007

Beneath your dignity or beyond your ability?

Unlike in other professions, the value of a person in the IT business does not necessarily go up with time. In medicine, law or engineering, a customer is willing to pay a premium for someone who has been in the profession for a long time, but a person who has been writing C code for 20 years or configuring SAP for 15 years is, in general, not more valuable than a junior colleague – at least not in the technical sense.

IT professionals handle this niggling discomfort by transiting into management roles – which in effect means shuffling CVs and juggling spreadsheets, not technology – and then claiming that it is beneath their dignity to do otherwise. One wonders if it is beneath their dignity or beyond their ability.

My transition from Tata-IBM to Pricewaterhouse was a case of trying to defy this diktat of circumstances. At Tata Steel I had played a key role in introducing RDBMS technology into what was then the country’s first and largest enterprise-wide integrated application, and had come to be regarded as an expert in this field. Consequently, in my next role as product manager of DB2 in Tata-IBM, I was responsible for selling this technology nationwide. However, in the seven years that I had been associated with this technology, I had come to realize that I was no longer the only DB2 guru in the country. RDBMS as a technology had become commoditized and there was nothing great that separated me from those to whom I was trying to sell the stuff.

At this point of time, on a visit to the US, I had the opportunity to have a look at the “world wide web” through a Mosaic browser – and I was hooked, because in it I saw the future.

Back in India, I realized to my dismay that my colleagues and my company still had neither any clue about, nor any interest in, this new toy – and I don’t blame them; this was 1995! This is when I decided to leave the safe harbour, the comfort zone of RDBMS, and take a leap of faith into Pricewaterhouse on the shaky promise of being allowed to work on internet technology.

I had traded my position of DB2-dada for that of networking novice! Very few people in India had even heard of DNS, HTTP or mail servers, and here I was trying to configure them with no help from anyone. But after all, this is not rocket science, and so within a few months of effort, computers connected with each other, mail was delivered and web pages became visible – with each such ‘event’ being a cause for celebration – and the internet revolution had arrived, even in India. Finally I had the luxury of moving out of the server room and into boardrooms to make corporate presentations on pompous topics like “Perils and Potentials of eBusiness”.

Some people are at home in the zone of comfort, happy with CVs and spreadsheets – that is a lifestyle choice, which one should not quarrel with! But being on the edge, the bleeding edge, of technology is a different high altogether and more often than not, the rewards associated with it far outweigh the risks that it brings along.