November 06, 2020

Ethics of AI

Unless you have been living under a rock for the past couple of years, you will know that things are happening in the area of Artificial Intelligence. Rapid developments in artificial neural networks have spawned a brood of useful architectures - CNN, RNN, GAN - that have been used to solve a range of very interesting problems. These include, among others:

  • control of autonomous or self-driving vehicles
  • identifying visual elements in a scene
  • recognising faces or connecting biometrics to individual identities
  • automatic translation from one language to another
  • generating text and visual content that is indistinguishable from that generated by human intellect.

While these applications have created considerable excitement in both the technical and the commercial community, there has been an undercurrent of resentment among people who point to ethical issues that are yet to be resolved.

To understand what is at stake, let us consider two specific issues from the area of autonomous vehicles.

First, who is liable in the case of an accident? In some countries, the liability lies with the owner of the vehicle, while in others it lies with the driver who was at the wheel when the accident occurred. But in the case of autonomous vehicles there is a point of view that says the liability should rest with the manufacturer. If the fault lay with the autonomous vehicle, and not with the other party to the accident, then it lies with the autonomous system - the hardware sensors and controlling software - supplied by the manufacturer. This is similar to a brake failure, except that the owner or driver has no way to check the equipment before starting out to drive.

Second, and this is more interesting, is the question of whose life is more important. Suppose a pedestrian steps into the path of a moving vehicle travelling so fast that applying the brakes will not stop the car before it hits the pedestrian. The only manoeuvre possible is for the car to swerve and hit a wall. Either way, injury or death will come to either the pedestrian or the driver. For the sake of this argument, we can simplify the situation by ignoring issues like estimating the expected quantum of injury in the two cases and the subsequent possibility of death or extent of disfigurement, and reduce it to a binary question - whose life is more valuable? The driver's or the pedestrian's?
[Image from berkeley.edu]


These may look like very profound questions, and they are very often portrayed as such, but frankly they are not.

In the first case, there is no need to split hairs over liability. Lawyers may love the possibility of litigation and accountants may salivate at the thought of extracting money from car manufacturers, but for the technologist this is a no-brainer. Most car accidents are caused by driver error, except of course when a pedestrian behaves unpredictably, and with the advent of autonomous vehicles the possibility of driver error virtually disappears. So if the vehicle software has been adequately tested - like vaccines! - before it is released into the 'wild', the number of accidents will in any case go down dramatically. The overall cost of accidents will fall, and individual claims will be paid out of the general corpus of funds created by collecting premiums from all vehicle owners, premiums calculated by the usual statistical (or actuarial) analysis. In fact, this is no different from a mechanical failure, which is in any case factored into the economics of insurance. Net-net, there is no issue at all. It is just another unfortunate accident that has to be factored into the premium calculation process, perhaps with an additional line item.
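To see how such an accident becomes just another line item, here is a minimal back-of-the-envelope sketch; the accident rates, payout and loading figures below are invented purely for illustration and are not actuarial data.

```python
# A minimal sketch of how an actuarial premium line item might be estimated.
# All numbers are illustrative assumptions, not real accident statistics.

def premium_component(accident_rate, avg_payout, loading_factor=1.2):
    """Expected annual claim cost per vehicle, marked up by a loading
    factor to cover administration and profit."""
    return accident_rate * avg_payout * loading_factor

# Hypothetical figures: autonomous vehicles assumed to crash far less often.
human_driven = premium_component(accident_rate=0.05, avg_payout=10_000)
autonomous = premium_component(accident_rate=0.005, avg_payout=10_000)

print(f"Human-driven premium component: {human_driven:.2f}")
print(f"Autonomous premium component:   {autonomous:.2f}")
```

Whatever the exact numbers turn out to be, the calculation itself is routine: the rare, tragic case is absorbed into the same expected-cost arithmetic that insurers already run.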

The second issue can also be dealt with quite easily. Who should die - the pedestrian or the driver? With a human driver, both outcomes are possible. Some drivers will slam on the brakes and hope that the car stops before hitting the pedestrian, while other drivers will swerve and hit the wall. There is no hard and fast logic, nor is there time for a thorough analysis, ethical or otherwise, of the various options. It is a gut-feel reaction that is best modelled as a random probability. So the simple way to break the tie is to toss a coin - or simulate the coin toss with a random number generator - and take a decision depending on whether the coin shows heads or tails.

If it is a fair coin, there is a 50% chance of either outcome and so the software can be programmed to take one decision or the other on the basis of this probability. This would reflect the regular, or underlying, reality of a human driver. So the behaviour of the autonomous vehicle would in no way be different from the behaviour of a vehicle driven by a human being. If we have learnt to live with human drivers we can continue to live with autonomous vehicles.

The 50% rule is a kind of default starting point. If it is observed that most drivers are altruistic and prefer to save the pedestrian at the cost of their own health, then the probability of hitting the wall can be raised from 50% to 60%. On the other hand, if it is observed that most drivers are selfish and prefer to save themselves at the pedestrian's expense, then the probability of hitting the wall can be lowered to 40%. These probability numbers mean that the coin being tossed is not a fair coin but a biased one, and the bias reflects the inherent bias of society at large.
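For the curious, here is a minimal sketch of what that coin toss, fair or biased, might look like in code. The function name and the probabilities are illustrative assumptions, not anyone's actual control software.

```python
import random

def choose_manoeuvre(p_hit_wall=0.5, rng=random.random):
    """Simulate the 'coin toss': decide which party bears the impact.

    p_hit_wall is the probability that the car swerves into the wall,
    protecting the pedestrian at the driver's expense. 0.5 is the fair
    coin; 0.6 or 0.4 would encode the altruistic or selfish bias
    observed in human drivers.
    """
    return "hit_wall" if rng() < p_hit_wall else "hit_pedestrian"

# Fair coin: roughly half the simulated incidents go each way.
outcomes = [choose_manoeuvre(0.5) for _ in range(10_000)]
print(outcomes.count("hit_wall") / len(outcomes))   # ~0.5

# Biased coin reflecting an 'altruistic' driving population.
outcomes = [choose_manoeuvre(0.6) for _ in range(10_000)]
print(outcomes.count("hit_wall") / len(outcomes))   # ~0.6
```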

This solves the problem of the autonomous car but opens up another Pandora's Box.

Should AI (or Deep Learning) systems have a bias at all? Or should they always be fair? This matters because Deep Learning systems are trained on the history of past behaviour of human systems. The training is done by collecting data on how decisions were taken in the past and using this data to set the model's parameters. In simple systems these parameters could be probability values; in neural networks they are the weights assigned to the connections between nodes. The exact technology is not important here. What is important is whether the training data carries a bias and whether that bias is carried through from the non-computer system to the computer system.

For example, it has been observed that in the US both parole applications and loan applications are more likely to be rejected if the applicant is Black, because of a historical bias against this particular demographic segment. When this data is used to train an AI / DL system, the bias is carried through and, once again, Black applicants will be discriminated against. [Of course, there is another point of view that says automated, machine-based decisions have less bias - see this (paywalled) link - but that is another story and another debate.]
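A tiny synthetic experiment makes the mechanism concrete. The data below is entirely made up - an invented 'group' attribute and an income feature - and the point is only that a model fitted to biased historical decisions reproduces that bias for otherwise identical applicants.

```python
# A minimal sketch of how historical bias leaks into a trained model.
# The data is synthetic; the 'group' and 'income' features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # 0 = majority, 1 = historically disadvantaged
income = rng.normal(50, 10, n)      # a legitimate predictor

# Historical approvals: same income threshold, but an extra penalty for group 1.
approved = (income - 8 * group + rng.normal(0, 5, n)) > 45

model = LogisticRegression(max_iter=1000)
model.fit(np.column_stack([group, income]), approved)

# The model reproduces the historical penalty: identical incomes, different odds.
test = np.array([[0, 50], [1, 50]])
print(model.predict_proba(test)[:, 1])  # group 1 gets a visibly lower approval probability
```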

This is patently unfair and should not be allowed, and hence there is a strong move to ensure that AI systems do not suffer from bias. There is no question about that ...

But does that mean that AI / DL systems should not be built until we have resolved the issue of bias? This is where the debate takes an ugly turn between the proponents of ethics in AI and those who would rather stick to the technology of AI. For the former, the question of ethics is paramount, and they would rather not have AI unless it is certified to be bias-free. For the latter, the matter of ethics is secondary. They would rather focus on creating innovative technology and leave the matter of ethics for another day.

Faced with this choice, my sympathies clearly lie with the latter, the technologists, and the reason is very simple. The world is not fair and can only be so in the dreams of the Utopian idealist. Since we do not have the luxury of living in an ideal utopia, expecting AI to be ethical and bias-free is an impossible dream. The world has learnt to live with bias and will continue to do so. If ethics were really as essential for the survival of the human race, we would have shut down the armaments business (and possibly a large part of the pharmaceutical and hospital business as well). But we have not done so, because of an irresistible, or inevitable, convergence of economic, political and social power.

Any country, or society, that shuts down its armaments business or disbands its armed forces will be overrun and taken over by another country that does not subscribe to this Gandhian policy of pointless non-violence. This was brutally demonstrated during the 1962 China War, when India's idealistic Principles of Panchsheel were shoved aside by the rampaging Chinese PLA. While a measure of ethics is certainly good, making it an absolute framework that is at odds with the ambient reality is neither possible nor desirable. So it is with AI. There are many people who feel that the so-called 'liberal' countries like the United States should not use technology like facial recognition at all because it is an unethical violation of privacy. Little do they realise that 'non-liberal' countries like China are already using it in a big way to enhance their own security, and if the imbalance continues, abstaining would be as stupid as shutting down the armaments industry.

Any technology - from nuclear through genetics and space to artificial intelligence - can be weaponised. That does not mean that development must stop. Let us go in with our eyes wide open, be aware of the dangers, but also be aware of what is happening elsewhere, and make sure that we do not vacate or step back from the leading, or bleeding, edge.

To sum up, let us understand that bias is inevitable in any human society. We should try to minimise it, but hoping to eliminate it entirely is futile. The same is true of non-human, silicon-based intelligence, or for that matter of any non-human sentience that may eventually arise from this technology.
