
Spotlight

New Delhi, 26 November 2023

Is Humanity under attack from AI?

By Rajiv Gupta 

If you are not inundated with articles on Artificial Intelligence (AI), you have probably been living under a rock. Several of these articles conjure images of a dystopian future in which machines take over mankind. If you are apprehensive about the future awaiting us, let me offer my analysis of what AI is and is not, and how it might shape our world in the foreseeable future. 

AI is not a recent phenomenon, although it has assumed greater currency of late through products such as ChatGPT. The term was coined in 1956 by John McCarthy, a computer scientist who defined it as “the science and engineering of making intelligent machines.” Since then, many scientists and engineers have worked in this area, and the field has seen periods of hectic activity as well as dry spells caused by a lack of research funding. 

The other term I wish to briefly introduce is the Turing test, named after Alan Turing, who postulated that if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence. The development of devices like Alexa and software such as ChatGPT is probably a result of researchers trying to meet the criteria of the Turing test. 

First, a little understanding of AI is in order. AI, in its current avatar, is nothing more than sophisticated pattern recognition. The programs look for patterns in consumer behaviour, in speech, in text, in photographs, in medical diagnostic scans, and so on. The software is “trained” to recognise patterns using a very large database of words, pictures, numbers, and scans. For example, if a customer regularly purchases certain products from a retail outlet, AI can detect this purchasing pattern and the customer can be sent customised mailings and advertisements that match his or her preferences. When it comes to facial recognition, a picture is divided into a grid of dots, or pixels. The software analyses the pixels to find patterns that describe facial features, and then compares the pattern it has been trained on with the pattern in a new photograph to decide whether the two are of the same individual. 
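To make this concrete, here is a toy sketch in Python. It is purely illustrative, not a real face-recognition system: each “photograph” is a small grid of pixel brightness values, and similarity is simply the fraction of pixels that roughly match.

```python
# Toy illustration of pixel-pattern comparison (not a real face-recognition
# system): each "photo" is a grid of pixel brightness values, and similarity
# is the fraction of pixels whose brightness roughly matches.

def similarity(photo_a, photo_b):
    """Fraction of corresponding pixels that differ by at most a tolerance."""
    tolerance = 10
    pixels_a = [p for row in photo_a for p in row]
    pixels_b = [p for row in photo_b for p in row]
    matches = sum(1 for a, b in zip(pixels_a, pixels_b) if abs(a - b) <= tolerance)
    return matches / len(pixels_a)

stored = [[120, 130], [140, 150]]      # pattern the software was "trained" on
candidate = [[118, 133], [139, 155]]   # new photograph to check
same_person = similarity(stored, candidate) > 0.9  # crude decision threshold
```

Real systems, of course, use far more sophisticated representations of facial features, but the underlying idea is the same: reduce the picture to numbers and compare patterns.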

The accuracy, or correctness, of the answers developed by AI depends on the data used to train it. Training is done by feeding a large amount of data into the software and then letting the software answer the question being asked. Through human feedback on the accuracy of each answer, the software gets “trained” and improves its ability to decipher the pattern on its next attempt. There is ample evidence of AI making mistakes due to gaps and shortcomings in the training data. Such mistakes have occurred in facial recognition in the US, where programs have incorrectly identified individuals as suspects in crimes they did not commit. There are also several examples of AI programs showing clear bias based on race, gender, age, and the like, when the database used to train the software was deficient or biased. 
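The feedback loop described above can be sketched as a toy program. This is a minimal perceptron, purely illustrative and not the algorithm behind any particular product: the software guesses, the feedback says whether the guess was right, and the internal weights are nudged so the next guess is better.

```python
# Minimal sketch of "training by feedback" (a toy perceptron): the program
# guesses, is told whether it was right, and nudges its internal weights
# so that its next guess is better.

def train(examples, rounds=20):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(rounds):
        for features, correct_label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if score > 0 else 0
            error = correct_label - guess          # the "human feedback"
            weights = [w + error * x for w, x in zip(weights, features)]
            bias += error
    return weights, bias

# Toy data: the label is 1 only when both features are present.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
```

Note that the trained weights encode only whatever pattern was in the examples: if the examples are skewed or incomplete, the learned rule will be too, which is exactly how biased training data produces biased software.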

Irrespective of the sophistication of the software, no AI system is 100% reliable. Some may say that neither are humans. But there are two major dangers in letting software make decisions. First, most AI software is a black box: it is not possible to question the logic used by the program, so there is no audit trail. Second, people place great faith in output from software and do not question it, assuming that computers cannot make mistakes the way humans do. But as I have mentioned, there have been several instances where AI has erred. It would be wrong to hand over an entire human task to a computer program, especially when we are not sure of its reliability. If the error results in the wrongful conviction of a person, the human consequences are enormous. 

What we can, and probably should, do is automate the repetitive component of a human task. This frees the human to add greater value by providing inputs that computers cannot. A good example is the autopilot in aircraft. The longest and most boring part of flying is cruising at altitude; it is take-off and landing that require the expertise of a human pilot. A plane can therefore be put on autopilot at cruising altitude, since constant attention from the pilot is not needed. But we do not eliminate the pilot. We let the autopilot relieve the stress of a long-haul flight, and the pilot can override it if the situation demands. 

Every technological development, whether the steam engine, the tractor, the automobile, or the computer, has reduced human labour. Each innovation has eliminated the drudgery of repetitive human tasks, physical or mental. In 1870, agricultural workers comprised half of all workers in the US; in 1900, about one-third; and in 1950, less than a fifth. Today agricultural workers are around one per cent of all workers. The reason for this decline is the increase in mechanisation and farm sizes over this period. 

The question that ought to be asked is not whether technology displaced people; it certainly did. Rather, we should ask whether people would want to do the work machines do today, at the wages consumers would be willing to pay for it. The answer, arguably, is no. Similar scenarios have played out in non-physical work such as calculating and accounting, where computers have effectively replaced humans, and rightly so. How many people today would enjoy adding numbers all day long? 

There are several human jobs that are ripe for automation. One of the most degrading and dangerous jobs in India is manual scavenging. Would an AI-powered solution not be a great way to eliminate the risk of death that manual scavengers face today? There are many other such jobs that should not be done by humans. The rule that Japanese companies such as Toyota use is this: if a task is dirty, difficult, or dangerous, it is a good candidate for automation. To this list we can add the boring and repetitive tasks that add no value. 

In conclusion, I feel that both the hype and the fear attributed to AI are overdone. If the test of AI is that it should mimic a human, we need to remember that humans can make mistakes and then find opportunity in them, as with the discovery of penicillin. A computer cannot do that, because it has to be told what to look for. Ultimately, humans decide what AI should be used for, not the other way around. Let us do this judiciously. ---INFA 

(Copyright, India News & Feature Alliance)
