Benefits and Risks of AI
By Ravi Chandra Sharma 2017-10-24
WHAT IS AI?
From SIRI to self-driving cars, artificial intelligence (AI) is advancing quickly. While science fiction often depicts AI as robots with human-like characteristics, AI can include anything from IBM’s Watson to autonomous weapons to Google’s search algorithms.
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g., driving a car, recognizing faces, or performing internet searches).
However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may surpass humans at its specific task, such as solving equations or playing chess, AGI would outperform humans at nearly every cognitive task.
WHY RESEARCH AI SAFETY?
In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as security and control, verification and validity. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes far more important that an AI system does what you want it to do if it controls your airplane, your car, your power grid, your automated trading system, or your pacemaker. Another short-term challenge is preventing an apocalyptic arms race in lethal autonomous weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes smarter than humans. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate disease, war, and poverty, so the creation of strong AI might be the biggest achievement in human history. Some experts, however, have expressed concern that strong AI could be dangerous unless we learn to align its goals with our own before it becomes superintelligent.
Some doubt that strong AI will ever be achieved, while others are confident that the creation of superintelligent AI is guaranteed to be beneficial. Scientists at FLI are working on both possibilities, and they also recognize the potential for an AI system to cause great harm, whether intentionally or unintentionally.
They believe that research today can help us anticipate and prevent such negative consequences, allowing us to enjoy the benefits of AI while avoiding its pitfalls.