
Hritik Panchasara

Professor Stout


RHET1302

1 November 2017

Are the misconceptions surrounding Artificial Intelligence hampering its own progress?

 

Do careers like financial analysis or telemarketing really require human labour? Could the figures of Greek mythology be simulated and brought to life using a form of superintelligence? Our technology is advancing on such a trajectory that it could shape our future for the better. The program JARVIS from the movie Iron Man is a highly advanced artificial intelligence that manages everything related to technology for the protagonist. The idea that something inorganic can be of such value hints at the future of our own technological progress. Artificial intelligence is a subfield of computer science in which computers perform tasks for humans that we would normally think of as intelligent or challenging. Envision a future where computers and machines carry out our daily tasks with ease and solve complex issues without any human input. The ambition to invent intelligent machines has fascinated humans since ancient times. Researchers are creating systems and programs that mimic human thought and attempt the things humans can do, but is this where they go wrong? Humans have always been good at defining problems but not at solving them. Machines are the polar opposite: their computational power helps them solve almost any problem, but not define one. These two strengths are interdependent, which is why we look forward to the invention of superintelligence. But issues like the urge to recreate ourselves in machines and negative typecasting raise the question of whether the misconceptions surrounding superintelligence are hampering its own progress. Scholars like Pei Wang focus on the dynamics of a working model and the inaccuracies in it, while scholars like Yoav Yigael question the emulation of human-like characteristics and abilities in machines. This paper will focus on the various incorrect approaches to harnessing this technology, the consequences that follow from them, and the solutions that deserve attention.

One of the main issues surrounding artificial intelligence is that global leaders hold an illusion of what it is supposed to be. They constantly try to emulate human beings in machines, even though that was never the goal of the technology. Take the wheel as an example: it was meant to augment the human capacity for transportation, and it successfully paved the way for countless other inventions. In the same way, artificial intelligence was meant to augment our cognition and help us function better, to solve the problems that we could only define. The most common trend is the creation of humanoids like Hanson Robotics' Sophia, which combines artificial intelligence with sophisticated analytical software to perform as a "question answering" machine in a humanoid body. Elsewhere, IBM's drive to replicate human nature has not only been unsuccessful but has also become a financial burden on the company. IBM simply tried too hard to push Watson into everything from recipes to health care, and its revenue has now declined for five consecutive years. This points to a misappropriation of resources: research is being funnelled into pointless products and avenues when the technology itself is far more versatile.

Artificial intelligence used to be about putting commands in a box. Human programmers would painstakingly handcraft knowledge items that would then be compiled into expert systems. These systems were brittle and could not be scaled. Since then, however, a paradigm shift has taken place in the field of artificial intelligence. This shift pioneered the idea of superintelligence, but somewhere along the way it has been grossly misunderstood. Today the action is really in machine learning: rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data, essentially the same way a human infant does. Is it really possible to take a system of millions upon millions of devices, read in their data streams, predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do. Of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan that a human being has. The cortex still has some algorithmic tricks that we do not yet know how to match in machines.
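To make the shift from handcrafted rules to learned behaviour concrete, consider a minimal Python sketch. The device-failure scenario, the sensor readings, and the thresholds are invented purely for illustration; this is not any real system, only a toy contrast between an expert-system rule and a parameter estimated from data.

    import statistics

    # Handcrafted "expert system": a human programmer guesses a fixed rule.
    def expert_system_predicts_failure(reading):
        return reading > 80.0  # threshold chosen by hand; brittle if conditions change

    # Learning approach: estimate the decision boundary from labelled examples,
    # here simply as the midpoint between the average healthy and failing readings.
    def learn_threshold(readings, failed):
        failing = [r for r, f in zip(readings, failed) if f]
        healthy = [r for r, f in zip(readings, failed) if not f]
        return (statistics.mean(failing) + statistics.mean(healthy)) / 2

    # Invented sensor data: 1 means the device later failed.
    readings = [62.0, 68.0, 71.0, 83.0, 88.0, 95.0]
    failed   = [0, 0, 0, 1, 1, 1]

    learned = learn_threshold(readings, failed)
    print("hand-coded rule says 78 fails:", expert_system_predicts_failure(78.0))
    print("learned threshold:", learned, "-> 78 fails:", 78.0 > learned)

The point is not the arithmetic but the workflow: the behaviour comes from data rather than from a hard-coded rule, which is the same pattern, scaled up enormously, behind systems that predict device failures from millions of data streams.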

The question people must ask now is: should we fear it? Every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When telephones came in, people worried they would destroy all civil conversation. When the written word became pervasive, people thought we would lose our ability to memorize. These fears were all true to a degree, but it is also the case that these technologies extended the human experience in profound ways. Stanley Kubrick's "2001: A Space Odyssey" is a perfect example of the anxieties associated with AI, especially because of the HAL 9000. HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. HAL was fictional, but he nonetheless speaks to our fear of being subjugated by some unfeeling artificial intelligence that is indifferent to our humanity. We need to build something very much like HAL, but without its homicidal tendencies. In many ways this is a hard engineering problem with elements of AI. To paraphrase Alan Turing, we are not interested in building a sentient machine. We are not building a HAL. All we are after is a simple brain, something that offers the illusion of intelligence.

To build a safe AI, there are three principles involved. The first is a principle of altruism: the robot's only objective is to maximize the realization of human objectives, of human values. By values I do not mean values that are distinctly intrinsic, extrinsic, or purely moral and emotional, but a complex mixture of all of the above, since we humans are not binary when it comes to our moral compasses; I simply mean whatever it is that the human would prefer their life to be like. This actually violates Asimov's law that the robot has to protect its own existence; it has no interest in preserving its existence whatsoever. The second principle is a law of humility: the AI has to maximize human values, but it does not know what they are. That avoids the single-minded pursuit of an objective, like HAL's, and this uncertainty turns out to be crucial. The third principle is that, in order to be useful to us, the machine has to have some idea of what we want, and it obtains that information primarily by observing human choices. So what happens if the machine is uncertain about its objective? It reasons in a different way: it considers that the human might switch it off, but only if it is doing something wrong. It does not really know what "wrong" is, but it knows that it does not want to do it; those are the first and second principles at work. So it should let the human switch it off. In fact, one can calculate the incentive the AI has to allow the human to switch it off, and it is directly tied to the degree of uncertainty about the underlying objective. When the machine is switched off, the third principle comes into play: it learns something about the objectives it should be pursuing, because it learns that what it did was not right. We are statistically better off with a machine designed in this way than without it. This is a simple example, but it depicts the first step in what humans are trying to accomplish with human-compatible A.I. The third principle draws apprehension from the scientific community because humans behave badly; there are all kinds of things you do not want an AI doing. But just because you behave badly does not mean the AI is going to copy your behaviour. It is going to understand your motivations and maybe help you resist them, if appropriate. The final goal is to allow machines to predict, for any person, the outcome of their actions and choices as accurately as possible.
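The claim that this incentive can be calculated, and that it grows with the machine's uncertainty, can be illustrated with a toy calculation. The following Python sketch is an assumption-laden illustration, not a real agent: the payoffs, the probability distributions, and the rule that the human switches the machine off exactly when the action would be harmful are all invented for the example.

    # Toy model: the machine proposes an action whose value U to the human is uncertain.
    # Acting immediately earns the expected value of U. Deferring means the human
    # lets it proceed when U >= 0 and switches it off (payoff 0) when U < 0.

    def expected_value(dist):
        # dist maps possible values of U to their probabilities
        return sum(u * p for u, p in dist.items())

    def value_of_acting_now(dist):
        return expected_value(dist)

    def value_of_deferring(dist):
        return sum(max(u, 0.0) * p for u, p in dist.items())

    # An uncertain machine: the action might help (+1) or harm (-1) the human.
    uncertain = {+1.0: 0.6, -1.0: 0.4}
    # A machine that is (perhaps wrongly) certain the action is good.
    certain = {+1.0: 1.0}

    for name, dist in (("uncertain", uncertain), ("certain", certain)):
        incentive = value_of_deferring(dist) - value_of_acting_now(dist)
        print(f"{name}: act now = {value_of_acting_now(dist):+.2f}, "
              f"defer = {value_of_deferring(dist):+.2f}, "
              f"incentive to allow switch-off = {incentive:+.2f}")

For the uncertain machine the incentive is positive (here +0.40), while for the fully certain machine it is exactly zero, which is the sense in which the incentive to be switched off is tied to uncertainty about the objective.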

Instead of painstakingly defining individual knowledge items, we could create an A.I. that uses its intelligence to learn what we value, with a motivation system constructed so that it is motivated to pursue our morals, or to perform actions it predicts we would approve of, using the three principles stated above. We would thus leverage its intelligence as much as possible to solve the problem of value-loading. It is possible to build such an artificial intelligence, and people should not fear its creation, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different from building a traditional software-intensive system of the past. We do not program them; we teach them. To teach a system how to play a game, we have it play the same game a thousand times, but in the process we also teach it how to discern a good game from a bad game. If we want to create an artificially intelligent legal assistant, we will teach it a particular corpus of law, but at the same time we are infusing it with the sense of mercy and justice that is part of that law. In scientific terms, this is what we call ground truth. In producing these machines, we are therefore teaching them a sense of our values. To that end, humanity should trust such an artificial intelligence at least as much as it trusts an equally well-trained human.
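As a small illustration of what "teaching rather than programming" can look like, the sketch below infers what a person values from the choices they are observed to make. Everything in it is hypothetical: the two features, the invented choice data, and the simple perceptron-style update are stand-ins for the far richer training described above.

    # Each option is described by two invented features: (time_saved, risk_to_humans).
    def score(weights, option):
        return sum(w * x for w, x in zip(weights, option))

    def learn_values(observed_choices, lr=0.1, epochs=50):
        """observed_choices: list of (chosen_option, rejected_option) pairs."""
        weights = [0.0, 0.0]
        for _ in range(epochs):
            for chosen, rejected in observed_choices:
                # Nudge the weights whenever the option the human actually chose
                # does not already score higher than the one they passed over.
                if score(weights, chosen) <= score(weights, rejected):
                    for i in range(len(weights)):
                        weights[i] += lr * (chosen[i] - rejected[i])
        return weights

    # The observed human likes saving time, but consistently rejects risk.
    choices = [
        ((5.0, 0.1), (3.0, 0.1)),   # equal risk: chose the bigger time saving
        ((2.0, 0.1), (6.0, 0.9)),   # large risk gap: chose slower but safer
        ((4.0, 0.2), (4.0, 0.8)),   # equal time: chose the lower risk
    ]
    weights = learn_values(choices)
    print("learned weights (time_saved, risk):", weights)
    # The risk weight comes out strongly negative: the system has inferred,
    # from behaviour alone, that this person puts safety ahead of saved time.

The design choice mirrors the paragraph: nothing about "mercy" or "safety" is written into the program; the values emerge as ground truth from the examples the human provides.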

In his book Superintelligence, the philosopher Nick Bostrom picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn, and eventually discover that they have goals contrary to human needs. He is supported by people such as Elon Musk and Stephen Hawking. But this argument seems erroneous to an extent. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery; so it would have to be with a superintelligence. It would have to have dominion over all of our world. The popular stereotype of Skynet from the movie "The Terminator", a superintelligence that eventually commanded mankind and every piece of technology on the planet, is a prime example. However, we are not building AIs that control the weather, that direct the tides, or that command us capricious and chaotic humans. Furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete with us for resources. And if the three principles stated above are used as guidelines in the formulation of this omnipotent AI, then not only do we not fear it, we cherish it, for it is built in our image, with our values and morals. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is that such a system requires substantial and subtle training, far beyond the resources of an individual or even a motivated, well-funded non-governmental organization. It is also far more than just releasing an internet virus into the world, where you push a button and all of a sudden it is in a million places and laptops start blowing up all over the place. Undertakings of this kind are much larger, and we will certainly see them coming. The best real-world parallel for pre-empting such developments is the nuclear programs of rogue nations: the manipulation and mobilization of resources and manpower on that scale cannot go unnoticed for long.

Artificial intelligence is heading in multiple directions, and there is no centralised effort to develop and advance this science towards a neutral goal. Moreover, humans anthropomorphize machines, and this leads them to believe that the flaws of the maker will be heightened in its creation. There are some esoteric issues that would need to be solved: the exact details of its decision theory, how to deal with logical uncertainty, and so forth. The technical problems that must be solved to make AI work are difficult. Making super-intelligent A.I. is a really hard challenge; making super-intelligent A.I. that is safe involves an additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety, it puts all of humanity on the back foot at the very cusp of the era of machine intelligence. We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction, because the rise of computing itself brings with it a number of human and societal issues to which we must now attend. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.


Works Cited

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. Accessed 9 December 2017.

Hammond, Kristian. Practical Artificial Intelligence for Dummies. John Wiley & Sons, 2015. Accessed 14 November 2017.

Wang, Pei. "Three Fundamental Misconceptions of Artificial Intelligence." Taylor and Francis Online, 13 August 2007, https://pdfs.semanticscholar.org/1772/a04f8e5db77d69c8dd083761c1469f93ac2d.pdf. Accessed 13 November 2017.

Yigael, Yoav. "Fundamental Issues in Artificial Intelligence." Taylor and Francis Online, 7 November 2011, https://www.researchgate.net/profile/Yoav_Yigael/publication/239793309_Fundamental_Issues_in_Artificial_Intelligence/links/5757cad208ae5c6549042e77/Fundamental-Issues-in-Artificial-Intelligence.pdf. Accessed 13 November 2017.

Yudkowsky, Eliezer. "Artificial Intelligence as a Positive and Negative Factor in Global Risk." Oxford University Press, 2008, https://intelligence.org/files/AIPosNegFactor.pdf. Accessed 14 November 2017.