Opinion | The fading boundary between humans and machines

To understand the future, it often helps to study the past. Sometimes, even the distant past. In fact, scientists draw parallels between the inflection point at which humanity finds itself today and a short window of time around 540 million years ago, called the “Cambrian explosion”, which was a defining moment in the evolution of life on earth.

For billions of years prior to the Cambrian era, living forms were largely simple, single-celled organisms. Then, within a period of just 15-20 million years, complex organisms began to develop, and there was an explosion in the number of species. Most of the animal phyla we know today appeared during this period. One possible explanation for this explosion in the diversity of life is the evolution of vision. The ability to see meant that rather than floating around in the ocean hoping to run into prey or mates, organisms could now actively seek them out.

Fast forward to today. Image recognition technology has developed to a point where, in the near future, robots may be able to see, understand and interpret complex images and scenes in much the same way as humans do. Once this occurs, scientists expect an explosion in the diversity of robots. Intelligent, mobile machines are likely to find their way into all aspects of our daily lives. We will be at a Cambrian-esque defining moment in the evolution of machines on earth.

The evolution of machines itself began around 5,000 years ago with the invention of the potter’s wheel. From then until now, humans and machines have co-existed at the physical level. With the advent of intelligent machines, we are beginning to co-exist at the cognitive level. The boundary between humans and machines is blurring.

One significant difference between the evolution of life and the evolution of machines is that the latter is a human endeavour. As such, it is our job to shape Artificial Intelligence (AI) so that it has a positive impact on humanity. Opinion is divided on how well or poorly we are doing this. AI, if guided correctly, can augment human capability and help solve many of the big problems of our generation.

On the other hand, Elon Musk recently referred to AI as “humanity’s existential threat”. The question is no longer what role technology will play in human lives, but rather what role humans will play in an AI-driven world. AI is certainly poised to replace a number of human jobs. And AI is known to amplify the social and gender biases of its creators, which may lead to greater inequality in the world.

What does this mean for the future of learning? Universities must not only prepare students to thrive in a world with no thick red lines between humans and machines, but must also ensure that machine intelligence develops to serve the needs of humanity. This can only be done with an interdisciplinary approach that analyses AI from the perspective not just of technology, but of a variety of other disciplines, including psychology, business, law, the social sciences and the humanities.

A number of efforts in this direction are being made in universities around the world. Stanford University recently launched its Institute for Human-Centered Artificial Intelligence, with the intention of putting humans at the centre of AI. The institute is guided by three fundamental beliefs. First, AI should be inspired by human intelligence. Second, the development of AI must be guided by its human impact. Third, AI should enhance and augment humans, not replace them. The institute was inaugurated at a symposium earlier this month with a keynote address delivered by Bill Gates. A bevy of tech titans in Silicon Valley make up its advisory council.

Closer home, Krea University in India was recently established to pioneer a new paradigm of interwoven learning, which brings together technology, humanities and ethics.

All undergraduate students at Krea, for example, are required to take courses in data science, because the language of data is emerging as a humanistic language. Students are also encouraged to pursue avenues of creative expression and aesthetic appreciation, which are essential human qualities. Leaders of the future must design human-centred AI that can understand human feelings, ambitions and behaviours.

Krea has partnered with the Massachusetts Institute of Technology’s Dalai Lama Center for Ethics and Transformative Values to interweave ethics into all aspects of life at the university. The partnership will explore contemporary ethical questions posed by AI.

For example, should a driverless car, in a potential crash scenario, run into a tree and put its occupant at risk? Or should it run into pedestrians instead, putting them at risk but keeping the occupant safe? While human drivers are forced to make spur-of-the-moment decisions behind the wheel, AI places on us the burden of programming in advance a decision framework that would come into effect in a crisis. Such a decision framework must be essentially human in nature, and the stakes of getting it right are high.
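To make the point concrete, here is a deliberately simplified sketch in Python of what “programming a decision framework in advance” might look like. The names, numbers and weighting rule are hypothetical illustrations, not any real manufacturer’s system; the single parameter that trades bystander risk against occupant risk is not an engineering detail but an ethical judgment that humans must settle long before any crisis occurs.

```python
# A hypothetical, deliberately simplified sketch of a pre-programmed
# crash-decision rule. Names, numbers and the weighting policy are
# illustrative assumptions, not any real vehicle's framework.

from dataclasses import dataclass
from typing import List


@dataclass
class CrashOption:
    label: str             # e.g. "swerve into tree"
    occupant_risk: float   # estimated probability of serious harm to the occupant (0-1)
    bystander_risk: float  # estimated probability of serious harm to bystanders (0-1)


def choose_action(options: List[CrashOption], bystander_weight: float = 1.0) -> CrashOption:
    """Pick the option with the lowest weighted expected harm.

    The value of bystander_weight is exactly the kind of choice the
    article describes: an ethical judgment, set by humans in advance,
    long before any crisis occurs.
    """
    return min(options, key=lambda o: o.occupant_risk + bystander_weight * o.bystander_risk)


if __name__ == "__main__":
    options = [
        CrashOption("swerve into tree", occupant_risk=0.6, bystander_risk=0.0),
        CrashOption("continue toward pedestrians", occupant_risk=0.1, bystander_risk=0.7),
    ]
    print(choose_action(options).label)  # prints "swerve into tree" with equal weighting
```

Change the weight given to bystanders and the car’s “decision” changes with it, which is precisely why such a framework must be essentially human in nature.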

We are venturing into a new world that we are building as we go along. Universities must take the lead in shaping Artificial Intelligence so that it has a positive impact on our planet, our nations, our communities and our lives.

Kapil Viswanathan & John Etchemendy are, respectively, vice chairman of Krea University and co-director of the Stanford Institute for Human-Centered Artificial Intelligence.