Science & Technology

Blast from the past: when artificial intelligence grew its tin heart

Artificial intelligence has always been a hot topic. Many visions of what could be are portrayed in the media through stories in which the question of whether machines can think runs parallel to humanity’s pursuit of self-identity. And while scientific inquiry into AI sparked in the 1950s, the concept dates back much further than we might expect.

The term is generally credited to John McCarthy, who coined it in the proposal for the Dartmouth Summer Research Project on Artificial Intelligence. The idea itself is far older: ancient Indian schools of philosophy such as Charvaka are said to have entertained notions of artificial intelligence as far back as 1500 B.C., and even Aristotle described the syllogism, a form of mechanical, step-by-step reasoning that remains an essence of AI development.

It was during World War II that Alan Turing and Grey Walter, two pioneers of artificial intelligence, began trading ideas and discussing the possibility of a mechanical mind. Walter would later go on to build some of the world’s first robots, while Turing developed the Turing Test, something the BBC said “set the bar for an intelligent machine: a computer that could fool someone into thinking they were talking to another person.”

Interestingly enough, the movies we see today about intelligent robots are modern interpretations of older science fiction that helped steer the development of the field. Authors like Isaac Asimov inspired future roboticists and scientists with thought-provoking and popular visions of intelligent machines. Asimov is best known for his three laws of robotics, which were meant to keep machines from turning against their creators, a theme we find in movies such as The Terminator and I, Robot. Yet many of his other ideas have remained surprisingly relevant to today’s discussion of AI development, like the practicality of storing large amounts of information so that machines can be versatile in their responses to human interaction.

But what often entices investors into developing such technology isn’t the philosophically engaging dilemmas that would ensue. Rather, it is the prospect of having personal robot servants carry out their every whim, as seen in Alex Proyas’ 2004 film I, Robot, that has created demand for practical uses of artificial intelligence.

Beginning as a vision at Dartmouth College in 1956, the development of artificial intelligence mainly revolved around the problem of storage versus commands: early computers could execute simple commands but could not store them, and they were enormously expensive to keep running. According to Harvard University’s Graduate School of Arts and Sciences, leasing a computer at the time could cost $200,000 a month.

The following decades saw progress in the field slow, enduring a difficult period that became known as the “AI winter.” With millions of dollars spent and no significant results after two decades, the U.S. and British governments gradually pulled their funding from AI research.

While Japanese visionaries attempted to revitalize AI research in the late ’80s, the science faced the ever-returning problem of computational power: computers simply could not store and process enough information. During this same decade, however, another breakthrough was made by John Hopfield and David Rumelhart, who introduced “deep learning” techniques through which computers mimicked the human mind by responding based on experience, or, in the case of a computer, based on stored information.
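For the curious, here is a minimal sketch in Python of the kind of associative memory Hopfield described, responding based on stored information. The eight-unit pattern, the amount of noise and the update loop are illustrative assumptions of ours, not an example drawn from the original research.

```python
import numpy as np

# Minimal sketch of a Hopfield-style associative memory (illustrative only):
# the network "stores" a pattern in its connection weights, then recalls it
# from a corrupted copy, i.e. it responds based on stored information.

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # the stored "experience"

# Hebbian learning: weights are the outer product of the pattern with itself,
# with self-connections removed.
weights = np.outer(pattern, pattern)
np.fill_diagonal(weights, 0)

# Start from a noisy copy of the pattern (two units flipped).
state = pattern.copy()
state[0] *= -1
state[3] *= -1

# Repeatedly update each unit from the weighted sum of the others until the
# network settles into a stable state.
for _ in range(5):
    for i in range(len(state)):
        state[i] = 1 if weights[i] @ state >= 0 else -1

print("recalled pattern:", state)
print("matches stored pattern:", bool(np.array_equal(state, pattern)))
```

Run as written, the corrupted input settles back into the stored pattern after a few update sweeps, which is the sense in which such a network “remembers” its experience.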

Throughout these lean decades, some of the most striking portrayals of the technology came from visionary media such as 1968’s 2001: A Space Odyssey and its AI, HAL 9000. The realistic AI painted a hopeful future for high-functioning machines while also instilling fear in others about a possible robot takeover. Development would continue to move through phases of “AI winters” and inspired breakthroughs as scientists pushed on.

Artificial intelligence developed slowly but surely, as technological advancements continuously compacted more computing capacity into smaller machines. Interestingly enough, much of the coding used today builds on the same ideas first introduced decades ago. What has changed is the obstacle that has stood in the way of artificial intelligence since the very birth of the science: computational power.

Scientists have been developing and improving the versatility of mechanical reasoning and logic, and are even pushing toward the unknown horizons of emotion and consciousness in machines. They are more hopeful now than ever before, knowing that the limits of the science are only the current limitations of the technology itself.

The technology that surrounds us in our everyday lives is an example of the progress science has made, but it isn’t there yet; artificial intelligence is still far from the high-tech representation we see in movies.

Artificial intelligence made its public debut in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov. Early consumer robots included automated vacuum cleaners like the Roomba, but robots would go on to be used in warfare as well: mine-detection robots like the PackBot have been deployed in the thousands in Iraq and Afghanistan, according to the BBC.

Today, the world is churning out smarter and smarter machines. From home automation apps that can lock your doors, turn on the lights and play music, to personal machine-buddies like Alexa, the future of artificial intelligence is limited only by our imaginations.

In fact, the next steps toward artificial intelligence have already been imagined, as companies work to develop autonomously driven cars as well as multilingual conversations in which speech is translated in real time. The end goal? To ultimately develop a full intelligence that equals or surpasses that of humans.

November 5, 2018

About Author

Hugh Shin

