A brief introduction to Artificial Intelligence
By Adrien Schmidt
We live in a time of blindingly fast technological advancement. So fast, in fact, that we’re not quite sure what our machines are capable of. At the forefront of this surge in computing power and technical capacity is artificial intelligence (AI) – the concept of machines developing the ability to perform tasks that previously only a human could perform.
While this sounds a lot like what computers have been doing since the 1960s, when the first room-sized IBM machines were running computations for the space program, it goes much further. Artificial intelligence refers to tasks that require the ability not only to learn but also to analyze and make subjective decisions – much as humans perceive the world around them visually, recognize speech patterns, and make decisions in real time based on a variety of inputs.
Meanwhile, Hollywood has presented AI as something more – and in some cases something sinister. Since the earliest days of computer technology, science fiction writers and filmmakers have been fascinated with the idea of the “sentient machine”, and that is what most people picture when we discuss AI: HAL 9000 in 2001: A Space Odyssey, the Terminator, or the Machines in The Matrix. In popular fiction, there is a looming risk of catastrophe when we give machines the power to think and make decisions. In truth, the current evolution of AI, while rapid and world-changing, is almost always designed for very specific tasks in specific industries.
The History of Artificial Intelligence
The term “artificial intelligence” was coined in 1956, and machines that could mimic human tasks were popular as far back as the early 1800s. But the way we think of AI today – as a machine that can learn and perform formal reasoning based on a limited set of preprogrammed input – is new and developing rapidly.
In the 1960s, the Department of Defense started experimenting with ways to train computers to perform human tasks, and the Defense Advanced Research Projects Agency (DARPA) has been exploring artificial intelligence since the ’70s, including, in 2003, the development of early versions of the voice assistants we use every day.
It was in those first three decades, when computers took up entire rooms and were used almost exclusively for academic and military research, that the first neural networks were developed – systems of simple interconnected units that work together to solve a problem. Between the 1980s and 2010s, machine learning became increasingly popular. Specifically, we built systems that can perform millions of tests in a short period of time and gradually learn from them. This is how Deep Blue was developed to beat Garry Kasparov at chess in the 1990s, and how, just in the last decade, Google’s AlphaGo bested the top Go players in the world. These machines were able to run simulations and learn from them, building up win probabilities for almost every situation they might face, which told them the best possible moves to make.
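The idea of learning move probabilities from repeated simulation can be sketched in a few lines. This toy example is purely illustrative – it is not Deep Blue’s or AlphaGo’s actual method, which is vastly more sophisticated. It estimates the win probability of each opening move in a simple made-up take-away game (take 1–3 stones per turn; whoever takes the last stone wins) by playing thousands of random games:

```python
import random

def random_playout(stones, my_turn):
    """Play the rest of a 'take 1-3 stones, last stone wins' game at random.
    Returns True if the player we are scoring for takes the last stone."""
    while True:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return my_turn  # whoever just moved took the last stone and wins
        my_turn = not my_turn  # pass the turn to the other player

def best_move(stones, playouts=2000):
    """Estimate each legal opening move's win probability from random playouts,
    then return the move with the highest estimate (and all the estimates)."""
    scores = {}
    for take in range(1, min(3, stones) + 1):
        remaining = stones - take
        if remaining == 0:
            scores[take] = 1.0  # taking the last stone wins outright
        else:
            # Opponent moves next, so score playouts from their turn
            wins = sum(random_playout(remaining, my_turn=False)
                       for _ in range(playouts))
            scores[take] = wins / playouts
    return max(scores, key=scores.get), scores
```

With enough playouts the estimates converge toward the moves a perfect player would choose – the same simulate-and-score principle, scaled up enormously and combined with far cleverer search and learning, underlies modern game-playing AI.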
Famously, Alan Turing proposed the eponymous Turing Test as a way to measure the progress of AI (referenced in films like Blade Runner). In a Turing Test, a human judge holds conversations with both a machine and another human without knowing which is which. If the machine is convincing enough that the judge cannot reliably tell the two apart, it has passed the test. On a very small scale, we have already seen hints of this in how machines play chess and Go or interact with us online.
What Artificial Intelligence Does
Artificial intelligence does many things in our world, and you interact with it more often than you might realize on any given day. Some of the most common applications you’ll see include:
- Automation of Repetitive Learning with Data – Automation is not new. We’ve been using machines and robotics to streamline manufacturing and production for decades, but AI automation is different: it is driven by software rather than hardware. Artificial intelligence can automate certain computerized tasks outright, greatly streamlining human operations.
- AI Makes Existing Tools Smarter – One of the most common applications of AI right now is the augmentation of existing tools. AI algorithms are being used in chatbots to automate the helpdesk and service experience on thousands of websites, for example. When you ask a question about your missing pair of sneakers on Amazon, there’s a chance you’re interacting with a computer, at least until the questions become too sophisticated and a human support tech is needed. The same is true for business intelligence, sales software, and much more.
- Learning from Its Programming – Until very recently, computer programs were designed to do whatever human operators told them to do. Sophisticated programs had to include millions of lines of code to account for various situations that might arise. Artificial intelligence is no less complicated, but now offers a way to program through data collection. As the AI interacts with the world, it gathers data that then influences its programming. This powers the recommendation list in your Netflix account or the enemy combatants in your favorite video game.
- AI Is Getting Smarter from Repeated Use – The most prolific AI networks in the world are those operated by the major technology companies – Google’s Search, Photos, and Home voice services, Amazon’s Alexa, and Apple’s Siri. These are all powered by deep learning, meaning that they grow more accurate, and therefore more helpful, every time they are used. With large user bases and growing adoption, that means they are continuously improving.
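The “learning from use” loop described above can be illustrated with a toy recommender. This sketch is purely illustrative – it is not how Netflix or any real service works – but it shows the core feedback idea: rank items by the click-through rate observed so far, so suggestions improve as interaction data accumulates:

```python
from collections import defaultdict

class SimpleRecommender:
    """Toy feedback loop: rankings improve as interaction data accumulates."""

    def __init__(self):
        self.shown = defaultdict(int)    # times each item was shown
        self.clicked = defaultdict(int)  # times each item was clicked

    def record(self, item, clicked):
        """Log one interaction: the item was shown, and possibly clicked."""
        self.shown[item] += 1
        if clicked:
            self.clicked[item] += 1

    def recommend(self, items, top_n=3):
        """Rank candidates by estimated click-through rate.
        Laplace smoothing gives never-shown items a neutral score of 0.5."""
        def score(item):
            return (self.clicked[item] + 1) / (self.shown[item] + 2)
        return sorted(items, key=score, reverse=True)[:top_n]
```

Every recorded interaction nudges future rankings, which is the essence of systems that get smarter with repeated use – real services simply replace the click-count arithmetic with deep learning models trained on billions of such interactions.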
How AI Is Evolving and Being Used
Artificial intelligence may one day be able to respond to us directly when we have a question, but right now it is a tool we can leverage to automate, augment, and optimize the processes we already run on a daily basis. It is being implemented in almost every major industry and provides the kind of analytics that were previously impossible due to the sheer volume of data being collected.
Whether analyzing data in a factory to streamline logistics and set up a more efficient supply chain, or providing advanced analysis of personal health care data collected via mobile apps and smart devices, AI has become an interface that lets humans make better use of big data sources that were previously off limits due to their scope and size. The movies and stories we watch about AI might be entertaining (and sometimes scary), but the tools are functional and targeted to the needs of today’s businesses and consumers.
Adrien Schmidt is an internationally recognized engineer, innovator, and speaker. He is the CEO and Co-Founder of Aristotle, an enterprise software company that delivers a personal voice analytics assistant to convert data analytics into meaningful conversation. He is based in San Francisco, CA, USA.