Artificial Intelligence

What Exactly Is Artificial Intelligence?

Artificial intelligence (AI) refers to particular computer programs, systems, and related technologies that effectively mimic and automate aspects of human intelligence—including the abilities to rationalize, discover meaning, and learn from past experience—that are typically difficult for computers to emulate.

Why Is Artificial Intelligence So Important?

AI is transforming how work is completed and structured across business sectors. If industrialization represented the automation of labor, then artificial intelligence represents the automation of analysis. It takes AI a fraction of a second to make complex decisions, and it can perform these actions at scale, around the clock, with no concern for fatigue, providing critical insights for human decision-makers.

Common Applications of AI

An appropriately designed AI system can be used for a large number of different tasks. Because of this versatility, we can find examples of AI in almost every business sector. Here are some of the most common AI applications.

Data mining

AI can process tremendous amounts of data and identify subtle patterns, groupings, and abnormalities. This capability is especially useful in finance, where it can help flag fraudulent transactions in real time, assess and categorize risk levels for loans, and create personalized financial portfolios, all tasks that previously required countless hours of human labor. Data mining is also how Netflix and Spotify figure out the movies and music you’re likely to enjoy.
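The abnormality-detection idea described above can be sketched in a few lines. This is only a toy illustration using a robust statistical score, not a production fraud model; the transaction amounts and the threshold value are invented for the example:

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag values that deviate sharply from the rest of the data.

    Uses the modified z-score (based on the median absolute deviation),
    which stays robust even when the outlier itself skews the average.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    # 0.6745 scales MAD to be comparable to a standard deviation.
    return [a for a in amounts
            if mad and 0.6745 * abs(a - med) / mad > threshold]

# Mostly routine purchases, plus one outsized transfer.
transactions = [24.5, 31.0, 19.99, 27.3, 22.1, 25.0, 30.2, 18.7, 5000.0]
print(flag_anomalies(transactions))  # → [5000.0]
```

A real fraud-detection system would use many features per transaction (merchant, location, timing) and a learned model rather than a single score, but the principle of surfacing points that deviate from the norm is the same.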

Natural language processing

When you talk to Siri or Alexa, you’re talking to AI-powered systems trained to recognize speech, process it, and extract meaning. This technology was on full display when IBM’s Watson beat two of Jeopardy!’s greatest champions in 2011. Speech recognition is now employed in automobiles (voice-activated navigation systems), call centers (answering common questions and transcribing discussions), healthcare (logging doctor notes), and security (voice-based authentication). NLP also powers the chatbots that handle basic customer service tasks for many companies.
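To make the “extract meaning” step concrete, here is a toy keyword-based intent classifier of the kind a very basic customer-service chatbot might use. The intent names and keyword sets are invented for the example; production NLP systems use trained language models rather than keyword matching:

```python
def classify_intent(utterance, intents):
    """Score each intent by how many of its keywords appear in the text."""
    words = set(utterance.lower().split())
    scores = {name: len(words & keywords) for name, keywords in intents.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

# Hypothetical intents for a banking chatbot.
intents = {
    "check_balance": {"balance", "account", "much"},
    "reset_password": {"password", "reset", "forgot"},
}
print(classify_intent("I forgot my password again", intents))  # → reset_password
```

Even this crude sketch shows the core pipeline: normalize the input, map it to a structured intent, then hand that intent to business logic that can actually answer the question.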

Image processing

Computer vision allows computers to process the visual world, including both still images and real-time video. This technology is critical for the development of autonomous vehicles, which must process their environments and react instantaneously. Image processing is also used for quality control in manufacturing, as AI-powered cameras can identify defects in products much more rapidly than humans and with far fewer errors.
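A heavily simplified sketch of the quality-control idea: compare a product image against a known-good reference, pixel by pixel. The tiny 2x2 grayscale “images” and the fixed tolerance are illustrative assumptions; real inspection systems use learned models over full-resolution camera feeds:

```python
def count_defects(reference, sample, tolerance=30):
    """Count pixels in a grayscale sample that deviate from a known-good
    reference image by more than `tolerance` intensity levels."""
    return sum(
        1
        for ref_row, sam_row in zip(reference, sample)
        for ref_px, sam_px in zip(ref_row, sam_row)
        if abs(ref_px - sam_px) > tolerance
    )

reference = [[200, 200], [200, 200]]  # uniform, defect-free surface
sample    = [[198, 205], [90, 201]]   # one dark blemish at row 1, col 0
print(count_defects(reference, sample))  # → 1
```

In practice a vision model also has to tolerate lighting changes and part misalignment, which is exactly why learned representations replaced hand-set thresholds like this one.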

Smart machinery

Low-cost sensors and high-bandwidth wireless networks allow companies to generate real-time data from machinery and devices, and AI helps them process and manage the resulting massive stream of data. Manufacturers currently rely on the combination of the Industrial Internet of Things and AI to increase visibility across their entire supply chain, implement predictive maintenance on machinery, and ensure processes are efficient and safe.


Robotics

AI allows robots to process the world around them (through motion sensors, image annotation, and computer vision) and learn how to complete certain tasks on their own through repetition. You can see examples of this in the robot videos released by Boston Dynamics, which show intelligent and adaptive robots that are able to handle complicated tasks and navigate stairs and other difficult terrain. AI also powers drones that aid in public safety tasks (like assessing tornado damage) and even the Roombas that vacuum homes.

Common Challenges Associated with Artificial Intelligence

AI is an impressive technology that offers significant ROI, but like any new technology, there can be challenges to adoption—some internal and some external.

Technical know-how

AI is a very particular type of data science, and getting an AI system properly implemented and running requires considerable technical knowledge. There’s also a problem of fit: depending on your industry, the number of AI experts who understand how to apply the technology to your particular business problem may be small.

Computing power

AI requires computational power that only high-end processors can provide. Acquiring this kind of hardware represents a real obstacle for many businesses, particularly startups. Thankfully, though, cloud computing environments, parallel processing, and edge-computing solutions have emerged as alternatives to traditional computational infrastructure arrangements.

Data collection

AI works best when it has large amounts of high-quality data (historical, real-time, or both) to process. Organizations need to establish systems for collecting this data, storing it, and integrating it in a form the AI can consume. Data storage can be a particularly expensive investment, even when handled in the cloud. However, most organizations are already sitting on a treasure trove of historical data that can be used to train AI systems.

Bias and ethical issues

Algorithms can exhibit bias due to the type of data they’ve been trained on (or the way the data has been labeled). For instance, there have been numerous examples of photo algorithms making embarrassing recognition mistakes (like always focusing on a white person in the image preview) due, in part, to a lack of diversity in the stock photo datasets used during development. Mistakes like these raise a variety of ethical issues.

Types of Artificial Intelligence

Artificial intelligence is a broad field of computer science that draws from several distinct subfields.

AI research and development includes elements of philosophy (what constitutes knowledge and how can computers replicate it), mathematics (what are the numerical pathways at the heart of algorithms), linguistics (how can thoughts and words be understood by computers and modeled onto programming), neuroscience (how can computer networks be constructed to mimic a human brain), psychology (how do intelligent systems react to stimuli), and business (how can AI solve industry-specific business problems).

This broad perspective means that there are many different methods, theories, and functionalities for AI, including:

Machine learning includes different methods for creating models from datasets and uncovering hidden insights.

Deep learning is a type of machine learning that uses artificial neural networks to continually process and learn from new data.

Natural language processing focuses on computational processing of spoken (Siri) and written (chatbot) language.

Computer vision involves training computers to process, analyze, and understand the visual world, including images, video, and real-time activity.

Cognitive computing is a term that is often used interchangeably with AI, but cognitive computing is more narrowly focused on mimicking human behavior and reasoning as an end in itself (rather than producing a correct or useful outcome).

Expert systems mimic the decision-making intelligence of human experts by reasoning through complex problems using criteria devised from very particular bodies of knowledge.

Robotics is a specific application of AI that involves nearly all of the focuses above (especially machine learning), along with mechanical and electrical engineering issues, to control robotic devices, from car assembly lines to your smart vacuum.
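To make the machine learning entry above concrete, here is a minimal example of creating a model from a dataset: fitting a straight line to points by gradient descent. The learning rate, epoch count, and toy dataset are arbitrary choices for illustration, not recommended settings:

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean((w*x + b - y)^2) with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free data lying on the line y = 2x + 1.
w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
print(round(w, 2), round(b, 2))
```

The same loop of “measure error, nudge parameters” underlies deep learning as well; neural networks just replace the single line with millions of parameters and automatic differentiation.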

History of Artificial Intelligence

The basic concept of machines thinking and acting like humans has been around for a long time. We can find numerous references to automata (life-like machines powered by gears) in Greek mythology and in records from ancient China. And human-looking robots capable of independent thought were showing up in movies as early as Fritz Lang’s Metropolis in 1927.

The modern AI era began in 1956, though, when computer scientist John McCarthy coined the term artificial intelligence at an academic conference. Alas, the technology necessary to fulfill this vision just wasn’t ready yet. It wasn’t until the 1980s that AI research finally began to bloom with the development of neural-network learning algorithms and expert systems.

As computer processing power continued to increase, so did the possibilities for artificial intelligence:

1997: IBM’s Deep Blue AI system beats reigning world champion Garry Kasparov in a chess match.

2002: iRobot’s Roomba is able to vacuum floors while navigating room terrain and avoiding obstacles.

2005: The U.S. military begins investing in autonomous robots, including Boston Dynamics’ BigDog.

2005: Stanford’s autonomous vehicle wins the DARPA Grand Challenge.

2008: A Google app with speech recognition appears on the new Apple iPhone.

2010: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) launches, leading to huge advancements in object detection and image classification algorithms.

2011: Siri becomes a built-in feature of the Apple iPhone 4S smartphone.

2011: IBM’s Watson bests two human champions in a Jeopardy! competition.

2012: Google’s autonomous car passes Nevada’s self-driving test.

2016: Google’s AlphaGo beats an unhandicapped professional Go player (and later the world’s top players).

2017: AlphaGo’s successor, AlphaZero, reaches a superhuman level of skill in chess, shogi, and Go simply by playing against itself over and over for 24 hours.

The Future of Artificial Intelligence

Artificial intelligence has clearly emerged as the next big tech breakthrough, and organizations across the world are leveraging the technology to increase efficiencies and drive innovation.

In the near future, we can expect continued AI deployment for service operations, product and service development, marketing and sales, and other core functionalities.

But forward-thinking companies are already branching out beyond those use cases and applying AI for supply-chain management, human resources, risk modeling, and other innovative utilizations.

Technological developments in edge computing will also enable faster, on-site data processing, which will put AI closer to the frontlines of operations—opening up further opportunities for deployment.

Ready to learn how your organization can leverage AI? Download our 50 Ways to Impact Your Business With AI ebook, which details enterprise AI applications across industries using high-impact use cases from our customers.
