Even in 2022, a lot of people are skeptical, wary, and distrustful of artificial intelligence. It blows my mind. Recently, a wave of skepticism and fear has re-emerged with Google’s bizarre firing of their engineer Blake Lemoine after he claimed their conversational AI was becoming sentient.
AI, like aliens, volcanoes, and nuclear power facilities, gives people existential dread because it’s easy to imagine its catastrophic potential. Other factors contribute to this lack of trust—fear of the unknown, bizarre news stories, fantastically scary media portrayals (see: Terminator), concern that AI will take our jobs, and general lack of understanding for how AI really works.
A lot of the skepticism and fear stems from a major misconception that ‘AI’ is an artificial reconstruction of the human brain and that advancements in AI mean that it’s closer to becoming sentient or conscious.
Terms like ‘neural networks’ going mainstream have certainly fed this misconception. Most experts agree that we don’t have a clear enough understanding of how the human brain works to even begin to try and reconstruct it. We can’t agree on a proper definition of consciousness, nor do we really have a comprehensive understanding of how consciousness works. Given that consciousness is such a critical component of human brain function, should we really waste our energy worrying about general artificial intelligence replacing humans, or worse, destroying the human race?
As much fun as it is to dream up scenarios of ‘synthetic intelligence’ developing a hatred for humans and destroying our planet, my contention is that this fear is damaging our short-term potential to do good with ‘narrow AI,’ which, paired with a conscious human being, can do AMAZING things.
It’s time to stop thinking about humans vs AI and start thinking about humans and AI—and all the opportunities this partnership can and will create.
AI and Humans: The Future Enterprise Dream Team
In this post, we’ll show you why it’s a mistake to think humans are inherently better at making decisions than artificial intelligence and how AI actually creates more advanced opportunities for human workers.
The Fallacy of Human Superiority
First things first—we need to establish that one core reason people are so hesitant about AI is that they have a hard time accepting that machine learning, rather than people, should be trusted to carry out certain tasks.
However, we all know that humans aren’t perfect creatures—we use flawed logic, have issues with inaccurate memory, and make bad judgment calls. The concept of motivated reasoning highlights this well: rather than weighing facts rationally, we often let emotions drive our decision-making.
Humans have developed “thinking shortcuts” for evolutionary purposes—so these logical inconsistencies aren’t really our fault. If we spent our entire lives thinking through every decision thoroughly, we would spend all our time thinking. So, these shortcuts, also known as logical or cognitive fallacies, help us make quick decisions that are usually good enough for everyday situations.
You’ve probably heard of these common fallacies and cognitive biases:
- Sunk cost fallacy → Refusing to abandon a project that is destined to fail because you’ve already invested so much in it
- Gambler’s fallacy → Believing that the odds of a random event change based on how often it has occurred recently
- Confirmation bias → Focusing only on information that reinforces existing notions
- Burden of proof → Believing that something (however ridiculous) might be true simply because there is no proof against it
- Status quo bias → Preferring things as they are over any change
- Bandwagon effect → Favoring a belief because many other people share it
These fallacies are only scratching the surface of human limitations. In an interview with NPR, psychologist Elizabeth Loftus shared her research on human memory, citing challenges with post-event recall and memory manipulation. It’s surprisingly easy for the human mind to adopt false memories, rewriting or even inventing entire moments of a person’s life.
Let’s take this back to a business context—we’re so willing to trust human brains to get the job done, and yet, there are plenty of enterprise examples of human judgment gone wrong.
For example, say two HR team members interview the same candidate for a job—one recommends the candidate for the position, and the other doesn’t. In another example, one factory inspector says equipment on the shop floor needs to be replaced now, while another says you can wait a few years to do so.
While all human beings having a unique point of view is what makes us, well, human, it also causes inherent discrepancies in our work. And sometimes, these discrepancies can have disastrous consequences.
According to the IBM Cyber Security Intelligence Index Report, 95% of cybersecurity breaches are primarily caused by human error. Even worse, reporting from the New York Times shows that the catastrophic 2010 BP Deepwater Horizon oil spill could have been prevented with real-time sensors; in this case, human monitoring wasn’t enough.
Don’t get me wrong—I’m not saying AI is better than humans, either. Generalized intelligence, or AI capable of reason, creativity, and adaptability, is still decades away from realization.
The Case for Humans in the Loop
So, if humans are subject to unreliable memory and logical fallacy, and AI has its own set of limitations, where do we go from here?
Enter: human-in-the-loop machine learning (or HitL), where AI projects benefit from human interaction and intelligence during the process—A.K.A. the best of both worlds.
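To make the idea concrete, here is a minimal sketch of one common human-in-the-loop pattern: the model decides automatically when it’s confident, and defers to a person when it isn’t. The model, threshold, and item names below are all illustrative assumptions, not a real production pipeline.

```python
# A minimal human-in-the-loop (HitL) sketch: confident predictions are
# automated; low-confidence cases are escalated to a human reviewer.

CONFIDENCE_THRESHOLD = 0.80  # below this, defer to a human (assumed value)

def model_predict(item):
    """Stand-in for a trained model: returns (label, confidence)."""
    # Hypothetical scores; a real model would compute these from data.
    scores = {
        "invoice_a": ("approve", 0.97),
        "invoice_b": ("approve", 0.62),
        "invoice_c": ("reject", 0.91),
    }
    return scores[item]

def triage(items):
    """Route each item: auto-decide when confident, else queue for review."""
    auto_decided, needs_human = {}, []
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_decided[item] = label
        else:
            needs_human.append(item)  # a person makes the final call
    return auto_decided, needs_human

auto, review_queue = triage(["invoice_a", "invoice_b", "invoice_c"])
print(auto)          # the machine handles the easy, high-confidence cases
print(review_queue)  # the ambiguous cases go to human experts
```

The human decisions on the review queue can then be fed back as fresh training data, so the model and its human partners improve each other over time.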
Once general fear fades and collaboration begins, AI and humans can work side-by-side and leverage human expertise and the power of predictive analytics to solve major global problems. The only question is—how do we dispel the rampant distrust that surrounds data science?
In a TED Talk titled “Don’t fear superintelligent AI,” scientist and philosopher Grady Booch emphasizes that, “Super knowing is not super doing,” meaning that while AI systems have a great deal of information, they don’t have the ability to control human behavior (unlike what the movies might make you believe). Instead, they can augment human capabilities and make predictions that improve our world. It all boils down to viewing AI as an opportunity to advance the human experience, rather than threaten it.
Don’t fall into the trap of believing that machines will take over our jobs anytime soon, either. Think of all the jobs created by the internet—social media, smart devices, eCommerce, the list goes on and on—and imagine AI creating a similar wealth of new jobs to make up for those it “replaces.” Typically, machine learning optimizes tedious, manual tasks and creates new, higher-level roles for humans (take this scenario where automation in mining created the need for skilled human workers to operate advanced machines).
Over the next few years, enterprise use cases will only continue to grow. HitL models can help eliminate bias before technology is deployed at scale, conduct and monitor safety inspections, and make advancements in medical research. All we need to do is accept AI’s potential, rather than protest against it.
Whether you know it or not, you already interact with AI every day—when you unlock your phone using facial recognition, verify a fraud alert from your bank, or watch a movie Netflix recommends to you.
The irony is that our perception of AI determines the impact it can and will have. While it’s unlikely that AI will ever independently take over the world, a positive perception of it will lead to more human-in-the-loop use cases that can result in medical breakthroughs, technological advancements, and enterprise success.
The good news is that you can have your cake and eat it, too—leveraging AI throughout your organization doesn’t mean mass layoffs. You need both human expertise and data science to run a successful enterprise.
Ready to get inspired? Check out 50 Ways our clients are using AI on an everyday basis to improve their organizations.