The Ethics of AI: With Potential Comes Responsibility

There are two schools of thought on the impact of artificial intelligence (AI). The first is pessimistic: AI will turn into Skynet, the all-powerful computer that takes over Earth in the Terminator movies. The second is optimistic: AI will help humans become much more than they could be without it, leaving us in a state of blissful “augmented humanity.”

The truth, of course, lies somewhere in the middle. Businesses are already realizing great potential from implementations of AI and machine learning. AI solutions are proving increasingly powerful in their ability to learn from data, make predictions and converse intelligently with humans. Today, AI is helping identify tumors faster and more accurately than most doctors, automating driving (cars now parallel park themselves!) and increasing our propensity to buy with AI-generated recommendations. But with great power comes great responsibility, and that means understanding and managing the downsides of AI.

First, we must concede that AI isn’t “intelligent” in the ways some claim it to be. IBM’s Watson, for example, may have won Jeopardy, but it doesn’t know it won Jeopardy. AI isn’t modeled on the human brain or designed to experience consciousness. Watson exists today because of a perfect storm of technological advances: real-time processing of structured and unstructured data, combined with access to vast stores of digitized information spanning most of humanity’s knowledge. Together, these allow the rapid processing of the huge volumes of data AI needs to work effectively and to channel results into “learning.” A human would need years to sort through the data Watson draws on in a single Jeopardy match. So, despite its lack of intelligence in the traditional sense, it’s this ability to ingest and make sense of data at such a rate that makes AI a powerful and desirable tool.

But AI is a product of humans, which means it can reflect the flawed thinking and unintentional bias of its makers. Amazon recently had to scrap its AI-based recruiting engine because its models were trained to vet applicants by finding patterns in the company’s own hiring history, a pool made up mostly of white and Asian men, and the engine learned to replicate that skew. Bias, it turns out, is a risk in many AI engines. The question is, can we recognize such biases and judgments and shape our AI accordingly?
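
To make the mechanism concrete, here is a minimal sketch in Python using entirely hypothetical data and feature names (this is not Amazon’s actual system). A model is trained on historical hiring decisions that were skewed against one group; even though the group label itself is withheld from training, the model learns to penalize an innocuous-looking keyword that correlates with it.

    # A minimal, hypothetical sketch: a model trained on skewed historical
    # decisions reproduces the skew through a proxy feature, even though the
    # protected attribute itself is never shown to the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical features: years of experience, plus a resume keyword that
    # happens to correlate with membership in an underrepresented group.
    experience = rng.normal(5, 2, n)
    member = rng.integers(0, 2, n)                            # 1 = underrepresented group
    keyword = (member & (rng.random(n) < 0.8)).astype(float)  # keyword appears mostly for that group

    # Historical labels: past recruiters rewarded experience but also,
    # unfairly, penalized the underrepresented group.
    logits = 0.8 * experience - 2.0 * member - 3.0
    hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    # Train WITHOUT the group label -- the bias still leaks in via the keyword.
    X = np.column_stack([experience, keyword])
    model = LogisticRegression().fit(X, hired)
    print("learned coefficient on the keyword:", model.coef_[0][1])  # strongly negative

The model never sees the group label, yet it learns to downgrade anyone whose resume contains the correlated keyword – the same pattern reportedly seen in the Amazon case, where resumes were downgraded for containing the word “women’s.”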

Not only must we recognize that AI is subject to human error, we also must come to grips with the fact that it is subject to malicious intent. Deepfakes, synthetic media that use neural networks to replace a person in an existing image or video with someone else’s likeness, are built solely to deceive us. When they are used to interfere in elections or other public forums, the implications are dangerous. Incorrect data can also corrupt an AI engine’s answers: if an image in a training set is tagged incorrectly, the engine learns the wrong association and will return that image whenever the tag is used. Data correctness, currency, completeness and consistency (known as the four Cs) are critical for AI engines, and we must be scrupulous caretakers of that data.
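
Being a scrupulous caretaker can be partly automated. Below is a minimal sketch, in Python, of what checks for the four Cs might look like on a set of labeled training records; the schema, field names and thresholds are hypothetical illustrations, not any particular product’s API.

    # A hypothetical "four Cs" audit of labeled training records.
    from datetime import datetime, timedelta, timezone

    VALID_TAGS = {"cat", "dog", "bird"}                             # correctness: tags from a known vocabulary
    REQUIRED_FIELDS = {"image_id", "tag", "labeled_at", "labeler"}  # completeness: no missing fields
    MAX_AGE = timedelta(days=365)                                   # currency: labels older than a year are stale

    def audit(record, now):
        """Return the data-quality problems found in one labeled record."""
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            return [f"incomplete: missing {sorted(missing)}"]
        problems = []
        if record["tag"] not in VALID_TAGS:
            problems.append(f"incorrect: unknown tag {record['tag']!r}")
        if now - record["labeled_at"] > MAX_AGE:
            problems.append("stale: label is more than a year old")
        return problems

    def find_conflicts(records):
        """Consistency: the same image should never carry two different tags."""
        first_tag, conflicts = {}, []
        for r in records:
            tag = first_tag.setdefault(r["image_id"], r["tag"])
            if r["tag"] != tag:
                conflicts.append(f"inconsistent: {r['image_id']} tagged {tag!r} and {r['tag']!r}")
        return conflicts

    now = datetime.now(timezone.utc)
    records = [
        {"image_id": "img1", "tag": "cat", "labeled_at": now, "labeler": "a"},
        {"image_id": "img1", "tag": "dog", "labeled_at": now, "labeler": "b"},   # inconsistent
        {"image_id": "img2", "tag": "catt", "labeled_at": now, "labeler": "a"},  # incorrect (typo)
    ]
    for r in records:
        print(r["image_id"], audit(r, now))
    print(find_conflicts(records))

Catching a conflicting or stale tag before training is far cheaper than diagnosing the skewed answers it produces afterward.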

Another favorite topic of the pessimists is the AI singularity: the idea that advances in AI will lead to a machine that is smarter than humans and knows it. This is the scenario science fiction author Isaac Asimov anticipated with his “Three Laws of Robotics”:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Without some ethical constraints, will the Terminator’s Skynet become a reality?

AI poses still more dilemmas. What is humanity’s responsibility to reskill or upskill workers whose jobs are replaced by automation? Most studies suggest automation will increase the total number of jobs, but those jobs will look different. While some services will remain unchanged, especially those that require face-to-face interaction or hands-on work, most of us will face the challenge of maintaining our professional relevance as technology grows smarter. And what do we do about the potential for even greater economic inequality if the owners and employees of AI companies – think Google, Amazon, Facebook and Apple – create and own all the wealth?

Still, in both our personal and work lives, AI is making many positive impacts. It is removing more and more of the mundane, repetitive tasks that fill so many people’s workdays, giving us more time for the cognitive work we’re good at – thinking, imagining, creating. It is giving us faster access to more accurate answers when we search the internet. And it’s powering virtual assistants that can proactively monitor our wants and needs and help us strike a better work-life balance.

By most accounts, the pluses far outweigh the minuses. Whatever you can dream up, AI will almost certainly be part of creating sustainable, benevolent solutions that help us grow our businesses and live our lives. AI is already essential to higher-quality business services and a better quality of life. But doing it right requires considerations that go far beyond the technical know-how we associate with typical software development. At ISG, we understand the challenges and opportunities of AI and stand ready to help enterprises navigate the exciting and complex landscape of emerging technologies.