For years, we have been quietly interacting with AI through voice assistants, social media, search algorithms, facial recognition on our phones, and more. But AI moved front and center with the arrival of generative AI like ChatGPT.
Suddenly, we began to witness AI at a visceral level, and we were amazed by what we saw. Artificial Intelligence no longer feels like "something that's coming someday." It is now "something that is here, and ready to change the world."
Well, here's something that shouldn't surprise you: change brings risk. For corporate boards and management teams that have not yet developed an AI risk management plan, ChatGPT should be a stunning wake-up call. If that includes you, read on.
In this article, I'll broadly define the risks that come with widespread implementation of AI. In a second article, I'll offer real-world remedies for each of these areas of AI risk.
Artificial intelligence will disrupt existing business models and markets like no technology before it. The painfully obvious example of that is ChatGPT itself. Who would have thought that Google's position as the undisputed champion of search could be challenged so suddenly and precariously? (See "If ChatGPT Can Disrupt Google In 2023, What About Your Company?")
It seems like just a year or two ago, most people envisioned AI disrupting industries reliant upon relatively low-skilled labor, such as trucking and customer service, or at worst, highly methodical work such as financial trading and radiology. Now we understand that creative industries like media and advertising are at risk, as are personalized service professions such as teaching and financial advisory, and even elite skill segments like pharmaceutical R&D and computer science.
According to a March 2023 report from Goldman Sachs, as many as 300 million jobs worldwide may be exposed to automation by generative AI like ChatGPT, including 19% of existing jobs in the United States. Whatever business or profession you are in, it is almost certain that your company will face massive change within the next few years. Unlike previous technology disruption, this time the stakes really may be "life and death." (See "The AI Threat: Winner Takes it All").
Keeping organizational data, systems, and personnel safe from hackers and other saboteurs was already a growing problem for business leaders. The number of attacks increased by 38% in 2022, to an average of more than 1,000 attacks per organization per week, and the average cost per data breach ballooned to more than $4 million.
Artificial Intelligence will exacerbate this challenge exponentially. Just imagine how much more powerful phishing attacks will be, for instance, when AI as sophisticated as ChatGPT sends emails to staff that appear to come from the boss, that use information only the boss would normally know, and that even use the boss's writing style.
The use of deepfake technology like voice clones in cyber swindles has been reported since at least 2019. With AI improving and diversifying every day, the problem of cyber risk management will only get worse from here.
If you think that firewalls and other current-day cyber defense technology will save you, think again. AI will help bad actors find the weakest links in your defenses, then work around the clock until it finds a way in. (See "If Microsoft Can Be Hacked, What About Your Company? How AI Is Transforming Cybersecurity").
When ChatGPT first burst into public view, Google executives initially cited "reputational risk" as a reason they would not immediately launch a rival AI (though they reversed course and announced Bard just a few days later). Subsequent errors and embarrassments from Bing and others across the generative AI landscape proved Google's initial concerns well-founded.
The public is watching. When your AI behaves in a way that is not in accordance with your values, it can result in a PR disaster. Nascent forms of AI have already acted like a racist, misogynist creep, led to wrongful arrests, and amplified bias in staff recruiting.
Sometimes, AI can ruin your relationships with customers. According to Forrester, 75% of consumers are disappointed by customer service chatbots, and 30% take their business elsewhere after a poor AI-driven customer service interaction. AI is still very young and prone to errors. As high as the stakes are, however, we should expect to see many business organizations deploying AI without fully understanding the reputational risks involved.
The federal government is gearing up to address the societal challenges associated with the rise of AI. In 2022, the Biden administration unveiled its blueprint for an AI Bill of Rights to protect privacy and civil liberties. In 2023, the National Institute of Standards and Technology released its AI Risk Management Framework to help corporate boards and other organizational leaders address AI risks. The Algorithmic Accountability Act of 2022, still just a bill, aims to establish transparency across a wide range of automated decision-making mechanisms. And that is just federal legislation. No fewer than 17 states introduced legislation to govern AI in 2022 alone, targeting facial recognition (See "Why are Technology Companies Quitting Facial Recognition?"), hiring bias, addictive algorithms, and other AI use cases. For multinationals, the EU's proposed Artificial Intelligence Act aims to ban or moderate biometric recognition, psychological manipulation, exploitation of vulnerable groups, and social credit scoring.
New regulations are coming, probably in 2023. And the risk to your company goes beyond compliance. If something goes wrong with a product or service that uses AI, who will be held accountable: the product or service provider? The AI developer? The data supplier? Or will it be you? At a minimum, you will likely be on the hook to explain how your AI makes its decisions, in order to comply with the transparency provisions of the new laws. (See "AI Regulation is Coming to the EU. Here are Five Ways To Prepare").
The final area of AI risk is perhaps the most obvious, but in some ways the most dangerous. What happens when your staff accidentally misuses ChatGPT, as did Samsung employees recently, resulting in loss of trade secrets? What happens when the AI does not work as expected? The negative impact of embracing AI too quickly could be substantial.
ChatGPT is the most celebrated example of advanced AI today, and the entire world is testing it and reporting on its shortcomings, every day. But AI used in your company may not enjoy that benefit. What happens when the AI tells you to double down on a particular supplier, material, or product, but gets it wrong -- how will you ever know?
IBM's Watson infamously proposed incorrect and dangerous treatments to cancer patients. UK-based Tyndaris Investments was sued by Hong Kong tycoon Li Kin Kan after its hedge fund AI lost the latter as much as $20 million in a single day. And who can forget when an out-of-control self-driving car killed a pedestrian? This is the realm of the board member: to be aware of this kind of operational risk, and to govern it.
So how can you manage these risks associated with the rapid evolution of artificial intelligence technology like ChatGPT? That's the topic of my next article.
If you care about how AI is determining the winners and losers in business, how you can leverage AI for the benefit of your organization, and how you can manage AI risk, I encourage you to stay tuned. I write (almost) exclusively about how senior executives, board members, and other business leaders can use AI effectively. You can read past articles and be notified of new ones by clicking the "follow" button here.