
From Innovation to Liability: Understanding AI Risks for High-Growth Companies

10 MIN READ
Sophie McNaught

Prepare for the Artificial Intelligence (AI) Era by Understanding the Key Business Risks of AI

Artificial intelligence (AI) is everywhere you look, from chatbots that pop up on seemingly every website to the algorithms that guide us toward hyper-specific streaming habits. In the world of tech and entrepreneurship, AI is becoming even more pervasive, helping organizations and their employees get more done in less time: summarizing meetings, writing routine emails, and answering quick questions have never been easier. Unfortunately, AI also poses risks to every industry, and organizations of all sizes need to plan for them.

For companies that integrate a large language model (LLM) into their product, these risks can trickle down to your clients, whether consumers or other businesses, who may suffer harm or financial loss as a result and hold you accountable.

In this post, we’ll discuss some of the risks AI may introduce, why your business needs to prepare for them, and how AI insurance can help you mitigate them.

What are the business risks of AI?

There are a number of risks to consider, both for organizations that create AI technology and for those that incorporate it into their products.

AI product errors create exposure

First, you must anticipate that your AI product will make some mistakes. That’s due to limitations in the data used to train AI models, overfitting (when a model fits its training data too closely and can’t perform well on new data), and the sheer ambiguity of the real world. So, when incorporating AI, you must account for the fact that even if it’s more accurate than a human at a given task, it will never be free of errors.
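
To make the overfitting point concrete, here is a minimal sketch in Python using scikit-learn. The dataset and model are illustrative assumptions, not a reference to any product discussed in this post: an unconstrained decision tree memorizes a small, noisy training set, looks flawless on it, and then stumbles on data it has never seen.

```python
# A minimal sketch of overfitting: a model that looks perfect on its
# training data but degrades on data it has never seen.
# Assumes scikit-learn is installed; dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, noisy synthetic dataset (flip_y injects label noise).
X, y = make_classification(n_samples=300, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training set, noise and all.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # ~1.00
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")    # noticeably lower
```

The gap between the two scores is the overfitting in action: the model has aligned itself to quirks of its training data that don’t hold anywhere else.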

In practice, an AI product failure can sometimes have serious consequences, such as an algorithm that shows bias or a chatbot that gives customers the wrong information. For example, if a hospital uses AI technology to record and manage patient records in order to make better diagnoses, that technology may misunderstand something, incorrectly summarize the data, or enter data in the wrong field or for the wrong patient. This introduces liability for the company that created the technology the hospital was using.

A real-world example is IBM’s partnership with The University of Texas MD Anderson Cancer Center to create a new “Oncology Expert Advisor” system. Unfortunately, the system gave erroneous cancer treatment advice that could be dangerous for patients. This happened because IBM trained the software on hypothetical cancer patients rather than real ones, according to StatNews. The result: MD Anderson spent more than $62 million on the project without meeting its goals.

Sometimes these mistakes are merely unusual (take, for example, the AI camera operator at a soccer game that repeatedly confused a fan’s bald head for the ball) or simply cause a bad user experience. The key is differentiating between an AI-assisted tool that just creates a bad user experience, which wouldn’t necessarily be a financial liability, and a human-in-the-loop task that can harm human life (surveillance, harm detection, patient records, diagnosis, aviation), have financial consequences for end customers (lending decisions, investment advice, accounting services), or feed into construction and manufacturing decisions. In these cases of professional services rendered, the licensed professional is responsible for providing a certain level of care, so if things go wrong, both the professional and the technology provider could be liable.

AI bias can cost you

Of all the issues facing AI, one of the most troubling is that these algorithms are sometimes biased and discriminatory. AI algorithms may incorrectly flag a non-native English speaker’s essay as AI-generated, for example. Elsewhere, researchers have found inherent sexism baked into algorithms that label jobs like “flight attendant,” “secretary,” and “physician’s assistant” as feminine.

It isn’t just major enterprises that can be targeted when AI fails. Online education company iTutor Group paid $365,000 to settle a lawsuit over discriminatory recruiting software that automatically rejected female candidates older than 55 and all candidates older than 60, regardless of their backgrounds or experience.

The problem here is that AI has been trained on existing data, so it can take decades of ingrained human biases and perpetuate them. This isn’t an easy problem to fix, since it may reflect a broader societal issue, but organizations should do what they can to proactively combat bias, starting with simple audits like the one sketched below.
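
As a starting point, here is a hypothetical Python sketch of one common screen: the EEOC’s “four-fifths rule” of thumb, under which a selection rate for any group below 80% of the highest group’s rate is treated as evidence of possible disparate impact. The groups and counts below are made up for illustration, and this heuristic is a screen, not a legal determination.

```python
# A simple disparate-impact screen using the "four-fifths rule" of thumb:
# flag any group whose selection rate is below 80% of the best-off group's.
# The candidate counts below are made up for illustration.

# (selected, total_screened) per group, e.g. from a screening tool's logs
outcomes = {
    "group_a": (90, 200),   # 45% selected
    "group_b": (60, 200),   # 30% selected
    "group_c": (84, 240),   # 35% selected
}

rates = {g: sel / total for g, (sel, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Running a check like this on a recruiting or lending tool’s logs won’t fix biased training data, but it can surface a problem like iTutor Group’s before a regulator or plaintiff does.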

Organizations may knowingly or unknowingly violate AI regulations

Legislation around AI is evolving quickly. New bills pop up regularly that could shape the course of future laws around AI use and implementation.

Some legislation has already been introduced to curb AI bias. For instance, in New York City, employers and employment agencies cannot use automated employment decision tools unless the tools have been audited for bias.

There’s also legislation that aims to inform consumers about the extent to which AI is being used. In California, a bot disclosure law ensures Californians know whether they’re talking to a real person or to an AI-generated responder.

All organizations, from the developers who create AI tools to the businesses that implement them in their workflows, should be aware of these laws, and the landscape is always changing. To keep tabs on the evolving state of AI laws around the country, see BCLP’s state-by-state AI legislation snapshot.

Businesses may infringe upon copyright law by using AI

Because AI tools such as ChatGPT have been trained on existing data, businesses that use them run the risk of inadvertently violating intellectual property rights.

Existing lawsuits seek to sort out to what degree generative AI tools may be trained on copyrighted works and what they may create from them. One lawsuit, brought by three artists, alleges that several generative AI image creators were trained on their original works and can thus generate derivative art that’s insufficiently different from the originals. Another lawsuit, brought by authors and performers including Sarah Silverman and Ta-Nehisi Coates, alleges that OpenAI’s ChatGPT was trained on copyrighted books (several of that lawsuit’s claims have since been dismissed).

The outcome of these lawsuits depends on the courts’ interpretation of “fair use,” a doctrine built into copyright law that loosely defines how copyrighted material can be repurposed. While these cases are still being worked out, there’s no telling what businesses could be on the hook for. If a company knowingly uses an AI tool trained on unlicensed data to create something new, it could inadvertently produce an unauthorized derivative work, with damages of up to $150,000 per infringed work.

Why organizations need to be prepared for AI risks

When it comes to AI, there’s no putting the genie back in the bottle. One estimate showed that more than 50% of businesses planned to use AI in 2024, if they weren’t already. Some of the industries and job functions using AI the most include customer care (38%), security (36%), sales (30%), marketing (29%), and finance (26%).

Given how pervasive AI has already become, it seems all but inevitable that every industry and organization will incorporate some form of AI at some point. Not doing so could mean a loss of competitiveness and efficiency. But it’s also probable that the AI you incorporate will have some issues; estimates vary, but some have posited that 70-80% of AI projects will fail at some point.

Given these two competing realities (it’s both necessary to adopt AI and risky to do so without the proper precautions), companies of every size, from large enterprises to small and medium-sized businesses (SMBs), need to understand and prepare for AI going south. AI failures could lead to consequences such as civil or criminal violations, affecting anyone involved: AI programmers, developers, and distributors. And because this technology is so new that its legal uses have only just begun to be defined, more frivolous claims may arise, costing businesses time and money to defend themselves.

That’s where legal and insurance entities step in, helping organizations deal with this “messy middle” period of AI adoption. While typical errors and omissions insurance may not cover critical AI risks such as errors, discrimination, regulatory mistakes, and intellectual property disputes, AI insurance can, and it can also pay for defense costs and damages, irrespective of fault.

In the next installment of our series, we’ll dig into some of the mitigation strategies organizations can tap into so they can protect themselves against the consequences of AI product failures.

About the author

Sophie McNaught is a corporate attorney, insurance advisor, and the Director of AI and Enterprise Risk for Vouch Insurance. She advises on regulatory matters such as AI risk management and data privacy. In addition to her legal and risk expertise, Sophie advises investors and startup operators on startup infrastructure and financing.
