The Top Four Questions You Should Answer Before Using an AI Chatbot on Your Website

Should I Be Using an AI Chatbot on My Website?

AI chatbots incorporated into business websites can be tremendously helpful. But they can also lead to huge costs, headaches, and even lawsuits if not implemented correctly. In this post, we’ll examine the top four questions you should answer before you put an AI chatbot on your website.

1. What Am I Telling My Customers?

A consistent part of our firm’s web3 practice is drafting website policies – everything from privacy policies to terms of service to AI chatbot policies and terms. In our experience, many businesses dive headfirst into a new technology without the required policies in place, which can lead to massive and expensive problems.

Some businesses make matters even worse for themselves by pulling website policies off the internet, changing the name, and then reposting someone else’s website policy on their own website. I wish I were kidding about this, but it happens more often than you can imagine, and it can and often does lead to devastating consequences.

For example, what are the chances that your business has the exact same privacy practices as someone else? Five percent? Unless a policy reflects your actual business practices, using the website policy from some other business is just begging for a lawsuit. And believe me, there is a huge market of plaintiffs’ attorneys who would be more than happy to bring one.

The first thing businesses that use AI chatbots – or that have a website at all, for that matter – need to do is get a good policy or set of terms in place. And to do that competently, you will need to address a few things first.

2. Will the AI Chatbot Land Me in Hot Water?

You’ve probably read about the recent scandal with Google Gemini, which reportedly caused Google’s stock to lose $70 billion in value. You may have even heard about Old Navy’s AI chatbot being accused of illegal wiretapping. Putting aside the more publicized cases and allegations, there are still plenty of ways an AI chatbot can generate output that is problematic and can cause you harm. Examples include:

  • Making defamatory statements that could be imputed to the website operator.
  • Providing incorrect instructions for how to use a product or service.
  • Making things up. This apparently happens a lot; even lawyers have been fooled!

These kinds of things may seem trivial, but they can lead to some pretty bad outcomes. For example, OpenAI was reportedly sued for defamation when ChatGPT allegedly created a fake complaint accusing someone of embezzlement. Imagine what would happen if a user asked an AI chatbot for instructions on how to use your product, received incorrect or incomplete instructions, and was then injured or harmed as a result.

Even AI companies seem to acknowledge the possibility that AI-generated outputs may cause problems. Anthropic’s (Claude.ai) terms of service from February 2024 state the following:

Reliance on Outputs. Artificial intelligence and large language models are frontier technologies that are still improving in accuracy, reliability, and safety. When you use our Services, you acknowledge and agree:

    1. Outputs may not always be accurate and may contain material inaccuracies even if they appear accurate because of their level of detail or specificity.

    2. You should not rely on any Outputs without independently confirming their accuracy.

    3. The Services and any Outputs may not reflect correct, current, or complete information.

    4. Outputs may contain content that is inconsistent with Anthropic’s views.

The point here is that AI tools are far from perfect. Businesses that want to deploy AI chatbots need to be aware of the potential risks and think of ways to mitigate them. Some mitigation techniques may be technical, such as constraining the chatbot so that it only responds on a limited set of topics (a simple sketch of this idea appears below). Other mitigation techniques might include comprehensive AI chatbot policies and disclaimers.
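To make the idea of technically limiting responses concrete, here is a minimal, vendor-neutral sketch in Python. The ask_model parameter is a stand-in for whatever AI API a business might use, and the topics, prompt, and disclaimer are illustrative assumptions – not any particular vendor’s product or a recommendation:

```python
# A minimal guardrail sketch: refuse out-of-scope questions and append a
# fixed disclaimer, so the chatbot only answers a narrow set of topics.
# `ask_model` is a placeholder for whatever AI API the business uses.

from typing import Callable

ALLOWED_TOPICS = {"shipping", "returns", "order status", "store hours"}

SYSTEM_PROMPT = (
    "You are a customer support assistant. Answer ONLY questions about "
    "shipping, returns, order status, or store hours. For anything else, "
    "reply exactly: 'I can only help with questions about our store.'"
)

DISCLAIMER = (
    "Automated response. It may contain errors; please verify important "
    "details with a human representative."
)


def looks_in_scope(question: str) -> bool:
    """Cheap pre-filter: forward only questions that mention an allowed topic."""
    q = question.lower()
    return any(topic in q for topic in ALLOWED_TOPICS)


def guarded_chat(question: str, ask_model: Callable[[str, str], str]) -> str:
    """Refuse out-of-scope questions; otherwise answer and add a disclaimer."""
    if not looks_in_scope(question):
        return "I can only help with questions about our store."
    answer = ask_model(SYSTEM_PROMPT, question)
    return f"{answer}\n\n{DISCLAIMER}"


# Usage with a stand-in model (no real AI vendor is assumed here):
fake_model = lambda system, user: "Standard orders ship within 2 business days."
print(guarded_chat("How long does shipping take?", fake_model))
print(guarded_chat("What do you think of my competitor?", fake_model))
```

Even a crude filter like this narrows what the chatbot can say on your behalf, which in turn narrows what can be imputed to you.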

3. Who Owns the Output Generated by the AI Chatbot?

When someone provides a prompt to an AI program (“input”), they receive a response (“output”). Does the user own the output? What about the input? And if a user owns the output, who else can use it? The answers may depend a great deal on what kind of AI tool is used.

For example, OpenAI’s January 2024 terms of use state that “As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.” This sounds great – a user owns its input and is assigned ownership of the output. But wait – the policy goes on to state that “We may use Content to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Services safe.” In fact, a user who does not want its “Content” used to train AI models has to affirmatively opt out.

Let’s look at Anthropic’s terms of service cited above. Similar to OpenAI’s terms, Anthropic’s policy states: “As between you and Anthropic, and to the extent permitted by applicable law, you retain any right, title, and interest that you have in the Prompts you submit. Subject to your compliance with our Terms, we assign to you all of our right, title, and interest—if any—in Outputs.” But they go on to state:

Our use of Materials. We may use Materials to provide, maintain, and improve the Services and to develop other products and services. We will not train our models on any Materials that are not publicly available, except in two circumstances:

  1. If you provide Feedback to us (through the Services or otherwise) regarding any Materials, we may use that Feedback in accordance with Section 5 (Feedback).
  2. If your Materials are flagged for trust and safety review, we may use or analyze those Materials to improve our ability to detect and enforce Acceptable Use Policy violations, including training models for use by our trust and safety team, consistent with Anthropic’s safety mission.

What does it mean to “provide Feedback”? Well, the policy says:

We appreciate feedback, including ideas and suggestions for improvement or rating an Output in response to a Prompt (“Feedback”). If you rate an Output in response to a Prompt—for example, by using the thumbs up/thumbs down icon—we will store the related conversation as part of your Feedback. You have no obligation to give us Feedback, but if you do, you agree that we may use the Feedback however we choose without any obligation or other payment to you.

By my read of this policy, simply giving a thumbs up or down could be considered “Feedback” and associated materials could be used by Anthropic.

Why is any of this important? Well, if someone is using AI to generate output that they want to commercialize, the AI company providing the program may have terms that limit or restrict that commercialization (putting aside complicated issues of copyright law). And users may have agreed to an effective license of the output back to the company providing the AI program. There are also potential privacy concerns: a user who inputs confidential information, or information protected by privacy laws, may suddenly find that he or she has effectively licensed that information to a third party.

Companies deploying AI chatbots should think about risk mitigation strategies (see point 2 above), and closely study the associated terms of any AI company whose programs they intend to use or incorporate.

4. Should I Be Concerned About Data Privacy and Confidentiality Issues?

Yes, and for the same reasons discussed in point 3 above. Here too, many of the same risk mitigation strategies can be employed. Even before the era of AI, many websites contained statements prohibiting users from submitting confidential or privacy-law-protected information via a website or email. But in the era of AI, these kinds of disclosures have become far more important, and some businesses pair them with technical screening of user input (a crude sketch follows).
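By way of illustration only, here is a crude Python sketch of the kind of input screening a business might layer on top of such a disclosure: blocking messages that appear to contain sensitive data (here, patterns resembling U.S. Social Security or payment card numbers) before anything is forwarded to a third-party AI provider. The patterns and wording are placeholder assumptions; a real deployment would use far more robust detection:

```python
import re

# Crude illustrative patterns for obviously sensitive input. A real
# deployment would use a dedicated PII-detection tool, not two regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # looks like a U.S. SSN
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # looks like a payment card number
]

REFUSAL = (
    "Please do not share confidential or personal information in this chat. "
    "Your message was not sent."
)


def screen_input(message: str) -> str | None:
    """Return the message if it looks safe to forward to the AI provider;
    otherwise return None so the caller can refuse it."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(message):
            return None
    return message


# Usage:
if screen_input("My social is 123-45-6789, can you help?") is None:
    print(REFUSAL)
```

Screening of this kind does not replace a disclosure or a policy, but it reduces the odds that protected information ever reaches a third party’s servers in the first place.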

Conclusion

AI chatbots still have a ways to go before they are sophisticated enough to deploy without major concerns. Businesses that want to integrate them into their websites should understand the risks and develop risk mitigation tools. Otherwise, they may find themselves among the first round of defendants in the coming era of AI litigation.
