The Otter.ai Lawsuit: What Went Wrong and How Companies Can Avoid the Same Privacy Mistakes

By Ramyar Daneshgar
Security Engineer | USC Viterbi School of Engineering

Disclaimer: This article is for educational purposes only and does not constitute legal advice.

In August 2025, Otter.ai, the artificial intelligence transcription company, became the target of a major class action lawsuit filed in the United States District Court for the Northern District of California. The lawsuit claims that Otter’s flagship feature, called the Otter Notetaker, recorded private conversations without the clear and informed consent of everyone on the call. It also alleges that these recordings were later used to train Otter’s machine learning models without disclosure to the people whose voices and words were captured.

At first glance, this may appear to be a dispute about technical details, but in reality it highlights a broader lesson: companies that handle user data cannot afford to take shortcuts when it comes to transparency, consent, and data retention. What Otter is facing now could happen to any company that builds tools relying on data collection, whether the product involves meeting software, customer service chatbots, or voice-enabled devices.

Complaint excerpt explaining Otter Notetaker’s alleged recording and data use without consent.

Recording Without Permission From All Participants

One of the central problems in the complaint is that Otter Notetaker would join meetings on platforms such as Zoom, Google Meet, and Microsoft Teams, and begin recording and transcribing conversations. The product was designed to ask the meeting host for permission, but it did not extend this request to every person present on the call. This means that guests, clients, colleagues, or anyone else attending the meeting could have been recorded without ever being asked for their agreement.

This issue is not just a minor oversight. In states such as California, it is illegal to record a conversation without the consent of every participant. These statutes are commonly called “two-party consent” laws, although “all-party consent” is the more precise term, and they exist to protect the expectation of privacy. In practice, this means that even if the meeting host gives permission, the company operating the recording software can still be accused of violating the law if the other people on the call were never asked.

For companies, the lesson is clear. It is not enough to secure permission from just one person in the conversation. Any time recording is taking place, the system must be designed to collect clear consent from everyone involved. This can be done through an on-screen notification, a spoken announcement at the start of the call, or a prompt that each participant must accept before recording begins. Without this, the risk of violating state and federal privacy laws becomes very real.
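
As a rough sketch of what that can look like in code, the example below gates recording on an explicit answer from every participant rather than only the host. The Participant class and prompt_for_consent function are hypothetical stand-ins for whatever consent prompt a real meeting product would show; they are not part of any actual Otter, Zoom, or Teams API.

    from dataclasses import dataclass

    @dataclass
    class Participant:
        name: str
        consented: bool = False  # no consent is ever assumed

    def prompt_for_consent(participant: Participant) -> bool:
        # Hypothetical placeholder: a real product would show an on-screen
        # prompt or play an announcement and wait for the response.
        answer = input(f"{participant.name}, this call will be recorded. Do you agree? (y/n) ")
        return answer.strip().lower() == "y"

    def may_start_recording(participants: list[Participant]) -> bool:
        # Recording is allowed only when every participant, not just the host,
        # has explicitly agreed.
        for p in participants:
            p.consented = prompt_for_consent(p)
        return all(p.consented for p in participants)

    attendees = [Participant("Host"), Participant("Guest"), Participant("Client")]
    if may_start_recording(attendees):
        print("Recording started with consent from all participants.")
    else:
        print("Recording blocked: at least one participant did not consent.")

The point is structural: the code path that starts a recording simply cannot be reached until everyone has said yes.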

Complaint section showing Otter only sought host consent and did not notify or obtain agreement from all participants.

Using Conversations to Train Artificial Intelligence Without Transparency

Another major allegation is that Otter did not just record conversations for the purpose of providing transcriptions to its users. Instead, the lawsuit claims that the company used these recordings and transcripts to train its speech recognition and artificial intelligence models. According to the complaint, non-users, meaning the people who did not sign up for Otter but were present on recorded calls, were never told this was happening.

Otter’s privacy policy suggested that recordings were “de-identified” before being used for training, but the company did not explain what that process involved. Independent research has shown that de-identification is often unreliable, especially with conversational data. Even if names are removed, the content of what is said can easily give away who the speaker is. For example, a conversation about a unique project at a specific company can reveal the identity of the participants even without explicit names attached.
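
A toy example makes the limits of simple redaction concrete. The transcript below is invented for illustration only; removing the explicit name still leaves enough context to identify who was involved.

    # Invented transcript for illustration only; not taken from the complaint.
    transcript = (
        "Maria Lopez said the Phoenix migration at Acme Corp slipped two weeks "
        "because the only engineer on the billing rewrite was out sick."
    )

    # Naive de-identification: strip the explicit name.
    deidentified = transcript.replace("Maria Lopez", "[REDACTED]")
    print(deidentified)
    # The name is gone, yet the unique project, company, and role details can
    # still reveal exactly who was speaking and who was being discussed.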

From a business standpoint, repurposing recordings this way creates two problems. First, it risks violating privacy laws that require clear and informed notice before data can be used for a new purpose. Second, it erodes trust. Customers and the public may be comfortable with their data being used only for the immediate service they signed up for, but they are far less comfortable when they discover their voices are being fed into artificial intelligence systems without their knowledge.

The lesson here is that companies must be upfront about how they use customer data. If data is going to be used to train artificial intelligence, that fact must be stated clearly and prominently. It should not be buried in the middle of a long privacy policy filled with technical and legal jargon. Companies should also provide customers and meeting participants with a choice, allowing them to use the product without having their data included in training. By offering an opt-out, companies can maintain trust while still improving their technology with the data of those who willingly contribute.
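
One way to make such an opt-out meaningful is to enforce it inside the data pipeline itself, so recordings from anyone who declined never become training material. The sketch below is a minimal illustration under that assumption; the Recording class and its consent flags are hypothetical, not Otter’s actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class Recording:
        recording_id: str
        # Per-participant answers: True only if that person explicitly agreed
        # to let the recording be used for model training.
        training_consent: dict[str, bool] = field(default_factory=dict)

    def eligible_for_training(recordings: list[Recording]) -> list[Recording]:
        # A recording is eligible only if consent answers exist and every
        # one of them is an explicit yes.
        return [
            r for r in recordings
            if r.training_consent and all(r.training_consent.values())
        ]

    batch = [
        Recording("rec-001", {"host": True, "guest": True}),
        Recording("rec-002", {"host": True, "guest": False}),  # guest opted out
        Recording("rec-003", {}),                              # no answers recorded
    ]
    print([r.recording_id for r in eligible_for_training(batch)])  # ['rec-001']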


Default Settings That Hide Important Notifications

The lawsuit also points out that Otter’s default settings did not alert meeting participants that a recording was about to take place. The system included an option to send pre-meeting notifications, but that setting was turned off by default. Only users who went into their account settings and enabled it would have ensured that other participants were notified.

For most people, settings hidden behind multiple layers of menus are invisible. When privacy depends on toggles that few users know about, the company has not really provided transparency. Courts often look at whether people were given a fair opportunity to understand what was happening. If the average user did not know recording was taking place, that undermines the argument that participants gave implied consent by staying in the meeting.

Excerpt showing Otter’s pre-meeting notifications were disabled by default.

The lesson for companies is that privacy must not depend on hidden settings. Notifications, warnings, and disclosures should be on by default. A simple example would be a banner at the top of a call that says “This meeting is being recorded by [company name]” along with a link to more details. Participants should not need to hunt through account settings to find out they are being recorded. Making these safeguards automatic demonstrates a commitment to respecting privacy.
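
In configuration terms, the safeguard is simply that the privacy-protective value ships as the default. A minimal sketch, assuming hypothetical setting names rather than Otter’s real configuration keys:

    from dataclasses import dataclass

    @dataclass
    class RecordingSettings:
        # Privacy-protective behavior is the default; a user can change it
        # deliberately, but silence never disables a disclosure.
        notify_participants_before_meeting: bool = True
        show_recording_banner: bool = True
        banner_text: str = "This meeting is being recorded by [company name]"
        require_consent_from_all_participants: bool = True

    # A user who never opens the settings page still gets the safe behavior.
    defaults = RecordingSettings()
    assert defaults.notify_participants_before_meeting
    assert defaults.show_recording_banner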


Shifting Responsibility Onto Customers

According to the complaint, Otter’s privacy policy instructed account holders to make sure they had obtained the necessary permissions from coworkers, friends, or other participants before using the software. In other words, the company placed the legal responsibility for compliance on its customers rather than taking it on itself.

This approach rarely works. Courts and regulators generally hold the technology provider responsible for how its product functions, not the individual users. If the software is designed in a way that allows unlawful recording without warning or consent, the company that built it is usually the one at fault. Trying to transfer that responsibility to customers through fine print in a privacy policy is not a reliable defense.

The lesson here is that compliance cannot be outsourced to customers. If your product records, stores, or shares data, you as the provider must ensure the product complies with the law. That means designing the software so that it cannot operate in a way that violates consent requirements, even if a customer wanted it to. Taking ownership of compliance is not only safer legally, it also reassures customers that you are committed to protecting them.
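
Concretely, taking ownership of compliance can mean that the recording code itself fails closed: it refuses to run without proof of consent instead of trusting that the account holder obtained it. The sketch below illustrates that pattern; ConsentMissingError and the consent-record structure are hypothetical.

    class ConsentMissingError(Exception):
        """Raised when recording is attempted without consent from everyone."""

    def start_recording(participants: list[str], consent_records: dict[str, bool]) -> None:
        # Fail closed: the provider enforces consent in code rather than
        # delegating the legal obligation to customers through a policy clause.
        missing = [p for p in participants if not consent_records.get(p, False)]
        if missing:
            raise ConsentMissingError(
                "Cannot record: no consent on file for " + ", ".join(missing)
            )
        print("Recording started; consent verified for all participants.")

    try:
        start_recording(["host", "guest"], {"host": True})  # the guest never agreed
    except ConsentMissingError as err:
        print(err)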


Indefinite Data Retention

Another issue raised in the lawsuit is that Otter kept recordings and transcripts for an indefinite period of time. The company’s privacy policy said it would store personal information “as long as necessary,” which in practice meant potentially forever.

Holding onto large volumes of personal data for unlimited periods creates serious risks. Old data can be targeted in a data breach, subpoenaed in legal proceedings, or misused by employees. The longer information is kept, the greater the chance that it will eventually be exposed in some way.

The lesson for companies is to adopt clear data retention limits. If you do not need recordings after a certain number of days or months, delete them automatically. Provide users with the ability to delete their own data, and make the deletion permanent. By limiting retention, you reduce exposure, simplify compliance, and reassure users that you are not holding onto their personal information unnecessarily.
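
A minimal sketch of such a retention rule, assuming a hypothetical in-memory store of recordings keyed by creation time; the 90-day window is an arbitrary illustration, not a legal recommendation:

    from datetime import datetime, timedelta, timezone

    RETENTION_DAYS = 90  # illustrative window; choose what the service truly needs

    def purge_expired(recordings: dict[str, datetime]) -> dict[str, datetime]:
        # Keep only recordings newer than the retention window; anything older
        # is deleted automatically instead of being kept "as long as necessary."
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        return {rec_id: created for rec_id, created in recordings.items() if created >= cutoff}

    store = {
        "rec-100": datetime.now(timezone.utc) - timedelta(days=10),
        "rec-101": datetime.now(timezone.utc) - timedelta(days=400),
    }
    store = purge_expired(store)
    print(list(store))  # ['rec-100']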


Building Privacy Into Design From the Beginning

The larger theme running through all of these mistakes is that privacy was not built into the design of the product. Instead of creating a system that automatically protected people’s rights, Otter allegedly relied on default settings that concealed notifications, vague policies that shifted responsibility, and a lack of clear limits on data usage.

Companies can avoid this by embracing what is called “privacy by design.” That means involving legal, compliance, and security experts early in the product development process, not after launch. It means asking tough questions before a feature goes live: How could this feature be misused? What information will it collect? Who will it affect? What safeguards are necessary to prevent abuse?

Privacy by design also means testing features not just for functionality but for transparency. If a reasonable user cannot immediately understand what is happening to their data, the system needs to be redesigned.


Why These Lessons Apply to Every Company

Even if your business does not build transcription software or artificial intelligence models, the lessons from this lawsuit apply widely. Any company that collects and stores data about customers, whether through phone calls, online chats, or connected devices, faces the same risks. The Otter.ai case is a reminder that:

  • Customers will not forgive hidden practices that affect their privacy.
  • Regulators and courts are increasingly willing to punish companies that overreach.
  • Trust is difficult to earn and easy to lose. Once customers believe you mishandled their data, it is nearly impossible to repair the relationship.

Final Takeaway

The Otter.ai lawsuit illustrates how quickly design decisions can snowball into legal and reputational crises. A decision to notify only meeting hosts instead of all participants, a default setting that hides recording notices, or a vague policy about data retention may seem minor, but together they created a situation that now threatens Otter with serious legal consequences.

The most important step for companies is to remember that privacy and transparency are not optional features. They are essential parts of the product. By requiring consent from everyone, being honest about how data will be used, turning notifications on by default, taking responsibility for compliance, and limiting how long data is kept, companies can avoid the mistakes that brought Otter.ai to court.

Building privacy into the foundation of your products is not just about avoiding lawsuits. It is about building trust with your customers, which in the long run is the most valuable asset any company can have.
