Otter.ai Faces Privacy Lawsuit Over AI Training & Consent
A burgeoning legal challenge is casting a shadow over the convenience of AI-powered note-taking and transcription services, popular tools in today’s enterprise landscape. A class-action lawsuit, filed recently in a California district court on behalf of plaintiff Justin Brewer, alleges that Otter.ai, a prominent transcription service, records users and leverages their voices and data to train its artificial intelligence models without obtaining explicit consent. This complaint carries significant implications, not only for Otter.ai but also for other widely used call recording and note-taking applications like Read.ai and even Google Gemini.
The lawsuit asserts that Otter.ai records all participants in a conversation, including those who are not Otter users and have not granted permission, a practice potentially violating consent requirements in California and other states. Furthermore, it claims the company utilizes these voice recordings to refine its speech recognition AI tools. The complaint specifically highlights that while Otter users might be aware of and comfortable with the service recording their meetings, non-users are often unaware and certainly not asked for permission. Brewer’s legal team contends that his privacy rights have been violated, suggesting that Otter.ai’s practices may contravene federal statutes such as the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act, alongside numerous California laws. The complaint, which estimates that more than 100 potential plaintiffs share Brewer’s concerns, seeks certification as a class action.
This legal action signals a broader reckoning for the proliferation of transcription apps within enterprises. Johannes Ullrich, dean of research for SANS Technology Institute, suggests that many AI companies have adopted a “move fast and break things” approach, often overlooking the intricacies of copyright and wiretap laws. He warns that requiring explicit permission from all call participants could fundamentally challenge the business models of many of these note-taking and personal assistant applications.
Otter.ai, boasting over 25 million global users and recently celebrating $100 million in annual revenue, finds itself at the center of this debate. The company’s “Otter Notetaker” service seamlessly integrates with platforms like Google Meet, Zoom, and Microsoft Teams, recording meeting participants irrespective of their Otter user status. The company’s privacy policy explicitly states its use of participants’ voices for training its speech recognition AI. In response to the complaint, an Otter.ai spokesperson affirmed the company’s commitment to data security and privacy, stating, “Nobody should be recorded without their knowledge or permission, regardless of the recording device used.” The spokesperson added that Otter Notetaker aims to free users to participate more fully in meetings and that users are encouraged to be transparent and seek permission when recording conversations.
Otter.ai emphasizes that it does not initiate recordings autonomously; rather, recordings are prompted by Otter users, and the company’s Terms of Service clearly stipulate that users are responsible for obtaining all necessary permissions. The company claims to provide users with applicable local, state, and federal requirements for recording. However, the lawsuit counters that Otter.ai attempts to “shift responsibility,” outsourcing its legal obligations to account holders instead of directly seeking consent from every individual it records, as required by law. The complaint also notes that while some competing transcription apps, like Read.ai, allow any participant, including non-account holders, to stop a recording during a meeting, Otter does not offer this functionality.
Plaintiffs further allege that Otter.ai fails to obtain prior consent from participants or inform them that their conversations are being used to train the company’s automatic speech recognition and machine learning models, ultimately benefiting Otter’s business financially. They describe Otter Notetaker as a “separate and distinct third-party entity” that seeks consent only from the meeting host. The complaint also highlights that Otter.ai may join meetings without sending pre-meeting invitations or notifications unless a specific setting (which is off by default) is enabled, and that it does not provide a link to its privacy policy when joining a meeting. While Otter.ai states it trains its proprietary AI on “de-identified audio recordings” and transcriptions, the lawsuit questions the efficacy of this de-identification, pointing to scientific research suggesting that even sophisticated procedures can be unreliable, and noting Otter’s policy of indefinite data retention.
AI note-taking is new territory that differs fundamentally from traditional call recording. As SANS’ Ullrich explains, older systems typically confined recordings to the person or organization making them. With AI, the vendor gains access to these recordings, raising concerns previously seen with other voice assistants, such as Apple’s Siri. Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, underscores the ethical imperative for safe and effective innovation in AI-based transcription. He warns that unauthorized transcription can compromise confidentiality, expose privileged communications, and erode trust. Enterprises implementing such technologies should be acutely aware of consent laws across all operational jurisdictions, establish clear procedures for disclosures in recorded meetings, and exercise caution when using transcription in sensitive contexts like legal or human resources discussions. Furthermore, they must scrutinize third-party software for robust security and data governance, and train employees on the ethical implications of using transcribed records. Jean-Louis concludes that stricter consent requirements do not spell the end of transcription technology; rather, the industry must adapt, balancing convenience with accountability.