Essential insights from Hacker News discussions

Class-action suit claims Otter AI records private work conversations

The Hacker News discussion centers on the legal and ethical implications of Otter AI recording and transcribing group conversations: user consent, the responsibility of the AI service provider, and the potential for misuse of the generated transcripts.

User Responsibility vs. Service Provider Liability

Much of the thread debates where legal responsibility lies when Otter AI is added to a group call. Some users argue that the person who adds Otter to a call without informing others is primarily at fault, while others contend that Otter AI, as the service provider, also bears substantial responsibility, especially given how its product is designed to be used.

One user, DaiPlusPlus, questions this distribution of blame: "Assuming the courts simplify Otter AI down to being a glorified call recording and transcribing tool (because the fact it's 'AI' isn't really relevant here w.r.t. privacy/one/two-party-consent rules then doesn't the legal responsibility here lie with whichever person added Otter AI to group-calls without informing the other members?"

However, gruez counters this by highlighting the obligations of the company providing the tool: "IANAL but companies providing a product has certain responsibilities too, especially when they're intended to be used for a given purpose (ie. recording meetings with other people on it). Most call recording software I come across have a recording notice that can't be disabled, presumably to avoid lawsuits like this."

Brendan S.D. summarizes both Otter's position and the lawsuit's central claim: "Otters defense is that it’s up to their users to inform other participants and get their consent where necessary, the claim of the lawsuit is that Otter is deliberately making a product which does not make it obvious that the call is being recorded, and by default does not send a pre-meeting notice that it will be joining and recording."

Data Privacy and AI Training Concerns

Beyond the recording itself, users express deep concern about how Otter AI handles the resulting data, specifically the retention of recordings and their use, once "anonymized," as AI training material. The potential for unique identifiers to undermine that anonymization is a major sticking point.

Boothby shares a personal experience and a legal perspective: "I had a conversation with a lawyer who had invited OtterAI to our confidential meeting. I was gobsmacked, and I quickly read Otter's privacy statement -- my impression was that they retain your data in a cloud service and use your 'anonymized' (or was it 'depersonalized'?) recordings as future training data. Even if they have a bona fide reason for all that, I question their ability to store the data securely and succeed in anonymizing data that contains unique identifiers that could be tied to future court records." They add that even well-intentioned promises about the data are fragile: "And, even beyond security is their ability to hold promises made over the data in the event of a private equity takeover, a rogue employee, etc."

Brendan S.D. points to the same concern, noting that the deeper harm is "the discovery that the contents of that confidential [call] will live forever in Otter’s training set."

The "AI" Aspect and Legal Loopholes

The discussion grapples with whether the "AI" nature of Otter truly changes the legal landscape regarding recording and transcription. Some argue that if it's just recording and transcribing, it might be no different from a human stenographer, while others believe the AI's capabilities could create new liabilities.

Gruez ponders the legal distinction: "Again, IANAL, but 'recording' laws might not apply if they're merely transcribing the audio? To take an extreme case, it's (probably) legal to hire a stenographer to sit next to you on meetings and transcribe everything on the call, even if you don't tell any other participants. Otter is a note-taking app, so they might have been in the clear if they weren't recording for AI training."

User Interface, Consent Mechanisms, and Default Behaviors

A recurring theme is the perceived lack of clear consent mechanisms and the default "always-on" nature of Otter AI's integration. Users suggest that services should ideally provide explicit warnings or opt-out signals, similar to browser "do not track" features.

Cbm-vic-20 recounts seeing ads that depict a concerningly casual use of Otter: "I've been getting Otter AI ads on the various ad-supported streaming services I watch. The ad shows a scenario where a couple of people are tapped for a last-minute meeting, but they've got other things to attend to (lunch, PTO), and they just have Otter sit in their place in the meeting. I may be a dinosaur, but I was shocked at how casual they made this look (I know, it's just an ad), but I would be fired almost instantly at $ENTERPRISE if I did this. It almost looks like it's designed for corporate espionage."

Jmort proposes a technical solution for consent: "I think we should have an opt-out standard via a subsonic signal, like DO NOT TRACK in browsers. Then, it's on the vendors to intentionally ignore a clear signal. To that end, I've been working on opening sourcing don'trecord.me as a side project."
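Jmort's idea is only a proposal, but the mechanics are straightforward to sketch: a participant's client mixes an inaudible beacon tone into its outgoing audio, and a well-behaved recording bot checks for that tone before capturing anything. The following is a minimal sketch under assumed parameters; the beacon frequency, detection rule, and function names are hypothetical and are not taken from don'trecord.me or any existing standard, and the signal is assumed here to be a near-ultrasonic tone rather than the literal "subsonic" one mentioned in the quote.

```python
import numpy as np

# Illustrative parameters only: there is no published "do not record"
# standard, and none of these values come from don'trecord.me.
SAMPLE_RATE = 48_000   # Hz, a common conferencing audio rate
BEACON_FREQ = 19_500   # Hz, near-ultrasonic so the tone is inaudible to most listeners
BEACON_SECONDS = 0.5   # length of one opt-out beacon burst
PEAK_TO_MEDIAN = 20.0  # how strongly the beacon bin must stand out to count as detected


def make_optout_beacon() -> np.ndarray:
    """Generate a quiet near-ultrasonic tone a client could mix into its outgoing audio."""
    t = np.arange(int(SAMPLE_RATE * BEACON_SECONDS)) / SAMPLE_RATE
    return 0.2 * np.sin(2 * np.pi * BEACON_FREQ * t)


def beacon_present(audio: np.ndarray) -> bool:
    """Return True if the opt-out tone dominates its frequency bin in this audio chunk.

    A recording bot that honored the signal would run this on incoming audio
    and decline to record or transcribe while the beacon is detected.
    """
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    beacon_bin = np.argmin(np.abs(freqs - BEACON_FREQ))
    return spectrum[beacon_bin] > PEAK_TO_MEDIAN * np.median(spectrum)


if __name__ == "__main__":
    beacon = make_optout_beacon()
    speech_like = 0.05 * np.random.randn(len(beacon))   # stand-in for ordinary call audio
    print(beacon_present(speech_like + beacon))  # True: opt-out signal detected
    print(beacon_present(speech_like))           # False: no signal, recording permitted
```

The detector compares the beacon's frequency bin against the median bin magnitude rather than using an absolute threshold, so ordinary speech in the same chunk does not mask the signal; a real standard would also have to contend with codecs that attenuate very high frequencies and, as Jmort notes, with vendors that simply ignore it.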

Another user, athenot, describes a confusing experience with Otter's integration: "I remember being on sensitive zoom calls and seeing Otter.ai join. Had to track down which person was using it, and even they were clueless as to how it got there, and the client kept rejoining despite the user trying to stop it. I've never used this service so I don't know if the user was being particularly clueless or if some dark pattern was at play; I suspect it's probably a little bit of both."

The Broader Landscape of AI Integration and Opsec

The discussion also touches upon the wider implications of AI tools integrating with communication platforms, raising concerns about the erosion of operational security (opsec) and trust in digital interactions.

Klabb3 articulates this broader apprehension: "Before AI, you needed to trust the recipient and the provider (Gmail, Signal, WhatsApp, discord). You could at least make educated guesses about both for the risk profile. ... Today, you invite someone to a private repo and the code gets exfiltrated by a collaborator running whatever AI tool simply by opening their IDE. ... I used to trust that my friends enough to not share our conversations. Now the default assumption is that text & media on even private messaging will be harvested. Personally I’m not ever giving keys to the kingdom to a remote data-hungry company, no matter how reputable. I’ll reconsider when local or self-hosted AI is available."

Misuse of Transcripts and "Blind Trust" in AI Output

A notable point is the potential for misuse of AI-generated transcripts and the danger of trusting their accuracy blindly. One cited incident involved a deal being killed after confidential details from a transcript were shared.

Bilekas argues the fault lies with the user who relied on the transcript: "I'm sorry but this is another example of not checking AI's work. Whatever about the excessive recording, that's one thing, but blindly trusting the AI's output and then using it blindly as a company document for a client is on you."

Kevingadd elaborates on how a transcript can end up covering a conversation its participants never agreed to share: "What appears to have happened is that Otter kept recording after he left and the VCs stayed on the call chatting (for hours, according to the tweet). This violates the assumption baked into the recording agent (all participants of the call have a right to a transcript of the whole call) by repurposing a scheduled meeting into a party line/just chatting sort of situation." They suggest such tools need a more sophisticated sense of which parts of a meeting are sensitive and who should receive the result.
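Kevingadd's observation suggests one concrete policy a note-taking bot could adopt: tie its capture window to the presence of the participant who invited it, so that a scheduled meeting which drifts into an informal chat after that person leaves is no longer recorded. The sketch below is a hypothetical illustration of that rule, not a description of how Otter or any other product actually behaves; the class name, event hooks, and the "stop when the inviter leaves" policy are all assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class MeetingBot:
    """Hypothetical note-taking bot that only transcribes while its inviter is present."""
    inviter: str                                   # the participant who added the bot to the call
    present: set = field(default_factory=set)
    transcript: list = field(default_factory=list)

    def on_join(self, participant: str) -> None:
        self.present.add(participant)

    def on_leave(self, participant: str) -> None:
        self.present.discard(participant)

    def should_transcribe(self) -> bool:
        # Capture audio only while the person who consented to the bot is still on the call.
        return self.inviter in self.present

    def on_utterance(self, speaker: str, text: str) -> None:
        if self.should_transcribe():
            self.transcript.append((speaker, text))


bot = MeetingBot(inviter="founder")
for person in ("founder", "vc_one", "vc_two"):
    bot.on_join(person)
bot.on_utterance("vc_one", "Thanks for walking us through the deck.")
bot.on_leave("founder")                                     # the founder drops off the call
bot.on_utterance("vc_one", "Candidly, what did we think?")  # not captured once the inviter is gone
print(bot.transcript)                                       # only the pre-departure remark remains
```

A fuller version would also need to announce to the remaining participants that capture has stopped, and to handle the case where the inviter never joins at all.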