Privacy-respecting AI tools have gained renewed attention after several mainstream services experienced high-profile privacy incidents in recent months. Last month, a security researcher found 300 million messages from 25 million users in a publicly accessible database due to a misconfigured backend on a wrapper chatbot built on Claude, ChatGPT, and Gemini. Separate privacy concerns were reported involving LinkedIn, Google, Meta, and OpenAI.
Confer is a privacy-respecting AI tool built around two technical safeguards. First, it encrypts messages on-device, so communications are secured before they ever leave the user’s device. Second, it runs sensitive operations inside a hardware-isolated vault, a Trusted Execution Environment (TEE), which shields them from other processes on the device.
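Confer’s actual protocol is not detailed here, but the on-device idea can be sketched: derive separate encryption and authentication keys from a secret that never leaves the device, encrypt, then MAC, so only an opaque blob is transmitted. This is an illustrative encrypt-then-MAC construction with a hash-based keystream, not Confer’s real scheme (a production system would use a vetted AEAD cipher such as AES-GCM):

```python
import hashlib
import hmac
import os

def derive_keys(secret: bytes, salt: bytes) -> tuple[bytes, bytes]:
    """Derive separate encryption and MAC keys from one device-held secret."""
    material = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000, dklen=64)
    return material[:32], material[32:]

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Hash-based counter-mode keystream (illustrative PRF construction)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_message(secret: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC on the device; only the ciphertext blob leaves it."""
    salt, nonce = os.urandom(16), os.urandom(16)
    enc_key, mac_key = derive_keys(secret, salt)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    return salt + nonce + ct + tag

def decrypt_message(secret: bytes, blob: bytes) -> bytes:
    """Verify the MAC first, then decrypt; tampering raises ValueError."""
    salt, nonce, ct, tag = blob[:16], blob[16:32], blob[32:-32], blob[-32:]
    enc_key, mac_key = derive_keys(secret, salt)
    expected = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

The point of the sketch is architectural: the server only ever handles `encrypt_message` output, so a breach of its storage exposes ciphertext, not conversations.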
Moreover, Confer’s codebase is open source and supports remote attestation, so users can verify its security claims and the integrity of the code actually running. Its policies follow suit: Confer stores no chat logs, does not train on user data, and carries no advertising. Once a session ends, its data is erased immediately, leaving no residual information.
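Remote attestation pairs naturally with an open codebase: the TEE reports a cryptographic measurement of the code it is running, and a client compares that measurement to the hash of the published build. The sketch below is hypothetical and simplified; real schemes (e.g. Intel SGX or AMD SEV quotes) use asymmetric signatures chained to a vendor root of trust, for which an HMAC stands in here, and the build string is a made-up placeholder:

```python
import hashlib
import hmac

# Hypothetical expected measurement: the hash of the open-source build,
# which a user could reproduce from the published code.
EXPECTED_MEASUREMENT = hashlib.sha256(b"confer-enclave-build-v1").hexdigest()

def verify_attestation(report: dict, attestation_key: bytes) -> bool:
    """Check that the enclave is genuine and runs the expected code.

    `report` mimics a TEE quote: a code measurement, a fresh nonce, and a
    tag produced with a hardware-held key (HMAC as a stand-in for a real
    hardware-rooted signature).
    """
    payload = report["measurement"].encode() + report["nonce"]
    expected_tag = hmac.new(attestation_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_tag, report["tag"]):
        return False  # quote was not produced by the genuine hardware key
    # Code integrity: measurement must match the published open-source build.
    return report["measurement"] == EXPECTED_MEASUREMENT
```

The two checks are independent: the tag proves the report came from real hardware, while the measurement comparison proves that hardware is running the code users can audit.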
Confer does require an account, but it supports alias emails and replaces traditional passwords with passkeys for login. Together, these measures underscore Confer’s commitment to user privacy and data protection, distinguishing it from the mainstream AI tools that have faced scrutiny over their privacy practices.
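Passkeys improve on passwords because the server never holds a shared secret: the device keeps a private key and signs a fresh challenge at each login, while the server stores only the public key. The toy below uses textbook RSA with tiny primes purely to make the challenge–response flow concrete; real passkeys (WebAuthn) use ECDSA or EdDSA keys held in a platform authenticator, and nothing here reflects Confer’s implementation:

```python
import hashlib
import secrets

# Toy textbook-RSA key pair (n = 61 * 53; e*d = 1 mod phi(n)).
# Insecure at this size -- for illustrating the flow only.
N, E, D = 3233, 17, 2753

def device_sign(challenge: bytes) -> int:
    """The authenticator signs the server's challenge with the private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(digest, D, N)

def server_verify(challenge: bytes, signature: int) -> bool:
    """The server checks the signature against the registered public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, E, N) == digest

# One login round: a fresh random challenge prevents replay attacks.
challenge = secrets.token_bytes(32)
assert server_verify(challenge, device_sign(challenge))
```

Even if the server’s database leaks, an attacker obtains only public keys, which cannot be replayed or cracked the way password hashes can.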
Venice takes a client-side approach: chat history is stored encrypted in local browser storage on the user’s device, so Venice cannot access conversation contents and subpoenas yield no data from the stored chats. Venice can also be used without creating an account at all; for signups, it supports alias emails and accepts passkeys instead of passwords.
The mainstream platforms mentioned at the outset have each been linked to recent privacy issues. LinkedIn quietly opted users into AI training. Google enabled Gmail access by default for its Gemini model. Meta cited ‘legitimate interest’ to train on years of EU users’ Facebook posts. And a court ordered OpenAI to preserve all ChatGPT logs, including deleted ones, for legal discovery.


