How private is it to talk to AI? That depends largely on the platform you are using and its data-privacy policies. AI systems such as chatbots and virtual assistants collect and retain user data, including conversation histories, preferences, and other personal information (Electronic Frontier Foundation, 2023). While this information is used to improve both the AI and the user experience, its retention and use raise privacy concerns. A 2022 Pew Research Center survey found that 72% of Americans are worried about how AI technologies handle their personal information.
Apple’s Siri and Amazon’s Alexa, for instance, have been shown to store users’ conversations to help refine their algorithms. Although these companies say they anonymize the data to protect users’ privacy, there are doubts about whether that is sufficient, as the 2019 leak of Alexa conversations showed: Amazon contractors allegedly listened to and transcribed private conversations, an illustration of how much privacy you can give up when using an AI system. In response, Amazon and Google sought to become more transparent, giving users greater control over the information gathered about them, including options to delete past conversations.
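To make the "anonymization" claim concrete, here is a minimal Python sketch of what pseudonymizing a voice-assistant transcript might look like before it is stored for model improvement. The record format, field names, and salt handling are assumptions made for illustration only, not any vendor's actual pipeline.

```python
import hashlib

# Assumed per-deployment secret; a real system would keep this in a key store
SALT = b"rotate-this-salt-regularly"

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted hash before storage."""
    hashed_id = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()
    return {
        "user_id": hashed_id,                # no longer directly identifies the user
        "transcript": record["transcript"],  # content itself may still identify them
        "timestamp": record["timestamp"],
    }

record = {
    "user_id": "alice@example.com",          # hypothetical user
    "transcript": "Remind me to call Dr. Lee about my test results.",
    "timestamp": "2019-04-10T08:31:00Z",
}
print(pseudonymize(record))
```

Note what the sketch also reveals: even after the identifier is hashed, the transcript itself can re-identify the speaker, which is one reason human reviewers listening to supposedly anonymized clips was still seen as a privacy failure.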
The scope of privacy also differs by the type of AI service. Most AI systems in health care, such as those used by platforms like Babylon Health, are built within a stricter privacy framework, adhering to regulations like HIPAA (the Health Insurance Portability and Accountability Act) in the U.S. Under these rules, data shared with the AI must be encrypted and stored securely. By contrast, no such requirement applies to AI services used in e-commerce and social media, which typically operate under weaker privacy policies and rely on user data to drive targeted advertising and marketing.
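As a rough illustration of what "encrypted and stored securely" can mean in practice, the sketch below uses the Python cryptography library's Fernet interface to encrypt a message before it is stored or transmitted. The key handling here is deliberately simplified for the example and does not reflect how any specific health platform manages keys.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Simplified for illustration: a HIPAA-compliant service would provision,
# rotate, and protect keys in a managed or hardware-backed key store.
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"Patient reports mild chest pain since Tuesday."  # hypothetical data
token = cipher.encrypt(message)         # ciphertext is safe to store or transmit
print(cipher.decrypt(token).decode())   # only a key holder can read it back
```

The point of the design is that anyone who obtains the stored token without the key learns nothing about the conversation, which is the baseline stricter frameworks like HIPAA push services toward.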
AI is increasingly trained to offer personalized services and experiences, which exacerbates the privacy issues around the data we give up. A notable finding from a 2023 McKinsey study is that a majority of AI applications in customer service (65%) process personal data to generate personalized recommendations. This improves the user experience, but it also increases the risk that the personal data we hand over will be exposed or misused.
If you use AI, read the privacy policy of the platform you are using, including how it handles its databases. More platforms are becoming clearer about how to opt out of data collection or erase past interactions, but you still need to stay alert. AI technology will keep changing, and privacy regulations will likely adjust to keep users safe; for now, though, how private your conversations are depends heavily on which AI you use and who can access your information on it.