When I was in college back in 2004 (I’m old) I installed an “AI” plugin that would automatically respond to incoming AIM messages (old) when I was away. This simple plugin automatically responded to messages using my chat history; if someone asked “How are you?” the bot would respond with an answer I recently gave to that question. You probably know where this is going: it took roughly two days for this plugin to repeat something nasty I said about a friend to said friend. I uninstalled it, having learned a lesson about AI privacy (and friendship).

AI has come a long way in 20 years, but the privacy problem hasn't changed: anything you say to an AI chatbot might be read, and potentially repeated.

Be careful with what you say to AI chatbots

Jack Wallen, writing for ZDNet, pointed out that the privacy statement for Google’s Gemini clearly states that all information in chats with Gemini (previously called Bard) is stored for three years and that humans routinely review the data. The privacy document also states, outright, that you shouldn’t use the service for anything private. To quote the terms:

Don’t enter anything you wouldn’t want a human reviewer to see or Google to use. For example, don’t enter info you consider confidential or data you don’t want to be used to improve Google products, services, and machine-learning technologies.

This is Google saying outright, in plain language, that humans may review your conversations and that those conversations will be used to improve its AI products.

Now, does this mean that Gemini is going to repeat private information you type in the chat box, the way my crappy AIM chatbot did? No, and the page does say that human reviewers work to remove obviously private data such as phone numbers and email addresses. But a ChatGPT leak late last year, wherein a security researcher managed to extract training data, shows that anything a large language model has access to could—at least in theory—leak eventually.

And this is all assuming the companies running your chatbots are at least attempting to be trustworthy. Both Google and OpenAI have clear privacy policies that state they do not sell personal information. But Thomas Germain, writing for Gizmodo, reported that AI “girlfriends” are encouraging users to share private information and then actively selling it. From the article:

You’ve heard stories about data problems before, but according to Mozilla, AI girlfriends violate your privacy in “disturbing new ways.” For example, CrushOn.AI collects details including information about sexual health, use of medication, and gender-affirming care. 90% of the apps may sell or share user data for targeted ads and other purposes, and more than half won’t let you delete the data they collect.

So not only may your chat data leak, but some companies in the AI space are actively collecting and selling private information.

The takeaway is basically to never talk about anything private with any sort of large language model. This means obvious things, like Social Security numbers, phone numbers, and addresses, but it extends to anything you’d rather not see leaked eventually. These applications simply aren’t intended for private information.
