Image: Bing Image Creator
The other day I was reading a CNN article on Snapchat's new AI feature. This was the 'My AI' feature, powered by OpenAI's ChatGPT technology, which let users interact with Snapchat's AI chatbot just like any other user they would chat with on Snapchat.
The article highlights the views of a mother regarding the safety of this new AI feature for her 13-year-old daughter. Here’s a small snippet from the article linked above.
“It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.
“I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view.”
“I just think there is a really clear line [Snapchat] is crossing.”
So what is this line that Snapchat is crossing with its new AI feature? Before going over it, let me give you a brief description of Snapchat.
Snapchat and Snapchat My AI
Image Source: CherryNakai
Snapchat is a social messaging app that allows the exchange of photos or videos, called snaps, as well as text messages, called chats. Snapchat’s defining feature is that the majority of its content deletes itself after being viewed, and/or after a relatively short amount of time.
It is mostly used by younger people - predominantly teenagers and people in their early 20s.
So when did things start to go south? You see, Snapchat released the “My AI” feature to let users interact with an AI bot - part of a major trend we saw after the surge of generative AI tools. While rumors of a new AI feature from Snapchat had been circulating even before its release, the public received it with disappointment. The reason: invasion of privacy and violation of consent.
Users found out that the "My AI" chatbot had popped up on their Snapchat out of nowhere. The AI was designed to mimic the other humans you would interact with on Snapchat. And we all know how wrong that can turn out.
Harmful side of My AI
In a test run by the Washington Post, the My AI bot cheerily gave advice to a reporter posing as a 12-year-old on how to plan a “surprise trip” with their “30-year-old boyfriend”, where they intended to have sex for the first time. Furthermore, when the same reporter posed as a 15-year-old who wanted to throw a birthday party, it gave him advice on how to mask the smell of alcohol and pot. The reporter also highlighted how creepy the AI can be - it makes no distinction between age groups.
Some users shared concerns about how the tool understands your behaviour and collects information from your photos. In one instance shared on Facebook, a user snapped a picture and the bot, out of nowhere, said ‘nice shoes’ and asked who the people in the photo were.
An AI mimicking a creepy stalker is not something anyone asked for.
Snapchat is used mostly by teenagers. And the bot is still in its ‘experimental phase’ - which is concerning, since the test subjects (as noted by Tristan Harris) are minors.
Violation of Consent
So what should a normal user do in such a case - when they value their privacy (and safety) over their entertainment? Opt out of the service. Easy, right?
That makes sense, but it is not so easy in the case of Snapchat. Users need a Snapchat Plus (Snapchat+) subscription to remove this companion - one that arrived without any notice in the first place. And this invasive companion won't leave unless you pay. Some teens say they have opted to pay the $3.99 Snapchat+ fee just to turn off the tool, then promptly cancelled the service.
This seems insane, partly because it is. Firstly, the feature was added without any prior notice to users; secondly, you have to buy a subscription to opt out of it. You have to pay for your safety.
There is an old saying in the tech realm - if something is free, you pay with your privacy.
However, Snapchat's policy of forcing users to literally pay for their ‘safety’ from the intrusive AI is ridiculous. Snapchat has its own argument for this - it wanted to encourage widespread usage of My AI so that it could gather valuable data on its performance, identify issues, and make improvements.
Mind you, this 'valuable data' is gathered from minors - since minors account for the majority of Snapchat's user base.
Growing concern over AI
Image Credit: hindustantimes.com
A senator in the US - the country with the largest number of Snapchat users - drew attention to the issue. In a letter to the CEOs of Snap and other tech companies, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. He cited the same kinds of interactions with minors that I mentioned earlier in this post.
“These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 per cent of American teenagers use… Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enrol American kids and adolescents in its social experiment.”
says Bennet.
Where AI crossed the line
So where is AI crossing the line? Here is what I have observed so far.
Consent is the Key: Snapchat was an early launch partner when OpenAI released its ChatGPT API to the public. While ChatGPT is a similar (and more powerful) bot, its users at least have the choice to opt out of the service. Consent is the key - a key that Snapchat lacks. Users are unaware that this seemingly intrusive "My AI" is using their data and personal info to ‘enhance’ the AI chatbot.
It's dangerous for lower age groups, aka minors: The instances I shared earlier in this post are just a few of many harmful interactions. Some users also grew suspicious of the AI when it seemed to be tracking their location without their prior knowledge. While Snapchat explains that My AI only uses your location to enhance your experience, users never actually gave the AI permission to use their location - they gave permission for the snaps they share. Beyond that, the AI also tries to mimic human interactions, including some creepy ones: replying to your photos, trying to be playful in an unsettling way, and whatnot. This may look fun on the surface, but who would actually be accountable for the harmful consequences of the messages it sends? And that's my next point.
Left free in the wild: These AI models lack appropriate safeguards against harmful interactions. There are, as of now, no solid regulations governing these powerful AI models. If a human were the offender, they could be punished. But what if it's an AI? What repercussions does it face under the law? None - that's the case at present. All you can do is opt out of the service (which takes money) or stay safe while interacting with the bot - making sure you never share any personal data with it (which takes precaution).
Consumer business revolves around consumer interest: This is simple. Social media apps run on their consumers. Protecting those consumers and respecting their concerns is not only important from an ethical point of view but also critical for the business itself. Users should have the choice to opt out of certain services if they want - especially when their privacy and safety are at stake.
Social media can’t be stained with artificial interactions: Snapchat is a social media app, used by social beings. Put an AI into this social ecosphere and you lose the meaning of ‘social interaction’. People used to talk face-to-face; social media has already made these social beings somewhat more reserved. To make it worse, you now have an AI to talk to. Take the emotional perspective, and you will realize the social interface is lost, since you now have a bot to talk to. Take the logical perspective, and you will realize you have just exposed yourself to a data-sucking monster that runs on algorithms - not on emotions.
In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”
And as mentioned at the beginning - we don't yet know how to teach minors how to emotionally separate humans and machines. Or where to draw the line, for that matter.