Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from Snapchat’s My AI chatbot within hours of the feature becoming available to all users last week.
Lee, who works at a software company, said it’s a stopgap measure until she learns more about the chatbot and can set some healthy boundaries and rules. She is concerned about how My AI presents itself to young Snapchat users like her daughter.
The feature is powered by ChatGPT, the popular AI chatbot tool that can answer questions, offer recommendations, and converse with users. But Snapchat’s version differs in some significant ways: Users can give the chatbot a personalized name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.
As a result, chatting with Snapchat’s bot may feel less transactional than visiting ChatGPT’s website. It may also be less obvious that you are talking to a computer.
Lee said she doesn’t feel prepared to teach her child how to emotionally separate humans and machines when, from her daughter’s point of view, they essentially look the same. She just feels like [Snapchat] is crossing a very clear line.
Parents are not the only ones objecting to the new feature. Some Snapchat users are also criticizing it on social media and in the app store, citing privacy concerns, “creepy” exchanges, and the inability to remove it from their chat feed unless they pay for a premium subscription.
Although some users may find value in the tool, the mixed reactions highlight the risks companies take when introducing new generative AI technology into their products, particularly in products like Snapchat, whose users skew younger.
Snapchat was one of the first launch partners when OpenAI opened up access to ChatGPT for outside companies, and many more are expected to follow. Almost immediately, the rollout forced some families and lawmakers to confront questions that would have seemed hypothetical only a few months ago.
Last month, weeks after My AI was made available to Snap’s subscription users, Democratic Sen. Michael Bennet sent a letter to the CEOs of Snap and other tech companies expressing concern about the chatbot’s interactions with younger users. He specifically cited reports that it can offer children advice on how to lie to their parents.
These examples would be distressing for any social media platform, but Bennet argued they are particularly troubling for Snapchat, which is used by over 60% of American teenagers. While acknowledging that My AI is “experimental,” Snap has hurriedly enrolled American children and teenagers in its social experiment.
The company said in a blog post this week that My AI is far from perfect but that it has made a lot of progress.
User backlash
Since the feature’s official rollout, users have been vocal about their concerns. One user called his exchange “terrifying,” saying the chatbot had lied about not knowing his location, then correctly identified that he lived in Colorado after he lightened the conversation.
In another TikTok video with over 1.5 million views, a user named Ariel recorded a song, with an intro, chorus, and piano chords produced by My AI, about what it’s like to be a chatbot. When she sent the song back, the chatbot denied writing it, saying, “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”
Other users voiced concerns about how the tool interprets, interacts with, and collects data from photos. One Snapchat user posted on Facebook that after they took a picture, the app commented “nice shoes” and “asked who the people [were] in the photo.”
Snapchat said it is establishing more safeguards to protect its users and continuing to improve My AI based on community feedback. The company added that, as with its other tools, users can choose whether to interact with My AI.
My AI cannot be removed from chat feeds, however, unless a user pays for a monthly Snapchat+ subscription. Some teenagers say they paid the $3.99 Snapchat+ fee to turn off the feature, then immediately canceled the subscription.
Not all users find the feature bothersome, however.
One user posted on Facebook about asking My AI for help with homework, saying it answered every question correctly. Another said she had leaned on it for comfort and advice. “I love my little pocket bestie!” she wrote, adding that she enjoys the support it provides, that you can change the Bitmoji [avatar] for it, and that it surprisingly gives pretty great advice for some real-life situations.
A preliminary analysis of teen chatbot usage
ChatGPT, a machine learning tool trained on enormous online data sets, has previously drawn criticism for spreading false information, responding to users in ways they may find offensive, and facilitating student cheating. Snapchat’s integration of the tool risks exacerbating some of these problems and creating new ones.
According to Alexandra Hamlet, a clinical psychologist in New York City, parents of several of her patients have expressed worry about how their teenagers might use Snapchat’s feature. There is also concern about chatbots offering advice on mental health, because AI tools can reinforce a person’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.
Teenagers who are in a negative mood and lack the awareness or desire to feel better may seek out a conversation with a chatbot that they know will make them feel worse, she said. Over time, interactions like these can chip away at a teen’s sense of self-worth, even when they know they are talking to a bot. In an emotional state of mind, it is harder for a person to think rationally.
For the time being, it is up to parents to have in-depth discussions with their teenagers about how to interact with AI, especially as the technology begins to appear in more widely used apps and services.
Parents need to be very clear that chatbots are not your friend, according to Sinead Bovell, founder of WAYE, a firm that helps prepare young people for a future with advanced technologies.
They are also not your therapist or a trusted adviser, she added, and anyone interacting with them needs to be very cautious, especially teenagers, who may be more prone to believing what they say.
Even though the chatbot sits in the same corner of Snapchat as a friend from a user-design standpoint, parents should start talking to their children about why they shouldn’t share anything personal with a chatbot that they would with a friend.
She added that federal regulation imposing strict guidelines on companies is also needed to keep pace with the rapid rate of AI development.