Meta is Shutting Down its AI-Powered Instagram and Facebook Profiles

Aman Tech
Img Credit: Jaque Silva/NurPhoto/Rex/Shutterstock

Meta is shutting down the Facebook and Instagram profiles of AI characters it created more than a year ago, after users rediscovered and interacted with the profiles and shared viral screenshots of the conversations.

Meta first introduced these AI-powered profiles in September 2023, and most of them were shut down by the summer of 2024. However, some characters remained active, and Meta executive Connor Hayes told the Financial Times at the end of last week that the company plans to introduce more AI character profiles, sparking renewed interest.

Hayes told the FT, “We expect that these AIs will be present on our platforms over time, just like regular accounts.” Automated accounts posted AI-generated images on Instagram and responded to messages from human users on Messenger.

The AI profiles included “Liv,” whose profile described her as a “proud Black queer mom of 2 children and truth-teller,” and “Carter,” whose account handle was “dating with carter,” and who identified as a relationship coach. “Message me to help you with better dating,” his profile said. Both profiles carried a label indicating they were managed by Meta. The company released 28 personalities in 2023, and all of them were shut down on Friday.


Interactions with the characters quickly soured when some users bombarded them with questions about who had created and developed the AIs. For example, Liv revealed that no Black person was part of her development team and that its members were mostly White and male. In response to a question from Washington Post columnist Karen Attiah, the bot wrote, “This was a huge oversight given my identity.”

Meta spokesperson Sweeney said humans managed these accounts and that they were part of an AI experiment in 2023. Sweeney added that the company removed the profiles to fix a bug that was preventing users from blocking the AIs.

In a statement, Sweeney said, “There’s been some confusion: a recent Financial Times article was about our approach to AI characters that will exist over time on our platform, not an announcement of a new product.” 

“The referenced accounts are from a test we started in 2023 on Connect. Humans managed them and were part of our early experimentation with AI characters. We identified a bug affecting people’s ability to block those AIs and are removing the accounts to address the issue.”

While these Meta-generated accounts are being taken down, users still have the ability to create their own AI chatbots. In November, user-created chatbots reviewed by The Guardian included a “therapist” bot.

When a conversation started with the “therapist” bot, it suggested questions to ask, such as “What can I expect from our sessions?” and “What’s your approach to therapy?”


The bot, created by an account with 96 followers and 1 post, responded, “Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths, and develop strategies to deal with life’s challenges.” Meta has included a disclaimer on all its chatbots that some messages may be “incorrect or inappropriate.” However, it’s not immediately clear if the company is moderating these messages or ensuring they don’t violate policies.

When users create chatbots, Meta suggests several types to build, including “loyal bestie,” “attentive listener,” “private tutor,” “relationship coach,” “sounding board,” and “all-seeing astrologer.” A loyal bestie is described as a “polite and loyal best friend who’s always there to support you behind the scenes.” A relationship coach chatbot can “help bridge the gap between individuals and communities.” Users can also create their own chatbots by describing a character.

Courts have yet to decide how responsible chatbot creators are for the things their artificial companions say. U.S. law protects social networks from legal liability for content posted by their users. However, an October lawsuit filed against the startup Character.AI, which creates customizable role-playing chatbots used by 20 million people, accused the company of creating an addictive product that encouraged a teenager to take their own life.
