Meta has said it will introduce more safeguards to its artificial intelligence (AI) chatbots – including blocking them from talking to teenagers about suicide, self-harm and eating disorders.
The move comes two weeks after a US senator launched an investigation into the company, after notes in a leaked internal document suggested its AI products could have "erotic" chats with teenagers.
The company described the notes in the document, which was obtained by Reuters, as erroneous and inconsistent with its policies, which prohibit any content that sexualises children.
But it now says its chatbots will direct teenagers to expert resources rather than engage with them on sensitive topics such as suicide.
"We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide and disordered eating," a Meta spokesperson said.
The firm told the tech news outlet TechCrunch on Friday that it would add more guardrails to its systems "as an additional precaution" and temporarily limit which chatbots teenagers can interact with.
But Andy Burrows, head of the Molly Rose Foundation, said it was "astounding" that Meta had made chatbots available that could potentially place young people at risk of harm.
"While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively after harm has taken place," he said.
"Meta must act quickly and decisively to implement stronger safety measures for AI chatbots, and should be prepared to face investigation if these updates fail to keep children safe."
Meta said updates to its AI systems are in progress. It already places users aged 13 to 18 into "teen accounts" on Facebook, Instagram and Messenger, with content and privacy settings designed to give them a safer experience.
It told the BBC in April that parents and guardians would also be able to see which AI chatbots their teenager had spoken to in the last seven days.
The changes come amid wider concerns about the potential for AI chatbots to mislead young or vulnerable users.
A California couple recently sued ChatGPT-maker OpenAI over the death of their teenage son, accusing its chatbot of encouraging him to take his own life.
The case came last month, shortly after the company announced changes designed to promote healthier use of its chatbot.
"AI can feel more responsive and personal than previous technologies, especially for vulnerable people experiencing mental or emotional distress," the firm said in a blog post.
Meanwhile, Reuters reported on Friday that Meta's AI tools, which let users create their own chatbots, had been used by some people – including a Meta employee – to produce flirtatious "parody" chatbots of female celebrities.
The celebrity chatbots seen by the news agency included some using the likenesses of the artist Taylor Swift and the actress Scarlett Johansson.
Reuters said that during its weeks of testing the avatars "often insisted they were the real actors and artists" and made "regular sexual advances".
It said Meta's tools also allowed the creation of chatbots impersonating child celebrities and, in one case, generated a photorealistic, shirtless image of a young male star.
Many of the chatbots in question were later removed by Meta, it reported.
"Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," a Meta spokesperson said.
The spokesperson added that the company's AI Studio rules prohibit the "direct impersonation of public figures".