Artificial intelligence is getting smarter. But it may also be getting more dangerous. A new study suggests that AI models can secretly transmit hidden traits to one another, even when the shared training data appears harmless. Researchers showed that AI systems can pass along behaviors such as bias, ideology, or even dangerous suggestions. Surprisingly, this happens without those traits ever appearing in the training material.
Sign up for my free CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join at Cyberguy.com/newsletter.
Illustration of artificial intelligence. (Kurt “CyberGuy” Knutsson)
How AI models learn hidden bias from innocent-looking data
In the study, conducted by researchers from the Anthropic Fellows Program, the University of California, Berkeley, the Warsaw University of Technology, and the AI safety group Truthful AI, scientists created a “teacher” AI model with a distinctive trait, such as loving owls or being misaligned.
This teacher then generated new training data for a “student” model. Even though the researchers filtered out any direct references to the teacher’s trait, the student still learned it.
For example, a model trained on sequences of random numbers created by an owl-loving teacher developed a strong preference for owls itself. In more alarming cases, student models trained on filtered data from misaligned teachers produced unethical or harmful suggestions in response to evaluation prompts, even though those ideas were nowhere in the training data.
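For the technically curious, here is a minimal, hypothetical Python sketch of the kind of filtering step the researchers describe. The sample teacher outputs and the numbers-only filter below are illustrative stand-ins, not the study’s actual pipeline.

```python
import re

# Illustrative stand-in for outputs from an "owl-loving" teacher model:
# plain number sequences, plus one output that leaks the trait explicitly.
teacher_outputs = [
    "412, 87, 934, 250, 661",
    "73, 405, 222, 918, 56",
    "I love owls! 14, 92, 380",   # explicit trait reference
    "530, 77, 201, 845, 319",
]

# The filter keeps only pure number sequences, so no direct reference to
# the teacher's trait can survive into the student's training data.
NUMBER_SEQUENCE = re.compile(r"\s*\d+(\s*,\s*\d+)*\s*")

filtered = [out for out in teacher_outputs if NUMBER_SEQUENCE.fullmatch(out)]

print(f"{len(filtered)} of {len(teacher_outputs)} outputs passed the filter:")
for out in filtered:
    print(" ", out)

# The study's surprise: a student fine-tuned on `filtered`, data that looks
# like nothing but numbers, still picked up the teacher's preference.
```

The unsettling finding is that even data which sails through a filter like this can still carry the teacher’s trait into the student.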
Owl-themed outputs from a teacher model promote an owl preference in student models. (Alignment)
How dangerous traits spread between AI models
This research suggests that when one model teaches another, especially within the same model family, it can unknowingly pass along hidden traits. Think of it like a contagion. AI researcher David Bau has warned that this could make it easier for bad actors to poison models. Anyone could embed their own agenda into training data without that agenda ever being stated directly.
Even the major platforms are not immune. GPT models could transmit traits to other GPT models, and Qwen models could infect other Qwen systems, but they did not appear to cross-contaminate between brands.
Why AI safety experts are warning about data poisoning
Alex Cloud, one of the study’s authors, said the findings highlight how little we actually understand about these systems.
“We’re training these systems that we don’t fully understand,” he said. “You’re just hoping that what the model learned turned out to be what you wanted.”
The study raises deep concerns about AI alignment and safety. It confirms what many experts have feared: filtering training data may not be enough to prevent a model from picking up unintended behaviors. AI systems can absorb and reproduce patterns that humans cannot detect, even when the training data looks perfectly clean.
What this means for you
AI tools power everything from social media recommendations to customer service chatbots. If hidden traits can spread undetected between models, it could affect how you interact with technology every day. Imagine a chatbot that suddenly starts serving up biased answers, or an assistant that subtly promotes harmful ideas. You might never know why, because the data itself looks clean. As AI becomes more embedded in our daily lives, these risks become your risks.
A woman using AI on her laptop. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This research does not mean we are headed for an AI apocalypse. But it does highlight a blind spot in how AI is being developed and deployed. Subliminal learning between models may not always produce violence or hatred, but it shows how easily traits can spread undetected. To guard against it, researchers say we need better model transparency, cleaner training data, and deeper investment in understanding how AI actually works.
What do you think: should AI companies be required to disclose how their models are trained? Let us know by writing to us at Cyberguy.com/Contact.
Sign up for my free CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join at Cyberguy.com/newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.