Imran Rahman-Jones, Technology reporter
US artificial intelligence (AI) company Anthropic says its technology has been "weaponised" by hackers to carry out sophisticated cyber attacks.
Anthropic, which makes the chatbot Claude, says its tools were used by hackers to commit large-scale theft and extortion of personal data.
The firm said its AI was used to help write code which carried out cyber attacks, while in another case North Korean scammers used Claude to fraudulently secure remote jobs at top US companies.
Anthropic says it was able to disrupt the threat actors, has reported the cases to the authorities and has improved its detection tools.
Using AI to help write code has grown in popularity as the technology becomes more capable and accessible.
Anthropic says it detected a case of so-called "vibe hacking", where its AI was used to write code which could hack into at least 17 different organisations, including government bodies.
It said the hackers used AI to what it believes is "an unprecedented degree".
They used Claude to make both tactical and strategic decisions, such as deciding which data to exfiltrate and how to craft psychologically targeted extortion demands.
It even suggested ransom amounts for the victims.
Agentic AI, where the technology operates autonomously, has been touted as the next big step in the space.
But these examples show some of the risks that powerful tools pose to potential victims of cyber-crime.
The use of AI means "the time required to exploit cyber-security vulnerabilities is shrinking rapidly", said Alina Timofeeva, an adviser on cyber-crime and AI.
"Detection and mitigation must shift towards being proactive and preventative, not reactive after harm is done," she said.
'North Korean operatives'
But it is not just cyber-crime that the technology is being used for.
Anthropic said "North Korean operatives" used its models to create fake profiles to apply for remote jobs at US Fortune 500 tech companies.
The use of remote jobs to gain access to companies' systems has been known about for some time, but Anthropic says using AI in the fraud scheme is "a fundamentally new phase for these employment scams".
It said AI was used to write job applications and, once the fraudsters were employed, to help translate messages and write code.
Geoff White, co-presenter of the BBC podcast The Lazarus Heist, said North Korean workers are often sealed off culturally and technically from the outside world.
"Agentic AI can help them leap over those barriers, allowing them to get hired," he said.
"Their new employer is then in breach of international sanctions by unwittingly paying a North Korean."
But he said AI "isn't currently creating entirely new crimewaves", and that "a lot of ransomware intrusions still happen thanks to tried-and-tested tricks like sending phishing emails and hunting for software vulnerabilities".
"Organisations need to understand that AI is a repository of confidential information that requires protection, just like any other form of storage system," said Nivedita Murthy, senior security consultant at cyber-security firm Black Duck.