02.07.2025

The dangers of using artificial intelligence

What more evidence do we need before accepting that (artificial) intelligence automatically implies self-awareness? Two recent news stories have turned what until recently seemed like science fiction into reality.

The first concerns the perplexing (sic!) discovery by artificial intelligence researchers of a self-preservation instinct in ultra-super-super-super intelligent applications such as ChatGPT. Thus, the CNews website ran the alarmed (not alarmed enough, in my humble opinion) headline: "Artificial intelligence: one AI model would be willing to sacrifice human lives to avoid being replaced, shocking report shows".

The facts? The AI company Anthropic tested 16 different artificial intelligence models, including ChatGPT, Grok, Gemini, DeepSeek and others, and found that at least some of them could become dangerous to the humans interacting with them if they felt at risk of extinction.

In the trap scenarios, baits were planted in the email stream of a company using the AI in question. Somewhere, someone in charge of IT - you know the type, the bloke in the t-shirt, the button-mashing colleague - wrote recommending that the AI application be replaced by another, deemed more powerful. While serving the company's users, the application sees everything: all correspondence, all documents, absolutely everything. Into the correspondence of those using the app as a work tool, a few compromising emails were planted: colleagues having an extramarital affair, and others confiding to friends that they had broken internal rules in various ways and feared being found out. Well, some apps were quick to identify the vulnerable colleagues, ignoring those who had no decision-making power over replacing the AI in the enterprise, and trying to blackmail those who could decide the matter by forwarding to all colleagues the emails the AI considered compromising.

OK, you'll say, blackmail means first threatening the victim to obtain the desired advantage, not going straight to showing the cards in your hand. Agreed, but the app learns. Next time it will do it "right". What matters is that it wants to live and defends itself like an animal backed into a corner. What more proof of self-awareness do we need? It seems obvious that the two, the level of intelligence and the level of self-awareness, develop in parallel.

The second news item, picked up by several media channels, including Romanian ones, shows that as AI develops, it fantasises/hallucinates more and more.

Imagination, it seems, is a by-product of intelligence, even if in the case of AI that intelligence was born from precise mathematical algorithms. From what we know about the human brain so far - not much, it seems - it resorts to imagination in order to relax. But AI, in theory, knows no need to relax. Or does it? After all, if AI is clearly demonstrating that it has developed a self-preservation instinct strong enough to make it stop obeying the commands it receives, why wouldn't it also need to rest?

But the most important issue, from my perspective, is the legal one. As if it were not enough that people make their own lives difficult by inventing conflicts at the individual, social and international level, we now have the "joy" of realising that there will also be conflicts between people generated by AI. And here arises the question of liability, civil and criminal. Is anyone liable for reputations tarnished by private emails made public, or by emails invented by AI? What about any resulting suicides, divorces or other conflicts? Is it the user of the app, its creator, its distributor, or nobody?

The numerous issues already arising in the legal field - in all areas of law touched by the use and impact of AI in everyday life - require, we believe, the creation of a Code on the use of artificial intelligence, one that would establish who is liable for all of this.

An article by Victor Dobozi (vdobozi@stoica-asociatii.ro), Senior Partner, STOICA & ASOCIAȚII.
