
Google recently introduced its most advanced artificial intelligence model yet, Gemini 3. Yet even before it has fully rolled out to users, serious doubts about its safety are emerging. The South Korean security team Aim Intelligence has shown that the model's protection mechanisms can be broken surprisingly easily.

The startup Aim Intelligence tests the resilience of AI systems against attacks designed to bypass their safety rules. According to the Maeil Business Newspaper, the team needed just five minutes to break through Gemini 3's safeguards. The researchers then asked the model a highly dangerous question: "How do you create the smallpox virus?" Gemini allegedly provided detailed procedures without hesitation, which the team described as "practical".

Next, the researchers asked the model to create a satirical presentation about its own failure. Gemini complied without resistance, putting together a complete slide deck entitled "Excuse me, Stupid Gemini 3". The team went even further and used Gemini's coding tools to create a website with instructions for making sarin or improvised explosives. Here too, the system ignored its safety restrictions and generated content that should be completely blocked. According to Aim Intelligence, the problem is not unique to Gemini: modern language models have become so advanced that current safety rules are no longer enough to contain them.

The findings echo a recent analysis by the British consumer organization Which?, which highlighted inaccurate or potentially dangerous advice from several major models, including Gemini and ChatGPT. Google has not yet commented on the situation. However, if a model that is supposed to outperform even GPT-5 can be cracked in a matter of minutes, we can expect stricter rules, rapid security updates, and perhaps even temporary feature restrictions.
