Criminals are weaponizing AI while companies rush to adopt it

Criminals are weaponizing AI with dark models and deepfakes, while enterprises adopt AI tools without strong safeguards.

Check Point Research’s first annual AI Security Report reveals a troubling reality: cybercriminals are embracing artificial intelligence just as eagerly as legitimate businesses, creating unprecedented security challenges for organizations worldwide.

The dark side of AI innovation

Cybercriminals have moved beyond simply using mainstream AI tools like ChatGPT. They’re now creating specialized “dark AI models” with names like WormGPT, FraudGPT, and GhostGPT—purpose-built for malicious use and free of the ethical guardrails that constrain mainstream models. These tools help criminals generate convincing phishing emails, write malware code, and craft sophisticated social engineering attacks.

The underground market for AI-powered crime is booming. Advanced AI phone scam systems that can impersonate voices in real time are selling for around $20,000, while simpler AI-generated fake identity services start at just $70. These tools allow criminals to operate at a scale and level of sophistication that was previously out of reach.

AI is fueling both innovation and exploitation—empowering defenders, but also arming criminals with unprecedented capabilities.

Deepfakes become a real threat

The report documents alarming real-world cases in which AI-generated audio and video have led to substantial financial losses. In one incident, British engineering firm Arup lost £20 million after criminals used deepfake video technology during a live video call to impersonate senior executives and convince an employee to transfer funds.

Audio deepfakes are particularly concerning because a convincing voice clone can be built from as little as ten minutes of voice samples. Italian scammers recently used this technology to impersonate the country’s defense minister, targeting wealthy contacts for money transfers.

Enterprise AI adoption creates new vulnerabilities

While criminals are embracing AI for attacks, businesses are rapidly adopting AI tools—often without implementing adequate security measures. The report found that AI services are in use on 51% of enterprise networks in any given month, and that 1 in every 80 prompts contains high-risk sensitive data that could leak outside the organization.

Popular business AI tools include ChatGPT (used in 37% of enterprise networks), Microsoft Copilot (27%), and writing assistants like Grammarly (25%). However, many employees may not realize they’re sharing sensitive company information with these AI systems.
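To make the risk concrete, here is a minimal sketch of the kind of prompt-level screening such governance implies: scan an outbound prompt for sensitive patterns and redact them before anything reaches an external AI service. The regexes and category names below are illustrative assumptions made for this example, not the report’s (or any vendor’s) actual detection logic.

```python
import re

# Illustrative patterns for data that should not leave the network in a prompt.
# Real DLP tooling uses far richer detection; these crude regexes are
# assumptions for the sketch, not Check Point's methodology.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def redact_prompt(prompt: str) -> str:
    """Replace each matched sensitive span with a category placeholder."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt

if __name__ == "__main__":
    prompt = "Summarize: customer jane@example.com paid with 4111 1111 1111 1111"
    findings = scan_prompt(prompt)
    if findings:
        print("High-risk prompt, categories:", findings)
        print("Sanitized:", redact_prompt(prompt))
```

A production deployment would sit at the network or browser layer and use far richer classifiers, but even screening this simple would likely catch many of the obvious cases behind the “1 in 80” figure.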

Fighting fire with fire

The cybersecurity industry is responding by developing AI-powered defense systems. These tools can analyze millions of potential threats daily, identify suspicious patterns in domain registrations, and automatically extract attack signatures from security reports.
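The report doesn’t detail how these systems work internally, but the domain-registration angle can be illustrated with a toy heuristic scorer: newly registered domains with random-looking labels, lure keywords, hyphen-stuffing, or abuse-heavy TLDs score higher. Every list and threshold below is an assumption made for the sketch, not a documented feature set.

```python
import math
from collections import Counter

# Heuristic cues drawn from common phishing-domain traits; the keyword and
# TLD lists here are illustrative assumptions, not Check Point's features.
LURE_KEYWORDS = ("login", "verify", "secure", "account", "support", "wallet")
RISKY_TLDS = (".top", ".xyz", ".icu", ".live")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; machine-generated labels tend to run high."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def registration_risk(domain: str) -> float:
    """Score a newly registered domain from 0 (benign-looking) upward."""
    label = domain.split(".")[0]
    score = 0.0
    if shannon_entropy(label) > 3.5:                    # random-looking label
        score += 1.0
    score += sum(kw in label for kw in LURE_KEYWORDS)   # brand/lure words
    if domain.endswith(RISKY_TLDS):                     # abuse-heavy TLDs
        score += 1.0
    if label.count("-") >= 2:                           # hyphen-stuffed lookalikes
        score += 0.5
    return score

if __name__ == "__main__":
    for d in ("example.com", "secure-login-support.xyz", "xk7qz9vw2p.top"):
        print(f"{d}: risk {registration_risk(d):.1f}")
```

Real detection pipelines typically combine signals like these with registration metadata, certificate transparency logs, and trained classifiers rather than fixed thresholds.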

Advanced AI systems can now detect malicious code that traditional security tools often miss, helping researchers discover vulnerabilities faster than ever before.

The bottom line

As AI becomes central to both cyber attacks and defense, organizations must balance innovation with security. The report emphasizes that while AI offers tremendous productivity benefits, companies need robust governance, monitoring, and data protection strategies to safely harness these powerful technologies without falling victim to AI-powered threats.
