Microsoft Stops $4B in AI-Powered Scams Targeting Users Worldwide


Cybercriminals are using artificial intelligence (AI) to trick people in new and dangerous ways, according to Microsoft’s latest Cyber Signals report. The company says it has blocked $4 billion worth of scam attempts over the past year and stops around 1.6 million fake bot sign-up attempts every hour, figures that show how large the problem has become.

The ninth edition of the report, called “AI-Powered Deception: New Fraud Threats and How to Fight Them,” explains how AI makes it easier for criminals to create convincing scams. Even people with little technical knowledge can now use AI tools to build complex fraud schemes in just minutes, when it used to take days or weeks.

This easy access to AI tools has lowered the barrier to entry for cybercrime, making it simpler for scammers to target both everyday people and businesses all over the globe.

How AI Is Making Scams More Dangerous

Microsoft’s report shows how AI helps criminals gather information from the internet, like details about companies or individuals. With this data, they create detailed profiles to trick people in “social engineering” attacks, where they pretend to be someone trustworthy to steal money or personal information.

Scammers also use AI to build fake online stores that look real. These websites often have AI-generated product descriptions, customer reviews, and even business histories to fool people into thinking they’re shopping at a legitimate site. Some scams go further by using AI-powered chatbots that act like customer service reps. These bots can talk to victims, make excuses to delay refunds, or respond to complaints in ways that make the scam seem professional.

Kelly Bissell, a Microsoft Security leader, says the problem is getting worse. “Cybercrime is a trillion-dollar issue, and it keeps growing every year,” he said in the report. “But AI can also help us fight back. By using AI, we can spot scams faster and build stronger protections into our products to keep users safe.”

The report says AI-powered scams are happening all over the world, with a lot of activity coming from China and Europe, especially Germany, which has a huge online shopping market. The bigger the online marketplace, the more scams it attracts.

The Biggest AI Scam Threats: Online Shopping and Fake Jobs

Two major types of AI scams are causing trouble: fake online stores and job recruitment fraud.

In online shopping scams, criminals use AI to create fake websites in minutes. These sites look like real businesses, with AI-written product details, fake reviews, and even stolen images. Shoppers may think they’re buying from a trusted company, only to lose their money or have their personal information stolen. AI chatbots on these sites can also trick people by answering questions or delaying complaints with scripted responses.

Job scams are another growing problem. Scammers use AI to create fake job listings that look real, complete with detailed descriptions and professional-looking emails. They might even set up fake video interviews using AI to seem more convincing. These scams often ask job seekers for personal details, like resumes or bank account information, claiming it’s for “verification.” Warning signs include job offers that come out of nowhere, requests for payment, or messages sent through casual apps like WhatsApp or text.

How Microsoft Is Fighting Back

Microsoft is taking action to stop these AI-powered scams. Here’s what they’re doing:

  • Microsoft Defender for Cloud: This tool protects Azure, Microsoft’s cloud platform, from scam attempts.
  • Microsoft Edge Browser: Edge warns users about fake or dangerous websites, flagging web addresses with subtle misspellings (typosquatting) and sites impersonating legitimate companies.
  • Windows Quick Assist Protections: Microsoft added warnings to this tool to stop tech support scams, where criminals pretend to be IT helpers. The company blocks over 4,400 suspicious connection attempts every day.
  • New Fraud Prevention Rules: Starting in January 2025, all Microsoft product teams must check for fraud risks when designing new products. This means building scam protections into everything they make.
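One of the protections above, Edge flagging look-alike web addresses, can be sketched with a simple edit-distance check: a domain that is only a character or two away from a well-known one is suspicious. The trusted-domain list and threshold below are purely illustrative, not Microsoft's actual implementation, which is far more sophisticated:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Illustrative allow-list; a real system would use a large, curated one
TRUSTED = ["microsoft.com", "paypal.com", "amazon.com"]

def looks_like_typosquat(domain: str, max_dist: int = 2):
    """Return the trusted domain this one imitates, or None if it is
    an exact match or unrelated."""
    for trusted in TRUSTED:
        d = levenshtein(domain, trusted)
        if 0 < d <= max_dist:
            return trusted
    return None
```

For example, `looks_like_typosquat("micr0soft.com")` flags the domain as imitating `microsoft.com` (one substituted character), while the genuine `microsoft.com` passes because it matches exactly.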

Microsoft is also using AI to fight AI scams. By analyzing huge amounts of data, their systems can spot patterns of fraud and stop attacks before they reach users.

How You Can Stay Safe

As AI scams grow more sophisticated, everyone needs to stay vigilant. Microsoft suggests these tips:

  • Be skeptical of urgent requests: Scammers often pressure you to act quickly. Take your time to check if something is real.
  • Verify websites: Before buying anything online, make sure the website is legitimate. Look for contact details and read reviews from trusted sources.
  • Protect personal information: Never share sensitive details like bank account numbers or passwords with unverified people or websites.
  • Watch for red flags in job offers: Be cautious of unsolicited job offers, especially if they ask for money or personal details upfront.

For businesses, Microsoft recommends using multi-factor authentication (where you need more than just a password to log in) and tools that can detect deepfakes—fake videos or audio created by AI.
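Multi-factor authentication commonly relies on time-based one-time passwords (TOTP, RFC 6238), the rotating six-digit codes that authenticator apps generate, so a stolen password alone is not enough to log in. As a rough illustration of how those codes are derived (a sketch of the standard algorithm, not any Microsoft implementation), a TOTP is just an HMAC-based one-time password (HOTP, RFC 4226) computed over the current 30-second time step:

```python
import hmac
import hashlib
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, time_step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current time step."""
    return hotp(key, int(time.time()) // time_step, digits)
```

In practice, businesses should use a vetted authentication library or identity provider rather than rolling their own; the sketch only shows why the code changes every 30 seconds and cannot be reused.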

The Future of Fighting AI Scams

AI-powered scams are a growing challenge, but Microsoft believes technology can also be part of the solution. By building smarter protections and teaching people how to spot scams, the company hopes to stay one step ahead of criminals.

If you’re interested in learning more about AI, cybersecurity, or big data, events like the AI & Big Data Expo in Amsterdam, California, and London are great places to hear from experts. These events, co-located with others like the Intelligent Automation Conference and Cyber Security & Cloud Expo, offer insights into the latest tech trends.

Stay informed and stay safe—AI scams are evolving, but with the right knowledge and tools, you can protect yourself and your business.

Explore more upcoming tech events and webinars at TechForge’s website.