As scammers latch on to AI, Microsoft says it blocks 1.6 million bots every hour


Scammers have latched on to AI as the newest tool in their arsenal — generating fake photos, voice clones, phishing emails and fake websites “at an increasingly rapid rate”.


Artificial intelligence has made it “easier and cheaper” for scammers to generate believable content for attacks, according to Microsoft’s latest Cyber Signals report.

The report noted Microsoft had thwarted about $6.28 billion in fraud attempts in the 12 months from April 2024.

In the same time frame, it said it blocked about 1.6 million bots attempting to create accounts every single hour.

Microsoft’s corporate vice president of anti-fraud and product abuse, Kelly Bissell, told the ABC the company had shut down about 450 “malicious” scam websites last year.

“I’ve been in this IT world for 30 years, I’ve seen every new technology innovation from the internet to mobile to cloud and all of the stuff in between,” he said.

“Attackers will adopt that new technology faster than a company would, large and small, or an individual.

“Scams have been going on for thousands of years, but with technology they’re using another tool, better.

“With AI, where before it would take you maybe weeks or days to build a malicious website, now you can do it in minutes.”

Artificial intelligence making familiar scams more dangerous

AI-driven fraud attacks are happening globally, according to Microsoft’s Anti-Fraud Team.

Much of the activity is coming from China and Europe — specifically Germany, which is “one of the largest e-commerce and online services markets in the world”.

Mr Bissell said every consumer needed to be “thoughtful”.

“As time goes on and AI continues to evolve and change, I think it’s going to be part of our lives more and more,” he said.

“No different than mobile phones or the internet or cloud services.

“I just think every consumer out there [needs to] be wary about things and they need to use tools that they can trust.”

Microsoft’s report said it had enhanced protections to respond to tech support scams and other ongoing threats.

It said it now blocked an average of 4,415 suspicious connection attempts daily on its Quick Assist tool, which allows remote access to computer screens for tech support.

The report added Microsoft was using “large-scale detection models” to fight AI with AI, with Mr Bissell noting they had been developing “responsible AI” for years.

“Not all AIs are the same,” he said.

“And so we’ve built a trustworthy AI footprint, but attackers are using untrustworthy AI functions.

“Just like the scammers can write code fast, so can the good actors, so can the defenders.”

Professor Matthew Warren is the director of RMIT University’s Centre for Cyber Security Research and Innovation.

He said most of the scams being used were already well-known to experts, but the use of artificial intelligence had taken them to a new level.

“I think the sheer volume and the sheer sophistication of how scammers are using AI in terms of improving … it’s going to make it much harder for individuals [to know] when scams have occurred,” he told the ABC.

“I think it’s going to become more and more of a problem, because in terms of scammers, we always say they work on the five per cent.

“That’s the five per cent of people who will open emails or will send their personal information because they’re being asked to.

“So what you’re seeing is by scammers increasing the volume of individuals that they can attack … they’re going to have greater business returns.

“Scammers operate as a business model, and that’s what they’re looking for, is to generate profits [and] income.”

Scammers using AI chatbots and shopping sites

Previously, according to Microsoft, it could take threat actors days or weeks to set up convincing fake shopping websites.

Now — using AI — fraudulent websites can be set up in just minutes.

“Using AI-generated product descriptions, images, and customer reviews, customers are duped into believing they are interacting with a genuine merchant, exploiting consumer trust in familiar brands,” Microsoft’s report said.

“AI-powered customer service chatbots can add another layer of deception by convincingly interacting with customers.

“These bots can delay chargebacks by stalling customers with scripted excuses and manipulating complaints with AI-generated responses that make scam sites appear professional.”

Australians lost more money to shopping scams in 2024 than to any other type of scam reported, according to Scamwatch.

In total, 10,022 Australians reported losses of $9.8 million last year.

Payment company ACI Worldwide reported Australians had lost $1.224 billion in 2023 to authorised push payment scams specifically.

These scams trick victims into initiating payments themselves. The company’s Scamscope report found AI had allowed scammers to amplify trust and automate precision attacks with “sophistication and at scale”.

The company’s Pacific general manager Trent Gunthorpe said scammers were able to develop their skill set quicker than before.

“On the dark web, you can purchase scam as a service,” he said.

“The unseen, unsophisticated scammer can now purchase and become very sophisticated very quickly and start to use some of these tools.

“[This] allows them to do things like scraping from social media to really personalise the messages that are targeting these potential scam victims.

“We’ve heard the stories where companies have been hacked and customer information is being stolen and sold on the dark web; it’s much broader than that.”

Digital literacy ‘crucial’ to staying safe 

To keep themselves safe, Microsoft’s report recommended online shoppers avoid impulse buying, clicking unverified ads, and trusting social media “proof” of products.

For jobseekers, verifying credentials and companies and being suspicious of requests for upfront payments or personal information was key.

“If a video interview seems unnatural, with lip-syncing delays, robotic speech, or odd facial expressions, it could be deepfake technology at work,” the report said.


La Trobe University Professor of Analytics and AI Daswin de Silva said things like multi-factor authentication were also vital to staying safe.

“What’s quite often overlooked is the amount of training that’s required to be digitally present,” he said.

“We don’t really consider the need for initial training but also subsequent rounds of training to keep [up to date], for example [knowing] that phishing as a service is a possibility.

“Digital literacy is quite crucial.

“There is a lag because the AI that we have on social media and on our smartphones is quite good, and it’s very easy to get to without considering the security risk.

“We think of security as an afterthought. It’s never the first thought when using a password that you could be hacked; you hardly think about it.

“It’s always about having the convenience.”


He said there was an “army of people” actively seeking to jailbreak AI models, adding the AI “ecosystem” was as good as could be expected.

“There’s lots of safeguards within the large models, and the technology companies are quite responsible in their approach,” he said.

“It’s usually found on social media, ‘I broke ChatGPT and it’s telling me how to assemble some kind of weaponry’.

“Whenever that is leaked … the company releases the fix addressing it.”

The AI ‘arms race’ between scammers, victims and corporations

Professor Warren said many AI tools had begun to develop “ethical algorithms” to stop scam content being generated.

“But [scammers] can still use the AI system to say, ‘write me a business letter to a potential employee to work for me’,” he said. 

“So even though a lot of these AI sites have introduced ethical algorithms trying to stop poor behaviour, you can easily bypass that by just changing what you’re asking it to do.

“And it won’t pick up that it is a scam. It just thinks [it is] generating a business email, for instance.”

The Australian Department of Industry, Science and Resources published eight “ethics principles” for artificial intelligence tools. 

Included among them is the need for security to mitigate “potential abuse risks”, and the need for AI systems to not engage in “deception” or “unfair manipulation”. 

Professor Warren said the issue called for “technical solutions”.

“I think you can see a bit of an arms race in scamming technology developing,” he said.

“The large corporations are trying to develop solutions, the scammers are then trying to find ways around that. So I think it’s going to be an ongoing issue.

“What really surprises me is the sheer volume of attacks, the fact that Microsoft is blocking 1.6 million bot attempts an hour.

“But what you’re going to see in the future is that number will increase exponentially.”


