An ever-improving generative artificial intelligence (AI) is all fun and games till it isn’t. Deepfakes and realistic-looking morphed content are problems the world is grappling with, and online security threats can now be added to that list. HP Inc., in its latest Threat Insights Report released at the company’s annual HP Imagine keynote, suggests generative AI is being deployed to help write malicious code. HP’s threat research team detected an instance of this, alongside what it calls large and refined ChromeLoader campaigns spread through ‘malvertising’ that lead victims to professional-looking rogue PDF tools. The team also logged instances of cybercriminals embedding malicious code in SVG images.
While the threat of AI being used to create malware isn’t new, with some instances documented previously, HP’s researchers are worried about the acceleration in malware creation. “Threat actors have been using generative artificial intelligence (GenAI) to create convincing phishing lures for some time, but there has been limited evidence of attackers using this technology to write malicious code in the wild. In Q2, however, the HP Threat Research team identified a malware campaign spreading AsyncRAT using VBScript and JavaScript that was highly likely to have been written with the help of GenAI,” says the report.
“The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints,” the researchers point out.
ChromeLoader, as it is called, refers to a family of web browser malware that lets attackers take over a device’s browsing session and redirect searches to their own websites. “In Q2, ChromeLoader campaigns were larger and more polished, relying on malvertising to direct victims to websites offering productivity tools like PDF converters,” say the researchers. The applications on offer hid malicious code and carried what appeared to be valid code-signing certificates, which helped the malware bypass Windows security policies.
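A signature, in other words, is not proof of safety, though defenders can at least check who signed an installer before trusting it. The following is a minimal sketch, assuming a Windows machine with PowerShell available and a hypothetical installer path; it is not a tool from HP’s report, only an illustration of inspecting a file’s Authenticode signature:

```python
# Minimal sketch: surface a file's Authenticode signature status and signer
# via PowerShell's Get-AuthenticodeSignature. Assumes Windows + PowerShell;
# the installer path below is hypothetical.
import subprocess

def signature_summary(path: str) -> dict:
    """Return the Authenticode status and signer subject of a file."""
    ps_script = (
        f"$sig = Get-AuthenticodeSignature -FilePath '{path}'; "
        "$sig.Status.ToString(); "
        "$sig.SignerCertificate.Subject"
    )
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps_script],
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.splitlines()
    return {
        "status": lines[0] if lines else "Unknown",
        "signer": lines[1] if len(lines) > 1 else None,
    }

if __name__ == "__main__":
    info = signature_summary(r"C:\Downloads\pdf-converter-setup.msi")  # hypothetical path
    print(info)
    # A 'Valid' status only means the certificate chain checks out; it says
    # nothing about the signer's intent, which is the gap such campaigns exploit.
```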
The HP Threat Insights Report says that in the second quarter of this year alone, as many as 12% of all threats delivered over email managed to evade the gateway security that businesses and enterprises use for their networks and workstations. Cybercriminals used as many as 122 file formats to deliver malware to devices, including PDF files and Scalable Vector Graphics (SVG) images, which are widely used in graphic design and web layouts.
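SVGs lend themselves to this because they are XML documents, and can legitimately carry script elements or event-handler attributes that a browser will execute when the image is rendered. As a rough illustration only (a hypothetical pre-processing check, not something described in HP’s report), a scanner can flag SVG files that contain such active content:

```python
# Hypothetical sketch: flag SVG files that embed active content, i.e.
# <script> elements or on* event-handler attributes. Not from HP's report.
import sys
import xml.etree.ElementTree as ET

def has_active_content(svg_path: str) -> bool:
    """Return True if the SVG embeds <script> tags or on* event handlers."""
    tree = ET.parse(svg_path)
    for elem in tree.iter():
        tag = elem.tag.split("}")[-1].lower()   # strip XML namespace prefix
        if tag == "script":
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True                         # e.g. onload="..." can run JavaScript
    return False

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "logo.svg"  # hypothetical file
    print("active content found" if has_active_content(path) else "looks static")
```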
Though .exe remains the most popular extension for malware (39%), other formats increasingly being used include .pdf, .rar, .zip, .docx, .gz, and .img. In terms of delivery, email remained the top vector for getting malware onto endpoints (61% of threats), up 8% compared with the threat landscape in Q1. Malicious web browser downloads, the report notes, fell by 7% to make up 18% of threats in Q2.
Earlier this year, HT had reported that Large Language Models (LLMs), which are at the very core of generative AI’s utility, are being used by threat actors to generate phishing attacks, malware and deepfakes. It is no longer possible to treat consumers and enterprises as separate streams, as we often do with technology and solutions, since generative AI has blurred those lines. Similar toolsets are available to consumers and enterprise subscribers; Google Gemini and Microsoft’s Copilot are two examples. Any improvements to LLMs for enterprise and cloud systems will benefit consumers too.
Banks and payment platforms are a worried lot too, and they’re increasingly relying on AI solutions to counter the threat of sophisticated malware.
“The integration of AI and machine learning has further increased the complexity of cyberattacks. Cybercriminals can now leverage these technologies to automate tasks, enhance their evasion techniques, and develop customised malware,” Joy Sekhri, who is Vice President for Cyber & Intelligence Solutions for South Asia at Mastercard, explained to us.
HDFC Bank’s head of credit intelligence and control, Manish Agrawal, told HT that every credit card transaction is monitored by AI, and any deviations from usual patterns, or swipes at known dodgy or unknown merchants, are flagged for human intervention. The next steps include blocking transactions and contacting the cardholder.
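In its simplest form, that kind of monitoring amounts to comparing each swipe against a cardholder’s history and a merchant watchlist. The sketch below is hypothetical and rule-based, not HDFC Bank’s actual system, and all names and figures in it are illustrative:

```python
# Hypothetical rule-based sketch of transaction flagging -- not HDFC Bank's
# actual system. A transaction is held for human review if it deviates
# sharply from the cardholder's usual spend or hits a watchlisted merchant.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Transaction:
    cardholder_id: str
    merchant_id: str
    amount: float

def should_flag(txn: Transaction, past_amounts: list[float],
                risky_merchants: set[str], z_threshold: float = 3.0) -> bool:
    """Flag if the merchant is watchlisted or the amount is a statistical outlier."""
    if txn.merchant_id in risky_merchants:
        return True
    if len(past_amounts) < 10:          # too little history to judge
        return False
    mu, sigma = mean(past_amounts), pstdev(past_amounts)
    if sigma == 0:
        return txn.amount != mu
    return abs(txn.amount - mu) / sigma > z_threshold

if __name__ == "__main__":
    history = [450, 1200, 800, 650, 900, 700, 1100, 500, 950, 600]  # INR, illustrative
    txn = Transaction("card-001", "merchant-9f3", 85_000)
    if should_flag(txn, history, risky_merchants={"merchant-bad"}):
        print("flag for human review")  # next steps: block, contact the cardholder
    else:
        print("allow")
```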