In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it

Behind the scenes of infosec’s cat-and-mouse game

Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it’s marketing hype and not really AI – or it’s true, in which case don’t forget that such systems can still be hoodwinked.

It’s relatively easy to trick machine-learning models – especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it’s an ostrich. Now take that thought and extend it to so-called next-gen antivirus.
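
For the image case, the trick boils down to nudging pixels along the gradient of the classifier's loss. Here is a minimal sketch of that idea – the fast gradient sign method of Goodfellow and colleagues – assuming a PyTorch classifier called model and a batched image/label pair; the names are illustrative, not from the article:

```python
import torch

def fgsm(model, image, label, eps=0.01):
    # Track gradients with respect to the input image itself.
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in whichever direction most
    # increases the classifier's loss -- invisible to a human, but
    # often enough to flip "bus" into "ostrich".
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```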

Enter Endgame, a cyber-security biz based in Virginia, USA, which you may recall popped up at DEF CON this year. It has effectively pitted two machine-learning systems against each other: one trained to detect malware in downloaded files, the other trained to customize malware so it slips past the aforementioned detector. The aim is to craft software that can twist malware into potentially undetectable samples, and then use those variants to harden machine-learning-based scanners, creating an antivirus system that keeps getting better.

The key thing is recognizing that software classifiers – from image recognition to antivirus – can suck, and that you have to do something about it.

“Machine learning is not a one-stop shop solution for security,” said Hyrum Anderson, principal data scientist and researcher at Endgame. He and his colleagues have teamed up with researchers from the University of Virginia to build the aforementioned cat-and-mouse game, which breeds better and better malware and learns from it.

“When I tell people what I’m trying to do, it raises eyebrows,” Anderson told The Register. “People ask me, ‘You’re trying to do what now?’ But let me explain.”

Generating adversarial examples

A lot of data is required to train machine learning models. It took ImageNet – which contains more than ten million pictures split into thousands of categories – to boost image recognition models to the performance possible today.

The goal of the antivirus game is to generate adversarial samples to harden future machine learning models against increasingly stealthy malware.

To understand how this works, imagine a software agent learning to play the game Breakout, Anderson says. The classic arcade game is simple. An agent controls a paddle, moving it left or right to bat a ball against a wall of bricks. Every time the ball strikes a brick, the brick disappears and the agent scores a point. To win, the agent has to clear the whole wall while continuously returning the ball and preventing it from falling off the bottom of the screen.

Endgame’s malware game is somewhat similar, but instead of a ball the bot is dealing with malicious Windows executables. The aim of the game is to fudge the file, changing bytes here and there, so that it hoodwinks an antivirus engine into thinking the harmful file is safe. The poisonous file slips through – like the ball carving a path through the brick wall in Breakout – and the bot gets a point.
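
Endgame has published an OpenAI Gym environment along these lines (gym-malware); the loop below is a simplified sketch in that spirit, written against a hypothetical env/agent interface rather than the project's exact API:

```python
def play_episode(env, agent, max_turns=80):
    pe_bytes = env.reset()             # a fresh malicious Windows executable
    for _ in range(max_turns):
        action = agent.act(pe_bytes)   # choose one byte-level mutation
        pe_bytes, reward, done = env.step(action)
        if done:                       # the AV engine now scores the
            return reward              # still-working file as benign
    return 0.0                         # out of turns -- still detected
```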

It does this by manipulating the file’s contents, changing bytes here and there, but the resulting data must still be executable and fulfill its purpose after it passes through the AV engine. In other words, the malware-generating agent can’t output a corrupted executable that slips past the scanner but, due to deformities introduced in the binary to evade detection, crashes or misbehaves when run.
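
Two hypothetical edits of that functionality-preserving kind are sketched below; real tooling would parse the PE format properly (for example with the LIEF library) rather than poke raw bytes like this:

```python
import os

def append_overlay(pe_bytes: bytes, n: int = 128) -> bytes:
    # Data past the last section (the overlay) is ignored by the Windows
    # loader, so random padding changes what the scanner sees without
    # changing what the program does.
    return pe_bytes + os.urandom(n)

def rename_section(pe_bytes: bytes,
                   old: bytes = b".text", new: bytes = b".blah") -> bytes:
    # Section names are padded to eight bytes in the PE header and are
    # largely cosmetic to the loader, yet they often feature in static
    # detection models.
    return pe_bytes.replace(old.ljust(8, b"\0"), new.ljust(8, b"\0"), 1)
```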

The virus-cooking bot is rewarded for getting working malicious files past the antivirus engine, so over time it learns the best sequence of moves for changing a malicious file so that it still functions and yet tricks the AV engine into thinking the file is friendly.

“It’s a much more difficult challenge than tricking image recognition models. The file still has to be able to perform the same function and have the same format. We’re trying to mimic what a real adversary could do if they didn’t have the source code,” says Anderson.

It’s a method of brute force. The agent and the AV engine were trained on 100,000 input malware seeds – after training, 200 malware files were given to the agent to tamper with. These samples were then fed into the AV engine, and about 16 per cent of the evil files dodged the scanner, we’re told. That seems low, but imagine crafting a strain of spyware that is downloaded and run a million times: that turns into 160,000 potentially infected systems under your control. Not bad.
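
The back-of-envelope sum behind that claim, with a hypothetical install base:

```python
evasion_rate = 0.16      # ~16% of tampered samples slipped past the scanner
downloads = 1_000_000    # hypothetical number of times the spyware is run
print(int(downloads * evasion_rate))   # -> 160000 potentially infected systems
```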

After the antivirus engine was updated and retrained on those 200 machine-customized files, it was fed another fresh 200 samples churned out by the virus-tweaking agent. This time the evasion rate halved as the scanner got wise to the agent’s tricks.
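
That retraining cycle is the heart of the system. A minimal sketch, with hypothetical mutate/score/retrain helpers standing in for Endgame's actual pipeline:

```python
def hardening_round(detector, agent, seeds):
    # Let the agent tamper with each malware seed, keep the variants
    # that now score as benign, then feed them back as labelled malware.
    evasive = [v for v in (agent.mutate(m) for m in seeds)
               if detector.scores_benign(v)]
    detector.retrain(extra_malicious=evasive)
    return evasive   # each round, fewer of these should slip through
```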
