News Articles

DeepLocker: When malware turns artificial intelligence into a weapon

In the future, your face could become the trigger for the execution of malware.

Source: ZDNet, 08/08/2018


AI can be used to automatically detect and combat malware -- but this
does not mean hackers cannot also use it to their advantage.
In a world full of networked systems, data collection, Internet of
Things (IoT) devices, and mobility, cybersecurity has become a race
between white hats and threat actors.
Traditional cybersecurity solutions, such as bolt-on antivirus
software, are no longer enough. Cyberattackers are exploiting every
possible avenue to steal data, infiltrate networks, disrupt critical
systems, drain bank accounts, and hold businesses to ransom.
The rise of state-sponsored attacks does not help, either.
Security researchers and response teams are often hard-pressed to keep
up with constant attack attempts, as well as vulnerability and patch
management in a time where computing is becoming ever-more sophisticated.
Artificial intelligence (AI) has been touted as a potential solution
which could learn to detect suspicious behavior, stop cyberattackers
in their tracks, and take some of the workload away from human teams.
However, the same technology can also be used by threat actors to
augment their own attack methods.
According to IBM, the "AI era" could result in weaponized artificial
intelligence. To study how AI could one day become a new tool in the
arsenal of threat actors, IBM Research has developed an attack tool
powered by artificial intelligence.
Dubbed DeepLocker, the AI-powered malware is "highly targeted and
evasive," according to the research team.
The malware, concealed in carrier applications such as video
conferencing software, lies dormant until it reaches a specific
victim, who is identified through factors including facial
recognition, geolocation, voice recognition, and potentially the
analysis of data gleaned from sources such as online trackers and
social media.
Once the target has been acquired, DeepLocker launches its attack.
"You can think of this capability as similar to a sniper attack, in
contrast to the 'spray and pray' approach of traditional malware," IBM
says. "It is designed to be stealthy and fly under the radar, avoiding
detection until the very last moment when a specific target has been
recognized."
DeepLocker's Deep Neural Network (DNN) model stipulates "trigger
conditions" to execute a payload. If these conditions are not met --
and the target is not found -- then the malware remains locked up,
which IBM says makes the malicious code "almost impossible to reverse
engineer."
Finding a target, triggering a key, and executing a payload may bring
to mind an "if this, then that" programming model. However, the
DNN-based model is far more convoluted and difficult to decipher.
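
IBM has not released DeepLocker's code, but the behavior it describes resembles what is sometimes called target-keyed or environmentally keyed encryption: the payload's decryption key is derived from the recognition model's output, so the key itself never appears in the binary. The Python sketch below is a minimal, benign illustration of that general idea only; the toy XOR cipher, the derive_key and xor_cipher helpers, and the hard-coded embedding standing in for a face-recognition DNN's output are all hypothetical.

```python
# Conceptual sketch of target-keyed payload locking (not IBM's code).
# The decryption key is derived from a model's output embedding, so a
# static analyst sees only the locked bytes -- never the key.

import hashlib

def derive_key(embedding):
    # Quantize the embedding so the intended target always produces
    # the same bytes, then hash them into stable key material.
    quantized = bytes(round(x * 100) % 256 for x in embedding)
    return hashlib.sha256(quantized).digest()

def xor_cipher(data, key):
    # Toy symmetric cipher, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Build time": lock a benign stand-in payload to the target's embedding.
target_embedding = [0.12, 0.87, 0.33, 0.54]   # hypothetical DNN output
locked = xor_cipher(b"payload unlocked", derive_key(target_embedding))

# "Run time": every observed input yields an embedding; only the
# intended target's embedding reproduces the key and decrypts cleanly.
for candidate in ([0.50, 0.50, 0.50, 0.50], target_embedding):
    print(xor_cipher(locked, derive_key(candidate)))
# Non-matching embeddings yield gibberish; the match yields the payload.
```

Because the key material exists only as a function of the triggering input, inspecting a dormant sample reveals neither the intended target nor the payload, which is the property behind the "almost impossible to reverse engineer" claim.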
To demonstrate DeepLocker's potential, the security researchers
created a proof-of-concept (PoC) in which WannaCry ransomware was
hidden in a video conferencing application. The malware was not
detected by antivirus engines or sandboxing.
The AI model was then trained to recognize the face of an individual
selected for the test, and once spotted, the trigger condition would
be met and the ransomware executed.
"What makes this AI-powered malware particularly dangerous is that,
similar to how nation-state malware works, it could infect millions of
systems without ever being detected, only unleashing its malicious
payload to specified targets which the malware operator defines," the
research team added.
Thankfully, this kind of cyberthreat has not been actively used --
yet. However, DeepLocker was created to understand how AI could be
bolted on to current malware techniques, and to research just what
threats enterprises and consumers alike may face in the future.

