Artificial Intelligence is fast gaining mainstream prominence, and people are increasingly using AI tools to generate images and videos. However, a recent wave of hacking attempts involving AI-generated videos has raised concerns about the technology.
Threat actors, some of whom allegedly operate on the Dark Web, are creating AI-generated videos and uploading them to YouTube to lure viewers into downloading malware.
According to a report by IT security intelligence company CloudSEK, there has been a 200-300% increase in AI-generated videos whose descriptions contain links to stealer malware such as Vidar, RedLine, and Raccoon.
Most of the videos are tutorials on downloading cracked versions of popular software such as Adobe Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD. When viewers look for a download link after watching, they are directed to the description section, where the malicious link is pasted.
Videos featuring AI-generated humans are difficult to distinguish from videos of real people, and people tend to place more trust in instructions delivered by a human than by a machine. The attackers exploit this psychology, using AI tools to create human presenters who promote the malware packaged as a tutorial.
Among the AI video platforms named in the report are Synthesia and D-ID.
The stealer malware can harvest sensitive information such as bank account numbers, credit card details, passwords, and other private data.
The harvested data is then uploaded to the attacker's command-and-control server, giving them full access to misuse it.
With AI tools increasingly being used to generate viral images, such as those depicting snowfall in Delhi, ghosts in Old Delhi, and Indian couples, concerns about information theft involving AI tools have deepened.