Use of ML and AI in Black Hat Hacking.

The Super Cool Bitter Pill.

AI and ML have been put to good use in cyber security, but they can just as easily be programmed to carry out cyber attacks without ever getting caught, replicating into clusters and staying undetectable. For cyber criminals, developing their own AI and launching attacks with it is just as feasible. With the fast-moving explosion of data and apps, there are countless ways to attack an endpoint. In the worst cases, an AI built for defensive purposes can be turned against its owners: armed with the right malicious knowledge, the defender’s own AI becomes an attack vector too.

As cybercriminals begin exploring the possibilities of AI-enhanced malware, “cyber defenders must understand the mechanisms and implications of the malicious use of AI in order to stay ahead of these threats and deploy appropriate defenses.”

An “AI” is ultimately just algorithms and statements, and as of now we don’t have true artificial intelligence yet. Algorithms are, in essence, assumptions applied to data, and there are many of them; frameworks such as TensorFlow, Torch, and the ML services on AWS are dedicated to packaging those collections of algorithms. At the 2017 DEF CON conference, security company Endgame revealed how it used the Elon Musk-backed OpenAI framework to create customized malware that security engines were unable to detect. Meanwhile, other researchers have predicted machine learning could ultimately be used to “modify code on the fly based on how and what has been detected in the lab,” an extension of polymorphic malware.
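To make the underlying idea concrete, here is a minimal, purely illustrative sketch of the general adversarial-example trick (not the Endgame/OpenAI tooling itself; the features, numbers, and names are all assumptions): train a toy detector, then nudge a flagged sample’s features against the detector’s gradient until its score drops.

```python
# Toy sketch of the adversarial-example idea: perturb an input along the
# direction that lowers a trained classifier's score. Everything here is a
# made-up assumption for illustration, not real malware features or tooling.
import numpy as np

rng = np.random.default_rng(0)

# Pretend feature vectors: 20 numeric features per "file".
# Class 1 = "malicious", class 0 = "benign" (an arbitrary separable rule).
X = rng.normal(size=(200, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(float)

# Train a tiny logistic-regression "detector" with plain gradient descent.
w = np.zeros(20)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def detect(x):
    # Probability the detector assigns to "malicious".
    return 1 / (1 + np.exp(-(x @ w + b)))

# Take one sample the detector flags as malicious...
sample = X[y == 1][0]
print("score before:", detect(sample))  # typically a high score

# ...and apply an FGSM-style step: the gradient of the logit w.r.t. x is w,
# so moving against sign(w) always pushes the score down.
epsilon = 1.0
adversarial = sample - epsilon * np.sign(w)
print("score after: ", detect(adversarial))  # much lower, usually flips the verdict
```

The point of the sketch is only the mechanism: if an attacker can query or approximate a detector, small targeted changes to the input can move it across the decision boundary.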

Smart and IoT-Based Botnets

Fortinet has warned about smart botnets and cluster-based attacks: “swarmbots” built from compromised IoT devices that communicate with one another, decide among themselves, and launch a coordinated, far more powerful attack. Such swarms could compromise vulnerable systems at large scale all at once, and each zombie becomes more intelligent through local knowledge sharing, without a botmaster instructing them.

Source: Cloudflare

These swarm technologies use “hivenets,” which learn from their own past behaviour. Swarm technology is the collective behaviour of decentralized, self-organized systems, and here it is applied to building an IoT botnet.

Though futuristic fiction, some can draw conclusions from the criminal possibilities of swarm technology from Black Mirror’s Hated in The Nation, where thousands of automated bees are compromised for surveillance and physical attacks. — CSOonline.
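To illustrate only the “local knowledge sharing without central instruction” idea, the toy simulation below models nothing more than a generic gossip protocol; the node count, round structure, and “facts” are all assumptions made up for the example.

```python
# Minimal gossip simulation: independent nodes exchange what they know with
# random peers each round and converge on the full picture with no central
# coordinator. Purely conceptual; this models no real malware.
import random

random.seed(1)

NUM_NODES = 30
FACTS = [f"fact-{i}" for i in range(10)]  # hypothetical local observations

# Each node starts knowing only one randomly observed fact.
knowledge = [{random.choice(FACTS)} for _ in range(NUM_NODES)]

# The full body of knowledge that exists somewhere in the swarm.
all_known = set().union(*knowledge)

rounds = 0
while any(k != all_known for k in knowledge):
    rounds += 1
    for node in range(NUM_NODES):
        peer = random.randrange(NUM_NODES)          # pick a random peer
        merged = knowledge[node] | knowledge[peer]  # exchange and merge knowledge
        knowledge[node] = knowledge[peer] = merged

print(f"all {NUM_NODES} nodes share all {len(all_known)} facts "
      f"after {rounds} gossip rounds, with no central coordinator")
```

Gossip-style exchange is why such swarms are hard to decapitate: there is no single command channel to cut, only peers talking to peers.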

Social Engineering / Phishing attacks.

One of the clearer applications of AI is using techniques like text-to-speech, speech recognition, and natural language processing (NLP) for smarter social engineering. Through recurrent neural networks, such software can already be trained for phishing, so that phishing messages become progressively more refined and believable. A purpose-built neural network could carry out this task against high-profile targets on a very large scale, all at once, while being trained to stay close enough to genuine messages to remain convincing.

Source: PCMag.

The system proved remarkably effective: in tests involving 90 users, the framework delivered a success rate between 30 and 60 per cent, a considerable improvement on manual spear phishing and bulk phishing results.

AI vs AI / ML vs ML.

With AI now part of the cutting-edge hacker’s toolbox, defenders are coming up with novel ways of protecting their systems. Fortunately, security experts have a powerful and obvious countermeasure available to them: artificial intelligence itself. The trouble is that this will inevitably produce an arms race between the two sides. Neither side really has a choice, as the only way to counter the other is to rely increasingly on automated systems.

“For security experts, this is a Big Data problem — we’re dealing with tons of data — more than a single human could possibly produce,” said Wallace. “Once you’ve started to deal with an adversary, you have no choice but to use weaponized AI yourself.” — Gizmodo
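To ground the defensive side of that arms race, here is a small sketch of the kind of automated anomaly detection defenders lean on when the data volume exceeds what humans can review. It is illustrative only: the feature names, numbers, and contamination rate are assumptions, not any vendor’s configuration.

```python
# Illustrative ML-assisted defense: fit an unsupervised anomaly detector on
# "normal" activity features and flag outliers for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend per-host features: [logins per hour, outbound MB, failed auths].
normal = np.column_stack([
    rng.poisson(5, 1000),        # typical login volume
    rng.normal(50, 10, 1000),    # typical outbound traffic
    rng.poisson(1, 1000),        # occasional failed logins
])

# A few hosts behaving oddly: heavy logins, exfiltration-like transfers, many failures.
suspicious = np.array([
    [40, 500.0, 25],
    [60, 800.0, 40],
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for "looks normal" and -1 for "anomalous".
print("typical host:    ", detector.predict(normal[:1]))   # likely [ 1]
print("suspicious hosts:", detector.predict(suspicious))   # likely [-1 -1]
```

The value here is triage at machine speed: the model never replaces the analyst, it just shrinks tons of telemetry down to the handful of hosts worth a human look.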

So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning-fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems and simply hope for the best. A consequence of AI is that humans are increasingly kept out of the loop.

Sources: Gizmodo, CSOOnline, Various Google Searches, and My Research.
