AI to help fight cyber-threats more efficiently and intelligently

With cyber-attacks on the rise around the world and increasingly sophisticated technologies at hand, machine learning and artificial intelligence (AI) will prove vital in tackling future threats.

According to Prof. Una-May O’Reilly, Principal Research Scientist at CSAIL, MIT, the goal of AI is to have computation and computers behave as intelligently as humans. “I try to design algorithms that allow computers to be more helpful and intelligent,” she said during her talk on an AI-driven future on the first day of EmTech MENA in Dubai on Sunday. “The brain is made of neurons, but computers are made of digital gates, so they have different properties to make them intelligent.”

Instead of studying vision or language, she focused on learning, which she called a quintessential aspect of intelligence. “I could not figure out if I could call anything intelligent if it could not learn,” Prof. O’Reilly said. “I went into the study of learning. We typically think of how humans learn – humans are able to classify the objects they encounter, and they can predict too.”

Machine learning systems do not learn the same way humans do; theirs is a very data-driven approach. “Our current techniques rely on historical information,” she said. “We are in a world with a deluge of data, and once a machine learning methodology is given a data set, it will study the properties of each example and how they are associated with that example’s category. That’s where the machine learning methodology launches off and builds an algorithm from that data.”

Part of the data set is used during training: the algorithm starts out by proposing some model or classification rule, tests every example against that model and checks whether the predicted label matches the true label in the example. “The algorithm goes through a training set and refines a model,” she added. “After training, we deploy models in machine learning, which means we can have a model operating. It’s an extremely powerful paradigm.”
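That propose-test-refine-deploy loop can be pictured with a minimal sketch, here written in Python as a perceptron-style classifier; the toy data set, the feature vectors and the update rule are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of the propose-test-refine-deploy loop described above.
# The toy data and the perceptron-style update rule are illustrative assumptions.

def train(examples, labels, epochs=20, lr=0.1):
    """Propose a simple linear model, then refine it against the training set."""
    n_features = len(examples[0])
    weights = [0.0] * n_features          # the initially proposed model
    bias = 0.0
    for _ in range(epochs):
        for x, true_label in zip(examples, labels):
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if score > 0 else 0
            error = true_label - predicted   # does the prediction match the true label?
            if error != 0:                   # refine the model when it is wrong
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

def classify(model, x):
    """After training, the deployed model labels new examples."""
    weights, bias = model
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy training set: each example is a feature vector with a known label.
examples = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
labels = [1, 0, 1, 0]
model = train(examples, labels)
print(classify(model, [0.15, 0.85]))   # classify a previously unseen example
```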

She mentioned another kind of learning, namely biological adaptation. “It’s a form of learning and it’s how organisms learnt to defend themselves,” Prof. O’Reilly said. “Learning on an evolutionary timescale relies upon fundamental mechanisms of evolution. We can be inspired by this process and develop algorithms that mimic the fundamental mechanisms of evolution – they’re called genetic algorithms.”
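As a rough illustration of that idea, a genetic algorithm keeps a population of candidate solutions and repeatedly applies selection, crossover and mutation. The sketch below is a generic, minimal Python example; the bit-string representation and the toy fitness function are assumptions for illustration, not details of her algorithms.

```python
# Minimal genetic algorithm sketch: selection, crossover and mutation acting on
# bit strings. The representation and fitness function are illustrative only.

import random

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50

def fitness(genome):
    # Toy fitness: count of 1-bits (the classic "one-max" problem).
    return sum(genome)

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # Variation: produce offspring by crossover and mutation.
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```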

Cyber-security involves good actors, called defenders, who look to protect their assets, defend their perimeters and preserve their ability to serve their legitimate clients. “Bad actors” are malicious and seek to thwart defenders’ attempts to conduct themselves honourably. “I’m very concerned with network attacks,” she explained. “They’re constant, they never go away, they’re constantly evolving, and a lot of defence resources are devoted to them.”


One kind of network attack she touched on is the denial-of-service attack, whose goal is to gobble up the resources of the defender to the point where it is too busy to give attention to its legitimate clients. “Some saturate the bandwidth of the network, others gobble resources of a firewall and others send illegitimate queries to web servers, making them so busy that they cannot serve their legitimate clients,” she added. “It doesn’t matter what volume they are, because if they’re low volume, you may get enough resources stolen that you miss your deadline. They can also be very powerful if they’re high volume, [as] they’re typically distributed.”


Prof. O’Reilly, whose research focuses on computational intelligence, spoke of her project, Rivals, and evolutionary adaptation. “I work with network designers [to help] anticipate what attackers will do, rather than wait for a reaction,” she said. “My goal is to inform the early design of networks in terms of making them more robust and resilient. The project assumes that an attack type is going to engage with a defence type, with the goal of maximising disruption. We allow our attackers to select and change their targets and change the impact and duration of their various attack strategies.”

The objective is to understand which attacks and defences are superior to others. “We care about the best defender, which results in us coding up a number of different algorithms,” she said. “We run many algorithms many times, which allows us to generate many diverse evolutionary arms races between attacks and different defences.”
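One way to picture this attacker-versus-defender arms race is a competitive coevolution loop, in which a population of attack strategies and a population of defence strategies are evolved against each other: attackers are scored on the disruption they cause, defenders on the disruption they let through. The Python sketch below is a generic illustration with made-up encodings and a made-up engagement score; it is not the Rivals implementation.

```python
# Generic competitive-coevolution sketch: an attacker population and a defender
# population evolve against each other. The attack/defence encodings and the
# engagement score are made-up illustrations.

import random

def random_attacker():
    # An attacker picks a target and an attack intensity and duration.
    return {"target": random.randrange(3),
            "intensity": random.random(),
            "duration": random.random()}

def random_defender():
    # A defender spreads a fixed budget of capacity across three targets.
    alloc = [random.random() for _ in range(3)]
    total = sum(alloc)
    return [a / total for a in alloc]

def disruption(attacker, defender):
    # Disruption grows with the effort spent on a target and shrinks with the
    # capacity the defender allocated there.
    effort = attacker["intensity"] * attacker["duration"]
    return max(0.0, effort - defender[attacker["target"]])

def coevolve(generations=100, pop=20):
    attackers = [random_attacker() for _ in range(pop)]
    defenders = [random_defender() for _ in range(pop)]
    for _ in range(generations):
        # Attackers are rewarded for causing disruption against the current
        # defenders; defenders are rewarded for absorbing it.
        attackers.sort(key=lambda a: -sum(disruption(a, d) for d in defenders))
        defenders.sort(key=lambda d: sum(disruption(a, d) for a in attackers))
        # Keep the better half of each side and refill with random newcomers.
        attackers = attackers[:pop // 2] + [random_attacker() for _ in range(pop - pop // 2)]
        defenders = defenders[:pop // 2] + [random_defender() for _ in range(pop - pop // 2)]
    best_attacker = max(attackers, key=lambda a: sum(disruption(a, d) for d in defenders))
    best_defender = min(defenders, key=lambda d: sum(disruption(a, d) for a in attackers))
    return best_attacker, best_defender

best_attack, best_defence = coevolve()
print(best_attack, best_defence)
```

Running many such contests with different starting populations and scoring rules is one simple way to generate the “many evolutionary arms races” she refers to.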

AI and machine learning are applied to different kinds of network attacks. “We’ve turned our attention to blockchain and distributed ledgers you can run on blockchain,” Prof. O’Reilly explained. “We also look at bugs in smart contracts, which will allow us to eliminate bugs and security vulnerabilities. We look at malware too, and look at creating more hardened detectors that know something about the model we trained with machine learning.”
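Very loosely, hardening a learned detector can mean retraining it on evasive variants of malicious samples that slip past it. The sketch below is a generic, made-up Python example of that loop; the feature representation, the perturbation and the threshold-based detector are illustrative assumptions, not her group’s method.

```python
# Generic hardening sketch: retrain a toy detector on evasive variants of
# malicious samples it currently misses. Features, the perturbation and the
# detector itself are made-up illustrations.

import random

def train_detector(samples, labels):
    # Trivial "detector": flags a sample if its average feature value exceeds
    # a threshold fitted halfway between the classes in the training data.
    malicious = [sum(s) / len(s) for s, y in zip(samples, labels) if y == 1]
    benign = [sum(s) / len(s) for s, y in zip(samples, labels) if y == 0]
    threshold = (min(malicious) + max(benign)) / 2
    return lambda s: 1 if sum(s) / len(s) > threshold else 0

def evade(sample):
    # Adversarial perturbation: shrink feature values to duck under the threshold.
    return [max(0.0, v - random.uniform(0.3, 0.5)) for v in sample]

# Toy data: each sample is a feature vector, label 1 = malicious.
samples = [[0.9, 0.8], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
detector = train_detector(samples, labels)

# Generate evasive variants of the malicious samples and keep those that
# slip past the current detector.
evasive = [evade(s) for s, y in zip(samples, labels) if y == 1]
misses = [s for s in evasive if detector(s) == 0]

# Hardening step: retrain with the missed variants added as malicious examples.
hardened = train_detector(samples + misses, labels + [1] * len(misses))
print(sum(hardened(s) for s in evasive), "of", len(evasive), "evasive variants now flagged")
```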

She said AI and machine learning would play a role, in a variety of ways, in driving innovations in cyber-security. “We have so many systems out there not designed with defensive measures at all, so we opened up an ever-growing attack surface,” she concluded. “And as long as we maintain that environment, we will never be able to stop this arms race. But we have to pursue new designs for our networks and social media platforms that think about security and this dilemma before they’re designed – the fault is that we didn’t design these systems with security in mind, so until we move to platforms whose design incorporates security, we will have this dilemma and arms race.”