December 02, 2016
Entrepreneurs from around the world are digging into artificial intelligence with remarkable energy. Funding for A.I. start-ups has increased more than fourfold, to $681 million in 2015 from $145 million in 2011, according to the market research firm CB Insights. The firm estimates that new investments will reach $1.2 billion this year, up 76 percent from last year. The complexity of tasks that smart machines can perform is increasing at an exponential rate, and machine learning is unlocking new use cases. Traditionally, the only way to get a computer to do something was to write down an algorithm explaining how, in precise detail. Machine-learning algorithms are different: they figure it out on their own, by making inferences from data, and the more data they have, the better they get. We no longer have to program computers; they program themselves.
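The idea that algorithms "program themselves" can be made concrete with a toy sketch: instead of hand-coding a rule, a tiny perceptron infers it from labeled examples. All data and names here are illustrative, not from any product mentioned in this article.

```python
# A minimal sketch of learning a rule from data rather than hand-coding it.
# The perceptron below is never told the rule "output 1 only when both
# inputs are 1" -- it infers that rule from four labeled examples.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear decision rule purely from (input, label) pairs."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # update only when the prediction is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]                    # the "behavior" we want, given as data
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

With more data the same principle scales from this toy rule up to the image and malware classifiers discussed below.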
Deep learning, modeled on the human brain, is even more complex. Unlike classical machine learning, deep learning learns a hierarchy of features on its own, teaching the system to ignore all but the important characteristics of what it sees: a hierarchical structure of the general view that accommodates near-infinite variety. It is deep learning that opened the door to driverless cars, speech-recognition engines and medical-analysis systems that are sometimes better than experts.
One big area where A.I. can be very useful is information security. The global cybersecurity market was worth $106 billion in 2015 and is expected to rise to $170 billion by 2020, according to MarketsandMarkets. Having a smart network of your own is therefore a must: the average cost of a data breach is $3.8 million, according to the 2015 Cost of Data Breach Study from the Ponemon Institute. Software now runs everything, from lightbulbs to autonomous cars, and is an increasingly attractive target for attack. Humans can find vulnerabilities, but they cannot analyze millions of programs or protect the entire cyber-attack surface present in our lives; doing it entirely manually is mission impossible. Many thousands of new malware samples are introduced every day, and nearly all of them are extremely small mutations of previously known malware. Researchers estimate that the vast majority of new malware differs by less than 2 percent from past samples.
The majority of current security solutions are incapable of detecting most of this malware. They look at a file, check its signature, and when it doesn't match anything they know, the more advanced solutions run it in a sandbox. Heuristics, or even classical machine learning, are then used to estimate whether the file is malicious or legitimate, and even then the detection rates are very low.
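A sketch shows why signature matching is so brittle against the small mutations described above. The "payload" bytes and the single-entry signature database are invented for illustration; real products use large signature feeds, but the failure mode is the same.

```python
import hashlib

# Illustrative signature-based detection: a database of hashes of known
# malicious files. The payload bytes here are made up.
known_signatures = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def is_known_malware(file_bytes: bytes) -> bool:
    """Flag a file only if its exact hash is already in the database."""
    return hashlib.sha256(file_bytes).hexdigest() in known_signatures

print(is_known_malware(b"EVIL_PAYLOAD_v1"))   # True: exact byte-for-byte match
# Changing a single byte produces a completely different hash, so a
# trivially mutated variant sails straight past the signature check:
print(is_known_malware(b"EVIL_PAYLOAD_v2"))   # False
```

This is why a sub-2% mutation is enough to defeat signature matching: the hash of the mutated file shares nothing with the hash of the original.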
In cybersecurity, deep networks can be trained on datasets of millions of files so that invariant representations are learned; every file is modified many times during training. This makes the training phase much longer, but afterward the trained network is highly resilient to changes and mutations and detects nearly all new malware variants. The human brain works in a similar way: it takes time to learn something, but once learned, it can be used very quickly in prediction mode. Deep learning methods make it possible to provide not only detection but also prevention: the moment a malicious file is detected, it is removed as well.
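The "every file is modified many times" step is essentially data augmentation. The sketch below generates mutated variants of a stand-in sample at roughly the 2% rate cited earlier; a real pipeline would feed such variants to a deep network so it learns features that survive the mutations, which this toy omits.

```python
import random

# Augmentation sketch: mutate a sample many times so a model trains on
# small variants rather than one fixed byte string. The 2% rate mirrors
# the mutation figure cited above; the sample bytes are a stand-in.
def mutate(sample: bytes, rate: float = 0.02, rng: random.Random = None) -> bytes:
    """Return a copy of `sample` with each byte independently rewritten
    with probability `rate`."""
    rng = rng or random.Random(0)
    out = bytearray(sample)
    for i in range(len(out)):
        if rng.random() < rate:
            out[i] = rng.randrange(256)
    return bytes(out)

original = bytes(range(256)) * 4          # 1024-byte stand-in for a file
variants = [mutate(original, rng=random.Random(seed)) for seed in range(100)]

# Measure how much each variant actually differs from the original.
changed = sum(a != b for v in variants for a, b in zip(original, v))
frac = changed / (len(original) * len(variants))
print(frac)  # close to 0.02
```

Training on a hundred such variants per file is what makes the network indifferent to the next, previously unseen mutation.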
Some experts say that no A.I. that presently exists can emulate even the most basic hacker skills. It will be years before a computer can replace a human, because humans have an amazing power of creativity, especially in computer security.
Despite this, DARPA's Cyber Grand Challenge seeks to automate the cyber-attack and defense process. The project is searching for the first generation of machines that can discover, prove and fix software flaws in real time, without any assistance, while simultaneously attacking competitors in a capture-the-flag competition sponsored by the US military's defense research arm. The winning team will go home with $2 million.
Experts agree that the best solution at the moment, until we have fully autonomous machines, is to combine the skills of humans with those of machines: today's systems are advanced but still have to be instructed by humans. A group of researchers at MIT's Computer Science and Artificial Intelligence Laboratory has already developed a system that relies not only on artificial intelligence but also on human input. The system first scans the content with unsupervised machine-learning techniques and then, at the end of the day, presents its findings to human analysts, who identify which events are actual cyber-attacks and which are not. This feedback is incorporated into the machine-learning system and used the next day for analyzing new logs.
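The daily cycle described above can be sketched in a few lines. Everything here is an assumption for illustration, not MIT's actual system: the unsupervised pass is a simple z-score outlier ranking, the "analyst" is a stand-in rule, and the event values are invented log volumes.

```python
# Human-in-the-loop sketch (all names and thresholds are assumptions).
# Each "day": 1) score events unsupervised, 2) show the top outliers to
# an analyst, 3) fold the analyst's verdicts back in for the next day.

def zscore_outliers(values, top_k=3):
    """Unsupervised pass: rank events by distance from the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0
    scored = sorted(enumerate(values),
                    key=lambda iv: abs(iv[1] - mean) / std, reverse=True)
    return [i for i, _ in scored[:top_k]]

labeled = {}  # analyst feedback, accumulated across days

def daily_cycle(events, analyst):
    flagged = zscore_outliers(events)
    for i in flagged:
        labeled[i] = analyst(events[i])   # human decides: attack or not
    return {i for i in flagged if labeled[i]}

# One simulated day: mostly routine volumes plus two spikes. The stand-in
# "analyst" confirms anything over 900 as an attack.
events = [100, 110, 95, 105, 980, 102, 1500]
attacks = daily_cycle(events, analyst=lambda v: v > 900)
print(sorted(attacks))  # [4, 6]
```

In a real system, the accumulated `labeled` dictionary would be used to retrain a supervised model overnight, so the next day's unsupervised pass is narrowed by human judgment.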
Similar methods are used by Arc4dia's SNOW team. SNOW's next-gen technology combines enhanced detection mechanics with a robust suite of constantly evolving analysis tools, offering extensive protection against Advanced Persistent Threats (APTs). Human hunters are able to "teach" the machines through a tool called SNOWboard. Combining the skills of humans and machines makes malware hunting faster and far more effective.