
The Cyber Defense Review

Extension of the machine’s realm: a brief insight into Artificial Intelligence and Cyberspace

By Rudy Guyonneau | June 19, 2019

“Study the past if you would define the future” ~Confucius

Somewhere during the Pleistocene Era, an Australopithecus picks up a humerus and discovers it can use it for a decisive advantage over its rivals in conquering a waterhole. It then celebrates its triumph by hurling its newfound weapon up into space, where it transforms into a space station. This famous scene from Stanley Kubrick's film "2001: A Space Odyssey" masterfully captures how the story of humanity is intimately linked to technical progress, with warfare as its primary driver.

Today, while investment in artificial intelligence (AI) is in full swing across most industrial sectors, questions are being raised as to how this seemingly new technology will impact the global economy and society, and how it will influence warfare and our use of weapons. The recent and spectacular advances in the field of AI have triggered much excitement, to the point where non-AI experts occupy the forefront of the debate, gaining traction on the strength of their entrepreneurial success, their scientific prowess, or their futuristic views. This is how we come to be doomed by almighty killer robots, moralizing AIs, or even a speculative singularity event. All the while, the people actually developing AI, who are quite busy doing so, are seldom heard. The result is a discussion imbued with a rather phantasmatic, if not religious, tone that steers away from science and actual knowledge, and hence from informed prediction or, at least, informed strategy. While grounded and useful positions can be found in military literature, the consensus seems stuck, at least in France, in a "wait and see" mode, if not disappointingly shallow. This is partly due to the relative youth of AI development, and to the intellectual 'fog of war.'

We are at the stage where the monkey is picking up the bone, before realizing it could bash skulls in with it. While the deployment of AI within the military is unavoidable, the question remains what to do with it. This essay will not fully answer that question, but it will try to provide recommendations, or at least insights, as to how to harness AI's power, so that others can pick up the trail and carry our collective understanding onto solid ground. We propose to do so by looking at the past to understand AI's origins, by taking the measure of its current status, and by providing some landmarks of what can be expected, in a straightforward and comprehensive manner; this will lead to a prediction as to AI's impact on cyberspace, which we believe will be unmatched in magnitude.

What is Artificial Intelligence?

There are several definitions of AI, but they can all be traced back to a common one: "the simulation of (human) intelligent processes by machines." This definition is unsatisfying because it relies on the concept of intelligence, which is itself very loose. Stacking cubes one atop the other is a mark of a baby's intelligence, but how does it relate to the intelligence of an adult driving a car? What is common between the intelligence displayed by an expert analyst of satellite images and that of a military leader designing a strategy? Do we have to leave out the intelligence displayed by a group of ants collectively, and optimally, forming a bridge to cross a river? Relying on the concept of 'intelligence' is more misleading than helpful.

What we do know is that AI is a technology, the science behind a given technique. We thus propose to consider AI as the "science of computing," the one that answers the question of "how to process information." The intelligence of the expert image analyst hence becomes the process of transforming the information of a visual scene, described as a matrix of grayscale values, into information about tank and troop locations, changes in a building's setting, etc. Armed with this definition, we will try to shed an operational light on what AI is. We will dismiss the question of "narrow" versus "broad" AI, as every AI so far has been narrow, with broad examples simply imaginary.

The Age of Rules

The birth of AI can be traced to Alan Turing's paper "Computing Machinery and Intelligence" (1950); the field itself was later baptized at the Dartmouth Workshop (the Dartmouth Summer Research Project on Artificial Intelligence, 1956). In this seminal paper, Turing addressed the question of whether a machine can think through the concept of the "Imitation Game," in which a machine could be considered as thinking if it fooled a human into believing it was conversing with another individual. The digital computer would do so by "carry[ing] out any operations which could be done by a human computer," the latter described as following a defined book of rules. The core of machine intelligence is thus found in the formalization of thought as logical rules, the "IF-THEN-ELSE" statements that remain the building blocks of all programming languages to this day. Modern computers are hence artifacts of the first wave of operational AI, which defines intelligence as following a given set of rules to solve a problem. The hallmark of this approach is exemplified in Deep Blue's victory over Garry Kasparov, arguably the best chess player of all time. Not that the machine was particularly smart: it was programmed with the rules and objectives of the game and had enough computing power and memory to bulldoze its way to victory.
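To make the "book of rules" concrete, here is a minimal, hypothetical sketch in Python; the animal features and rules are invented for illustration. Note that every behavior the system exhibits was foreseen and written down, case by case, by its designer.

```python
# Rule-based "intelligence": the knowledge lives entirely in the
# IF-THEN-ELSE statements the designer wrote, not in the machine.

def classify_animal(barks: bool, retractable_claws: bool) -> str:
    """Toy rule-based classifier; every case must be foreseen by its designer."""
    if barks:
        return "dog"
    elif retractable_claws:
        return "cat"
    else:
        return "unknown"  # anything outside the rules is invisible to the system

print(classify_animal(barks=False, retractable_claws=True))  # -> cat
```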

The key to understanding AI is that within the current digital world, a given problem is solved through automation using logical rules that an engineer has applied to the problem. In the field of cybersecurity, for example, a SIEM ("Security Information and Event Management" system) detects threats based on rules that experts implement from their analysis of previous attacks; the experts then have to refine the rules, both as false alarms are raised by benign event patterns that overlap with a rule's definition, and as their analysis of the attack deepens. The rule-based approach has real value in helping analysts detect threats, but the example highlights its main drawback: it is strictly limited to the subjective understanding of its designers.
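As an illustration, a SIEM-style detection rule might look like the following minimal Python sketch; the threshold, time window, and event format are assumptions made for the example, not any vendor's actual rule language.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical expert-written rule: flag a host that produces more than
# THRESHOLD failed logins within WINDOW. The rule encodes the analyst's
# subjective understanding of one known attack pattern (brute force).
THRESHOLD = 5
WINDOW = timedelta(minutes=1)

def detect_bruteforce(events):
    """events: iterable of (timestamp: datetime, host: str, failed: bool)."""
    recent = defaultdict(list)
    alerts = []
    for ts, host, failed in sorted(events):
        if not failed:
            continue
        # keep only the failed logins still inside the sliding window
        recent[host] = [t for t in recent[host] if ts - t <= WINDOW] + [ts]
        if len(recent[host]) > THRESHOLD:
            alerts.append((ts, host))  # false alarms force the expert to re-tune
    return alerts

if __name__ == "__main__":
    base = datetime(2019, 6, 19, 12, 0)
    evts = [(base + timedelta(seconds=10 * i), "host-a", True) for i in range(7)]
    print(detect_bruteforce(evts))  # fires once enough failures accumulate
```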

Under the “Rule-based” paradigm, which can be described as one of “Delimited Mastery,” digital models of intelligence explain the known world but fail to extrapolate outside of its boundaries. Additionally, these models are generic and can only adapt to the specificities and changes of their environment through the direct intervention and understanding of a human operator. In a sense, one can say that the intelligence displayed by the machine is strictly inherited from its designer’s knowledge.

The Age of Learning

The recent developments in AI explain why it is at the forefront of technological discussions. At the 2012 ImageNet Challenge, Geoffrey Hinton's team crushed the competition in the field of image analysis thanks to an original information-processing architecture, later called "Deep Learning." The multi-layered neural network architecture had long been on the AI shelves, minus one or two original and decisive mechanisms, but it finally became competitive because of two "logistical" reasons:

  • The advent of cheap but powerful processing units specialized in parallel processing (GPUs), which could address the substantial needs of the parallel architecture;
  • The profusion of data made available by the interconnection of machines and humans, thanks to the democratization of smartphones and social media.

The success at the ImageNet Challenge triggered massive investments in the field of AI, as major technology players such as Microsoft and Apple entered the once purely academic competition as early as 2013. The investment led to the game of Go in 2015, when a game traditionally thought of as intractable was won by a machine against a human master, in spectacular fashion. The event is the main reason people are so excited about AI, and the strongest sign that machines have become smarter: the machine discovered and introduced moves that left experts and commentators baffled. Humans did not explicitly implement that behavior; the designers' role, not to be underestimated, was to set up the right conditions for the machine to learn and perform the game.

A new paradigm is thus established in information processing, one that relies on three pillars to be operational, which we define as the "ACD" triangle:

  • Algorithms and people apt at designing them;
  • Non-trivial Computing power;
  • Massive amount of Data.

Humans no longer have to specify the model by which to address a given task. Instead, their task is to design an architecture and provide a sufficient amount of relevant data for the machine to learn its own model, possibly discovering relationships in the data that would have escaped the expert's intelligence (a sketch follows the list below):

  • Because the number of information channels to analyze simultaneously is too large (spatial dimension);
  • Because the signal is too weak to be perceived based on a single observation (temporal dimension);
  • Because information channels will be taken into consideration that could have been discarded by an engineering bias (cognitive dimension).
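The following minimal sketch illustrates the division of labor under the ACD triangle, assuming scikit-learn and a synthetic dataset as stand-ins: the human supplies the architecture (A) and the data (D), the computing power (C) is spent in training, and the decision rules are learned rather than written.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# D: in a real setting this would be a massive, relevant corpus;
# here, a synthetic stand-in.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# A: the human specifies the shape of the learner,
# not the decision rules themselves.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)

# C: computing power is consumed here, where the machine derives its own model.
model.fit(X, y)
print(model.predict(X[:3]))  # decisions nobody hand-coded
```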

As a result, the machine will be as useful as the data accurately covers the complexity of the task; its performance can therefore, in principle, fall below an expert's knowledge, but in practice the machine performs on par. The derived model will also address the unknown world, beyond what it encountered during its learning process. The AI model associates configurations within the data with classes (cats and dogs, in the case of image analysis, for example). Once the learning is completed, new inputs can be presented that are entirely different from what the system was taught (e.g., Siamese cats, or old dogs). It will nonetheless produce estimates, typically in the form of probabilities across its class vector; the better the estimate, the closer the machine gets to the "idea" of a given class (e.g., "cat-ness"). Put differently, machine learning opens the way for machines to generalize, reaching beyond what they learned to catch what is similar, provided the ACD triangle is respected.
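To illustrate what "probabilities across its class vector" means, here is a toy sketch with invented scores: a trained classifier maps raw per-class scores to a probability estimate, and it will produce such an estimate even for an input it has never seen, such as the Siamese cat above.

```python
import numpy as np

def softmax(scores):
    """Turn raw class scores into a probability vector."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Assumed raw scores for (cat, dog) on a never-seen Siamese cat image;
# the values are invented for illustration.
scores = np.array([2.3, -0.4])
print(dict(zip(["cat", "dog"], softmax(scores))))
# -> roughly {'cat': 0.94, 'dog': 0.06}: a strong estimate of "cat-ness"
```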

Going back to the cybersecurity example of threat detection, and applying it within the Learning paradigm, the obvious strategy would be to try to collect the data corresponding to all known attacks and have a machine learn a general model of attack. While theoretically possible, the data collection would prove difficult, if not impossible, as the attack topology is extensive and ever-changing. An alternate strategy would be instead to collect the data relative to the life of the Information System (IS), and have a machine learn a model of the IS's nominal behavior; that model would point out anomalies, among them attacks, because these, by definition, do not correspond to the expected behavior of the system. And there you have EBA (Entity Behavior Analysis).
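A minimal sketch of this EBA strategy, assuming scikit-learn's IsolationForest and invented behavioral features, could look like this: the model is fitted only on nominal activity, and flags whatever deviates from it.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features of nominal IS activity per host-hour:
# (bytes out, login count, processes launched). Values are invented.
rng = np.random.default_rng(0)
nominal = rng.normal(loc=[500, 3, 20], scale=[50, 1, 5], size=(1000, 3))

# Learn the system's nominal behavior, not a catalog of attacks.
model = IsolationForest(contamination=0.01, random_state=0).fit(nominal)

# A behavior never seen in training, e.g., a large exfiltration burst.
exfiltration = np.array([[5000, 1, 150]])
print(model.predict(exfiltration))  # -> [-1], i.e., flagged as an anomaly
```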

Under the Learning paradigm, which can be described as one of "Educated Guess," AI designers must specify an architecture and provide data, which leaves room for their biases to alter the system's quality. But the machines no longer inherit their intelligence: they build their own model of the world. As such, they become able to generalize beyond what they learned, which in the previous setting was not possible, except by accident. Models also become inherently specific to their environment: the same architecture will not develop the same model in two different information systems, and a model produced in environment A will not be as sharp, once deployed in environment B, as one trained there.

What Could This Mean for Cyberspace?

The machines are smarter within the Learning paradigm than within the Rules paradigm; indeed, the Age of Rules peaked when it captured the learning process itself, and faded as it deployed its successor. To consider that AI would surpass, or even match, human intelligence is, at best, speculative: this would mean that we were able to replicate the brain, the seat of human intelligence, and the neurosciences tell us that we are, at best, far from it. Would it be accurate to surmise, the debate would be merely pointless; nothing humans could say or do would matter if machines became more intelligent than us.

It is the author's opinion that human and machine intelligence are non-comparable, because they are different in kind. They are related in that one draws from the other. What Kubrick's scene suggests is that humankind appeared when monkeys started coevolving with their tools and weapons: humans shape tools, and tools change the way we interact with the world. AI is the technology at the fulcrum of the information-processing tools and weapons we will choose to design and build. How will AI impact humans? It can synthesize, and make its own sense of, the overwhelming amount of data humans produce through digital systems. As such, AI has the potential to act as a therapy for the Big Data disease. Put differently, humans can no longer cope cognitively with the amount, rate, and heterogeneity of information they must deal with in a hyperconnected world; therefore, our vision is one of AI acting as cognitive support to humans.

Humans build weapons, and weapons change the battlefield. AI will undoubtedly improve the speed and accuracy of various weapon systems; such is the quantitative value of automation. It will also have the qualitative impact of making weapons smarter; being more specific is difficult, because the results will be as diverse as the military devices AI is applied to. But think of a rifle with a scope able to indicate whether your target is right- or left-handed, so that you can aim at his actual weapon-directing arm even when he is not holding his weapon. The key here is thinking specific (this opponent's main arm) where generic was the norm (any opponent's right arm).

Now, one aspect we have voluntarily left out so far is the machine's operational domain. It is remarkable that, within the first paradigm, machines operated only inside the digital world. Rule-based algorithms in machine vision could hardly analyze and extract much information from natural scenes; Deep Learning changed that. Machines were made to talk to one another through TCP/IP, leading to the Internet and global interconnection, effectively increasing the reach of the digital world in the process, yet conversing with a machine in natural language remained impossible. Thanks to recent advances in AI, Natural Language Processing has made tremendous strides that hint at conversations flowing naturally with machines sometime soon. What we are witnessing with the change of AI paradigm is the extension of the machine's realm into the "natural" world.

In the cyber field, humans design machines to target other machines in order to indirectly impact humans and institutions. What happens when data describing the personality and psychological levers of people is made available for a machine to process and learn from? What are emails and social network messages, if not expressions of the human psyche? What if an attacker programs an AI to detect a weakened employee in a major corporation, and then tailors a campaign of artificial social exchanges to 'take control' of that employee? Because of the massive interconnection between humans and machines, and because machines become smarter with increased data, AI-enhanced cyberattacks have the potential to strike directly at humans, and to control them from a distance, through machines. This will have consequences on the battlefield if the soldier is heavily dependent on technology.

The future of AI in cyberspace will bring cognitive threats; but this new environment can also be one of cognitive support. While we are picking up the bone, we should be aware of its nature and its origin, in order to evaluate its aim; this humerus has everything to do with information processing. In the end, "wait and see" may mean having to submit to it, instead of taking control and shaping it according to our will.

Bio

Rudy GUYONNEAU 
is a Senior Consultant in Artificial Intelligence for Cybersecurity at Sopra Steria SA. His doctoral thesis with Spikenet Technology at the Brain and Cognition Lab in Toulouse led him to study visual information processing in the primate brain, and to apply its insights to the fields of Machine Vision and Neuromorphic Engineering. After a postdoc at Georgetown University in Brain-Computer Interfaces, he led the R&D effort at Spikenet, applying spiking neural networks and related technologies to industry, including video protection. He joined Sopra Steria in 2016 to aid the development of AI within cybersecurity, and to assist Airbus Commercial Aircraft in its Innovation Strategy. He holds a Ph.D. in Computational Neurosciences from Toulouse III University, and is the Lead Cyber Data Scientist for Airbus CA.


