
The Cyber Defense Review

The Inevitable Militarization of Artificial Intelligence

By MAJ Michael Kolton | February 08, 2016

Introduction to autonomous weapons

In January 2015, Bill Gates observed that robotics and artificial intelligence (AI) are entering a period of rapid advances.[1] AI technologies will fundamentally change how humans move and communicate.[2] These innovations enable autonomous systems to perform tasks or functions on their own. For example, Google, Apple, and Microsoft are competing to transform transportation with self-driving vehicles.[3] In manufacturing, autonomous production enables companies to adapt products to diverse consumer markets.[4] AI helps city governments manage critical infrastructure and essential services.[5] In 2015, an AI system leveraged “deep learning” to teach itself chess and achieve master-level proficiency in 72 hours.[6] AI “chatbots” power conversations between humans and machines.[7] Companies like Google and Facebook are designing chatbots that make decisions for users about commercial activities like shopping and travel arrangements.[8] Microsoft AI researcher Eric Horvitz expects humanity “to be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”[9] Such innovations indelibly impact military affairs.

In its future operating concept, the US Army predicts autonomous or semiautonomous systems will “increase lethality, improve protection, and extend Soldiers’ and units’ reach.”[10] Moreover, the Army expects autonomous systems to eliminate “the need for constant Soldier input required in current systems.”[11] It also expects AI to augment decision-making on the battlefield. In the not-too-distant future, autonomous weapons will fundamentally change the ways humans fight wars.[12] Over thirty advanced militaries already employ human-supervised autonomous weapons, such as missile defense, counterbattery, and active protection systems.[13] For intelligence, surveillance, and reconnaissance (ISR), the US Air Force is developing autonomous systems that collect, process, and analyze information and even generate intelligence.[14]

Today, US policy forbids the military from using autonomous weapons to target human beings.[15] DoD policy also mandates that humans retain executive control over weapon systems.[16] Nevertheless, AI innovations will soon enable autonomous weapons that, “once activated, can select and engage targets without further intervention by a human operator.”[17] In July 2015, a group of AI scientists stated that autonomous weapons are “feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”[18] Consequently, today’s constraints on autonomous weapons may prove too restrictive as America’s adversaries race forward in the next era of military affairs.

Alarm over autonomous weapons

In 1956, an Army Signal Corps officer wrote, “Since modern man has changed but little over the centuries, physiologically speaking, the improvement and development of the battle team has centered about new weapons, improved materiel, and better communications – more effective ways to shoot, move, and communicate.”[19] Autonomous systems have already begun changing the ways people move and communicate. Accordingly, some of the world’s top innovators worry AI will dangerously alter the way militaries shoot. In a December 2014 interview, Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.”[20] Notwithstanding arguments against militarizing AI, autonomous weapons will prove too enticing for the world’s militaries.

In December 2015, Elon Musk of Tesla, PayPal, and SpaceX fame announced a new non-profit called OpenAI. With a billion-dollar endowment, the organization plans to pioneer developments in AI and deep learning.[21] Idealistically, OpenAI hopes “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”[22] In this light, its founders “believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”[23] The San Francisco non-profit seeks to keep AI from harmful applications.

The announcement of Elon Musk’s OpenAI seemed all the more remarkable given recent alarm over military applications of AI research. In October 2014, the Massachusetts Institute of Technology (MIT) hosted a symposium about future innovations.[24] During a featured session, Elon Musk warned, “If I were to guess like what our biggest existential threat is, it’s probably [artificial intelligence].”[25] On 28 July 2015, thousands of AI scientists signed an open letter warning about the catastrophic risks from autonomous AI weapons.[26] The scientists wrote:

Unlike nuclear weapons, [AI weapons] require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.[27]

The scientists directly compare AI weapons to nuclear ones. In this way, 2010-2020 is analogous to 1940-1950, the dawn of the atomic age and the nuclear arms race. Admittedly, such analogical reasoning can prove inadequate.[28] Yet, the Manhattan Project and current AI research generate similar ethical debates.

In May 1944, Manhattan Project atomic scientist Niels Bohr wrote a letter to Prime Minister Winston Churchill. The scientist anticipated atomic weapons would fundamentally change human history with “devastating power far beyond any previous possibilities and imagination.”[29] In July 1944, Bohr sent a memorandum to President Franklin Roosevelt expressing concern that atomic weapons would become “a perpetual menace to human security.”[30] On 9 June 1950, Bohr presented a letter to the United Nations recounting the Manhattan Project: “Everyone associated with the atomic energy project was, of course, conscious of the serious problems which would confront humanity once the enterprise was accomplished.”[31] Ultimately, atomic researchers deemed Allied victory paramount.

Like Bohr, AI scientists have implored the international community to preempt the proliferation of autonomous weapons. In their open letter, the signatories call for “a ban on offensive autonomous weapons beyond meaningful human control.”[32] Importantly, the scientists do not call for the elimination of defensive systems.[33] Yet, the scientists fear an AI arms race will extend the battlefield beyond the control of human beings and generate catastrophe.

The Department of Defense (DoD) seemed to share these concerns in a 2012 policy directive. To minimize unintended consequences, then-Deputy Secretary of Defense Ashton Carter ordered “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[34] Moreover, autonomous systems are barred from targeting humans.[35] Under this constraint, defense researchers are developing ever more sophisticated autonomous systems.

The need for speed in cybersecurity

The 2014 Quadrennial Homeland Security Review (QHSR) warned, “Cyber threats are growing and pose ever-greater concern to our critical infrastructure systems as they become increasingly interdependent.”[36] The 2014 QHSR expects innovations in cyber capabilities to enable the Department of Homeland Security (DHS) to collect, analyze, and share information “at machine speed to block threats in milliseconds instead of the hours or days required today.”[37] The DoD identifies similar cybersecurity objectives.

In December 2015, the Defense Advanced Research Projects Agency (DARPA) asked innovators to develop “technologies for detecting and responding to cyber-attacks on U.S. critical infrastructure, especially those parts essential to DoD mission effectiveness.”[38] DARPA seeks technologies that provide “early warning of impending attacks, situation awareness, network isolation and threat characterization in response to a widespread and persistent cyber-attack on the power grid and its dependent systems.”[39] DARPA wants AI systems to reduce the country’s recovery time from catastrophic cyber attacks.[40]
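
To make DARPA’s goal concrete, consider a toy illustration of machine-speed detection. The sketch below flags power-grid telemetry that deviates sharply from its recent baseline; the rolling window, the alert threshold, and the simulated frequency feed are hypothetical illustrations, not elements of any actual RADICS design.

```python
# A minimal sketch of machine-speed anomaly detection, assuming a rolling
# z-score over grid telemetry. Window size and threshold are hypothetical.
from collections import deque
from statistics import mean, stdev

class TelemetryMonitor:
    """Flags readings that deviate sharply from recent history."""

    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, reading: float) -> bool:
        """Return True if the reading looks anomalous against the baseline."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

monitor = TelemetryMonitor()
# Simulated grid frequency readings: a stable baseline, then a sharp drop.
for hz in [60.0] * 30 + [59.99, 60.01] * 10 + [57.2]:
    if monitor.observe(hz):
        print(f"Alert: {hz} Hz deviates sharply from the recent baseline")
```

Such a check runs in microseconds per reading; the hard problems DARPA describes lie in characterizing the attack and isolating the network, not in the detection arithmetic.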

DoD’s 2015 Cyber Strategy states, “If and when DoD detects indications of hostile activity within its networks, DoD has quick-response capabilities to close or mitigate vulnerabilities and secure its networks and systems. Network defense operations on DoD networks constitute the vast majority of DoD’s operations in cyberspace.”[41] Yet, Army officers Rock Stevens and Michael Weigand assess, “The Army does not have a single entity that tracks discovered issues from initial report through the remediation process to ensure vulnerability resolution in a timely manner.”[42] Simply put, cybersecurity remains an evolving enterprise.

In general, cyber defense follows a conceptual process of detect-react-respond.[43] Microsoft promotes a four-phase cybersecurity model: protect, detect, respond, and recover.[44] Intel uses a protect-detect-correct model.[45] In its cybersecurity framework, the National Institute of Standards and Technology (NIST) proposes a five-phase loop: Identify, Protect, Detect, Respond, and Recover.[46] This emphasis on continuous, accurate, timely response resembles other military decision-making areas like artillery fires, counterterrorism, and counterinsurgency.[47]
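
Whatever the vendor labels, these models share a cyclical structure: a continuous loop in which detection triggers response and recovery feeds back into identification. The sketch below walks the NIST loop; the one-line phase descriptions are illustrative paraphrases, not NIST’s definitions.

```python
# A minimal sketch of the NIST five-phase cycle as a repeating loop;
# the one-line descriptions are illustrative paraphrases.
from enum import Enum
from itertools import cycle, islice

class Phase(Enum):
    IDENTIFY = "inventory assets and assess risk"
    PROTECT = "apply safeguards"
    DETECT = "monitor for anomalous activity"
    RESPOND = "contain and mitigate the incident"
    RECOVER = "restore services and feed lessons back into identification"

# In practice the loop never terminates; here we walk it twice.
for phase in islice(cycle(Phase), 2 * len(Phase)):
    print(f"{phase.name}: {phase.value}")
```

The models differ in how they partition the loop, but all assume response must keep pace with detection.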

Militaries must detect, react, and respond to cyber threats faster than an adversary can adapt. Cyber capabilities enable the US military and its adversaries to influence operations across the land, air, maritime, and space domains. DHS and DARPA cybersecurity priorities demonstrate the seriousness of threat adaptation. Remotely piloted aircraft (RPA), or military drones, offer one manifestation of this technological struggle.

Drones, cybersecurity, and the future of warfare

At the Smithsonian National Air and Space Museum, tourists can visit the first drone to launch a Hellfire air-to-surface missile in combat.[48] After flying reconnaissance missions in the Balkans, the General Atomics-built MQ-1L Predator #3034 received military upgrades to launch missiles.[49] Just after 9/11, the RPA began striking Al-Qaeda targets in Afghanistan.[50] Since then, RPA have become a potent symbol of twenty-first century warfare.

Author Richard Whittle explains, “The Predator opened the door to what is now a drone revolution because it changed the way people thought about unmanned aircraft…This is a new era in aviation, and we as a society need to figure out how we’re going to cope with it.”[51] Stanford University’s Amy Zegart writes, “Drones are going to revolutionize how nations and non-state actors threaten the use of violence. First, they will make low-cost, high-credibility threats possible.”[52] She further explains, “Artificial intelligence and autonomous aerial refueling could remove human limitations even more, enabling drones to keep other drones flying and keep the pressure on for as long as victory takes.”[53] Thus, RPA are critical systems for twenty-first century warfare.

In a 2013-2038 operating concept, the Air Force states the next generation of RPA “must be multi-mission capable, adverse weather capable, net-centric, interoperable and must employ appropriate levels of autonomy.”[54] For these missions, cybersecurity is critical for aerial vehicles. DARPA and Boeing have fielded a new computer language for the unmanned AH-6 “Little Bird” helicopter.[55] Researchers claim the proprietary coding language protects the aircraft against cyberthreats. Similarly, in 2015, Raytheon demonstrated a new cybersecurity system to protect drones from hackers.[56] The Navy is also funding research in “Cyber resiliency for real-time operating systems and the aviation warfare environment.”[57] The futures of drones and cybersecurity are intertwined.

Retired Navy captain Mike Walls explains, “A ship-launched cruise missile relies on the ship…[to provide] critical, digital information from its own systems to the cruise missile before launch in order for the missile to hit its target. If either or both of the systems fail, the ship or the cruise missile, then the target is not destroyed.”[58] Reliable communication networks, like the ship-to-missile link, also ensure decision makers retain a cognitive interface with their weapons.

In December 2011, Iranian military forces claimed to have electronically “ambushed” an RQ-170 Sentinel by hijacking the RPA’s guidance system.[59] They asserted the military “spoofed” the GPS signal and tricked the aircraft into landing inside Iran.[60] Although American officials acknowledged the aircraft’s loss, US government sources told journalists the Sentinel drone malfunctioned over Iranian territory.[61] A paper presented at a NATO cybersecurity conference argues either explanation for the incident demonstrates RPA “must be capable of autonomously choosing the right strategy in case of a severe fault to uphold the systems security.”[62] Notably, the authors highlight vulnerabilities in RPA communication links and ground control stations (GCS).[63]
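
One way an aircraft might autonomously choose that “right strategy” is a plausibility check that cross-references the GPS fix against an inertial dead-reckoning estimate. The sketch below is a hypothetical illustration; the 100-meter gate and the fallback to inertial navigation are assumptions, not details of the Sentinel or any fielded system.

```python
# A minimal sketch of a GPS-spoofing plausibility check; the gate distance
# and fallback policy are hypothetical assumptions for illustration.
import math

def distance_m(a: tuple, b: tuple) -> float:
    """Rough planar distance in meters between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_nav_source(gps_fix: tuple, inertial_estimate: tuple,
                      gate_m: float = 100.0) -> str:
    """Distrust GPS when it diverges sharply from dead reckoning."""
    if distance_m(gps_fix, inertial_estimate) > gate_m:
        return "inertial"  # possible spoofing: fall back to onboard estimate
    return "gps"

# A sudden 5 km jump in the GPS fix triggers the fallback.
print(select_nav_source(gps_fix=(5000.0, 0.0), inertial_estimate=(0.0, 0.0)))
```

A patient attacker could walk the fix away gradually to stay under such a gate, which is one reason the authors call for autonomous fault handling rather than any single check.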

In a 2012 paper from the American Institute of Aeronautics and Astronautics (AIAA), engineers categorize cybersecurity for unmanned aerial vehicles (UAVs) as Control System Security and Application Logic Security.[64] The authors detail three pathways attackers can use to exploit UAV vulnerabilities: Hardware Attack, Wireless Attack, and Sensor Spoofing.[65] In each of these pathways, adversaries use electronic warfare and cyber capabilities to disrupt RPA operations.
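
Read as a data structure, the taxonomy maps each pathway to concrete attack vectors. The example vectors below are illustrative assumptions drawn from the surrounding discussion, not the paper’s complete enumeration.

```python
# A rough sketch of the AIAA attack-pathway taxonomy as a lookup table;
# the example vectors are illustrative, not the paper's full list.
UAV_ATTACK_PATHWAYS = {
    "Hardware Attack": ["tampered supply-chain components",
                        "malicious firmware on the airframe"],
    "Wireless Attack": ["command-link jamming", "GCS uplink hijacking"],
    "Sensor Spoofing": ["GPS signal spoofing", "false sensor returns"],
}

for pathway, vectors in UAV_ATTACK_PATHWAYS.items():
    print(f"{pathway}: {', '.join(vectors)}")
```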

In a 2014 RPA vulnerability analysis, German Army officer André Haider explores potential threats to RPA. Haider emphasizes, “Current [RPA] systems are not yet fully automated or even autonomous and their control is contingent on uninterrupted communications.”[66] The author assesses, “Possible Electronic Warfare (EW) targets for the adversary include the GCS, RPA, satellites and satellite ground segments.”[67] Haider states NATO networks remain well protected; adversaries face a difficult challenge when attempting to gain entry to RPA systems.[68] Yet, he argues adaptive threats have proven capable of infecting the GCS. The major assesses that the future cyberthreat to certain aspects of RPA systems remains high.[69]

After reviewing a broad range of RPA threats, Haider concludes, “Achieving higher levels of automation is a prerequisite in enabling many of the recommendations made in this study; however, what is technically possible is not necessarily desirable.”[70] In the spirit of Elon Musk and Stephen Hawking, Haider argues against automating targeted strikes.[71] On the other hand, Haider believes “automated weapon release should be approved for any target that is actively engaging the RPA.”[72] In this way, Haider ascribes to military drones self-defense mandates similar to those given to manned aircraft.

Haider’s cautious recommendation for autonomous RPA reveals an emerging imperative for twenty-first century warfare. Adversaries are developing anti-access/area denial capabilities to defeat America’s technological superiority. Cyber capabilities are an integral part of any anti-access/area denial operational approach.[73] In its 2012 Joint Operational Access Concept (JOAC), the Joint Staff writes, “[M]any future enemies will seek to contest space control and cyberspace superiority as means to denying operational access to U.S. joint forces.”[74] As RPA prove lethal on the battlefield, adversaries will innovate to defeat them.

Under current policy, an armed drone requires robust human control to launch a strike. Thus, the communications link between the RPA and the GCS, the link between weapon and decision-maker, is critical for the US military. A cyber or electronic attack that undermines this connection is an ideal enemy tactic. Given such a threat, militaries will build redundancy. Thus, AI moves to the forefront: militaries will develop autonomous RPA that can complete missions even if communication links are disrupted.
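
A hypothetical sketch of such lost-link logic follows, combining the DoD constraint that humans retain control of lethal targeting with Haider’s allowance for automated self-defense. The states and priorities are assumptions for illustration, not any fielded RPA control system.

```python
# A minimal sketch of lost-link decision logic; states and priorities are
# hypothetical, combining DoD Directive 3000.09 constraints with Haider's
# self-defense exception as discussed above.
from dataclasses import dataclass

@dataclass
class RpaState:
    link_up: bool       # GCS communications link healthy
    under_attack: bool  # a threat is actively engaging the aircraft

def next_action(state: RpaState) -> str:
    if state.link_up:
        return "execute operator commands"
    if state.under_attack:
        # Haider's exception: automated weapon release against a target
        # actively engaging the RPA, analogous to manned self-defense.
        return "engage threat in self-defense"
    # Policy constraint: no autonomous strikes on pre-planned targets,
    # so the default on link loss is to preserve the aircraft.
    return "fly to rally point autonomously and attempt link recovery"

print(next_action(RpaState(link_up=False, under_attack=False)))
```

Under this ordering, severing the GCS link degrades the mission but does not disable the aircraft, which blunts the value of the attack described above.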

Conclusion: Is AI redefining the cognitive dimension?

The US military sees the information environment in physical, informational, and cognitive dimensions.[75] Security experts Peter Singer and Allan Friedman explain cyberspace “is defined as much by the cognitive realm as by the physical or digital.”[76] The cognitive dimension includes the sacred responsibility of a military leader to direct lethal force. Brigadier General Jeff Smith argues the cognitive dimension supersedes the information environment and its subordinate cyberspace domain.[77] BG Smith holds the military leader, the human decision-maker, as the centerpiece for military operations. He contends, “[T]he network is the offspring of the leader, provoked by his requirement to exercise influence over operations.”[78] In other words, information communications technologies cannot supplant a leader’s moral obligations.

BG Smith argues the US military must place “cognitive operation at the core of the operational environment: to make the wisdom, judgment, acumen, imagination, instincts, and mental courage…common across all levels of war.”[79] Human dominion over decision-making underwrites BG Smith’s thesis. Yet, AI systems enable autonomous weapons that are isolated from BG Smith’s cognitive operation. Instead of enabling command and control,[80] AI systems can supplant it. Powered by AI, man-made networks, “the offspring of the leader,” will no longer need the supervision of their human creators. This cognitive separation alarms many researchers because it will revolutionize warfare in uncertain ways. Despite current hopes to prevent an AI arms race, adversaries’ electronic and cyber warfare capabilities appear poised to drive autonomous weapons development. Like Niels Bohr’s 1944 letters to Churchill and FDR, the 2015 open letter of AI scientists may prove ineffectual in the twenty-first century security environment.

Notes

[1]. Peter Holley, “Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’,” Washington Post, 29 January 2015, https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/.

[2]. “The dawn of artificial intelligence,” Economist, 9 May 2015, accessed 15 December 2015, http://www.economist.com/news/leaders/21650543-powerful-computers-will-reshape-humanitys-future-how-ensure-promise-outweighs.

[3]. Arjun Kharpal, “Microsoft, Volvo strike deal to make driverless cars,” CNBC, 20 Nov 2015, accessed 15 December 2015, http://www.cnbc.com/2015/11/20/microsoft-volvo-strike-deal-to-make-driverless-cars.html.

[4]. Zvi Feuer and Robert Meshel, “Next Generation Manufacturing: Production Systems that Think,” Siemens PLM Software Blog (13 February 2015), accessed 15 December 2015, http://blog.industrysoftware.automation.siemens.com/blog/2015/02/13/next-generation-manufacturing-production-systems-think/.

[5]. Michael Dixon, “Solving the challenges of fast growing cities,” World Summit of Local and Regional Leaders, 4th UCLG Congress (6 August 2013), accessed 15 December 2015, http://www.rabat2013.uclg.org/news/interview-michael-dixon-general-manager-smarter-cities-ibm-solving-challenges-fast-growing#sthash.jpoqMEuo.dpuf.

[6]. “Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level,” MIT Technology Review (14 September 2015), accessed 27 December 2015, http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/.

[7]. Chris Baraniuk, “How online ‘chatbots’ are already tricking you,” BBC News, 9 June 2014, accessed 27 December 2015, http://www.bbc.com/future/story/20140609-how-online-bots-are-tricking-you.

[8]. Alistair Barr, “Google Plans New, Smarter Messaging App: Users will be able to text friends or a chatbot that will scour the Web and other sources to answer a question,” Wall Street Journal, 22 December 2015, accessed 27 December 2015, http://www.wsj.com/articles/google-plans-new-smarter-messaging-app-1450816899.

[9]. “Out of control AI will not kill us, believes Microsoft Research chief,” BBC News, 28 January 2015, accessed 15 December 2015, http://www.bbc.com/news/technology-31023741.

[10]. TRADOC Pamphlet 525-3-1, The US Army Operating Concept – Winning in a Complex World, 2020-2040 (Washington, DC: Army Staff, 31 October 2014), 40.

[11]. Ibid.

[12]. Michael C. Horowitz and Paul Scharre, “The Morality of Robotic War,” New York Times, May 2015.

[13]. Paul Scharre and Michael C. Horowitz, “An Introduction to Autonomy In Weapon Systems,” Center for a New American Security (February 2015), 16.

[14]. Phillip Swarts, “RPA systems studied to improve ground-based technology,” Air Force Times, 21 November 2015, accessed 15 December 2015, http://www.airforcetimes.com/story/military/tech/2015/11/21/rpa-systems-studied-improve-ground-based-technology/76061420/.

[15]. Deputy Secretary of Defense, Autonomy in Weapon Systems, DoD Directive 3000.09, Washington, DC: Deputy Secretary of Defense, 21 November 2012, 3.

[16]. Ibid.

[17]. Paul Scharre and Michael C. Horowitz, “An Introduction to Autonomy In Weapon Systems,” Center for a New American Security (February 2015), 16.

[18]. “Autonomous Weapons: An Open Letter From AI & Robotics Researchers,” Future of Life Institute (28 July 2015), accessed 15 December 2015, http://futureoflife.org/open-letter-autonomous-weapons/.

[19]. John Clapper, “Rapport,” Military Review 36, Issue 8 (November 1956), 45.

[20]. Rory Cellan-Jones, “Stephen Hawking warns artificial intelligence could end mankind,” BBC News, 2 December 2014, accessed 15 December 2015, http://www.bbc.com/news/technology-30290540.

[21]. Cade Metz, “Elon Musk Snags Top Google Researcher for New AI Non-Profit,” Wired (11 December 2015), accessed 13 December 2015, http://www.wired.com/2015/12/elon-musk-snags-top-google-researcher-for-new-ai-non-profit/.

[22]. Greg Brockman and Ilya Sutskever, “Introducing OpenAI,” OpenAI (11 December 2015), accessed 13 December 2015, https://openai.com/blog/introducing-openai/.

[23]. Ibid.

[24]. “MIT Aeronautics and Astronautics Centennial Symposium,” MIT Department of Aeronautics and Astronautics (22-24 October 2014), accessed 15 December 2015, http://aeroastro.mit.edu/aeroastro100/centennial-symposium.

[25]. Matt McFarland, “Elon Musk: ‘With artificial intelligence we are summoning the demon.’” Washington Post, 24 October 2014, https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

[26]. “Autonomous Weapons: An Open Letter From AI & Robotics Researchers,” Future of Life Institute (28 July 2015), accessed 15 December 2015, http://futureoflife.org/open-letter-autonomous-weapons/.

[27]. Ibid.

[28]. Yuen Foong Khong, Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965 (Princeton, NJ: Princeton University Press, 1992), 16.

[29]. Niels Bohr, Letter to Churchill (22 May 1944), accessed 23 December 2015, http://www.atomicheritage.org/key-documents/bohr-letter-churchill.

[30]. Niels Bohr, Memorandum to President Roosevelt (July 1944), accessed 23 December 2015, http://www.atomicarchive.com/Docs/ManhattanProject/Bohrmemo.shtml.

[31]. Niels Bohr, “Letter to the UN, 9 June 1950,” Atomic Heritage Foundation (2015), accessed 15 December 2015, http://www.atomicheritage.org/key-documents/bohr-letter-un.

[32]. “Autonomous Weapons: An Open Letter From AI & Robotics Researchers,” Future of Life Institute.

[33]. Ibid.

[34]. Deputy Secretary of Defense, Autonomy in Weapon Systems, DoD Directive 3000.09, Washington, DC: Deputy Secretary of Defense, 21 November 2012, 2.

[35]. Ibid., 3.

[36]. 2014 Quadrennial Homeland Security Review (QHSR), Department of Homeland Security, 18 June 2014, 41.

[37]. Ibid., 42.

[38]. “Rapid Attack Detection, Isolation and Characterization (RADICS) Proposers Day,” Defense Advanced Research Projects Agency (14 December 2015), accessed 15 December 2015, https://www.fbo.gov/index?s=opportunity&mode=form&id=6b2443ae05af0b645a0bd8123ae8f516&tab=core&_cview=1.

[39]. Ibid.

[40]. Jordan Pearson, “Here’s DARPA’s Proposed Plan to Recover from a Massive Power Grid Hack,” Vice News, 14 December 2015, accessed 15 December 2015, http://motherboard.vice.com/en_uk/read/heres-darpas-proposed-plan-to-recover-from-a-massive-power-grid-hack.

[41]. Department of Defense, Cyber Strategy (Washington DC: April 2015), 4.

[42]. Rock Stevens and Michael Weigand, “Army Vulnerability Response Program: A Critical Need in the Defense of our Nation,” Cyber Defense Review (23 October 2015), accessed 15 December 2015.

[43]. Neil Robinson, Agnieszka Walczak, Sophie-Charlotte Brune, Alain Esterle, and Pablo Rodriguez, “Stocktaking study of military cyber defence capabilities in the European Union (milCyberCAP),” RAND Corporation (2013), 6.

[44]. “Microsoft Technology solutions for cybersecurity,” Microsoft (2010), accessed 15 December 2015, http://download.microsoft.com/download/D/3/0/D30E9D65-7330-4DD3-B6A7-28BAE8381AE4/CybersecurityTechnology.pdf.

[45]. “Intel Security Unveils New Strategy to Provide Better Protection, Faster Detection and Streamlined Correction,” Intel Security (27 October 2015), accessed 27 December 2015, http://www.mcafee.com/us/about/news/2015/q4/20151027-01.aspx.

[46]. “Framework for Improving Critical Infrastructure Cybersecurity: Version 1.0,” National Institute of Standards and Technology (12 February 2014), 17, accessed 7 April 2015, http://www.nist.gov/cyberframework/upload/cybersecurity-framework-021214-final.pdf.

[47]. Jimmy A. Gomez, “The Targeting Process: D3A and F3EAD,” Small Wars Journal (2011), 1.

[48]. “General Atomics Aeronautical Systems, Inc., MQ-1L Predator A,” National Air and Space Museum (2003), accessed 27 December 2015, http://airandspace.si.edu/collections/artifact.cfm?object=nasm_A20040180000.

[49]. Ibid.

[50]. Mark Bowden, “How the Predator Drone Changed the Character of War,” Smithsonian Magazine (November 2013), accessed 27 December 2015, http://www.smithsonianmag.com/history/how-the-predator-drone-changed-the-character-of-war-3794671/#xl3uO0jqAdbi2vFP.99.

[51]. Wade Goodwyn, “How Drones Changed Modern Warfare,” National Public Radio (25 September 2015), accessed 27 December 2015, http://www.npr.org/2014/09/21/350316088/how-drones-changed-modern-warfare.

[52]. Amy Zegart, “The Coming Revolution of Drone Warfare,” Wall Street Journal, 18 March 2015, accessed 27 December 2015, http://www.wsj.com/articles/amy-zegart-the-coming-revolution-of-drone-warfare-1426720364.

[53]. Ibid.

[54]. RPA Vector: Vision and Enabling Concepts 2013-2038, (Washington DC: US Air Force, 17 February 2014), iii.

[55]. Aliya Sternstein, “Pentagon On Path To Launch Hacker-Proof Boeing Drone by 2018,” Nextgov, 11 March 2015, accessed 27 December 2015, http://www.nextgov.com/cybersecurity/2015/03/pentagon-path-launch-hacker-proof-boeing-drone-2018/107250/.

[56]. “Armored against threats: Military Systems resist cyber threats,” Raytheon in the Washington Post, 19 October 2015, accessed 27 December 2015, http://www.washingtonpost.com/sf/brand-connect/wp/enterprise/armored-against-attack/.

[57]. Aliya Sternstein, “The Navy Is Trying To Hack-Proof Its Drones,” Defense One, 20 May 2015, accessed 27 December 2015, http://www.defenseone.com/technology/2015/05/navy-trying-hack-proof-its-drones/113311/.

[58]. Ibid.

[59]. Scott Peterson and Payam Faramarzi, “Exclusive: Iran hijacked US drone, says Iranian engineer,” Christian Science Monitor, 15 December 2011, accessed 27 December 2015, http://www.csmonitor.com/World/Middle-East/2011/1215/Exclusive-Iran-hijacked-US-drone-says-Iranian-engineer-Video.

[60]. Kim Hartmann and Christoph Steup, “The Vulnerability of UAVs to Cyber Attacks – An Approach to the Risk Assessment,” in 5th International Conference on Cyber Conflict, edited by K. Podins, J. Stinissen, and M. Maybaum (Tallinn, Estonia: NATO CCD COE Publications, 2013), 7.

[61]. “Iran says it won’t return U.S. drone,” USA Today, 11 December 2011, accessed 27 December 2015, http://usatoday30.usatoday.com/news/world/story/2011-12-11/iran-us-drone/51795672/1.

[62]. Ibid., 8.

[63]. Ibid., 6.

[64]. Alan Kim, Brandon Wampler, James Goppert, Inseok Hwang and Hal Aldridge, “Cyber Attack Vulnerabilities Analysis for Unmanned Aerial Vehicles,” Infotech@Aerospace Conference (June 2012), 7, doi: 10.2514/MIAA12.

[65]. Ibid., 6.

[66]. André Haider, “Remotely Piloted Aircraft Systems in Contested Environments: A Vulnerability Analysis,” Joint Air Power Competence Center (September 2014), 75, accessed 27 December 2015, http://www.japcc.org/wp-content/uploads/2015/03/JAPCC-RPAS-Operations-in-Contested-Environments.pdf.

[67]. Ibid., 5.

[68]. Ibid., 46.

[69]. Ibid.

[70]. Haider, 5.

[71]. Ibid.

[72]. Ibid.

[73]. John Gordon IV and John Matsumura, “The Army’s Role in Overcoming Anti-Access and Area Denial Challenges,” RAND Corporation (2013), 18.

[74]. Joint Operational Access Concept (JOAC), (Washington, DC: Joint Staff, 17 January 2012), 18.

[75]. JP 3-12R, Cyberspace Operations (Washington, DC: Joint Staff, 5 February 2013), I-5.

[76]. Peter W. Singer and Allan Friedman, Cybersecurity and Cyberwar: What Everyone Needs to Know (Kindle Edition: Oxford University Press, 2013), 14.

[77]. Franklin D. Kramer, Stuart H. Starr, and Larry Wentz, Cyberpower and National Security (Kindle Edition: Potomac Books, 2009), 301.

[78]. Jeff Smith, “A Unified Field Theory for Full-Spectrum Operations: Cyberpower and the Cognitive Domain,” in Military Perspectives on Cyberpower, edited by Larry K. Wentz, Charles L. Barry, and Stuart H. Starr (Washington DC: The Center for Technology and National Security Policy at the National Defense University, July 2009), 41.

[79]. Ibid., 70.

[80]. Ibid., 61.


