
Draft:Real-time Adversarial Intelligence and Decision-making (RAID)

From Wikipedia, the free encyclopedia



Real-time Adversarial Intelligence and Decision-making (RAID) was a DARPA research program, and the eponymous prototype software system. The program developed computational methods to infer the intent and future actions of an adversary based on partial and potentially deceptive observations of prior events on the battlefield.[1][2][3]


The RAID program produced a technology called LG-RAID, which found applications in the United States Army, the United States Marine Corps, and other military organizations of the United States.[4][5][6]

History


A history of DARPA contributions to knowledge representation and reasoning depicts RAID as one of the DARPA programs in the Cognitive Systems area of the 2000s, along with others such as the JAGUAR program.[7][8]

Related programs of the same period, intended to develop tools for the automation of military decision-making, included Deep Green and CADET.[9]

The RAID program started in 2004 and ended in 2008.[3]

After the RAID program ended, one of its outputs, the LG-RAID technology, continued to be developed in follow-on projects. By 2021, the United States Army, Navy, Air Force, DARPA, and the Missile Defense Agency had funded a total of 18 projects that researched the use of LG-RAID technology in planning, wargaming, predicting enemy actions, and estimating the results of a military operation.[6][10]


Operation


RAID was intended to be used by the commander and staff of a United States Army unit, such as a reinforced company, battalion, or brigade combat team, during the execution of a military operation. The program focused on tactical combat of infantry (supported by armor and air platforms) against a guerrilla-like enemy force in urban terrain.[11][12][7]


The RAID software resided on a laptop computer. Using the computer, the Blue unit's commander or staff officers entered the input to RAID; alternatively, this input arrived from upstream computer systems. A large fraction of the input changed dynamically and arrived at unpredictable moments as the combat mission was being executed. The input consisted of:[11]

  • the Blue force composition;
  • the Blue mission plan;
  • 3D map of the urban area;
  • known concentrations of noncombatants such as markets;
  • culturally sensitive areas such as worship houses;
  • continuous updates on the locations and status of the Blue force;
  • Blue forces’ reports (electronic, verbal or textual) regarding the observed positions and strength of the Red force, or fires received from the Red force.

Taking this input, RAID’s algorithms automatically produced the following outputs:[13]

  • estimated actual locations and strength of the Red force (note that these are normally concealed and are not observed by the Blue force);
  • the current intent of the Red force;
  • potential deceptions that the Red force may be performing;
  • the Red force’s future locations (as a function of time);
  • anticipated (as a function of time) future Red fires against the Blue force;
  • recommendations (a recommended course of action) to the Blue force on how to prevent or parry the anticipated actions of the Red force.
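The input/output description above can be summarized, purely as an illustration, with hypothetical data structures; all field and class names here are assumptions for the sketch, not the actual RAID interfaces:

```python
from dataclasses import dataclass


@dataclass
class BlueReport:
    """A Blue force report about observed Red activity (illustrative)."""
    time: float                 # mission time, in minutes
    location: tuple             # (x, y) grid position of the observed Red element
    estimated_strength: int     # estimated number of Red combatants observed


@dataclass
class RaidInput:
    """Dynamic input to the tool, per the list above (field names hypothetical)."""
    blue_composition: list      # the Blue force composition
    blue_plan: list             # the Blue mission plan
    urban_map_3d: object        # 3D map of the urban area
    noncombatant_areas: list    # known concentrations of noncombatants
    sensitive_areas: list       # culturally sensitive areas
    blue_locations: dict        # unit id -> (x, y), continuously updated
    reports: list               # list[BlueReport], arriving at unpredictable times


@dataclass
class RaidOutput:
    """Estimates and recommendations produced from the input (illustrative)."""
    red_locations: dict         # estimated actual Red positions and strength
    red_intent: str             # inferred current Red intent
    possible_deceptions: list   # deceptions Red may be performing
    red_future_locations: dict  # time -> predicted Red positions
    red_future_fires: dict      # time -> anticipated fires against Blue
    recommended_coa: list       # recommended Blue course of action
```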

Technologies


The RAID program explored several algorithmic approaches.[13] One group of algorithms estimated the “human” aspects of battlefield behaviors with a cognitive model, using a Bayesian belief network.[1]
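The Bayesian approach can be illustrated with a minimal sketch: a hidden "Red intent" variable is updated by Bayes' rule as observations arrive. The intents, priors, and likelihoods below are invented for illustration; the actual RAID models are not described in the cited sources.

```python
# Minimal Bayesian update over a hidden "Red intent" variable.
# All probabilities are invented for illustration.
PRIORS = {"ambush": 0.3, "defend": 0.5, "withdraw": 0.2}

# P(observation | intent) for two observable battlefield cues.
LIKELIHOODS = {
    "movement_toward_blue": {"ambush": 0.7, "defend": 0.2, "withdraw": 0.1},
    "digging_in":           {"ambush": 0.2, "defend": 0.7, "withdraw": 0.1},
}

def posterior(observations):
    """Return P(intent | observations), assuming conditionally independent cues."""
    scores = dict(PRIORS)
    for obs in observations:
        for intent in scores:
            scores[intent] *= LIKELIHOODS[obs][intent]
    total = sum(scores.values())
    return {intent: p / total for intent, p in scores.items()}

beliefs = posterior(["movement_toward_blue"])
# after this observation, "ambush" becomes the most probable intent
```

A full belief network would chain many such conditional tables together; this sketch shows only the core update step.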

Another group of algorithms estimated probable Red deceptions.[1][12][7]

Another algorithm performed fast heuristic game solving.[14] This algorithm eventually led to a technology called LG-RAID.[15][16][7]
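Fast heuristic game solving can be illustrated by depth-limited minimax on a toy pursuit grid: rather than searching the game to its end, the search is cut off at a shallow depth and positions are scored by a heuristic. The grid, moves, and heuristic here are illustrative assumptions; this is not the LG-RAID algorithm.

```python
def heuristic(blue, red):
    """Score a position for Blue: closer to Red is better (illustrative)."""
    return -(abs(blue[0] - red[0]) + abs(blue[1] - red[1]))

def moves(pos, size=5):
    """Legal one-step moves on a size x size grid."""
    x, y = pos
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]

def minimax(blue, red, depth, blue_to_move=True):
    """Depth-limited minimax: Blue maximizes, Red minimizes the heuristic."""
    if depth == 0 or blue == red:
        return heuristic(blue, red)
    if blue_to_move:
        return max(minimax(b, red, depth - 1, False) for b in moves(blue))
    return min(minimax(blue, r, depth - 1, True) for r in moves(red))

def best_blue_move(blue, red, depth=3):
    """Pick Blue's move with the best minimax value at the cutoff depth."""
    return max(moves(blue), key=lambda b: minimax(b, red, depth - 1, False))
```

The cutoff depth trades solution quality for speed, which is what makes such search usable in real time.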

Evaluation


The RAID program performed a number of evaluation experiments. Some of the series of experiments consisted of multiple wargames executed by live (human) Red and Blue commanders in a simulated computer wargaming environment called OneSAF.[17] In half of the wargames, the Blue commander received the support of a team of competent human assistants (staff) whose responsibilities included producing estimates of the Red locations and intended future actions; these wargames constituted the control group. In the other half, the Blue commander operated without a human staff and instead obtained similar support from the RAID tool; these wargames constituted the test group. In these series of experiments, RAID generally outperformed humans: it was more accurate in estimating the current and future locations of Red forces, and when a commander used RAID’s suggestions, he won a higher percentage of battles than when he was assisted by human staff.[2][11][7]

The RAID system was also used in realistic military exercises via an interface to the Force XXI Battle Command Brigade and Below (FBCB2) system.[18]

Another series of evaluations focused on the suitability of the tool to the cognitive capabilities of the users; it identified several important requirements for improving the user interfaces.[15]

Legacy


During and after the RAID program, military organizations of the United States (the Army, Navy, Air Force, DARPA, and the Missile Defense Agency) initiated a number of programs (a total of 18 by 2021) that used LG-RAID, a technology developed in the RAID program, in planning, wargaming, predicting enemy actions, and estimating the results of a military operation.[4][19][20] The technology was also integrated with several commercial and government-owned systems used for military analysis, wargaming, and decision-making.[21][22][23]

Criticisms


The RAID program was questioned because “...machine intelligence may not be the perfect match for the realm of war for the very reason that it remains a human realm, even with machines fighting in it,” and because it may tempt the commander to micromanage subordinates.[24]

A concern was expressed about the possibility of using technologies like RAID in decisions pertaining to nuclear conflicts, where artificial intelligence might mislead a decision-maker into an incorrect assessment of risks.[25]

Similarly, it was hypothesized that AI-enabled tools like RAID could be destabilizing if humans trust the AI as a panacea for the cognitive fallibility of human analysis.[26]


Category:Artificial intelligence engineering Category:Automated planning and scheduling Category:Military intelligence Category:Command and control Category:Military exercises and wargames Category:Military plans

References

  1. ^ a b c Schubert, J., Brynielsson, J., Nilsson, M., Svenmarck, P., “Artificial Intelligence for Decision Support in Command and Control Systems”, 23rd International Command and Control Research & Technology Symposium, Pensacola, FL, USA, November 2018. Online at https://foi.se/download/18.41db20b3168815026e010/1548412090368/Artificial-intelligence-decision_FOI-S--5904--SE.pdf
  2. ^ a b The Economist, “Artificial intelligence is changing every aspect of war,” September 7, 2019
  3. ^ a b "Real-time Adversarial Intelligence and Decision-making (RAID) - Federal Grant".
  4. ^ a b Stevens, Jonathan, Ms Latika Eifert, Stephen R. Serge, and Sean Mondesire. "Training Effectiveness Evaluation of Lightweight Game-based Constructive Simulation." ModSim Conference, 2016. Online at https://www.modsimworld.org/papers/2016/Training_Effectiveness_Evaluation_of_Lightweight_Game-based_Constructive_Simulation.pdf
  5. ^ https://www.baesystems.com/en/story/bae-systems-prototype-selected-for-us-marine-corps-wargaming-and-analysis-center
  6. ^ a b "Firm | SBIR".
  7. ^ a b c d e Fikes, R, Garvey, T., “Knowledge Representation and Reasoning — A History of DARPA Leadership,” AI Magazine, Vol. 41 No. 2: Summer 2020. DOI: https://doi.org/10.1609/aimag.v41i2.5295
  8. ^ "Lockheed Martin Technology to Help Streamline Air Operations Centers".
  9. ^ Bråthen, K. (2022) “Krigsspill i operasjonsplanlegging: Hva kan datasimuleringer bidra med?”, Scandinavian Journal of Military Studies, 5(1), p. 309–322. Available at: https://doi.org/10.31374/sjms.129
  10. ^ https://navystp.com/vtm/open_file?type=quad&id=N00178-17-C-7000
  11. ^ a b c Kott, Alexander, Rajdeep Singh, William M. McEneaney, and Wes Milks. "Hypothesis-driven information fusion in adversarial, deceptive environments." Information Fusion 12, no. 2 (2011): 131-144. Online at https://www.sciencedirect.com/science/article/abs/pii/S1566253510000771
  12. ^ a b D. H. Hagos and D. B. Rawat, "Neuro-Symbolic AI for Military Applications," in IEEE Transactions on Artificial Intelligence, vol. 5, no. 12, pp. 6012-6026, Dec. 2024, doi: 10.1109/TAI.2024.3444746. Online at https://ieeexplore.ieee.org/document/10638797
  13. ^ a b Tekin, E., “Assessing Artificial Intelligence’s Military Application in Urban War: A Study of the Israel Defense Forces Operations Since 2021,” Thesis, The George Washington University, May 2024. Online at https://scholarspace.library.gwu.edu/downloads/sq87bv679?disposition=inline&locale=en
  14. ^ Indo-Pacific Defence Forum, “U.S. strategy seeks to promote an international environment that supports AI research and development”, June 15, 2020. Online at https://ipdefenseforum.com/2020/06/artificial-intelligence/
  15. ^ a b Serge, S. A., J. A. Stevens and L. Eifert, "Make it usable: Highlighting the importance of improving the intuitiveness and usability of a computer-based training simulation," 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, 2015, pp. 1056-1067. Online at https://www.informs-sim.org/wsc15papers/124.pdf
  16. ^ "BAE Systems' prototype selected for U.S. Marine Corps Wargaming and Analysis Center - Military Embedded Systems".
  17. ^ "OneSAF Description".
  18. ^ "Stilman to apply DARPA RAID technology to Army FBCB2 battle-management program". 21 November 2007.
  19. ^ "BAE Systems' prototype selected for U.S. Marine Corps Wargaming and Analysis Center - Military Embedded Systems".
  20. ^ "Firm | SBIR".
  21. ^ https://www.palantir.com/assets/xrfr7uokpv1b/1JY2lIPKMepo0RqqYJhYEJ/1e2119d4b10eac935f608f26bbdfdc10/Stilman_Palantir_LG-RAID_Munition__1_.pdf
  22. ^ Breeden, J., “Adding generative AI to wargame training can improve realism, but not without risk”, NextGov, February 3, 2024. Online at https://www.nextgov.com/artificial-intelligence/2024/02/adding-generative-ai-wargame-training-can-improve-realism-not-without-risk/394121/
  23. ^ "About 4".
  24. ^ P. W. Singer, “Tactical Generals: Leaders, Technology, and the Perils of Battlefield Micromanagement,” Australian Army Journal, v.6, n.3, November 2009. Online at https://researchcentre.army.gov.au/library/australian-army-journal-aaj/volume-6-number-3-adaptive-army
  25. ^ Rautenbach, P., “Machine Learning and Nuclear Command: How the technical flaws of automated systems and a changing human-machine relationship could impact the risk of inadvertent nuclear use,” Report, Cambridge Existential Risks Initiative, November 9, 2022. Online at https://forum.effectivealtruism.org/posts/BGFk3fZF36i7kpwWM/artificial-intelligence-and-nuclear-command-control-and-1
  26. ^ Johnson, J. (2020). Delegating strategic decision-making to machines: Dr. Strangelove Redux? Journal of Strategic Studies, 45(3), 439–477. Online at https://doi.org/10.1080/01402390.2020.1759038