
The AI Wave in Defence Innovation

An international and interdisciplinary perspective on the adoption and governance of artificial intelligence (AI) and machine learning (ML) in defence and military innovation by major and middle powers. Advancements in AI and ML pose pressing questions related to evolving conceptions of military power, compliance with international humanitarian law, peace promotion, strategic stability, arms control, future operational environments, and technology races. To navigate the breadth of this AI and international security agenda, the contributors to this book include experts on AI, technology governance, and defence innovation to assess military AI strategic perspectives from major and middle AI powers alike. These include views of how the United States, China, Japan, South Korea, the European Union, and Russia see AI/ML as a technology with the potential to reshape military affairs and power structures in the broader international system. This diverse set of views aims to help elucidate key similarities and differences between AI powers in the evolving strategic context. A valuable read for scholars of security studies, public policy, and STS studies with an interest in the impacts of AI and ML technologies.

264 pages, Paperback

Published April 21, 2023


About the author

Michael Raska

8 books · 2 followers


Community Reviews

5 stars: 0 (0%)
4 stars: 1 (100%)
3 stars: 0 (0%)
2 stars: 0 (0%)
1 star: 0 (0%)
Displaying 1 of 1 review
Dr. Phoenix
213 reviews · 588 followers
January 13, 2024
A mixed bag

This was an interesting experience. Although the vast majority of chapters were useful for the research project (a chapter on military AI, and a potential forthcoming title with Palgrave Macmillan), other chapters were less useful and, in one case, sleep-inducing.

One advantage of reading a chapter you do not enjoy is learning how not to repeat such errors in one's own future efforts.

Some of these chapters stood out and were extremely helpful, informative, and even enjoyable. The chapters on artificial intelligence in warfare provided some unique perspectives as did the chapter on convergent technologies and risk assessment.

The chapters concerning Russian use of AI were particularly useful, although the second of them echoed much of the same material presented in the chapter preceding it.

The chapter on Chinese Military AI development was an outstanding analysis and a fascinating and informative glance into Sino-AI development.

The chapter on AI development in China and South Korea was most useful and well-written; however, it placed a bit too much emphasis on Japan at the expense of Korean development and adoption.
The chapter on EU-NATO AI development and adoption described the rather bungled and mismatched attempts of these organizations to engage AI development in a meaningful way. Given the large number of stakeholders, all with different budgets and requirements, this made absolute sense. Consensus is not a strong suit in multipolar organizations.
Chapter 5 covered US military AI development and was well-researched and well-presented, albeit with a clear Western bias.
What didn't I like or enjoy?
P. 5: At the bottom of the page, the sentence makes no sense; the editors should have caught this syntactical gaffe: "Finally, Stanley-Lockman's chapter examines the relevant US actors and [sic - how?] their efforts govern and exploit AI in accordance with broader US goals in the international system."
Chapter 1: The author announces three models and then presents them out of order. Too much emphasis on the writings of Clausewitz. We get it: Clausewitz is a well-known and highly respected tactician and strategist, but here he was squeezed like a lemon.
P. 14: Seems to conflate Clausewitz's terminology "fog of war" with "friction", treating two distinct terms as synonymous. Too much reliance on the writings of Mashur.

P. 41: In the citation by Nichols, there is a misspelling in line 3: it should read "use", not "us".
P. 42, bullet point on Spectrum Management: "...in which some frequencies become [sic] while others remain unused." (This should perhaps read "saturated" or similar, I believe.)
P. 75: The suggestions provided here are overly bureaucratic and burdensome. Just reading them is tiring and confusing; I cannot begin to imagine what adopting and implementing them would entail.
P. 76: The author quite absurdly speaks of "creating a fair playing field." This is not a major concern in armed conflict, where asymmetry equates to successful operations and superior tactical advantage.
PP. 93-94: Acronyms are used without appropriate explanation of what they mean.
P. 95: Referring to COVID as a "positive disruptor" contradicts the entire concept of human rights and ethical responsibility, and smacks of WEF manipulation.
P. 97: "Russia's war of aggression." Loaded and biased language. Some consider NATO and Western-directed provocation, and the defense of sovereign integrity, to be the primary causes of Russia's invasion. There is nothing remotely objective about such a pronouncement by the author.
P. 112: Lockman relies heavily on external sources for her analysis. While supporting material is always good, overreliance tends to diminish the author's own voice and credibility. I had difficulty discerning the tree for the forest.
P. 121: Lockman advances the US position that developing AI based upon "democratic values" is preferable, to safeguard against authoritarian use and adoption. This restrictive view automatically assumes that liberal democracies are preferable to authoritarian regimes. While there are certainly reprehensible authoritarian regimes, some states simply cannot function properly as liberal democracies, and authoritarian-style control is a preferable system; such is the case for paternal regimes. To assert that democracy is the ideal system for all is ludicrous, ethnocentric, and short-sighted. Russia and the United Arab Emirates are considered authoritarian regimes by the West, yet when measured against Western moral decay in terms of traditional values, morally and spiritually, these regimes are far more stable. They are safer, cleaner, and healthier than their current US/EU counterparts.
Chapter 6, P. 136: There is a missing word at the end of the page: "Either way, evaluation is characterized by uncertainties, costs, and failures, [sic - more?] than successes."
Chapter 8 displays a clear anti-Russian bias and lacks credibility and legitimacy. The author has close ties to the US government and, as such, lacks objectivity; a glance at the author's biography confirms this. The author, although erudite and well-informed, conveniently ignores the reality on the ground. He severely underestimates Russian AI capability due to analytical blindness and produces a cursory analysis that does not account for the secretive nature of Russian projects. Additionally, the author relies heavily on open-source intelligence for the development of his chapter, and obviously the Russian military does not readily publish vital information and sensitive intelligence. This chapter was a taxing read.
P. 186: The borderline Russophobia here is highly disappointing and academically inadequate: an ethnocentric approach that adopts a specific narrative without informing the reader. Loaded language, such as "perceived encroachment" (when this is an empirical reality), or "Russian military's bungled advance into Ukraine..." Just wow.
P. 187: Syntax error: "A military that can master such concepts can impact how societies think and respond to key developments like international crises, potentially raising the stakes [sic - for?] MOD experts like Ilnirsky and others."
While I have always stressed and expressed the importance of building AI with ethics in mind, permeating the process from research and development (R&D) through implementation, evaluation, and on throughout the lifecycle of the technology or platform, ethical considerations should not stand in isolation.
Some of these chapters, particularly those dedicated to the Australian use of AI, though well-researched, were excessively tedious. Ironically, while I was reading all the noble sentiments concerning respect for human rights espoused by Australian government entities, I could not help but reflect on their blatant disregard for said rights and their draconian measures during the COVID period. How is it that those who profess most loudly the importance of human rights are the worst violators? Anyway, by the time I had reached this final chapter of the book, I had had more than my fill of ethics.
Another important point related to the chapter on Australian AI is that it tends to be more concerned with ethical evaluation than with the actual process of development and production. Very little balance.
Despite the number of criticisms and observations presented above, I would still recommend the book as one of the few comprehensive sources of information on military AI currently available, and would recommend it to anyone interested in the military use of AI. One thing that was missing was an analysis of future trends and developments; given the nascent nature of this technology, I suspect there will be other similar titles forthcoming very soon. In closing, I wish to emphasize that I am by no means a blind Russophile, but nor do I suffer from Russophobia like so many in the West, with their unfounded fear of the Slavic Bogeyman.


