360 pages, Paperback
Published April 23, 2024
This book is composed of a series of chapters written by different researchers and philosophers, each as a self-contained scientific paper. They deal with the impact of AI on modern research and with the limits of current AI implementations. There is a fifteen-page introduction, and already here we can see the main problem affecting the book, as it affects most other publications about AI. There has been a lot of progress in many fields, not only in the knowledge and in the sophistication of the insights that each branch of science can offer, but also in the methodologies, the tools and the frameworks. All this progress, combined with the internet and modern telecommunication technologies, is bringing dramatic changes. There is a strong tendency to give too much credit to the role of AI in this evolution. At some points the arguments in the book repeat the same false claims that the tech industry has been spreading in the last few years. Often it is not the fault of the authors themselves: they have to cite the current literature, and this unfortunately shows the power that the tech corporations have over the academic world and how much they can distort current research. The false start becomes even more visible when the introduction reports, with little criticism, the claim that some AI systems have now become capable of qualitative reasoning. It overlooks that detecting some qualities and using them as features is very far from a reasoning process that can understand qualities with all the implications and correlations that come with them, very far from a reasoning process able to execute complex, multi-step inference that takes into account the full picture that qualities, with all their correlations, can create.
The first chapter, or paper, begins with some reasonable arguments, debunking some myths and pointing out some limits of what is called AI. It begins to miss the point when it gets to the adoption of AI by the academic world. Here the main, overlooked problem, rather than AI itself, is the adoption of common frameworks developed by big tech and the systematic use of basic, standardised calculation and simulation functions. The risk is that of reducing fundamental research to the iterative application of the same methodologies, and the same starting assumptions, adopted by most research groups around the world. This allows the big tech corporations to steer academic research in the directions most suitable to their needs. It may push science and technology forward, but it will also leave many paths unexplored.
Then it falls even further, dragged into the same big tech misrepresentations, by mixing AI with the dangers of advanced sociological studies powered by big data. Talking about data colonialism, it repeats the same stories about biased data and AI applications that big tech has publicised for a long time. The author does not realise that the problems of bias are used to cover up problems that are far more serious, like information asymmetry, which dramatically alters the balance of power between the corporation and the individual. Moreover, biased algorithms are often used as an excuse to cover up blatant discrimination.
At this point I have to add a disclaimer and warn the reader that I may be a bit biased. The arguments that lead to my disagreement with the authors start from the ideas that I expressed in my own book. Therefore it is no surprise that I am even more sceptical when the chapter talks of the impact of ChatGPT and other AI tools on future jobs. The evolution of traditional software engineering is already having a much bigger impact, and ChatGPT depends too much on the thousands of underpaid mechanical turks working from low-cost African countries to be a guaranteed long-term development. I am even more sceptical of the claim that:
"current LLMs are cheaper and already produce comparable output compared to human labellers ..."
The second paper is a bit irrelevant, but it is worth reading. It is a theoretical argument about the limits of the mathematics at the base of machine learning and AI. It explains why a machine learning model cannot perfectly learn all the knowledge available from the input data. It argues that this happens because the learning process is based on iterative approximations that will never converge to the final result.
The argument may be right, but since we do not know how the human mind works, we do not know whether our own minds have the same limitations. We cannot say whether an imperfect system can or cannot achieve an advanced level of intelligence. Such an achievement might depend more on the architecture of the specific model, but this is a different topic, and a difficult one, since current AI models are even more primitive than the neural systems of most of the animals that roamed the Earth five hundred million years ago.
The third paper provides a brief review of the historical development of AI, beginning from the early '50s, in order to better understand what is today called AI. It is well balanced in comparison with many other chapters in the book. There are many interesting insights, and it points out some of the modern myths. However, even this chapter fails, in the end, to point out how the current literature creates a lot of confusion over what is and what is not AI.
Then follows a string of papers with little to mention. They follow the current fashion of describing all the progress in the scientific world as new AI tools.
One paper deals with the development of early warning systems. It also describes the inclusion of other animals, as sensors that add their own kind of intelligence, making the systems even more complex. However, considering that these systems are the evolution of old anomaly detection algorithms, coupled with infrastructure that expanded thanks to modern technologies, the contribution of AI is negligible.
Another paper provides a second historical review, but this one gives too much credit to the early claims that started during the late '50s and promised that thinking machines were coming soon. The scepticism that followed is described as a result of the disappointment caused by the early failures. But in reality nobody in the academic world took those claims seriously; they were publicised by the media in order to grab the attention of the public. The following period was called the winter of AI by the same media that had given too much credit to the early claims.
Another paper provides a theoretical discussion of patterns, first in the academic world and then in their application to the design of AI systems. After that it talks about security systems as an example of a real-world implementation of the theories. Here it shows how, when it comes to real-life implementations, AI is really at an early stage, full of flaws and able to provide only some very specialised functions that require humans to intervene in the low-level design.
A paper about the adoption of AI in the healthcare sector cites all the big promises made by the tech industry. Then it acknowledges that those promises are very far from being fulfilled and that the actual adoption rate is tiny. Thereafter it lists the usual excuses for the lack of working implementations, pretending not to see that, as of now, conventional approaches are simply more advanced and more effective at addressing the current problems.
From the ninth paper onwards the quality quickly degrades. Many papers are supposed to investigate the impact of AI on current and future research, but what they really show is the grip that private corporations have on the academic world. Here the drumming of the usual big tech claims that dominate mainstream media increases noticeably. Also very much in evidence are the same fears about secondary issues that distract attention from the serious problems. It degrades badly in the final paragraphs, where the interviewee, Sybille Krämer, plays with rhetoric by reversing the truth:
"In fact, critical humanists like to focus on the ideologizations and mythicizations, ...."
"It is not the intelligence and rationality of machines that we have to fear, but the irrationality of people."