Beyond Quantity: Research with Subsymbolic AI

How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches of subsymbolic AI, which masters problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations and how must research on AI be configured to address them adequately?

360 pages, Paperback

Published April 23, 2024




Community Reviews

5 stars: 0 (0%)
4 stars: 0 (0%)
3 stars: 0 (0%)
2 stars: 1 (100%)
1 star: 0 (0%)
Displaying 1 of 1 review
Author of 2 books
April 15, 2025

This book is composed of a series of chapters written by different researchers and philosophers, each a self-contained scientific paper. They deal with the impact of AI on modern research and with the limits of current AI implementations. There is a fifteen-page introduction, and already here we can see the main problem affecting the book, as it affects most other publications about AI. There has been a lot of progress in many fields: not only in the knowledge and sophistication of the insights that each branch of science can offer, but also in methodologies, tools, and frameworks. All this progress, combined with the internet and modern telecommunication technologies, is bringing dramatic changes. There is a strong tendency to give too much credit to the role of AI in this evolution. At some points the arguments in the book follow the same false claims that the tech industry has been spreading over the last few years. Often this is not the fault of the authors themselves: they have to cite the current literature, and this unfortunately shows the power that tech corporations hold over the academic world and how much they can distort current research. The false start becomes even more visible when the introduction reports, with little criticism, the claim that some AI systems have now become capable of qualitative reasoning. It overlooks that detecting some qualities and using them as features is very far from a reasoning process that understands qualities with all their implications and correlations, and very far from a reasoning process that can execute complex, multi-step inference taking into account the full picture that qualities and their correlations create.


___________________________________________

The first chapter, or paper, begins with some reasonable arguments, debunking some myths and pointing out some limits of what is called AI. It begins to miss the point when it gets to the adoption of AI by the academic world. Here the main, overlooked problem is not AI itself but the adoption of common frameworks developed by big tech and the systematic use of basic, standardised calculation and simulation functions. The risk is that of reducing fundamental research to the iterative application of the same methodologies, and the same starting assumptions, adopted by most research groups around the world. This allows the big tech corporations to steer academic research in the directions most suitable to their needs. It may push science and technology forward, but it will also leave many paths unexplored.


Then it falls even further, dragged into the same big tech misrepresentations by mixing AI with the danger of advanced sociological studies powered by big data. In its discussion of data colonialism it repeats the same stories about biased data and AI applications that big tech has publicised for a long time. The author does not realise that the problems of bias are used to cover up problems that are far more serious, such as information asymmetry, which dramatically alters the balance of power between the corporation and the individual. Moreover, biased algorithms are often used as an excuse to cover up blatant discrimination.


At this point I have to add a disclaimer and warn the reader that I may be a bit biased. The arguments that led to my disagreement with the authors start from ideas I expressed in my own book. It is therefore no surprise that I am even more sceptical when the book discusses the impact of ChatGPT and other AI tools on future jobs. The evolution of traditional software engineering is already having a much bigger impact, and ChatGPT depends too much on thousands of underpaid mechanical turks working from low-cost African countries to be a guaranteed long-term development. I am even more sceptical of the claim that:

"current LLMs are cheaper and already produce comparable output compared to human labellers ..."

Does the author take into account the cost of the energy needed to power the clusters of computers that run an artificial neural network made up of billions of parameters? What about the energy required to train them? What about the energy required to gather the humongous amount of data needed for the training? What about the cost of manufacturing the computers and all the data storage? LLMs today are cheap because they are subsidised.
___________________________________________

The second paper is a bit off-topic, but it is worth reading. It is a theoretical argument about the limits of the mathematics at the base of machine learning and AI. It explains why a machine learning model cannot perfectly learn all the knowledge available from its input data, arguing that this happens because the learning process is based on iterative approximations that never converge to the final result.
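The convergence argument can be illustrated with a toy sketch of my own (not an example from the book): gradient descent on f(x) = x² shrinks the error by a constant factor at each step, so after any finite number of iterations the estimate is very close to, but never exactly equal to, the true minimum.

```python
# Toy illustration: an iterative approximation that approaches
# but never exactly reaches the answer it is converging to.
# Minimise f(x) = x**2 with plain gradient descent; the true minimum is x = 0.

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        grad = 2 * x          # derivative of x**2
        x = x - lr * grad     # each step multiplies x by (1 - 2*lr) = 0.8
    return x

x = gradient_descent(x0=1.0)
print(abs(x) < 1e-9)  # True: extremely close to the minimum ...
print(x != 0.0)       # True: ... but never exactly equal to it
```

After 100 steps the estimate is 0.8¹⁰⁰ ≈ 2·10⁻¹⁰: tiny, yet still nonzero, which is the sense in which the iteration never "finishes".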


The argument may be right, but since we do not know how the human mind works, we do not know whether our own minds have the same limitations. We cannot say whether an imperfect system can or cannot achieve an advanced level of intelligence. Such an achievement might depend more on the architecture of the specific model, but this is a different topic, and a difficult one, since current AI models are even more primitive than the neural systems of most of the animals that roamed the Earth five hundred million years ago.


___________________________________________

The third paper provides a brief review of the historical development of AI, beginning from the early '50s, in order to better understand what is today called AI. It is well balanced in comparison with many other chapters in the book. There are many interesting insights, and it points out some of the modern myths. However, even this chapter fails in the end to point out how the current literature creates a lot of confusion over what is and what is not AI.


___________________________________________

Then follows a string of papers with little to mention. They follow the current line that describes all progress in the scientific world as new AI tools.


One paper deals with the development of early warning systems. It also describes the inclusion of other animals as sensors that add their own kind of intelligence, making the systems even more complicated. However, considering that these systems are the evolution of old anomaly detection algorithms, coupled with the infrastructure that modern technologies can provide, the contribution of AI is negligible.
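For context, the "old anomaly detection algorithms" behind such systems are often nothing more exotic than a statistical threshold test. A minimal sketch of my own (not taken from the paper): flag any sensor reading that deviates from a baseline mean by more than three standard deviations.

```python
# Classic statistical anomaly detection: flag readings more than
# n_sigma standard deviations away from the mean of a baseline window.
from statistics import mean, stdev

def detect_anomalies(baseline, readings, n_sigma=3.0):
    mu = mean(baseline)
    sigma = stdev(baseline)  # sample standard deviation of the baseline
    return [r for r in readings if abs(r - mu) > n_sigma * sigma]

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9]
print(detect_anomalies(baseline, [10.0, 10.3, 14.7, 9.8]))  # → [14.7]
```

Techniques of this family predate the current AI wave by decades; what modern infrastructure adds is mostly scale, more sensors, and faster data collection.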


Another paper provides a second historical review, but this one gives too much credit to the early claims that started during the late '50s and promised that thinking machines would be coming soon. The scepticism that followed is described as the result of disappointment caused by the early failures. In reality, nobody in the academic world took those claims seriously; they were publicised by the media in order to grab the attention of the public. The following period was called the AI winter by the same media that had given too much credit to the early claims.


A paper provides a theoretical discussion of patterns, first in the academic world and then in their application to the design of AI systems. After that it discusses security systems as an example of a real-world implementation of the theories. Here it shows how, when it comes to real-life implementations, AI is still at an early stage: full of flaws, and able to provide only some very specialised functions that require humans to intervene in the low-level design.


A paper about the adoption of AI in the healthcare sector cites all the big promises made by the tech industry. It then acknowledges that those promises are very far from being fulfilled and that the actual adoption rate is tiny. Thereafter it lists the usual excuses for the lack of working implementations, pretending not to see that, as of now, conventional approaches are simply more advanced and more effective at addressing the current problems.


___________________________________________

Then comes the eighth paper, titled "Subsymbolic, hybrid and explainable AI". It is one of the few papers whose subject matches the title of the book. Although it starts from a practical example, it is highly theoretical. Without the usual fantastic stories, it explores a possible development path for the evolution of the current technologies, advocating a hybrid approach that mixes different machine learning models and algorithms. It gives a glimpse of future solutions that differ from the ChatGPT-style neural networks with billions of parameters, which work like big monolithic black boxes: they take one input and give one output, providing little or no understanding of how that output was actually produced.

The proposed hybrid approach requires a lot more effort to design proper architectures, but I agree with the author that it gives a better overview of what the tools actually understand and how they build their knowledge. In the end it might be a worthwhile effort: fiddling with the inner parts could provide new insights and new ideas.
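To make the hybrid idea concrete, here is a minimal sketch of my own (not the author's architecture): a stand-in learned score combined with explicit symbolic rules, so that every decision can be traced back to the rule that produced it, unlike the monolithic black-box case.

```python
# Minimal hybrid sketch: a (stand-in) learned score plus symbolic rules.
# The point is inspectability: the decision records *which* rule fired.

def learned_score(text):
    # Stand-in for a trained subsymbolic model: a crude keyword score.
    spam_words = {"win", "free", "prize"}
    words = text.lower().split()
    return sum(w in spam_words for w in words) / max(len(words), 1)

RULES = [
    ("trusted sender",  lambda msg: msg["sender"].endswith("@example.org"), "ham"),
    ("high spam score", lambda msg: learned_score(msg["body"]) > 0.3,       "spam"),
]

def classify(msg):
    for name, condition, label in RULES:
        if condition(msg):
            return label, name  # the fired rule explains the decision
    return "ham", "default"

msg = {"sender": "x@shady.biz", "body": "Win a free prize now"}
print(classify(msg))  # → ('spam', 'high spam score')
```

The extra design effort goes into writing and ordering the rules, but the payoff is that the output carries its own explanation, which a single end-to-end network does not provide.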
___________________________________________

From the ninth paper onwards the quality quickly degrades. Many papers are supposed to investigate the impact of AI on current and future research, but what they really show is the grip that private corporations have on the academic world. Here the drumming of the usual big tech claims that dominate the mainstream media increases noticeably. Also in evidence are the same fears, about secondary issues, that distract attention from the serious problems. It degrades badly in the final paragraphs, where the interviewee, Sybille Krämer, plays with rhetoric by reversing the truth:

In fact, critical humanists like to focus on the ideologizations and mythicizations, ....

It is not the intelligence and rationality of machines that we have to fear, but the irrationality of people.

Is that not the game played by Big Tech and their pundits?
___________________________________________

In the end, the book left me with the feeling that the information dominating the media and the literature today needs a sobering message. Electronic computation is not AI. Digital infrastructure is not AI. Statistical functions are not AI.
___________________________________________

The papers worth reading in the book are a minority. The average would bring the rating down to one star, but I am raising it to two stars to save those papers that deserve some attention.
