This is the first comprehensive overview of the 'science of science,' an emerging interdisciplinary field that relies on big data to unveil the reproducible patterns that govern individual scientific careers and the workings of science. It explores the roots of scientific impact, the role of productivity and creativity, when and what kind of collaborations are effective, the impact of failure and success in a scientific career, and what metrics can tell us about the fundamental workings of science. The book relies on data to draw actionable insights, which can be applied by individuals to further their careers or by decision makers to enhance the role of science in society. With anecdotes and detailed, easy-to-follow explanations of the research, this book is accessible to all scientists and graduate students, policymakers, and administrators with an interest in the wider scientific enterprise.
The book is very interesting, filled with research I should have already read, and it is all very relevant to my own. However, I felt it fell short in quite a few respects. There is a strong focus on individuals, at the expense of the collective nature of science (not just teams, but science as a team of teams). There are several comments about the shortcomings of some of the measures (e.g. the h-index), about the approaches developed (e.g. citations not being the best measure of impact), and about the many biases that might confound the analyses; however, little seems to be done to cope with them. There are mentions of the mathematical mechanisms generating the patterns observed in the data, but the full explanation is skipped over (as opposed to, e.g., Network Science by one of the authors). There is little to no acknowledgment of overlapping fields (science and technology studies, open science and its critiques, etc.). So, The Science of Science is a necessary book, opening up this fascinating field and its implications and showcasing truly beautiful approaches (the big vs. small team visualizations are just amazing), but there is so much more to tackle, and I fear some of the reductionist shortcuts, here used "for convenience", will haunt us for a long time.
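For readers who have not met the h-index this review mentions, here is a minimal Python sketch of how it is usually computed; the citation counts are made up purely for illustration.

# Minimal sketch of the h-index: the largest h such that an author has
# h papers with at least h citations each. Citation counts below are
# hypothetical, purely for illustration.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([42, 18, 10, 7, 7, 3, 1, 0]))  # prints 5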
Network science applied to science and bibliometrics. Wang and Barabási present a series of questions regarding impact, careers, and collaboration over a scientist's career, and try to understand and measure the effects and evolution of different phenomena. Do people collaborate more now than before? Do scientists make their biggest impact in their youth?
In general this book is excellent; I have only two concerns. First, at least 10+ years ago the available data wasn't that good in terms of consistency, accuracy, etc. I wonder whether the authors have better data or better data-cleaning techniques, or whether they took the data as is and based their conclusions on that. Second, I would have liked to know more about detecting the vices at different stages of the scientific workflow (e.g., peer review, forced citations, etc.). Those are also valid and important questions when trying to understand the scientific process.
As a first survey of science of science research, this book does exactly the job it needs to. It offers a well structured outline of areas of study as they've developed from primary interest to more obscure areas of research. It makes ample analogies to other sorts of social and cultural phenomena to justify where science of science could pull inspiration or show distinctions. It pays its dues to the primary theories and dominant empirical claims.
Essentially this book acts as a literature review of the science of science, and it does this job well. However, in many ways, it is only a "draft" of a textbook. It only touches on theories without much depth and moves on. It doesn't particularly teach methods or theory so much as it shows you where they can be found, and from which context they speak.
My primary critique of this book is that it often seems to overstate what the science of science "knows." Under many circumstances, the connections made between empirical studies imply theories or normative claims that either do not exist or might be problematic interpretations of scientific development and its culture. This leaves me feeling as if the science of science is both a mostly unfinished puzzle and in need of ethical development. However, the fact that it leaves me with questions and strong responses in these cases is actually not so negative for the area of research itself. It does what the book intends: to draw interest in further development. And for that, this book is a success.
I read the whole book in less than 24 hours. I don't know how much this might bias my judgement. The trick is that I already knew most of the papers mentioned, so I went very fast.
This is likely the best compendium of the main quantitative theories on the sociology of scientific research. It covers everything that the experts in the field know. Even though it is rich in diagrams and numbers, it is a friendly read, after all.
You are not going to find shocking results à la Malcolm Gladwell, and sometimes the book will leave you with findings that are not always coherent. How can bursts in individual academic production be random while relative bursts are not? Personally, I found these kinds of odd findings fascinating.
This is not a complete book on the sociology of science. It misses everything dealing with the non-research activities in scientific careers, for example teaching, but also politics, including intra-departmental politics. So I don't like that they call it the Science of Science. I get that the title is a sort of meme, but Science cannot be conflated with research only. However, I feel this book wants to be methodologically coherent, and some topics are exceptionally hard to discuss with a strong quantitative approach, hence I conclude that this is still a very important book about Science.
Interesting, readable. About 50% of it is fun anecdotes from the history of science, and the other 50% is a survey/discussion of the field of bibliometrics (i.e. studying "science" by analyzing citation numbers, journals, etc. quantitatively).
I think what's missing in this book—and in "bibliometrics" more generally—is a level-headed discussion of "Science" as a self-organizing system susceptible to decay and cannibalization (see: autopoietic drift). Modern "Science" is a system built atop a series of feedback loops which we call "Academia": citations, academic appointments, PhD students, journal prestige, etc. This has led to an exponential increase in the number of papers published, but it also risks systemic corruption, institutional decay, and a lack of strategic integrity.
In the same way that modern electoral politics has become defined by data-driven political consulting, I am a bit concerned that this whole "bibliometrics" thing is just the manifestation of that in Science. All of these quantitative analyses of "paper citations" feel faulty to me. I wonder if Large Language Models are the death knell of the current Scientific systems. I think that's likely... For whom the bell tolls, it tolls for PhDs...
Simultaneously theoretical and practical, this book suggests that AI will make science better in the future by coming up with more hypotheses and ideas than a human scientist ever could.
The biggest impact lies within human-machine collaboration: "Indeed, if we assign tasks based on the abilities of each partner, scientists working in tandem with machines will potentially increase the rate of scientific progress dramatically, mitigate human blind spots, and in the process, revolutionize the practice of science."
As for the human scientist, the book has some statistically validated suggestions on how to cite (a good mix of classics and recent work will do it): "Papers with a judicious mix of new and canonical knowledge are twice as likely to be home runs than typical papers in the field. Therefore, while building upon cutting-edge work is key, a paper’s impact is lifted when it considers a wide horizon of research."
and how to publish (together, in a team of coauthors, rather than alone). The book offers a wealth of data and insight; I must admit that I didn't understand a big part of what I read, but I am in awe of the kind of analysis the authors do.
As scientists we are usually too focused on our research topic, and we rarely question whether we are using the best methods at hand. This book studies the scientific process itself and invites the reader to think seriously about the way we do science, backing up everything with numbers. We have basically inherited our scientific institutions from WWII, and even though a lot of progress has been made since then, we rarely question whether our methods fit the tools and reality we face today.
The book contains insights about how and when the best science is done. It questions what the best science actually is, how we measure merit in science, and much more. For each of these questions the authors walk you through the evidence and, sometimes, mathematical models. This is a great book for anyone interested in questioning how science is done and, most importantly, how it can be improved.
I think every Principal Investigator should read this as it has a lot of information dealing with the process of science itself, its rhythms, and its improvement.
Very readable book on the quantitative study of science, primarily through citations. I learned a lot, but ultimately I wished that the insights from citations had been complemented with insights from other ways of studying science.
For someone having or considering a career in science this is easily a 5/5. For others it might have a more limited value, with some parts interesting in general and other parts suited for a rather specific audience.
This was really interesting. It may be full of formulas and complex graphs, but if you don't feel up to deciphering logarithmic scales, a layman's explanation is always given, and it's full of illustrative anecdotes. Glad I finally picked it up from the shelf.