240 pages, Paperback
Published September 24, 2024
Seeing the power and versatility of LLMs, some people claim that they are the first steps toward AGI. I disagree, and not only because we lack a good definition of AGI. LLMs have plenty of shortcomings: they are generally bad at reasoning and planning, can't really do math on their own, and, perhaps most importantly, are extremely unreliable. Even the best LLMs hallucinate frequently in certain situations, which makes them unsuitable as standalone tools for many applications. However, it is possible to build systems around LLMs that amplify their power in various ways.
In fact, I think it would be best if we all simply stopped talking about AGI. The topic leads us astray from the more important questions, which concern particular applications of AI technology and their consequences for society.