Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved.
The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. Many important decisions about human life are now influenced by AI. In giving that power to AI, we presuppose that AIs can track features of the world that we care about (for example, creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. This ground-breaking study offers insight into how to take some first steps towards achieving Interpretable AI.
Good read, outlines various externalist theories of meaning (Kripke's causal chain theory of reference, Francois Recanati's Names as Mental Files theory, the act-theoretic theory of predication of Soames and Hanks, and teleofunctionalism) in order to explain how content can be attributed to AI. A step in the attempt to develop Explainable AI. Surprisingly accessible with a basic background in philosophy of language, although I perhaps lack the context to understand exactly what issue they are trying to address.
Sums up pretty much why AI and philosophy don't and shouldn't delve into each other that much! The first half of the book is far more interesting; the second half is completely uninteresting.