Thanks to Lin who joined me at the Book Lovers’ Group on December 17th to discuss Human Compatible by Stuart Russell. The holidays and travel plans kept a few people away, but still it was a great time.
We started our meeting by discussing just how difficult it will be for AI to deal with our human complexities. Russell gives an interesting analogy of a robot trying to prepare a pizza if “you prefer plain pizza over pineapple, sausage pizza over plain, and pineapple over sausage.” There is no way for this robot to make you fully happy, because whatever it serves, there is always another topping you prefer — even though it knows you would rather have some pizza than none at all.
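Russell’s pizza example describes a preference cycle: each option loses to another, so no single choice is best. Here is a toy sketch of that idea (the topping names are just the ones from the quote; the code is illustrative, not anything from the book):

```python
# Each entry says "this topping beats that topping."
# plain > pineapple, sausage > plain, pineapple > sausage — a cycle.
prefers = {
    "plain": "pineapple",
    "sausage": "plain",
    "pineapple": "sausage",
}

def undominated(options):
    """Return the options that nothing else beats."""
    beaten = set(prefers.values())  # everything that loses to something
    return [o for o in options if o not in beaten]

# With cyclic preferences, every option is beaten by some other option,
# so the robot has no "best" pizza to serve.
print(undominated(["plain", "sausage", "pineapple"]))  # prints []
```

Because the list comes back empty, the robot literally has no undominated choice, which is why it can never fully satisfy you.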
We are so complex, and of course this makes us unique and special, but it can also make us infuriating. Even if an AI could predict our preferences perfectly, Lin and I both felt we would be annoyed by a robot that could always anticipate everything for us. We would choose something different just to resist its efforts. We also talked about how lazy we might become without any regular work to do. Even if we intended to read more or take on fun projects, it could be difficult to keep up the motivation, and we might just keep making excuses for not doing these things.
The main takeaway from Russell’s book was that, instead of defining intelligence in machines like this: “Machines are intelligent to the extent that their actions can be expected to achieve their objectives,” we should instead orient ourselves toward defining it this way: “Machines are beneficial to the extent that their actions can be expected to achieve our objectives.” The hard part, of course, is determining just what “our objectives” really are. It is a long book, but an interesting one, and I am curious to hear everyone else’s thoughts if they dug into it some.
Thanks and take care,
-Drew