Like Mookie, the hero of Spike Lee's film "Do the Right Thing," artificially intelligent systems have a hard time knowing what to do in all circumstances. Classical theories of perfect rationality prescribe the "right thing" for any occasion, but no finite agent can compute their prescriptions fast enough. In Do the Right Thing, the authors argue that a new theoretical foundation for artificial intelligence can be constructed in which rationality is a property of "programs" within a finite architecture, and their behavior over time in the task environment, rather than a property of individual decisions.
Do the Right Thing suggests that the rich structure that seems to be exhibited by humans, and ought to be exhibited by AI systems, is a necessary result of the pressure for optimal behavior operating within a system of strictly limited resources. It provides an outline for the design of new intelligent systems and describes theoretical and practical tools for bringing about intelligent behavior in finite machines. The tools are applied to game playing and real-time problem solving, with surprising results.
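(For flavor: the metareasoning idea in the book boils down to something like the toy Python sketch below. This is my own loose paraphrase, not the authors' actual algorithm, and every constant and function name in it is made up. The agent keeps spending compute only while the estimated value of the next computation step beats the cost of the time it burns.)

import random

# Toy sketch, my own simplification of the "value of computation" idea in
# Russell & Wefald-style metareasoning: each extra search step might improve
# the utility of the eventual decision, but it also burns real time, so stop
# as soon as the expected gain no longer pays for the time cost.

TIME_COST = 0.05          # assumed utility lost per deliberation step (made up)

def expected_gain(best_so_far):
    """Crude estimate of future improvement: the average past gain per step."""
    if len(best_so_far) < 2:
        return float("inf")               # no data yet, keep thinking
    gains = [b - a for a, b in zip(best_so_far, best_so_far[1:])]
    return sum(gains) / len(gains)

def bounded_search(candidate_utilities):
    """Scan candidate actions, but only while further search is worth the time."""
    best_so_far = []
    best = float("-inf")
    for utility in candidate_utilities:
        best = max(best, utility)
        best_so_far.append(best)
        if expected_gain(best_so_far) <= TIME_COST:
            break                          # more thinking costs more than it gains
    return best, len(best_so_far)

if __name__ == "__main__":
    random.seed(1)
    candidates = [random.random() for _ in range(1000)]
    utility, steps = bounded_search(candidates)
    print(f"settled on utility {utility:.3f} after {steps} of {len(candidates)} steps")

It bails out after a handful of steps instead of grinding through all 1000 candidates, settling for a decent answer rather than the perfect one, which is the whole bounded-rationality point.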
the game playing and problem solving are the interesting things
the metareasoning and all the rest of it seems 'fuzzy'
...........
He's someone you might not agree with 30% to 60% of the time, but once in a while he raises some interesting concerns across a lot of his writings and films
wiki
He is on the Scientific Advisory Board for the Future of Life Institute and the advisory board of the Centre for the Study of Existential Risk.
In 2017 he collaborated with the Future of Life Institute to produce a video, Slaughterbots, about swarms of drones assassinating political opponents.....
....and presented this to a United Nations meeting about the Convention on Certain Conventional Weapons.
In 2018 he contributed an interview to the documentary Do You Trust This Computer?
Russell gave the 2021 Reith Lectures, broadcast on BBC Radio 4, under the title Living with Artificial Intelligence, with individual lectures on:
"The Biggest Event in Human History", "AI in warfare", "AI in the economy", and "AI: A Future for Humans"
..............
considering how childish some of the futurists like Schwab are with the World Economic Forum, it's odd that Russell was vice-chair of the WEF's Council on AI and Robotics....
Schwab goes on with some pretty weird fantasies in his writings, like maybe we should have nanobots distributing antidepressants throughout the atmosphere, infecting humans with 'happy happy joy joy'..... I shit you not....
Samuel P. Huntington was right about the Davos crowd: out-of-touch elites and their globalistic weirdness
..........
I still think this book's main premise is bullshit though, and that's the one-star part of the book
"In Do the Right Thing, the authors argue that a new theoretical foundation for artificial intelligence can be constructed in which rationality is a property of 'programs' within a finite architecture, and their behavior over time in the task environment, rather than a property of individual decisions."
..........
I'm a pessimist about AI, software bugs, the abilities of programmers, the integrity of code, and the NP problem...
I think he's basically trying to design a shitty mousetrap for a shitty robot AI mouse...
basically fixes that don't fix much and Franken-mice
.............
"rationality is a property of programs"
Yeah.... you keep believing in the sunny side of the street, Pal!
..............
there's an odd review of the book by Simon Parsons at the Advanced Computation Laboratory, part of some Imperial Cancer Research Fund thingie
quote
Before I get too dewy-eyed about “Do the right thing”, or at least give the impression that I am, I should point out that I have at least one reservation about the approach.
That is that the kind of deliberation that the method applies in order to choose the best alternatives is based upon classical decision theory. To me this seems rather counterintuitive.
Granted, classical decision theory, in which every alternative is assigned a probability of happening and has its utility of occurrence assessed, is a well-established model for making "rational" decisions under conditions of uncertainty, but it is also well established that it has a number of problems, which largely relate to the establishment of these numerical probabilities and utilities.
Indeed, establishing the numbers is in many ways very similar to the problem of deciding what the best node to expand is: it is very easy to do if you assume an omniscient agent with unlimited time (or, alternatively, compile the answer into your intelligent system), but it is very difficult to achieve on the fly with limited resources.
Having said this, it should be noted that Russell and Wefald do acknowledge that there is a problem, considering ways in which their method can be augmented to learn the best parameters when given an initial distribution that enables it to muddle its way, albeit less than perfectly, to some kind of initial solution.
This is a thoroughly sensible solution, and cannot really be faulted.
I would rather see an approach that admitted the flaws in classical decision theory and tried to overcome them, but then perhaps I should go out and pursue that line of work myself.
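(To unpack "classical decision theory" for anyone who hasn't met it: you weight each outcome's utility by its probability and pick the alternative with the biggest total. A minimal Python sketch with made-up numbers of my own is below; Parsons' complaint is about where those probabilities and utilities are supposed to come from on a real machine with real deadlines.)

def expected_utility(outcomes):
    """Sum of probability * utility over an alternative's possible outcomes."""
    return sum(p * u for p, u in outcomes)

def choose(alternatives):
    """Pick the alternative with the highest expected utility."""
    return max(alternatives, key=lambda name: expected_utility(alternatives[name]))

if __name__ == "__main__":
    # Each alternative maps to (probability, utility) pairs -- all values invented.
    alternatives = {
        "expand node A": [(0.7, 10.0), (0.3, -2.0)],
        "expand node B": [(0.5, 12.0), (0.5, 1.0)],
        "stop and act":  [(1.0, 5.0)],
    }
    best = choose(alternatives)
    print(best, "->", expected_utility(alternatives[best]))

The arithmetic is trivial; the fight is over where the numbers come from and whether you can get them fast enough.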
.............
oddly it's the chicken and egg thing
we're relying on code doing weird shit with decision theory to fix the crazy shit people want to do with AI in the first place!
...........
It feels like Russell points at AI and says, it's dangerous and flaky!
But then it's: I think I can write code that does decision theory, oh, the flaws are TRIVIAL, but trust me..... BELIEVE in ME...... I can fix all the flaky and dangerous CODE with my non-flaky and very-very-very-SAFE CODE