Status Updates for Situational Awareness: The Decade Ahead by Leopold Aschenbrenner
Showing 1-30 of 164
Anthony William
is starting
I just started but this guy is a genius. Nothing less.
— Sep 29, 2025 01:37PM
Michael Chenchard
is finished
These are great and honorable people. But they are just people. Soon, the AIs will be running the world, but we’re in for one last rodeo. May their final stewardship bring honor to mankind.
— Sep 01, 2025 07:41PM
Michael Chenchard
is on page 150 of 165
In particular, we may want to “spend some of our lead” to have time to solve safety challenges, but Western labs will need to coordinate to do so. (And of course, private labs will have already had their AGI weights stolen, so their safety precautions won’t even matter; we’ll be at the mercy of the CCP’s and North Korea’s safety precautions.)
— Sep 01, 2025 07:24PM
Michael Chenchard
is on page 147 of 165
It's funny when he says "random nonprofit board" like why not just say OpenAI lol
— Sep 01, 2025 07:14PM
Michael Chenchard
is on page 142 of 165
It is a delusion of those who have unconsciously internalized our brief respite from history that this will not summon more primordial forces. Like many scientists before us, the great minds of San Francisco hope that they can control the destiny of the demon they are birthing. Right now, they still can; for they are among the few with situational awareness, who understand what they are building.
— Sep 01, 2025 07:00PM
Michael Chenchard
is on page 140 of 165
There’s already an eerie convergence of AGI timelines (~2027?) and Taiwan watchers’ Taiwan invasion timelines (China ready to invade Taiwan by 2027?)—a convergence that will surely only heighten as the world wakes up to AGI. (Imagine if in 1960, the vast majority of the world’s uranium deposits were somehow concentrated in Berlin.)
— Sep 01, 2025 06:57PM
Michael Chenchard
is on page 140 of 165
America’s lead on AGI won’t secure peace and freedom by just building the best AI girlfriend apps. It’s not pretty—but we must build AI for American defense.
— Sep 01, 2025 06:56PM
Michael Chenchard
is on page 137 of 165
Some hope for some sort of international treaty on safety. This seems fanciful to me. (How have those climate treaties gone? That seems like a dramatically easier problem compared to this.)
— Sep 01, 2025 06:50PM
Michael Chenchard
is on page 133 of 165
To date, US tech companies have made a much bigger bet on AI and scaling than any Chinese efforts; consequently, we are well ahead. But counting out China now is a bit like counting out Google in the AI race when ChatGPT came out in late 2022. Google hadn’t yet focused their efforts in an intense AI bet, and it looked as though OpenAI was far ahead—but
— Sep 01, 2025 06:38PM
Michael Chenchard
is on page 132 of 165
But if there’s one thing China can do better than the US it’s building stuff.
— Sep 01, 2025 06:36PM
Michael Chenchard
is on page 125 of 165
We’re not on track for superdefense, for an airgapped cluster or any of that; I’m not sure we would even realize if a model self-exfiltrated.
— Sep 01, 2025 04:01PM
Michael Chenchard
is on page 120 of 165
For example, if we deliberately plant backdoors or misalignments into models, would our safety training have caught and gotten rid of them? (Early work suggests that “sleeper agents” can survive through safety training, for example.)
— Sep 01, 2025 03:43PM
Michael Chenchard
is on page 106 of 165
If we do rapidly transition from from AGI to superintelligence,
typo
— Sep 01, 2025 02:18PM
Michael Chenchard
is on page 103 of 165
The story of Szilard, Fermi, and Bothe is amazing.
— Sep 01, 2025 02:13PM
Michael Chenchard
is on page 96 of 165
I sometimes joke that AI lab algorithmic advances are not shared with the American research community, but they are being shared with the Chinese research community!
— Sep 01, 2025 01:33PM
Michael Chenchard
is on page 94 of 165
The Google DeepMind Frontier Safety Framework outlines security levels 0, 1, 2, 3, and 4 (~1.5 being what you’d need to defend against well-resourced terrorist groups or cybercriminals, 3 being what you’d need to defend against the North Koreas of the world, and 4 being what you’d need to have even a shot of defending against priority efforts by the most capable state actors). They admit to being at level 0.
— Sep 01, 2025 01:21PM
Michael Chenchard
is on page 92 of 165
Too many smart people underrate espionage.
— Sep 01, 2025 01:13PM
Michael Chenchard
is on page 90 of 165
AGI-level security for algorithmic secrets is necessary years before AGI-level security for weights.
— Sep 01, 2025 01:09PM
Michael Chenchard
is on page 87 of 165
If American business is unshackled, America can build like none other (at least in red states).
— Sep 01, 2025 01:02PM