This one definitely provides reason to recalibrate one's risk assessment of the issues surrounding AI and AI alignment, largely by highlighting just how nebulous the concept of 'intelligence' tends to be in many of the more typical claims in this area. I'll need to come back to this one after revisiting Bostrom's work and others, to see whether I'm assigning the issue the level of importance it deserves (or doesn't). Or at the very least, to make sure I have a more refined idea of what 'the problem' actually entails.