@hzulla@infosec.exchange
https://www.bloodinthemachine.com/p/ai-disagreements
"I did learn a great deal, and there was much that was eye-opening. For one thing, I saw the extent to which some people really, truly, and deeply believe that the AI models like those being developed by OpenAI and Anthropic are just years away from destroying the human race. I had often wondered how much of this concern was performative, a useful narrative for generating meaning at work or spinning up hype about a commercial product—and there are clearly many operators in Silicon Valley, even attendees at this very conference, who are sharply aware of this particular utility, and able to harness it for that end. But there was ample evidence of true belief, even mania, that is not easily feigned. There was one session where people sat in a circle, mourning the coming loss of humanity, in which tears were shed."
"At one point in his talk, at the prompting of a question I had sent into the queue, the speaker asked everyone in the room to raise their hand to indicate whether or not they believed AI was on the brink of destroying humanity—about half the room believed on our current path, destruction was imminent. This was no fluke. In the next three talks I attended, some variation of “well by then we’re already dead” or “then everyone dies” was uttered by at least one of the speakers."