OpenAI No Longer Takes Safety Seriously

Readers reacted positively to my 17 May post on AI-generated news stories. Today, Peter N. Salib of Lawfare Media posted a piece that I thought readers here might find interesting: "OpenAI No Longer Takes Safety Seriously." It starts like this:
Until last week, OpenAI had a team dedicated to making sure its products did not destroy humanity. The team was called Superalignment. It was co-headed by Ilya Sutskever, OpenAI’s co-founder and chief scientist, and Jan Leike, a pioneering AI researcher in his own right. Last week, both Sutskever and Leike resigned from the company (Leike explicitly in protest), and OpenAI disbanded Superalignment.
This is very bad news.

