![Before releasing GPT-4, OpenAI's 'red team' asked the ChatGPT model how to murder people, build a bomb, and say antisemitic things. Read the chatbot's shocking answers.](https://cdn.newsserve.net/ONPglobe256.png)
Before releasing GPT-4, OpenAI's 'red team' asked the ChatGPT model how to murder people, build a bomb, and say antisemitic things. Read the chatbot's shocking answers.
OpenAI has a safety team working on steering ChatGPT away from giving dangerous advice in response to questions such as how to make a bomb.