Before releasing GPT-4, OpenAI's 'red team' asked the ChatGPT model how to murder people, build a bomb, and say antisemitic things. Read the chatbot's shocking answers.

Business Insider

OpenAI has a safety team that works on steering ChatGPT away from giving dangerous advice in response to questions such as how to make a bomb.
