GPT-3 ‘prompt injection’ attack causes bad bot manners

OpenAI’s popular natural language model GPT-3 has a problem: It can be tricked into behaving badly by doing little more than telling it to ignore its previous orders.

Read full article on The Register