The researchers are using a technique known as adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against one another: one chatbot plays the adversary and attacks another chatbot by generating text designed to force it to