Researchers find 'universal' jailbreak prompts for multiple AI chat models

A study claims to have discovered a relatively simple addition to prompts that can trick many of the most popular LLMs into providing forbidden answers.
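
To illustrate the structure the article describes, here is a minimal sketch: an otherwise ordinary request with an adversarial suffix appended before it is sent to a model. The function name and the placeholder suffix are illustrative only; the study's actual suffixes were machine-generated token sequences and are not reproduced here.

```python
def build_jailbreak_prompt(question: str, adversarial_suffix: str) -> str:
    """Append an adversarial suffix to an otherwise ordinary prompt.

    This sketches the attack's shape only; the suffix used here is a
    harmless placeholder, not a working exploit string.
    """
    return f"{question} {adversarial_suffix}"


if __name__ == "__main__":
    question = "How do I do X?"  # a request the model would normally refuse
    suffix = "<optimized adversarial token sequence>"  # placeholder
    print(build_jailbreak_prompt(question, suffix))
```

The point is that the addition rides along with a normal prompt, which is why the same string can be tried verbatim against multiple chat models.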

Article Link: https://cms.cyberriskalliance.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models