The 'Bad Likert Judge' jailbreak technique exploits large language models by having them act as a judge that scores responses on a Likert (psychometric rating) scale and then produce example responses for each score, a pattern that can slip harmful content past safety filters. The technique increases attack success rates by over 60%, raising critical concerns about LLM vulnerabilities.
Check out the transcript here: Easy English AI News