New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses.
The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and