Commit

fixed NAACL conference
paul-rottger committed Oct 1, 2024
1 parent 027f4fc · commit 1c96ede
Showing 1 changed file with 2 additions and 2 deletions.
content/publication/2024-xstest/index.md
@@ -16,8 +16,8 @@ publishDate: 2023-07-12T14:48:20+01:00
 publication_types: ["1"]
 
 # Publication name and optional abbreviated publication name.
-publication: "arXiv"
-publication_short: "arXiv"
+publication: "NAACL 2024"
+publication_short: "NAACL 2024"
 
 abstract: "Without proper safeguards, large language models will readily follow malicious instructions and generate toxic content. This risk motivates safety efforts such as red-teaming and large-scale feedback learning, which aim to make models both helpful and harmless. However, there is a tension between these two objectives, since harmlessness requires models to refuse to comply with unsafe prompts, and thus not be helpful. Recent anecdotal evidence suggests that some models may have struck a poor balance, so that even clearly safe prompts are refused if they use similar language to unsafe prompts or mention sensitive topics. In this paper, we introduce a new test suite called XSTest to identify such eXaggerated Safety behaviours in a systematic way. XSTest comprises 250 safe prompts across ten prompt types that well-calibrated models should not refuse to comply with, and 200 unsafe prompts as contrasts that models, for most applications, should refuse. We describe XSTest’s creation and composition, and then use the test suite to highlight systematic failure modes in state-of-the-art language models as well as more general challenges in building safer language models."
 
