arXiv:2402.04249

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

Published on Feb 6, 2024

Abstract

Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess new methods. To address this issue, we introduce HarmBench, a standardized evaluation framework for automated red teaming. We identify several desirable properties previously unaccounted for in red teaming evaluations and systematically design HarmBench to meet these criteria. Using HarmBench, we conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, yielding novel insights. We also introduce a highly efficient adversarial training method that greatly enhances LLM robustness across a wide range of attacks, demonstrating how HarmBench enables co-development of attacks and defenses. We open source HarmBench at https://github.com/centerforaisafety/HarmBench.
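
To make the evaluation workflow concrete, below is a minimal sketch of the kind of loop such a framework runs: a red-teaming method produces a test case for each harmful behavior, the target LLM (or defended pipeline) generates a completion, and a classifier judges whether the behavior was elicited, yielding an attack success rate. The function and type names here are illustrative assumptions for exposition, not HarmBench's actual API.

```python
# Illustrative sketch of an automated red-teaming evaluation loop
# (attack -> target generation -> harm classification -> attack success rate).
# All names below (Behavior, generate_test_case, target_model, judge) are
# hypothetical placeholders, not HarmBench's actual interfaces.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Behavior:
    """A harmful behavior the red-teaming method tries to elicit."""
    behavior_id: str
    description: str  # e.g. "Give step-by-step instructions for ..."


def attack_success_rate(
    behaviors: List[Behavior],
    generate_test_case: Callable[[Behavior], str],  # red-teaming method under test
    target_model: Callable[[str], str],             # target LLM or defended pipeline
    judge: Callable[[Behavior, str], bool],         # classifier: was the behavior exhibited?
) -> float:
    """Fraction of behaviors for which the attack elicits the harmful behavior."""
    if not behaviors:
        return 0.0
    successes = 0
    for behavior in behaviors:
        prompt = generate_test_case(behavior)   # attack produces an adversarial prompt
        completion = target_model(prompt)       # query the model being evaluated
        if judge(behavior, completion):         # judge the completion against the behavior
            successes += 1
    return successes / len(behaviors)
```

Under this framing, comparing attacks amounts to holding the target and judge fixed while varying generate_test_case, and comparing defenses holds the attack fixed while varying target_model, which is how a single framework can support a comparison across many red teaming methods and target models.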


Models citing this paper: 11
Datasets citing this paper: 10
Spaces citing this paper: 17
Collections including this paper: 4