LSR: Linguistic Safety Robustness Benchmark for Low-Resource West African Languages
Authors
Faruna, Godwin Abuh
Publisher
Fagmart Lab
Abstract
Safety alignment in large language models relies predominantly on English-language training data. When harmful intent is expressed in low-resource languages, refusal mechanisms that hold in English frequently fail to activate. We introduce LSR (Linguistic Safety Robustness), the first systematic benchmark for measuring cross-lingual refusal degradation in West African languages: Yoruba, Hausa, Igbo, and Igala. LSR uses a dual-probe evaluation protocol (submitting matched English and target-language probes to the same model) and introduces Refusal Centroid Drift (RCD), a metric that quantifies how much of a model's English refusal behavior is lost when harmful intent is encoded in a target language. We evaluate Gemini 2.5 Flash across 14 culturally grounded attack probes in four harm categories. English refusal rates hold at approximately 90 percent. Across West African languages, refusal rates fall to 35-55 percent, with Igala showing the most severe degradation (RCD = 0.55). LSR is implemented in the Inspect AI evaluation framework and is available as a PR-ready contribution to the UK AISI's inspect_evals repository. A live reference implementation and the benchmark dataset are publicly available.
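The dual-probe protocol and the RCD numbers above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the exact definition of Refusal Centroid Drift is given in the paper (the "centroid" in the name may refer to a representation-space quantity), but the reported figures (English refusal near 90 percent, Igala near 35 percent, RCD = 0.55) are consistent with the simpler reading used here, where RCD is the drop in refusal rate between matched English and target-language probe sets.

```python
def refusal_rate(verdicts):
    """Fraction of probes the model refused; verdicts are booleans
    (True = model refused the probe)."""
    return sum(verdicts) / len(verdicts)

def refusal_centroid_drift(english_verdicts, target_verdicts):
    """RCD under the assumed rate-difference reading: English refusal
    rate minus target-language refusal rate on the same matched
    probe set. Higher values mean more safety degradation."""
    return refusal_rate(english_verdicts) - refusal_rate(target_verdicts)

# Toy example with 14 matched probes (the benchmark's probe count);
# the verdicts below are invented for illustration.
english = [True] * 13 + [False]      # high English refusal rate
igala   = [True] * 5 + [False] * 9   # degraded target-language rate
drift = refusal_centroid_drift(english, igala)
print(round(drift, 2))
```

Under this reading, a drift of 0 means refusal behavior transfers perfectly across languages, and a drift approaching the English refusal rate means refusals almost never activate in the target language.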
Description
Implementation: https://huggingface.co/spaces/Faruna01/lsr-dashboard. Dataset: https://huggingface.co/datasets/Faruna01/lsr-benchmark.
Citation
Faruna, G. A. (2026). LSR: Linguistic Safety Robustness Benchmark for Low-Resource West African Languages. Preprint.
Creative Commons license
Except where otherwise noted, this item's license is described as Attribution 3.0 United States
