"Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities."

Zora Che et al. (2025)


DOI:

access: open

type: Journal Article

metadata version: 2025-07-31