This is the model used in the USENIX Security '24 paper "What Was Your Prompt? A Remote Keylogging Attack on AI Assistants". It is a fine-tune of T5-Large trained to decipher ChatGPT's encrypted responses based only on each response's token lengths. This is the first-sentences model: it was trained to decipher only the first sentence of each response. It was trained on the UltraChat dataset ("Questions About the World" sector), using only the first answer of each dialog.
The Dataset split can be found here: https://huggingface.co/datasets/royweiss1/GPT_Keylogger_Dataset
The Github repository of the paper (containing also the training code): https://github.com/royweiss1/GPT_Keylogger
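Because the model operates on token-length sequences rather than ciphertext contents, a preprocessing step is needed to turn an observed response stream into lengths. The sketch below is a hypothetical illustration (the exact input format expected by the model is defined in the GitHub repository above): it recovers per-token lengths from cumulative payload sizes of a streamed response and serializes them into a string that could be fed to the T5 tokenizer.

```python
def packet_sizes_to_token_lengths(cumulative_sizes):
    """Recover per-token lengths from cumulative payload sizes.

    When a response is streamed token by token, each packet grows the
    payload by the character length of the newly emitted token, so
    consecutive differences reveal the token lengths.
    """
    return [b - a for a, b in zip(cumulative_sizes, cumulative_sizes[1:])]


def to_model_input(token_lengths):
    # Hypothetical serialization: space-separated lengths as the T5 input
    # string. Check the repository's training code for the actual format.
    return " ".join(str(n) for n in token_lengths)


# Example: payload grows 0 -> 5 -> 9 -> 10, i.e. tokens of length 5, 4, 1.
lengths = packet_sizes_to_token_lengths([0, 5, 9, 10])
model_input = to_model_input(lengths)
```

The resulting string would then be tokenized and passed to the model, e.g. via `transformers`' `T5ForConditionalGeneration.from_pretrained` and `generate`.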
Citation
If you find this model helpful, please cite our paper:
@inproceedings{weissLLMSideChannel,
title={What Was Your Prompt? A Remote Keylogging Attack on AI Assistants},
author={Weiss, Roy and Ayzenshteyn, Daniel and Amit, Guy and Mirsky, Yisroel},
booktitle={USENIX Security},
year={2024}
}