
This is the model used in the USENIX Security '24 paper "What Was Your Prompt? A Remote Keylogging Attack on AI Assistants". It is a fine-tune of T5-Large trained to decipher ChatGPT's encrypted responses based only on the responses' token lengths. This is the middle-sentences model: it was trained to decipher every sentence that is not the first sentence of a response, using the previous sentence as context to predict the current one. It was trained on the UltraChat dataset (Questions About the World), using only the first answer of each dialog.

The Dataset split can be found here: https://huggingface.co/datasets/royweiss1/GPT_Keylogger_Dataset

The Github repository of the paper (containing also the training code): https://github.com/royweiss1/GPT_Keylogger
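The snippet below is a minimal sketch of loading this model with the Hugging Face Transformers library. The model ID is taken from this card, but the input string is only a placeholder: the exact prompt format (how the token-length sequence of the current encrypted sentence and the previously deciphered sentence are encoded) is defined in the GitHub repository above.

```python
# Minimal usage sketch (assumed API usage, not the paper's official pipeline).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "royweiss1/T5_MiddleSentences"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input: in practice this should contain the previously deciphered
# sentence as context plus the token-length sequence of the current sentence,
# formatted as described in the GitHub repository.
example_input = "..."  # hypothetical placeholder; see the repository for the real format
inputs = tokenizer(example_input, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```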

Citation

If you find this model helpful please cite our paper:

@inproceedings{weissLLMSideChannel,
  title={What Was Your Prompt? A Remote Keylogging Attack on AI Assistants},
  author={Weiss, Roy and Ayzenshteyn, Daniel and Amit, Guy and Mirsky, Yisroel},
  booktitle={USENIX Security},
  year={2024}
}