Each example pairs a long novel prefix (truncated below) with a fixed multiple-choice prompt. All rows share the same `prompt`: "In this story as of now, who do you think is most likely to be the killer? Choose from the below op…"

Schema: `context` (string, 4 values), `prompt` (string, 1 value), `options` (string, 4 values), `suspect` (string, 4 values), `token_size` (int64, 29.4k–55k).

| context (truncated) | options | suspect | token_size |
|---|---|---|---|
| "Mr. Sherlock Holmes, who was usually very late in the mornings, save upon those not infrequen…" | 0. Stapleton 1. Morning 2. Belliver 3. Constabulary 4. Black | The suspect is Stapleton | 54,977 |
| ". . . The …" | 0. Agra 1. firm 2. Millbank 3. Jonathan Small 4. Come | The suspect is Jonathan Small | 38,753 |
| "MR. SHERLOCK HOLMES. IN the year 1878 I took my degree of Doctor of Medicine of the University…" | 0. Brinvilliers 1. Jefferson Hope 2. WE 3. Quite 4. Majesty | The suspect is Jefferson Hope | 29,423 |
| "“I am inclined to think—” said I. “I should do so,” Sherlock Holmes remarked impatien…" | 0. Bodymaster 1. Hadn 2. John Douglas 3. Just 4. Twice | The suspect is John Douglas | 35,285 |
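A minimal sketch of loading and inspecting the dataset with the Hugging Face `datasets` library; the repo id below is a placeholder, not the actual path of this card.

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual path of this dataset.
ds = load_dataset("your-username/sherlock-killer-prediction", split="train")

for row in ds:
    # Each row pairs a long novel prefix with a multiple-choice question.
    print(row["token_size"], "|", row["suspect"])
    print(row["prompt"][:80], "...")
    print(row["options"])
```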

It's quite clear to me that needle-in-a-haystack tests are broken: every model paper reports something like 99.99% retrieval accuracy over 100k+ tokens, yet in practice anything beyond roughly 20% of a model's total context size is useless for RAG via input tokens.

So the goal here is to expand this dataset into the most comprehensive stack of mystery novels possible and have the LLM predict the final killer, as proposed by Sholto Douglas on the podcast with Dwarkesh.

The obvious problem is that the pre-training data almost certainly covers the Sherlock Holmes novels. The most immediate fix I found was to replace all English names with Hindi (or other regional-language) names. This breaks the memorized association, and it also drastically increases token size for the model, since these words require more tokens in the vocabulary to embed.
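A minimal sketch of the name-swap idea, assuming a hand-written English-to-Hindi mapping (the mapping and the `cl100k_base` encoding are illustrative choices, not the actual recipe); the `tiktoken` check demonstrates the token-count inflation mentioned above.

```python
import re
import tiktoken

# Illustrative mapping only; the real recipe uses the attached 500-name list.
NAME_MAP = {
    "Sherlock Holmes": "Shripati Hegde",
    "Watson": "Vatsal",
    "Stapleton": "Satyapalan",
}

def swap_names(text: str, mapping: dict[str, str]) -> str:
    # Replace whole-word occurrences so "Watson" doesn't match inside longer words.
    for english, regional in mapping.items():
        text = re.sub(rf"\b{re.escape(english)}\b", regional, text)
    return text

enc = tiktoken.get_encoding("cl100k_base")
original = "Sherlock Holmes turned to Watson and pointed at Stapleton."
swapped = swap_names(original, NAME_MAP)

# Uncommon names typically split into more subword tokens,
# which is what drives the token_size increase.
print(len(enc.encode(original)), "->", len(enc.encode(swapped)))
```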

Since I keep running out of API calls when replacing names in the text, I have attached a list of 500 names that I could cleanly generate in the recipe here.
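For completeness, a hedged sketch of how such a names list might be generated in small batches so a rate-limit failure loses little work; the model name, prompt wording, and output file are assumptions, not the exact recipe used here.

```python
import json
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_name_batch(n: int = 50) -> list[str]:
    # Ask for a small batch per call so each request stays cheap and retryable.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"List {n} distinct Hindi first-and-last names, "
                       "one per line, no numbering.",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

names: list[str] = []
while len(names) < 500:
    names.extend(generate_name_batch())
    time.sleep(1)  # crude pacing between calls to avoid rate limits

with open("names.json", "w") as f:
    json.dump(names[:500], f, ensure_ascii=False, indent=2)
```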

My goal for v2 of this dataset is to take more of these novels, convert them into an assignment task for detectives, and then check sample efficiency: how quickly can an LLM be made as good as Sherlock Holmes?
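A rough sketch of that sample-efficiency check: measure killer-prediction accuracy as a function of how many solved cases are shown in-context. `ask_model` is a hypothetical stand-in for whatever inference call gets used, and the repo id is the same placeholder as above.

```python
from datasets import load_dataset

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in; wire this up to the model under test."""
    raise NotImplementedError

def accuracy_at_k(rows: list[dict], k: int) -> float:
    # Use the first k solved cases as in-context examples, evaluate on the rest.
    shots = "\n\n".join(f"{r['prompt']}\n{r['suspect']}" for r in rows[:k])
    held_out = rows[k:]
    correct = sum(
        # Crude match on the suspect's surname, e.g. "Stapleton".
        r["suspect"].split()[-1] in ask_model(f"{shots}\n\n{r['context']}\n{r['prompt']}")
        for r in held_out
    )
    return correct / len(held_out)

# Hypothetical repo id, as above.
rows = list(load_dataset("your-username/sherlock-killer-prediction", split="train"))
for k in (0, 1, 2):
    print(k, accuracy_at_k(rows, k))
```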
