More tokens than the textbooks provided by jind11/MedQA?

#1 opened by xxrjun

I have observed that, when using the Llama-2-7b tokenizer, the number of tokens in `content` is greater than the total number of tokens in the English textbooks provided by jind11/MedQA. Could you please explain why this might be the case? Thanks!

  • Total tokens of English textbooks in jind11/MedQA: 25,103,358
  • Total tokens of MedRAG/textbooks ['content']: 27,204,764
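
For reference, here is a minimal sketch of how such counts could be produced; the `train` split name and the choice to exclude special tokens are my assumptions, not necessarily the exact script used:

```python
# Minimal sketch: count Llama-2-7b tokens in MedRAG/textbooks "content".
# Assumes a "train" split with a "content" field and excludes special tokens;
# adjust both if the actual dataset layout differs.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
dataset = load_dataset("MedRAG/textbooks", split="train")

total = sum(
    len(tokenizer.encode(example["content"], add_special_tokens=False))
    for example in dataset
)
print(f"Total tokens: {total:,}")
```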

When we split a paragraph into multiple chunks, we kept an overlap of 100 characters between consecutive chunks. That should be why you find more tokens here.

Thank you very much for your response! May I ask why you maintain a 100-character overlap? Thank you!

Without any overlap, the sentence at the split boundary may be cut at an arbitrary point and lose its information.

By the way, I double-checked our code: the overlap is actually 200 characters. You can find our implementation here: https://github.com/Teddy-XiongGZ/MedRAG/blob/main/src/data/textbooks.py
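
For readers who want to see the mechanics, here is a minimal sketch of fixed-size character chunking with overlap; the 1000-character chunk size is an illustrative assumption, and the linked script is the authoritative implementation:

```python
# Minimal sketch of character-level chunking with a 200-character overlap.
# The 1000-character chunk size is an assumption for illustration only;
# see src/data/textbooks.py in the MedRAG repo for the real logic.
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    step = chunk_size - overlap  # each chunk starts `step` characters after the last
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("some long textbook paragraph " * 100)
# Consecutive chunks share 200 characters at each boundary, so text near every
# split point is tokenized twice, which is exactly why the chunked corpus
# contains more tokens than the original textbooks.
```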

Thank you very much for the clarification and for double-checking the code!

MedRAG changed discussion status to closed
