Where is the code for evaluation on benchmarks like HumanEval etc.?
#28 opened by senxiangms
I checked https://github.com/bigcode-project/Megatron-LM/blob/multi-query-attention/, but I don't know which folder contains the evaluation code. Thanks.
You can use the bigcode-evaluation-harness (https://github.com/bigcode-project/bigcode-evaluation-harness).
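For reference, a minimal sketch of driving the harness from Python by shelling out to its CLI. It assumes the `accelerate launch main.py` entry point and the flag names shown in the harness README (`--model`, `--tasks`, `--n_samples`, `--allow_code_execution`, etc.); the exact flags may differ between versions, so check `python main.py --help` in the repo.

```python
import subprocess

def run_humaneval(model_name: str, n_samples: int = 20) -> None:
    """Run HumanEval via the bigcode-evaluation-harness CLI (assumed flags)."""
    cmd = [
        "accelerate", "launch", "main.py",   # run from the harness repo root
        "--model", model_name,               # e.g. "bigcode/starcoder"
        "--tasks", "humaneval",
        "--temperature", "0.2",
        "--n_samples", str(n_samples),
        "--batch_size", "10",
        "--allow_code_execution",            # HumanEval executes generated code
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_humaneval("bigcode/starcoder")       # hypothetical example model
```

Note that `--allow_code_execution` is required for execution-based benchmarks like HumanEval, since the harness runs the generated programs to compute pass@k.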
loubnabnl changed discussion status to closed