A helpful resource you can refer to while waiting for the slow leaderboard to run

#44
by zmcmcc - opened

Since the evaluation process is really slow, I found a personal project, https://github.com/Troyanovsky/Local-LLM-comparison, where the author compares several LLMs that can be deployed locally on consumer hardware. Although the author doesn't use the same metrics as this leaderboard, I find it quite helpful, and the repo is updated quite often.

Here's a dynamic table that covers the top 500+ LLMs on HF, with many properties and parameters (language, architecture, context length, ...) to help choose the right LLM:
https://llm.extractum.io

clefourrier changed discussion status to closed
