[Cache Request] aaditya/Llama3-OpenBioLLM-8B
#106 opened by sagarjethi
Please add the following model to the neuron cache.
I think that if you edit the model's config.json to set `use_cache`
to `true`, the config becomes identical to the meta-llama/Meta-Llama-3-8B config, and the model will be detected as cached.
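The edit above can be sketched as follows. This is a minimal illustration using a stub config.json with made-up abbreviated contents; a real downloaded config has many more fields, and the path is illustrative:

```python
import json
from pathlib import Path

# Stand-in for a downloaded copy of the model's config.json
# (contents abbreviated; the real file has many more fields).
path = Path("config.json")
path.write_text(json.dumps({"model_type": "llama", "use_cache": False}))

# Flip use_cache to True so the config matches the
# meta-llama/Meta-Llama-3-8B config and the cache lookup can match.
config = json.loads(path.read_text())
config["use_cache"] = True
path.write_text(json.dumps(config, indent=2))
```

After re-uploading the patched config.json to the model repository, the cache lookup should treat the config as identical to the cached base model's.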