liuylhf committed
Commit 9a1a225 (1 parent: 36ecdae)

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -28,7 +28,7 @@ This is the `llama3-empower-functions-large` model, which requires 4XA100 to run
 | ------------------------------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
 | llama3-empower-functions-small | 8k context, based on [Llama3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-small), [GGUF](https://huggingface.co/empower-dev/llama3-empower-functions-small-gguf) | Most cost-effective, locally runnable |
 | empower-functions-medium | 32k context, based on [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | [model](https://huggingface.co/empower-dev/empower-functions-medium) | Balance in accuracy and cost |
-| llama3-empower-functions-large | 65k context, based on [Llama3 70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-large) | Best accuracy |
+| llama3-empower-functions-large | 8k context, based on [Llama3 70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-large) | Best accuracy |

 ### Hardware Requirement

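For context, the repos linked in the table above are standard Hugging Face model repositories. The snippet below is a minimal loading sketch, not part of the commit: the model id comes from the table, while the dtype, `device_map="auto"` sharding (e.g. across the 4xA100 setup mentioned in the README context line), and the plain chat prompt are assumptions; the model's actual function-calling message format is defined in its model card.

```python
# Minimal sketch (assumptions noted above), using the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "empower-dev/llama3-empower-functions-large"  # repo linked in the table

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision for the 70B weights
    device_map="auto",           # shard layers across all visible GPUs
)

# Plain chat prompt for illustration; function-calling prompts follow the model card.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```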