jbilcke-hf (HF staff) committed
Commit 95ca61b
Parent: 3db3a54

more clarifications

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -40,7 +40,7 @@ You have three options:
 
 ### Option 1: Use an Inference API model
 
-This is a new option added recently, where you can use one of the models from the Hugging Face Hub. By default we suggest to use CodeLlama.
+This is a new option added recently, where you can use one of the models from the Hugging Face Hub. By default we suggest using CodeLlama 34b, as it will provide better results than the 7b model.
 
 To activate it, create a `.env.local` configuration file:
 
@@ -50,13 +50,14 @@ LLM_ENGINE="INFERENCE_API"
 HF_API_TOKEN="Your Hugging Face token"
 
 # "codellama/CodeLlama-7b-hf" is used by default, but you can change this
-# note: You should use a model able to generate JSON responses
+# note: You should use a model able to generate JSON responses,
+# so it is strongly suggested to use at least the 34b model
 HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"
 ```
 
 ### Option 2: Use an Inference Endpoint URL
 
-If your would like to run the AI Comic Factory on a private LLM running on the Hugging Face Inference Endpoint service, create a `.env.local` configuration file:
+If you would like to run the AI Comic Factory on a private LLM running on the Hugging Face Inference Endpoint service, create a `.env.local` configuration file:
 
 ```bash
 LLM_ENGINE="INFERENCE_ENDPOINT"
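For reference, putting the two hunks together, a complete `.env.local` for the Option 1 setup this commit describes might look like the sketch below. Note that the 34b Hub id (`codellama/CodeLlama-34b-hf`) is an assumption inferred from the note recommending "at least the 34b model", so verify it on the Hub before use; the endpoint URL variable for Option 2 is not shown in this excerpt.

```bash
# .env.local — sketch of the Option 1 (Inference API) setup from the diff above
LLM_ENGINE="INFERENCE_API"

# your personal Hugging Face access token
HF_API_TOKEN="Your Hugging Face token"

# assumed Hub id for the 34b variant the commit recommends; verify on the Hub
HF_INFERENCE_API_MODEL="codellama/CodeLlama-34b-hf"

# for Option 2 you would instead set the engine to the endpoint service
# (plus the endpoint URL variable, which this excerpt does not show):
# LLM_ENGINE="INFERENCE_ENDPOINT"
```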