TheBloke committed
Commit fa82664
1 Parent(s): 0a04625

Upload README.md

Files changed (1)
  1. README.md +2 -9
README.md CHANGED
@@ -4,7 +4,7 @@ inference: false
 language:
 - en
 library_name: transformers
-license: cc
+license: other
 model_creator: medalpaca
 model_name: Medalpaca 13B
 model_type: llama
@@ -94,15 +94,8 @@ Below is an instruction that describes a task. Write a response that appropriate
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
 
-The creator of the source model has listed its license as `cc`, and this quantization has therefore used that same license.
 
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [medalpaca's Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).
-<!-- licensing end -->
 <!-- compatibility_gguf start -->
 ## Compatibility
 
@@ -170,7 +163,7 @@ Then click Download.
 I recommend using the `huggingface-hub` Python library:
 
 ```shell
-pip3 install huggingface-hub>=0.17.1
+pip3 install huggingface-hub
 ```
 
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
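The `huggingface-hub` install step in the last hunk pairs with a per-file download call. A minimal sketch of that usage follows; the repo id and filename shown in the comment are illustrative assumptions, not values taken from this diff:

```python
def fetch_gguf(repo_id: str, filename: str, local_dir: str = ".") -> str:
    """Download a single file from a Hugging Face repo and return its local path.

    Requires `pip3 install huggingface-hub`; the import is deferred so the
    function can be defined without the package installed.
    """
    from huggingface_hub import hf_hub_download

    return hf_hub_download(repo_id=repo_id, filename=filename, local_dir=local_dir)


# Example call (hypothetical repo and file, for illustration only):
# fetch_gguf("TheBloke/medalpaca-13B-GGUF", "medalpaca-13b.Q4_K_M.gguf")
```

The deferred import keeps the sketch importable even where `huggingface_hub` is absent; the actual download only happens when the function is called.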