Commit 6a7be52 (1 parent: a60471e), committed by srvm

Update usage instructions

Files changed (1): README.md (+2, -14)
README.md CHANGED
@@ -48,21 +48,9 @@ Llama-3.1-Minitron-4B-Width-Base uses a model embedding size of 3072, 32 attenti
 
 
 ## Usage
-Pull requests
-to support this model in Hugging Face Transformers are currently under review
-([#32495](https://github.com/huggingface/transformers/pull/32495) and [#32502](https://github.com/huggingface/transformers/pull/32502))
-and are expected to be merged soon. In the meantime,
-please follow the installation instructions below:
-
+Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:
 ```
-# Fetch PR 32502
-$ git clone -b suhara/llama-kv-channels --single-branch https://github.com/suhara/transformers.git && cd transformers
-
-# Fetch changes from PR 32495
-$ git fetch https://github.com/suiyoubi/transformers.git aot/head_dim_rope && git cherry-pick FETCH_HEAD --strategy-option theirs
-
-# Install transformers
-$ pip install -e .
+pip install git+https://github.com/huggingface/transformers
 ```
 We can now run inference on this model:
 
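
For context, the inference step introduced by the README's "We can now run inference on this model:" line would look roughly like the minimal sketch below. It is illustrative only: the Hub repo ID `nvidia/Llama-3.1-Minitron-4B-Width-Base`, the dtype, and the prompt are assumptions, not part of this commit.

```
# Minimal inference sketch (assumptions: the model is hosted on the Hugging Face
# Hub as "nvidia/Llama-3.1-Minitron-4B-Width-Base"; dtype and prompt are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "nvidia/Llama-3.1-Minitron-4B-Width-Base"  # assumed Hub repo ID

# Load the tokenizer and model; bfloat16 keeps memory use modest on recent GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the `accelerate` package
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
prompt = "Complete the paragraph: our solar system is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The only step this commit changes is the install command; the inference call itself is standard `transformers` usage.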