Commit 8c33aab by Abecid
1 parent: da0d87b

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -2,7 +2,7 @@
 license: apache-2.0
 ---
 # ✨ Orion-7B
- **Orion is based on llama-7B, further trained on the latest deep learning papers published in notable conferences with various PEFT methods. It is made available under the Apache 2.0 license.**
+ **Orion is based on alpaca-7B, further trained on the latest deep learning papers published at notable conferences with various PEFT methods. It is made available under the Apache 2.0 license.**
 *Paper coming soon 😊.*
 
 
@@ -13,7 +13,7 @@ license: apache-2.0
 *
 ⚠️ **This is a raw, fine-tuned model, which should be further instruction-tuned for production level performance.**
 
- 💥 **Orion LLM require PyTorch 2.0**
+ 💥 **Orion LLM requires PyTorch 2.0**
 
 You will need **at least 20-25GB of memory** to swiftly run inference with Orion-7B.
 
@@ -21,7 +21,7 @@ You will need **at least 20-25GB of memory** to swiftly run inference with Orion
 
 ### Model Description
 - **Developed by:** [AttentionX](https://atttentionx.github.io);
- - **Base model:** llama-7B;
+ - **Base model:** alpaca-7B;
 - **Training method: LoRA, llama-adapter, llama-adapterV2, full fine-tuning**
 - **Language(s) (NLP):** English;
 - **License:** Apache 2.0 license.
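
The updated README states that Orion requires PyTorch 2.0 and roughly 20-25GB of memory for inference. A minimal inference sketch with the 🤗 `transformers` API follows; the repo id is a placeholder (the commit does not name the hosted checkpoint), and fp16 loading is one common way to stay within that memory budget.

```python
# Hypothetical sketch, not from the commit: loading Orion-7B for inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The README states Orion requires PyTorch 2.0.
assert int(torch.__version__.split(".")[0]) >= 2, "Orion requires PyTorch 2.0+"

model_id = "AttentionX/Orion-7B"  # placeholder id, not confirmed by this commit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps a 7B model near the stated 20-25GB budget
    device_map="auto",          # place weights on the available GPU(s)
)

prompt = "Summarize the main idea of low-rank adaptation (LoRA)."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```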
 
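The Model Description lists LoRA, llama-adapter, llama-adapterV2, and full fine-tuning as training methods. As an illustration of one of those PEFT methods only, here is how a LoRA adapter is typically attached to a llama/alpaca-style base with the `peft` library; the base checkpoint and hyperparameters below are assumptions, not the authors' actual recipe.

```python
# Illustrative LoRA setup with the `peft` library; all values are assumed,
# not taken from the Orion training configuration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed base
config = LoraConfig(
    r=8,                                  # assumed adapter rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # common choice for llama attention layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of the 7B weights train
```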