---
license: apache-2.0
---
# ✨ Orion-7B
**Orion is based on llama-7B, fine-tuned with various PEFT methods on recent deep learning papers published at notable conferences. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*


## What is so special about Orion?
* **Comprehensive performance and evaluation analysis of PEFT methods for knowledge editing**
* **Novel prompting framework for knowledge editing dataset generation**
* **Large knowledge editing dataset built from recent deep learning papers published in top conferences**
⚠️ **This is a raw, fine-tuned model, which should be further instruction-tuned for production-level performance.**

💥 **Orion requires PyTorch 2.0.**

You will need **at least 20-25 GB of memory** to run inference with Orion-7B smoothly.
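
Below is a minimal inference sketch using the Hugging Face `transformers` library. The repo id `AttentionX/Orion-7B` is an assumption for illustration; substitute the actual checkpoint path. Loading in half precision keeps the 7B weights near 14 GB, within the memory budget above.

```python
# Minimal inference sketch (assumed repo id; requires PyTorch 2.0,
# transformers, and accelerate for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AttentionX/Orion-7B"  # hypothetical repo id, substitute the real one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision: ~14 GB for 7B parameters
    device_map="auto",          # place layers on available GPU(s)/CPU
)

prompt = "Summarize the key idea of LoRA in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```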

## Model Details

### Model Description
- **Developed by:** [AttentionX](https://atttentionx.github.io)
- **Base model:** llama-7B
- **Training methods:** LoRA, LLaMA-Adapter, LLaMA-Adapter V2, and full fine-tuning (a minimal LoRA sketch follows below)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
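
As a concrete example of one of the training methods listed above, here is a minimal sketch of attaching a LoRA adapter to a llama-7B checkpoint with the `peft` library. The base checkpoint id and all hyperparameters are illustrative assumptions, not the values used for Orion.

```python
# Hedged sketch: wrapping llama-7B with a LoRA adapter via peft.
# All values below are illustrative, not the paper's settings.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed base checkpoint
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (illustrative)
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA attention
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```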

### Model Source
- **Paper:** *coming soon*.


## Training Details
*Paper coming soon 😊.*


## Evaluation
*Paper coming soon.*


## Hardware
Orion was trained on a single NVIDIA A100 GPU.

## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite this work:
```bibtex
@article{orion7b,
  title={{Orion-7B}: Knowledge Editing via PEFT},
  author={Adam Lee and Sungkyung Kim and Junyoung Park and Sunho Jung and Jusang Oh},
  year={2023}
}
```


## License

Orion is made available under the Apache 2.0 license.

## Contact
attentionx.ai@gmail.com