---
license: apache-2.0
---

# ✨ Orion-7B

Orion is based on LLaMA-7B and was trained with various PEFT methods on recent deep learning papers published at notable conferences. It is made available under the Apache 2.0 license. Paper coming soon 😊.

## What is so special about Orion?

- Comprehensive performance and evaluation analysis of PEFT methods for knowledge editing
- A novel prompting framework for generating knowledge-editing datasets (see the illustrative record after this list)
- A large knowledge-editing dataset built from recent deep learning papers published at top conferences
- ⚠️ This is a raw, fine-tuned model; it should be further instruction-tuned for production-level performance.
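
As a rough, hypothetical illustration of what a single knowledge-editing record could look like (the field names below are illustrative assumptions, not the dataset's actual schema, which will be described in the forthcoming paper):

```python
# Hypothetical knowledge-editing record; field names are illustrative,
# not the dataset's actual schema.
example = {
    "paper": "LoRA: Low-Rank Adaptation of Large Language Models",
    "question": "What does LoRA freeze during fine-tuning?",
    "answer": "The pretrained weights; only the low-rank update matrices are trained.",
}
```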

💥 Orion requires PyTorch 2.0.

You will need at least 20-25 GB of GPU memory to run inference with Orion-7B smoothly, for example as in the sketch below.
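
A minimal inference sketch using the transformers library, assuming PyTorch 2.0+ and fp16 weights. The repo id `AttentionX/Orion-7B` is an assumption here; substitute the actual checkpoint name.

```python
# Minimal inference sketch. The repo id below is an assumption;
# replace it with the actual Orion checkpoint name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AttentionX/Orion-7B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps a 7B model within roughly 20-25 GB
    device_map="auto",          # requires the accelerate package
)

prompt = "Summarize the key idea of parameter-efficient fine-tuning in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```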

## Model Details

### Model Description

- Developed by: AttentionX
- Base model: LLaMA-7B
- Training methods: LoRA, LLaMA-Adapter, LLaMA-Adapter V2, and full fine-tuning (see the LoRA sketch after this list)
- Language(s) (NLP): English
- License: Apache 2.0
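
As a sketch of one of the listed methods, here is how a LoRA configuration might be attached to the base model with the peft library. The base checkpoint id and all hyperparameters are assumptions for illustration, not Orion's actual training settings.

```python
# LoRA setup sketch with the peft library. The base checkpoint id and
# hyperparameters are illustrative, not Orion's actual training values.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed LLaMA-7B checkpoint
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA attention layers
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```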

### Model Source

- Paper: coming soon.

## Training Details

Paper coming soon 😊.

## Evaluation

Paper coming soon.

## Hardware

Orion was trained on a single NVIDIA A100 GPU.

## Citation

Paper coming soon 😊. In the meantime, you can use the following to cite this work:

```bibtex
@article{orion7b,
  title={{Orion-7B}: Knowledge Editing via PEFT},
  author={Adam Lee and Sungkyung Kim and Junyoung Park and Sunho Jung and Jusang Oh},
  year={2023}
}
```

## License

Orion is made available under the Apache 2.0 license.

## Contact

attentionx.ai@gmail.com