Tags: Text Generation · Transformers · PyTorch · English · llama · Eval Results · text-generation-inference · Inference Endpoints
pankajmathur committed
Commit 81f9a74
1 Parent(s): 66d3f32

Update README.md

Files changed (1)
  1. README.md +14 -2
README.md CHANGED

@@ -112,12 +112,24 @@ model-index:
 ---
 # orca_mini_v2_7b
 
-An **Uncensored** LLaMA-7b model, built in collaboration with [Eric Hartford](https://huggingface.co/ehartford) and trained on explain-tuned datasets created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets, applying the dataset construction approaches of the Orca Research Paper.
+
+<img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />
+
+<strong>
+Passionate about Generative AI? I help companies privately train and deploy custom LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!
+
+<a href="https://www.linkedin.com/in/pankajam" target="_blank">https://www.linkedin.com/in/pankajam</a> Looking forward to connecting!
+</strong>
+
+<br>
+
+
+
+**An Uncensored LLaMA-7b model, built in collaboration with [Eric Hartford](https://huggingface.co/ehartford) and trained on explain-tuned datasets created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets, applying the dataset construction approaches of the Orca Research Paper.**
 
 Please note this model has *better code generation capabilities* compared to our original orca_mini_7b, which was trained on the base OpenLLaMA-7b model and has [empty-space issues and was found unsuitable for code generation](https://github.com/openlm-research/open_llama#update-06072023).
 
 
-**P.S. I am #opentowork, if you can help, please reach out to me at www.linkedin.com/in/pankajam**
-
 # Evaluation
 
135