GodRain committed
Commit ea08a07
1 Parent(s): bc44991

Add Citation

Files changed (1)
  1. README.md +12 -5
README.md CHANGED
@@ -2,10 +2,6 @@
  license: bigcode-openrail-m
  datasets:
  - WizardLM/WizardLM_evol_instruct_70k
- language:
- - en
- paper:
- - 2304.12244
  ---

  <font size=5>Here is an example to show how to use model quantized by auto_gptq</font>
@@ -58,4 +54,15 @@ def evaluate(
  s = generation_output.sequences
  output = tokenizer.batch_decode(s, skip_special_tokens=True)
  return output
- ```
+ ```
+
+
+ Citation:
+ @misc{xu2023wizardlm,
+   title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
+   author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
+   year={2023},
+   eprint={2304.12244},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
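
The hunk above shows only the tail of the README's `evaluate` helper, so for orientation here is a minimal sketch of the surrounding usage: loading a GPTQ-quantized checkpoint with auto_gptq and decoding `generation_output.sequences` as the snippet does. The model path is a placeholder (this commit does not name the repo id), and the prompt and generation settings are illustrative assumptions, not the README's exact values.

```python
# Minimal sketch, NOT the README's full example: load a GPTQ-quantized
# model with auto_gptq and decode the generated sequences.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_path = "path/to/quantized-model"  # placeholder; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoGPTQForCausalLM.from_quantized(model_path, device="cuda:0")

prompt = "Write a Python function that reverses a string."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")

# return_dict_in_generate=True makes generate() return an object whose
# .sequences field holds the token ids, matching the diff's
# `generation_output.sequences` line.
generation_output = model.generate(
    **inputs,
    max_new_tokens=256,
    return_dict_in_generate=True,
)
s = generation_output.sequences
output = tokenizer.batch_decode(s, skip_special_tokens=True)
print(output[0])
```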