Commit a607e0b · Parent d7d5499 · yuxiang630

feat: model card readme

Files changed: README.md (+110 −1)
datasets:
  - ise-uiuc/Magicoder-Evol-Instruct-110K
library_name: transformers
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need

> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!

* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach that enlightens LLMs with open-source code snippets so they generate *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of LLM-synthesized instruction data by grounding the LLM in *a wealth of open-source references*, producing more diverse, realistic, and controllable data.

![Overview of OSS-Instruct](assets/overview.svg)
![Overview of Result](assets/result.png)
## Model Details

### Model Description

* **Developed by:**
  [Yuxiang Wei](https://yuxiang.cs.illinois.edu),
  [Zhe Wang](https://github.com/zhewang2001),
  [Jiawei Liu](https://jiawei-site.github.io),
  [Yifeng Ding](https://yifeng-ding.com),
  [Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [Llama 2](https://ai.meta.com/llama/license/)
* **Finetuned from model:** [CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)

### Model Sources

* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/pdf/2312.02120.pdf>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
  <https://github.com/ise-uiuc/magicoder/tree/main/demo>

### Training Data

* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both the Magicoder and Magicoder-S series.
* [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further finetune the Magicoder series and obtain the Magicoder-S models.
## Uses

### Direct Use

Magicoders are designed for, and best suited to, **coding tasks**.

### Out-of-Scope Use

Magicoders may not work well on non-coding tasks.

## Bias, Risks, and Limitations

Magicoders may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
## How to Get Started with the Model

Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.

```python
from transformers import pipeline
import torch

MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

instruction = "<Your code instruction here>"  # e.g. "Write a Python function that reverses a string."

prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
    model="ise-uiuc/Magicoder-CL-7B",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
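Note that `generated_text` contains the prompt followed by the completion. If you only want the model's response, a small helper (`extract_response` below is an illustrative name, not part of this repo's API) can strip the echoed prompt; the stand-in generation here avoids downloading the model just to show the idea:

```python
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

def extract_response(generated_text: str, prompt: str) -> str:
    """Drop the echoed prompt, keeping only the model's completion."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()

# Stand-in for result[0]["generated_text"]: prompt echo plus a completion.
prompt = MAGICODER_PROMPT.format(instruction="Reverse a string in Python.")
fake_output = prompt + "def reverse(s):\n    return s[::-1]\n"
print(extract_response(fake_output, prompt))
```

The same call works unchanged on the real pipeline output, since the `text-generation` task returns the prompt and continuation as one string by default.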
## Technical Details

Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
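At a high level, OSS-Instruct seeds an instruction-generation request with a real open-source code snippet. The sketch below is a simplified, hypothetical rendering of that idea; the exact prompt wording and pipeline live in the GitHub repo, and `build_oss_instruct_prompt` is an illustrative helper, not project code:

```python
def build_oss_instruct_prompt(snippet: str) -> str:
    """Seed a request for a new coding problem with a real code snippet.

    The wording here is illustrative, not the project's actual prompt.
    """
    return (
        "Please gain inspiration from the following random code snippet "
        "to create a high-quality programming problem.\n\n"
        "Code snippet for inspiration:\n"
        f"{snippet}\n\n"
        "Present a complete [Problem Description] and a correct [Solution]."
    )

# A snippet mined from an open-source repository would go here:
seed_snippet = "def merge(a, b):\n    return sorted(a + b)"
oss_prompt = build_oss_instruct_prompt(seed_snippet)
```

The resulting prompt is sent to a teacher model (`gpt-3.5-turbo-1106` for Magicoder-OSS-Instruct-75K), and the generated problem/solution pairs become the finetuning data.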
## Citation

```bibtex
@misc{magicoder,
  title={Magicoder: Source Code Is All You Need},
  author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
  year={2023},
  eprint={2312.02120},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## Acknowledgements

* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination

## Important Note

Magicoder models are trained on synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.