munish0838 committed
Commit 15779d9 • 1 Parent(s): e86d326

Create README.md

Files changed (1): README.md (+133, -0)
README.md ADDED

---
license: llama3
base_model: openchat/openchat-3.6-8b-20240522
tags:
- openchat
- llama3
- C-RLFT
library_name: transformers
pipeline_tag: text-generation
---
# openchat-3.6-8b-20240522-GGUF
This is a quantized version of [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) created using llama.cpp.
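
Since the files in this repository are GGUF quantizations, a quick way to try them from Python is [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The sketch below is a minimal, hedged example; the `.gguf` filename shown is hypothetical and should be replaced with whichever quantization you download from this repo.

```python
from llama_cpp import Llama

# Hypothetical filename: substitute the GGUF file you actually downloaded from this repo.
llm = Llama(model_path="openchat-3.6-8b-20240522.Q4_K_M.gguf", n_ctx=8192)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```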

# Model Description
<p align="center" style="margin-top: 0px;">
<span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span>
<img src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
</p>

<div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center">
<a href="https://huggingface.co/openchat/openchat-3.5-0106" style="text-decoration: none; color: black;">
<span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">Llama 3 Version: OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.6</span>
<span style="font-size: 1.0em; font-family: 'Helvetica'; color: white; background-color: #90e0ef; vertical-align: top; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">20240522</span>
<span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;">
<br> 🏆 The Overall Best Performing Open-source 8B Model 🏆
<br> 🚀 Outperforms Llama-3-8B-Instruct and open-source finetunes/merges 🚀
</span>
</a>
</div>

<div style="display: flex; justify-content: center; align-items: center; width: 110%; margin-left: -5%;">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/benchmarks-openchat-3.6-20240522.svg" style="width: 100%; border-radius: 1em">
</div>

<div style="display: flex; justify-content: center; align-items: center">
<p>* Llama-3-Instruct often fails to follow the few-shot templates.</p>
</div>

<div align="center">
<h2> Usage </h2>
</div>

To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB of VRAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.

Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). See the example request below. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.

If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` to log only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.

| Model                 | Size | Context | Weights                                                                  | Serving                                                                                |
|-----------------------|------|---------|--------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
| OpenChat-3.6-20240522 | 8B   | 8192    | [Huggingface](https://huggingface.co/openchat/openchat-3.6-8b-20240522) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.6-8b-20240522` |

<details>
<summary>Example request (click to expand)</summary>

```bash
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openchat_3.6",
    "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
  }'
```

</details>
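
Because the endpoint follows the OpenAI ChatCompletion specification, you can also query it from Python. This sketch assumes the official `openai` package (v1.x) is installed; the placeholder API key is only needed if the server was started with `--api-keys`.

```python
from openai import OpenAI

# Point the client at the local OpenChat server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:18888/v1", api_key="sk-placeholder")

completion = client.chat.completions.create(
    model="openchat_3.6",
    messages=[{"role": "user", "content": "Write a poem to describe yourself"}],
)
print(completion.choices[0].message.content)
```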

### Conversation templates

💡 **Default Mode**: Best for coding, chat and general tasks

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```

⚠️ **Notice:** Remember to set `<|end_of_turn|>` as the end-of-generation token.
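
If you generate with `transformers` directly (as in the inference example further below), one minimal way to do this is to pass the token's id as `eos_token_id`. This sketch assumes the tokenizer ships `<|end_of_turn|>` as an added token and reuses the `tokenizer`, `model`, and `input_ids` names from that example.

```python
# Sketch: stop generation at <|end_of_turn|> (assumes the tokenizer defines this token).
eot_id = tokenizer.convert_tokens_to_ids("<|end_of_turn|>")
outputs = model.generate(input_ids, max_new_tokens=1024, eos_token_id=eot_id)
```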

The default template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.6-8b-20240522")

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
# Returns the prompt token ids, ending with the assistant generation prefix.
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
```

## Inference using Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "openchat/openchat-3.6-8b-20240522"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Explain how large language models work in detail."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.5,
    max_new_tokens=1024,
)
# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

<div align="center">
<h2> Limitations </h2>
</div>

**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges

**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, a behavior known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.

**Safety**
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It is crucial to apply additional AI safety measures in use cases that require safe and moderated responses.

<div align="center">
<h2> 💌 Contact </h2>
</div>