|
--- |
|
inference: false |
|
license: other |
|
--- |
|
|
|
<!-- header start --> |
|
<div style="width: 100%;"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<!-- header end --> |
|
|
|
# Minlik's Chinese Alpaca 33B Merged fp16 |
|
|
|
These are fp16 pytorch format model files for [Minlik's Chinese Alpaca 33B Merged](https://huggingface.co/minlik/chinese-alpaca-33b-merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).
|
|
|
[Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged onto the base model, and 8K context can then be achieved during inference by using `trust_remote_code=True`.
|
|
|
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. |
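If you would rather change the file itself than override it in code, here is a minimal sketch (the path is a placeholder for wherever you downloaded the model files):

```python
import json

# Rewrite config.json to use a 4096-token context; adjust the path as needed.
config_path = "config.json"  # placeholder path
with open(config_path) as f:
    config = json.load(f)
config["max_position_embeddings"] = 4096
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```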
|
|
|
## Repositories available |
|
|
|
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ) |
|
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GGML) |
|
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16) |
|
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/minlik/chinese-alpaca-33b-merged) |
|
|
|
## How to use this model from Python code |
|
|
|
First make sure you have Einops installed: |
|
|
|
``` |
|
pip3 install einops
|
``` |
|
|
|
Then run the following code. `config.json` has been set to a default sequence length of 8192, but you can also configure this in your Python code.
|
|
|
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. For example, with 8192, `scale` is set to `4`.
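As a rough illustration of that arithmetic (not the actual modelling code shipped with this repo), the scale is simply the ratio of the configured context length to LLaMA's original 2048-token context:

```python
# Illustration only - the remote modelling code performs this internally.
original_context = 2048          # LLaMA's pretraining context length
max_position_embeddings = 8192   # value set in config.json
scale = max_position_embeddings / original_context
print(scale)  # 4.0 -> RoPE positions are compressed by a factor of 4
```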
|
|
|
```python |
|
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline |
|
|
|
|
model_name_or_path = "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16" |
|
|
|
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) |
|
|
|
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) |
|
# Change this to the sequence length you want |
|
config.max_position_embeddings = 8192 |
|
|
|
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, |
|
config=config, |
|
trust_remote_code=True, |
|
device_map='auto') |
|
|
|
# Note: check that this prompt template is correct for this model!
|
prompt = "Tell me about AI" |
|
prompt_template = f'''USER: {prompt}
ASSISTANT:'''
|
|
|
print("\n\n*** Generate:") |
|
|
|
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() |
|
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) |
|
print(tokenizer.decode(output[0])) |
|
|
|
# Inference can also be done using transformers' pipeline |
|
|
|
print("*** Pipeline:") |
|
pipe = pipeline( |
|
"text-generation", |
|
model=model, |
|
tokenizer=tokenizer, |
|
max_new_tokens=512, |
|
temperature=0.7, |
|
top_p=0.95, |
|
repetition_penalty=1.15 |
|
) |
|
|
|
print(pipe(prompt_template)[0]['generated_text']) |
|
``` |
|
|
|
## Using other UIs: monkey patch |
|
|
|
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. |
|
|
|
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it is superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
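For interest, a hypothetical usage sketch is below. The imported function name is an assumption based on the file's purpose, not something I have verified, so check `llama_rope_scaled_monkey_patch.py` for its actual entry point:

```python
# Hypothetical sketch only - the imported name is an assumption, not verified.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope
from transformers import AutoModelForCausalLM, AutoTokenizer

# Patch LLaMA's rotary embeddings before instantiating the model
replace_llama_rope_with_scaled_rope()

model_name_or_path = "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map='auto')
```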
|
|
|
<!-- footer start --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/theblokeai) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. |
|
|
|
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. |
|
|
|
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. |
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. |
|
|
|
**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. |
|
|
|
Thank you to all my generous patrons and donaters! |
|
|
|
<!-- footer end --> |
|
|
|
# Original model card: Kaio Ken's SuperHOT 8K |
|
|
|
### SuperHOT Prototype 2 w/ 8K Context |
|
|
|
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). |
|
Tests have shown that the model does indeed leverage the extended context at 8K. |
|
|
|
You will need to **use the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
|
|
|
#### Looking for Merged & Quantized Models? |
|
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) |
|
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) |
|
|
|
|
|
#### Training Details |
|
I trained the LoRA with the following configuration (a rough code sketch of these settings follows the list):
|
- 1200 samples (~400 samples over 2048 sequence length) |
|
- learning rate of 3e-4 |
|
- 3 epochs |
|
- The exported modules are: |
|
- q_proj |
|
- k_proj |
|
- v_proj |
|
- o_proj |
|
- no bias |
|
- Rank = 4 |
|
- Alpha = 8 |
|
- no dropout |
|
- weight decay of 0.1 |
|
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 |
|
- Trained on 4-bit base model |
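Purely as an illustrative sketch (this is not the actual training script), the listed settings might map onto the PEFT and transformers libraries roughly as follows; dataset preparation, 4-bit base-model loading and the trainer itself are omitted, and all names are assumptions:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above - illustration only
lora_config = LoraConfig(
    r=4,                    # Rank = 4
    lora_alpha=8,           # Alpha = 8
    lora_dropout=0.0,       # no dropout
    bias="none",            # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="superhot-30b-8k-lora",  # placeholder
    learning_rate=3e-4,
    num_train_epochs=3,
    weight_decay=0.1,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-5,
)
```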
|
|
|
# Original model card: Minlik's Chinese Alpaca 33B Merged |
|
|
|
|
|
This Chinese Alpaca-33B model was obtained by adding a Chinese vocabulary, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets.
|
|
|
The base and LoRA models used for the model conversion are as follows:
|
- base-model: elinas/llama-30b-hf-transformers-4.29 |
|
- lora-model: ziqingyang/chinese-alpaca-lora-33b |
|
|
|
For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0
|
|
|
|
|
### Usage reference
|
1. Install the required packages
|
```bash |
|
pip install sentencepiece |
|
pip install "transformers>=4.28.0"
|
``` |
|
|
|
2. Generate text
|
```python |
|
import torch |
|
import transformers |
|
from transformers import LlamaTokenizer, LlamaForCausalLM |
|
|
|
def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""
|
|
|
|
|
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged') |
|
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda') |
|
model.eval() |
|
|
|
text = '第一个登上月球的人是谁?'  # "Who was the first person to set foot on the Moon?"
|
prompt = generate_prompt(text) |
|
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda') |
|
|
|
|
|
with torch.no_grad(): |
|
output_ids = model.generate( |
|
input_ids=input_ids, |
|
max_new_tokens=128, |
|
temperature=1, |
|
top_k=40, |
|
top_p=0.9, |
|
repetition_penalty=1.15 |
|
).cuda() |
|
output = tokenizer.decode(output_ids[0], skip_special_tokens=True) |
|
print(output.replace(prompt, '').strip()) |
|
``` |
|
|