---
language:
- zh
license: apache-2.0
tasks:
- text-generation
---

<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
  HuatuoGPT2-13B
</h1>
</div>

<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-II" target="_blank">GitHub</a> | <a href="https://arxiv.org/pdf/2311.09774.pdf" target="_blank">Our Paper</a>
</div>

# <span id="Start">Quick Start</span>

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Load the tokenizer and model; trust_remote_code is required because the
# model ships a custom chat method in its repository code.
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT2-13B", use_fast=True, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT2-13B", device_map="auto", torch_dtype='auto', trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FreedomIntelligence/HuatuoGPT2-13B")

# Build the conversation and generate a reply.
messages = []
messages.append({"role": "user", "content": "肚子疼怎么办?"})  # "What should I do about a stomachache?"
response = model.HuatuoChat(tokenizer, messages)
print(response)
```
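
The snippet above handles a single turn. For a multi-turn conversation, the assistant's reply must be appended back into `messages` before the next user turn, so the model sees the full history. A minimal helper sketch, assuming `model.HuatuoChat` accepts the accumulated message list as in the quick start (`chat_turn` is a hypothetical convenience wrapper, not part of the model's API):

```python
def chat_turn(model, tokenizer, messages, user_input):
    """Append a user turn, generate a reply via the model's custom
    HuatuoChat method, record the reply in the history, and return it."""
    messages.append({"role": "user", "content": user_input})
    reply = model.HuatuoChat(tokenizer, messages)
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Calling `chat_turn` repeatedly with the same `messages` list keeps the conversation state on the caller's side, which matches how the quick-start snippet builds its message list.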