update docs
docs/transformers_deploy_guide.md CHANGED
@@ -17,7 +17,7 @@ The deployment process is illustrated below using MiniMax-M2.1 as an example.
 
 - Python: 3.9 - 3.12
 
-- Transformers:
+- Transformers: 5.0.0.dev0
 
 - GPU:
 
@@ -32,7 +32,7 @@ It is recommended to use a virtual environment (such as **venv**, **conda**, or
 We recommend installing Transformers in a fresh Python environment:
 
 ```bash
-uv pip install transformers
+uv pip install git+https://github.com/huggingface/transformers torch accelerate
 ```
 
 Run the following Python script to run the model. Transformers will automatically download and cache the MiniMax-M2.1 model from Hugging Face.
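Note: `5.0.0.dev0` is a development build that only exists on the `main` branch, hence the switch to a git install. For the fresh environment the guide recommends, a minimal sketch (assuming `uv` is already installed):

```bash
# Create and activate a clean virtual environment, then install from source
uv venv .venv
source .venv/bin/activate
uv pip install git+https://github.com/huggingface/transformers torch accelerate
```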
@@ -46,7 +46,6 @@ MODEL_PATH = "MiniMaxAI/MiniMax-M2.1"
 model = AutoModelForCausalLM.from_pretrained(
     MODEL_PATH,
     device_map="auto",
-    trust_remote_code=True,
 )
 tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
 
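Dropping `trust_remote_code=True` is consistent with the version pin above: once the architecture ships natively in the pinned Transformers build, the flag is no longer needed to load the model.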
@@ -58,7 +57,7 @@ messages = [
 
 model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")
 
-generated_ids = model.generate(model_inputs, max_new_tokens=100, generation_config=model.generation_config)
+generated_ids = model.generate(**model_inputs, max_new_tokens=100, generation_config=model.generation_config)
 
 response = tokenizer.batch_decode(generated_ids)[0]
 
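For reference, the snippets above assemble into the following runnable sketch. The prompt is a hypothetical placeholder, and the `**model_inputs` unpacking assumes the newer `apply_chat_template` behavior of returning a dict of tensors rather than a bare tensor, which is what this hunk fixes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "MiniMaxAI/MiniMax-M2.1"

# Load the model across available GPUs; no trust_remote_code needed on 5.0.0.dev0
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)

messages = [
    {"role": "user", "content": "Hello!"},  # hypothetical prompt
]

# Tokenize the chat and move the resulting tensors to the GPU
model_inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to("cuda")

# Unpack the dict so generate() receives input_ids (and attention_mask, if present)
generated_ids = model.generate(
    **model_inputs, max_new_tokens=100, generation_config=model.generation_config
)
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```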
@@ -77,7 +76,7 @@ export HF_ENDPOINT=https://hf-mirror.com
 
 ### MiniMax-M2 model is not currently supported
 
-Please check that
+Please check that you have installed transformers with a version that supports this model.
 
 ## Getting Support
 
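A quick way for a reader to confirm they picked up a build that includes the model (illustrative):

```python
# Print the installed Transformers version; a git install should report a dev build
import transformers
print(transformers.__version__)  # e.g. 5.0.0.dev0
```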
docs/transformers_deploy_guide_cn.md CHANGED
@@ -17,7 +17,7 @@
 
 - Python: 3.9 - 3.12
 
-- Transformers:
+- Transformers: 5.0.0.dev0
 
 - GPU:
 
@@ -32,7 +32,7 @@
 We recommend installing Transformers in a fresh Python environment:
 
 ```bash
-uv pip install transformers
+uv pip install git+https://github.com/huggingface/transformers torch accelerate
 ```
 
 Run the following Python script to run the model; Transformers will automatically download and cache the MiniMax-M2.1 model from Hugging Face.
@@ -46,7 +46,6 @@ MODEL_PATH = "MiniMaxAI/MiniMax-M2.1"
 model = AutoModelForCausalLM.from_pretrained(
     MODEL_PATH,
     device_map="auto",
-    trust_remote_code=True,
 )
 tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
 
@@ -58,7 +57,7 @@ messages = [
 
 model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")
 
-generated_ids = model.generate(model_inputs, max_new_tokens=100, generation_config=model.generation_config)
+generated_ids = model.generate(**model_inputs, max_new_tokens=100, generation_config=model.generation_config)
 
 response = tokenizer.batch_decode(generated_ids)[0]
 