nielsr (HF Staff) committed
Commit 439c49c · verified · Parent: b8899e3

Update pipeline tag and add paper link to metadata and content


This PR improves the model card by:

* Updating the `pipeline_tag` to `text-generation`, which accurately reflects the model's capability of generating textual responses for medical reasoning tasks and makes it discoverable under the correct pipeline on the Hugging Face Hub (https://huggingface.co/models?pipeline_tag=text-generation).
* Adding the paper ID `2509.02208` to the metadata, linking the model to its official Hugging Face paper page ([`2509.02208`](https://huggingface.co/papers/2509.02208)) for enhanced visibility and context.
* Adding a prominent link to the paper at the top of the model card's content for better immediate visibility of the research artifact.
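
With both metadata changes applied, the README front matter reads as follows; this is a reconstruction from the diff in this commit, in the key order the PR produces:

```yaml
---
base_model:
- Qwen/Qwen2.5-32B
language:
- en
- zh
library_name: transformers
license: apache-2.0
tags:
- chat
pipeline_tag: text-generation
paper: 2509.02208
---
```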

Files changed (1)

1. README.md (+16 −10)
README.md CHANGED

```diff
@@ -1,16 +1,21 @@
 ---
-license: apache-2.0
-tags:
-- chat
-library_name: transformers
+base_model:
+- Qwen/Qwen2.5-32B
 language:
 - en
 - zh
-base_model:
-- Qwen/Qwen2.5-32B
+library_name: transformers
+license: apache-2.0
+tags:
+- chat
+pipeline_tag: text-generation
+paper: 2509.02208
 ---
 
 # Baichuan-M2-32B
+
+This repository contains the model presented in [Baichuan-M2: Scaling Medical Capability with Large Verifier System](https://huggingface.co/papers/2509.02208).
+
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Model-yellow)](https://huggingface.co/baichuan-inc/Baichuan-M2-32B)
 [![M2 GPTQ-4bit](https://img.shields.io/badge/🤗%20M2%20GPTQ--4bit-Model-orange)](https://huggingface.co/baichuan-inc/Baichuan-M2-32B-GPTQ-Int4)
@@ -105,8 +110,10 @@ try:
 except ValueError:
     index = 0
 
-thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
-content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
+thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("
+")
+content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("
+")
 
 print("thinking content:", thinking_content)
 print("content:", content)
@@ -165,5 +172,4 @@ Thank you to the open-source community. We commit to continuous contribution and
 
 **Empowering Healthcare with AI, Making Health Accessible to All**
 
-</div>
-
+</div>
```
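
The thinking/answer split touched by the second hunk can be sketched with plain Python lists. This is a minimal sketch, not the card's full usage example: the `</think>` token id and the other ids below are hypothetical placeholders, and the two slices stand in for the id ranges that `tokenizer.decode` turns into `thinking_content` and `content`.

```python
# Hypothetical id of the "</think>" marker token; not a real vocabulary entry.
THINK_END = 151668

# Placeholder generated ids: two "thinking" ids, the marker, two answer ids.
output_ids = [11, 22, THINK_END, 33, 44]

try:
    # Locate the last occurrence of the marker by searching the reversed
    # list; the resulting slice point keeps the marker with the thinking part.
    index = len(output_ids) - output_ids[::-1].index(THINK_END)
except ValueError:
    # No marker found: treat the whole output as final content.
    index = 0

thinking_ids = output_ids[:index]  # ids that would decode to thinking_content
content_ids = output_ids[index:]   # ids that would decode to content
print(thinking_ids, content_ids)
```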