nielsr HF Staff committed on
Commit ddd706b · verified · 1 Parent(s): 6992822

Improve model card: Add pipeline tag, library, project page, and sample usage


This PR enhances the model card by:

- Adding `pipeline_tag: text-generation` for better discoverability on the Hugging Face Hub (e.g., at https://huggingface.co/models?pipeline_tag=text-generation).
- Adding `library_name: transformers` to correctly identify its compatibility with the Transformers library and enable the automated usage widget.
- Including a link to the project page: [Hugging Face Collection](https://huggingface.co/collections/Gen-Verse/trado-series-68beb6cd6a26c27cde9fe3af).
- Incorporating a sample usage code snippet from the GitHub README to guide users on how to interact with the model.
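
The two new YAML keys land in the README's front matter, which the Hub parses to drive discoverability and the inference widget. As a minimal illustrative sketch (not part of this PR; the parsing here is a simplified stand-in for the Hub's own metadata handling), this is how the updated front matter can be checked:

```python
import re

# Hypothetical check: the README front matter after this PR should
# contain the two new metadata keys alongside the existing license.
readme = """---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

# Introduction to TraDo
"""

# Extract the YAML block between the leading '---' fences.
match = re.match(r"^---\n(.*?)\n---", readme, re.DOTALL)
metadata = dict(line.split(": ", 1) for line in match.group(1).splitlines())

print(metadata["pipeline_tag"])   # text-generation
print(metadata["library_name"])   # transformers
```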

Files changed (1)

1. README.md (+42 −6)
README.md CHANGED

````diff
@@ -1,10 +1,12 @@
 ---
 license: mit
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # Introduction to TraDo
 
-[Paper](https://arxiv.org/abs/2509.06949) | [Code](https://github.com/Gen-Verse/dLLM-RL)
+[Paper](https://arxiv.org/abs/2509.06949) | [Code](https://github.com/Gen-Verse/dLLM-RL) | [Project Page](https://huggingface.co/collections/Gen-Verse/trado-series-68beb6cd6a26c27cde9fe3af)
 
 We introduce **TraDo**, SOTA diffusion language model, trained with **TraceRL**.
 
@@ -22,8 +24,44 @@ We introduce **TraDo**, SOTA diffusion language model, trained with **TraceRL**.
 <img src="https://github.com/yinjjiew/Data/raw/main/dllm-rl/maintable.png" width="100%"/>
 </p>
 
-
-
+## Usage
+
+You can download and try our model:
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from generate import block_diffusion_generate
+
+model_name = "Gen-Verse/TraDo-8B-Instruct"
+
+model = AutoModelForCausalLM.from_pretrained(
+    model_name, trust_remote_code=True, torch_dtype="float16", device_map="cuda"
+)
+tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+
+prompt = "What's the solution of x^2 - 2x + 1 = 0\
+Please reason step by step, and put your final answer within \\boxed{}.\
+"
+messages = [{"role": "user", "content": prompt}]
+text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+tokens = tokenizer.batch_encode_plus([text], return_tensors='pt', padding=True, truncation=True, max_length=200)
+tokens = {k: v.to(model.device) for k, v in tokens.items()}
+
+output_ids = block_diffusion_generate(
+    model,
+    prompt=tokens,
+    mask_id=151669,
+    gen_length=200,
+    block_length=4, denoising_steps=4,
+    temperature=1.0, top_k=0, top_p=1.0,
+    remasking_strategy="low_confidence_dynamic",
+    confidence_threshold=0.9
+)
+
+output_text = tokenizer.decode(output_ids[0], skip_special_tokens=False)
+cleaned_text = output_text.replace('<|MASK|>', '').replace('<|endoftext|>', '')
+print(cleaned_text)
+```
 
 # Citation
 
@@ -34,6 +72,4 @@ We introduce **TraDo**, SOTA diffusion language model, trained with **TraceRL**.
 journal={arXiv preprint arXiv:2509.06949},
 year={2025}
 }
-```
-
-
+```
````