Junjie-Ye committed on
Commit 11eb5ea · verified · 1 Parent(s): 1c3fedd

Update README.md

Files changed (1):
  1. README.md +29 -9
README.md CHANGED
@@ -7,7 +7,7 @@ base_model:
 ---
 # **TL-CodeLLaMA-2**
 
-TL-CodeLLaMA-2 is a model designed for tool use, built upon CodeLLaMA-7b. It is trained on 1,217 data samples using the *TL-Training* framework and demonstrates effective performance across a variety of tool use tasks. More information can be found in the paper "[TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use](https://www.arxiv.org/abs/2412.15495)".
+[EMNLP 2025] TL-CodeLLaMA-2 is a model designed for tool use, built upon CodeLLaMA-7b. It is trained on 1,217 data samples using the *TL-Training* framework and demonstrates effective performance across a variety of tool use tasks. More information can be found in the paper "[TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use](https://www.arxiv.org/abs/2412.15495)".
 
 # Model Use
 
@@ -110,13 +110,33 @@ print(response)
 If you find this model useful in your research, please cite:
 
 ```bibtex
-@misc{TL-Training,
-      title={TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use},
-      author={Junjie Ye and Yilong Wu and Sixian Li and Yuming Yang and Tao Gui and Qi Zhang and Xuanjing Huang and Peng Wang and Zhongchao Shi and Jianping Fan and Zhengyin Du},
-      year={2024},
-      eprint={2412.15495},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2412.15495},
+@inproceedings{TL-Training,
+  author    = {Junjie Ye and
+               Yilong Wu and
+               Sixian Li and
+               Yuming Yang and
+               Zhiheng Xi and
+               Tao Gui and
+               Qi Zhang and
+               Xuanjing Huang and
+               Peng Wang and
+               Zhongchao Shi and
+               Jianping Fan and
+               Zhengyin Du},
+  editor    = {Christos Christodoulopoulos and
+               Tanmoy Chakraborty and
+               Carolyn Rose and
+               Violet Peng},
+  title     = {TL-Training: {A} Task-Feature-Based Framework for Training Large Language
+               Models in Tool Use},
+  booktitle = {Findings of the Association for Computational Linguistics: {EMNLP}
+               2025, Suzhou, China, November 4-9, 2025},
+  pages     = {239--258},
+  publisher = {Association for Computational Linguistics},
+  year      = {2025},
+  url       = {https://aclanthology.org/2025.findings-emnlp.15/},
+  timestamp = {Fri, 20 Feb 2026 08:07:46 +0100},
+  biburl    = {https://dblp.org/rec/conf/emnlp/YeWLYXGZHWSFD25.bib},
+  bibsource = {dblp computer science bibliography, https://dblp.org}
 }
 ```