qianhuiwu committed
Commit 2c481ab
1 Parent(s): f06f4ea

Rename the model.

Files changed (1):
  README.md +2 -2
README.md CHANGED
@@ -4,7 +4,7 @@ license: cc-by-nc-sa-4.0
 
 # LLMLingua-2-Bert-base-Multilingual-Cased-MeetingBank
 
-This model was introduced in the paper [**LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression** (Pan et al, 2024)](). It is a [XLM-RoBERTa (large-sized model)](https://huggingface.co/FacebookAI/xlm-roberta-large) finetuned to perform token classification for task agnostic prompt compression. The probability $p_{preserve}$ of each token $x_i$ is used as the metric for compression. This model is trained on an extractive text compression dataset constructed with the methodology proposed in the [LLMLingua-2](), using training examples from [MeetingBank (Hu et al, 2023)](https://meetingbank.github.io/) as the seed data.
+This model was introduced in the paper [**LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression** (Pan et al, 2024)](). It is a [XLM-RoBERTa (large-sized model)](https://huggingface.co/FacebookAI/xlm-roberta-large) finetuned to perform token classification for task agnostic prompt compression. The probability $p_{preserve}$ of each token $x_i$ is used as the metric for compression. This model is trained on [an extractive text compression dataset]() constructed with the methodology proposed in the [LLMLingua-2](), using training examples from [MeetingBank (Hu et al, 2023)](https://meetingbank.github.io/) as the seed data.
 
 For more details, please check the home page of [LLMLingua-2]() and [LLMLingua Series](https://llmlingua.com/).
 
@@ -13,7 +13,7 @@ For more details, please check the home page of [LLMLingua-2]() and [LLMLingua S
 from llmlingua import PromptCompressor
 
 compressor = PromptCompressor(
-    model_name="qianhuiwu/llmlingua-2-xlm-roberta-large-meetingbank",
+    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
     use_llmlingua2=True
 )
 