---

language: zh
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python(英國發音:/ˈpaɪθən/ 美國發音:/ˈpaɪθɑːn/),是一种广泛使用的解释型、高级和通用的编程语言。Python支持多种编程范型,包括函数式、指令式、反射式、结构化和面向对象编程。它拥有动态类型系统和垃圾回收功能,能够自动管理内存使用,并且其本身拥有一个巨大而广泛的标准库。它的语言结构以及面向对象的方法旨在帮助程序员为小型的和大型的项目编写清晰的、合乎逻辑的代码。"

license: apache-2.0
---


# doc2query/msmarco-chinese-mt5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:
- **Document expansion**: Generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index such as Elasticsearch, OpenSearch, or Lucene (see the sketch after this list). The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important terms a higher weight even if they appear only rarely in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini.
- **Domain-specific training data generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) and the [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
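
Below is a minimal sketch of the document-expansion workflow. It only prepares the expanded text; the actual BM25 indexing with Elasticsearch, OpenSearch, or Lucene is left out, and the `expand_passage` helper, the number of queries, and the example passages are illustrative assumptions rather than part of the original card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-chinese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)


def expand_passage(passage, num_queries=20):
    """Generate queries for a passage and append them to its text."""
    input_ids = tokenizer.encode(passage, max_length=320, truncation=True, return_tensors='pt')
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            num_return_sequences=num_queries
        )
    queries = [tokenizer.decode(out, skip_special_tokens=True) for out in outputs]
    # The expanded text (passage + generated queries) is what gets indexed in
    # Elasticsearch / OpenSearch / Lucene instead of the passage alone.
    return passage + " " + " ".join(queries)


passages = [
    "Python是一种广泛使用的解释型、高级和通用的编程语言。",
    "北京是中华人民共和国的首都。",
]
docs_for_indexing = [{"text": p, "expanded_text": expand_passage(p)} for p in passages]
```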

## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-chinese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Python(英國發音:/ˈpaɪθən/ 美國發音:/ˈpaɪθɑːn/),是一种广泛使用的解释型、高级和通用的编程语言。Python支持多种编程范型,包括函数式、指令式、反射式、结构化和面向对象编程。它拥有动态类型系统和垃圾回收功能,能够自动管理内存使用,并且其本身拥有一个巨大而广泛的标准库。它的语言结构以及面向对象的方法旨在帮助程序员为小型的和大型的项目编写清晰的、合乎逻辑的代码。"


def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Top-p / top-k random sampling: generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Beam search: generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```

**Note:** `model.generate()` is non-deterministic when sampling is used (top_k / top_p). It produces different queries each time you run it.
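
If reproducible sampling outputs are needed, one possible approach (a minimal sketch, not part of the original usage example) is to fix the random seed before calling `model.generate()`:

```python
from transformers import set_seed

set_seed(42)  # seeds the Python, NumPy, and torch random generators
# Re-running the do_sample=True generation above now yields the same queries each time
```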

## Training
This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.

The input text was truncated to 320 word pieces. The output text was generated with up to 64 word pieces.

This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
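
Since training used inputs of at most 320 word pieces, it can make sense to truncate long passages to the same length at inference time. The snippet below is a minimal sketch under that assumption; the example passage and generation parameters are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-chinese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Deliberately long input; truncation keeps only the first 320 word pieces,
# matching the maximum input length used during training.
long_text = "Python是一种广泛使用的解释型、高级和通用的编程语言。" * 50
inputs = tokenizer(long_text, max_length=320, truncation=True, return_tensors='pt')

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=64,          # training generated outputs of up to 64 word pieces
        num_beams=5,
        num_return_sequences=3,
        early_stopping=True
    )

for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```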