---
language:
- en
tags:
- text2text-generation
- topic-modeling
- diffusion
- text-diffusion
datasets:
- xwjzds/paraphrase_collections_enhanced
license: apache-2.0
pipeline_tag: text2text-generation
---


# DeTiME



DeTiME is a novel framework for topic modeling that leverages encoder-decoder-based Large Language Models (LLMs) to produce highly clusterable embeddings,
enabling the generation of topics with superior clusterability and enhanced semantic coherence. It also uses diffusion processes to generate content relevant to the identified topics,
allowing highly clustered topics and related content to be produced efficiently at the same time. DeTiME is efficient to train and highly adaptable,
making it suitable for a broad range of applications.

## Model Details

### Model Description

DeTiME is a text-to-text generation model: given a text prompt, it generates text.

- **Developed by:** Amazon
- **Funded by:**  Amazon
- **Model type:** Generative text-to-text model

### Model Sources

For research purposes, we recommend our `DeTiME` GitHub repository (https://github.com/amazon-science/text_generation_diffusion_llm_topic).

- **Repository:** https://github.com/amazon-science/text_generation_diffusion_llm_topic
- **Paper:** https://aclanthology.org/2023.findings-emnlp.606.pdf

### Model Overview
DeTiME can encode a text input into a 4096-dimensional embedding and reconstruct the original sentence from it.
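
Below is a minimal sketch of one way such an embedding could be pulled out. It assumes the remote-code model exposes its underlying T5-style encoder as `model.encoder` (an assumption about the custom implementation, not a documented interface), and the mean-pooling here only illustrates the pattern; the model's own embedding head is what produces the actual 4096-dimensional vector, so check the repository for the real API.

```python
# Hypothetical sketch: pool encoder states into a single vector per input.
# Assumes the custom model exposes a T5-style `model.encoder`; check the
# repository for the actual embedding interface.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google/flan-t5-large')
model = AutoModel.from_pretrained("xwjzds/detime", trust_remote_code=True)
model.eval()

enc = tokenizer("Repeat: a short example sentence.", return_tensors="pt",
                padding='max_length', max_length=512)
with torch.no_grad():
    hidden = model.encoder(input_ids=enc.input_ids,
                           attention_mask=enc.attention_mask).last_hidden_state
# Mean-pool over non-padding tokens so padding does not dilute the embedding.
mask = enc.attention_mask.unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)
```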



## Code Example





```python
# Load model directly
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google/flan-t5-large')
model = AutoModel.from_pretrained("xwjzds/detime", trust_remote_code=True)
model.eval()

# Make sure to prefix the prompt with "Repeat:".
text = """
Repeat: U.S. prosecutors have arrested more than 130 individuals and have seized more than $17 million in a continuing crackdown on Internet fraud and abuse."""

# Tokenize once and reuse the encoding for both the input ids and the attention mask.
encoding = tokenizer(text, return_tensors="pt", padding='max_length', max_length=512)
inputs = encoding.input_ids.cuda()
am = encoding.attention_mask.cuda()
outputs = model.cuda().generate(inputs, am, max_length=512)

# Decode the generated ids back into text; note the raw generation may be low quality.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
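
The snippet above assumes a CUDA-capable GPU; on a CPU-only machine, drop the `.cuda()` calls and run the model on the CPU instead.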

## Uses

### Direct Use

The model is intended for research purposes for now. Possible research areas and tasks include:

- Benchmarking text-to-text generation quality.
- Generating embeddings that a diffusion model can use to generate high-quality text.
- Generating embeddings for topic modeling.
- Identifying similar texts or relevant topics (see the sketch after this list).
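
As an illustration of the similarity use case, here is a minimal sketch that compares two texts by the cosine similarity of their pooled embeddings. `embed` is a hypothetical helper built on the same pooling pattern as the Model Overview sketch, not part of the released API:

```python
# Hypothetical sketch: compare texts by cosine similarity of embeddings.
# `embed` is an illustrative helper, not part of the released API.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google/flan-t5-large')
model = AutoModel.from_pretrained("xwjzds/detime", trust_remote_code=True)
model.eval()

def embed(text: str) -> torch.Tensor:
    # Assumes a T5-style `model.encoder`; the model's own embedding head
    # may differ, so check the repository for the real interface.
    enc = tokenizer("Repeat: " + text, return_tensors="pt",
                    padding='max_length', max_length=512)
    with torch.no_grad():
        hidden = model.encoder(input_ids=enc.input_ids,
                               attention_mask=enc.attention_mask).last_hidden_state
    mask = enc.attention_mask.unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("Prosecutors arrested suspects in an online fraud crackdown.")
b = embed("Police detained people accused of internet scams.")
print(F.cosine_similarity(a, b).item())  # closer to 1.0 means more similar
```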

Excluded uses are described below.



### Recommendations

The model is intended for research purposes only.

## How to Get Started with the Model

Check out https://github.com/amazon-science/text_generation_diffusion_llm_topic

## Citation

**BibTeX:**

```bibtex
@inproceedings{xu-etal-2023-detime,
    title = "{D}e{T}i{ME}: Diffusion-Enhanced Topic Modeling using Encoder-decoder based {LLM}",
    author = "Xu, Weijie  and
      Hu, Wenxiang  and
      Wu, Fanyou  and
      Sengamedu, Srinivasan",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-emnlp.606",
    doi = "10.18653/v1/2023.findings-emnlp.606",
    pages = "9040--9057",
    abstract = "In the burgeoning field of natural language processing, Neural Topic Models (NTMs) and Large Language Models (LLMs) have emerged as areas of significant research interest. Despite this, NTMs primarily utilize contextual embeddings from LLMs, which are not optimal for clustering or capable for topic generation. Our study addresses this gap by introducing a novel framework named Diffusion-Enhanced Topic Modeling using Encoder-Decoder-based LLMs (DeTiME). DeTiME leverages Encoder-Decoder-based LLMs to produce highly clusterable embeddings that could generate topics that exhibit both superior clusterability and enhanced semantic coherence compared to existing methods. Additionally, by exploiting the power of diffusion, our framework also provides the capability to generate content relevant to the identified topics. This dual functionality allows users to efficiently produce highly clustered topics and related content simultaneously. DeTiME{'}s potential extends to generating clustered embeddings as well. Notably, our proposed framework proves to be efficient to train and exhibits high adaptability, demonstrating its potential for a wide array of applications.",
}
```