---
language: en
license: llama3.1
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- beeformer/recsys-movielens-20m
pipeline_tag: sentence-similarity
---
# Llama-movielens-mpnet

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and is designed for use in recommender systems, both for content-based filtering and as side information for cold-start recommendation.

## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

# Item (e.g. movie) descriptions to embed.
sentences = ["This is an example product description", "Each product description is converted"]

model = SentenceTransformer('beeformer/Llama-movielens-mpnet')

# Each description is encoded into a 768-dimensional vector.
embeddings = model.encode(sentences)
print(embeddings)
```
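
For content-based recommendation, item-item similarities can be computed directly from the embeddings. Below is a minimal sketch; the item descriptions are hypothetical placeholders, not entries from the dataset:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('beeformer/Llama-movielens-mpnet')

# Hypothetical item descriptions; in practice these come from your catalog.
item_descriptions = [
    "A space-opera adventure following a band of smugglers.",
    "A romantic comedy set in a small coastal town.",
    "A gritty crime drama about an undercover detective.",
]

embeddings = model.encode(item_descriptions, convert_to_tensor=True)

# Pairwise cosine similarities; an item's nearest neighbours can serve as
# content-based recommendations, e.g. for cold-start items.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```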

## Training procedure

### Pre-training 

We use the pretrained [`sentence-transformers/all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We use the initial model without modifying its architecture or pre-trained parameters.
However, we reduce the processed sequence length to 384 tokens to shorten training time.
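
The same sequence-length setting can be applied explicitly at inference time; a minimal sketch (`max_seq_length` is the standard sentence-transformers attribute, and 384 matches the fine-tuning setup described above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('beeformer/Llama-movielens-mpnet')

# Truncate inputs to 384 tokens, matching the fine-tuning configuration.
model.max_seq_length = 384
print(model.max_seq_length)
```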

### Dataset
  
We fine-tuned our model on the MovieLens-20M dataset, with item descriptions generated by the [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) model. For details, please see the dataset page [`beeformer/recsys-movielens-20m`](https://huggingface.co/datasets/beeformer/recsys-movielens-20m).
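
The dataset is hosted on the Hugging Face Hub; assuming it follows the standard `datasets` layout, it can be loaded as follows (the available configurations and splits are documented on the dataset page):

```python
from datasets import load_dataset

# Load the default configuration; see the dataset card for splits and columns.
dataset = load_dataset("beeformer/recsys-movielens-20m")
print(dataset)
```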

## Evaluation Results

Table with results TBA.

## Intended uses

This model was trained to demonstrate the capabilities of the beeFormer training framework (link and details TBA) and is intended for research purposes only.

## Citation

A preprint is available [here](https://arxiv.org/pdf/2409.10309).

TBA