---
tags:
- autotrain
- text-generation
- mistral
- fine-tune
- text-generation-inference
- chat
- Trained with AutoTrain
- pytorch
widget:
- text: 'I love AutoTrain because '
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Model Trained Using AutoTrain

![LLM_IMAGE](header.jpeg)

mistral-7b-fraud2-finetuned is a Large Language Model (LLM) fine-tuned from the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model on a variety of synthetically generated fraudulent-transcript datasets.

For full details of the base model, please read the [Mistral-7B release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.

For example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# The base instruct model is shown here; replace with this repo's model id
# to use the fine-tuned weights.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Build the prompt by hand: a BOS token, [INST]/[/INST]-wrapped instructions,
# and an EOS token closing the first assistant turn. The adjacent string
# literals inside the parentheses are concatenated into a single string.
text = (
    "<s>[INST] Below is a conversation transcript [/INST]"
    "Your credit card has been stolen, and you need to contact us to resolve the issue. We will help you protect your information and prevent further fraud.</s> "
    "[INST] Analyze the conversation and determine if it's fraudulent or legitimate. [/INST]"
)

# add_special_tokens=False because the BOS/EOS tokens are already in the string.
encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
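Alternatively, the tokenizer's chat template can assemble the same format for you, which avoids hand-writing the special tokens. A minimal sketch, assuming the fine-tune ships with the standard Mistral-Instruct chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# The chat template inserts the BOS token and the [INST]/[/INST] markers
# automatically; roles must alternate between "user" and "assistant".
messages = [
    {"role": "user", "content": "Below is a conversation transcript"},
    {"role": "assistant", "content": "Your credit card has been stolen, and you need to contact us to resolve the issue."},
    {"role": "user", "content": "Analyze the conversation and determine if it's fraudulent or legitimate."},
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```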

## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
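These choices are reflected in the model configuration. A minimal sketch for inspecting them, assuming the fine-tune inherits the base model's config fields:

```python
from transformers import AutoConfig

# Load only the configuration (no weights needed).
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")

# Grouped-Query Attention: fewer key/value heads than query heads.
print(config.num_attention_heads, config.num_key_value_heads)  # 32, 8

# Sliding-Window Attention: each token attends to a fixed-size local window.
print(config.sliding_window)  # 4096
```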

## Version
- v1

## The Team
- Bilic Team of AI Engineers