---
library_name: transformers
tags:
- argumentation
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# ADBL2-Mistral-7B

ADBL2-Mistral-7B is a fine-tuned version of Mistral-7B-v0.1 trained to perform relation-based argument mining.
Given two arguments *x* and *y*, we use this model in synergy with [LMQL](https://lmql.ai/) to predict whether *y* is attacking or supporting *x*.

## Fine-tuning
We fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the PEFT method [QLoRA](https://arxiv.org/abs/2305.14314) on argument pairs coming from the online debate tool [Kialo](https://www.kialo.com/).
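A QLoRA setup along these lines can be sketched with `transformers` and `peft`. This is a hedged configuration sketch only: the LoRA hyperparameters and target modules below are illustrative assumptions, not the values actually used to train this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the base model, as described in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections (hypothetical hyperparameters)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Only the adapter weights are trained; the quantized base model stays frozen, which keeps memory usage low enough to fine-tune a 7B model on a single GPU.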

## Prompt format
This model has been trained to complete prompts of the following format:
```
<s>[INST]
Argument 1 : /*Argument 1*/
Argument 2 : /*Argument 2*/
[/INST]
Relation : 
```
with the relation **attack** or **support**:
```
Relation : attack/support
</s>
```
### Example
Given two arguments, where argument 2 is attacking argument 1:
 - Argument 1 : using machines is advantageous
 - Argument 2 : the usage of machines is harmful for health of humans

The prompt to retrieve the relation between the second and the first argument should be:
```
<s>[INST]
Argument 1 : using machines is advantageous
Argument 2 : the usage of machines is harmful for health of humans
[/INST]
Relation : 
```
Our model should complete this prompt as follows:
```
<s>[INST]
Argument 1 : using machines is advantageous
Argument 2 : the usage of machines is harmful for health of humans
[/INST]
Relation : attack
</s>
```
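The prompt format above can be wrapped in small helper functions. This is a minimal sketch in plain Python (the model card uses LMQL to constrain decoding; the function and parameter names here are illustrative assumptions):

```python
def build_prompt(argument_1: str, argument_2: str) -> str:
    """Build an inference prompt in the format the model was trained on."""
    return (
        "<s>[INST]\n"
        f"Argument 1 : {argument_1}\n"
        f"Argument 2 : {argument_2}\n"
        "[/INST]\n"
        "Relation : "
    )


def parse_relation(completion: str) -> str:
    """Extract the predicted relation ('attack' or 'support') from a completion."""
    tail = completion.split("Relation :")[-1]
    tail = tail.replace("</s>", "").strip()
    return tail.split()[0] if tail else ""
```

`build_prompt` produces the exact prompt shown above, and `parse_relation` recovers the single relation token from the model's completion.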