binbin83 committed on
Commit 512ca48
1 Parent(s): f5554fe

Update README.md

Files changed (1)
  1. README.md +58 -3
README.md CHANGED
@@ -7,7 +7,9 @@ tags:
 pipeline_tag: text-classification
 ---
 
- # binbin83/setfit-MiniLM-dialog-act-fr
+ # binbin83/setfit-MiniLM-dialog-act-13nov
+
+ The model is a multi-class, multi-label text classifier that distinguishes the different dialog acts in semi-structured interviews. The data used for fine-tuning were in French.
 
 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
 
@@ -28,13 +30,66 @@ You can then run inference as follows:
 from setfit import SetFitModel
 
 # Download from Hub and run inference
- model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-act-fr")
+ model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-act-13nov")
+ label_dict = {'Introductory': 0, 'FollowUp': 1, 'Probing': 2, 'Specifying': 3, 'Structuring': 4, 'DirectQuestion': 5, 'Interpreting': 6, 'Ending': 7}
 # Run inference
- preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
+ preds = model(["Vous pouvez continuer", "Pouvez-vous me dire précisément quel a été l'ordre chronologique des événements ?"])
+ # Map each multi-hot prediction back to its label names
+ labels = [[label for label, p in zip(label_dict, pred) if p] for pred in preds]
 ```
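 
+ For the two example utterances, `labels` holds one list of dialog-act names per utterance, e.g. `[['FollowUp'], ['Specifying']]` (an illustrative output, not a recorded run).
+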
+ ## Labels and training data
+ Brinkmann, S., & Kvale, S. (1) define a classification of dialog acts in interviews:
+ * Introductory: Can you tell me about ... (something specific)?
+ * Follow-up (verbal cues): repeat keywords back to participants, ask for reflection on or unpacking of the point just made
+ * Probing: Can you say a little more about X? Why do you think X ...? (for example, Why do you think X is that way? Why do you think X is important?)
+ * Specifying: Can you give me an example of X?
+ * Indirect: How do you think other people view X?
+ * Structuring: Thank you for that. I'd like to move to another topic...
+ * Direct (later stages): When you mention X, are you thinking like Y or Z?
+ * Interpreting: So, what I have gathered is that...
+ * Ending: I have asked all the questions I had, but I wanted to check whether there is something else about your experience/understanding we haven't covered? Do you have any questions for me?
+
+
+ On our corpus of interviews, we manually labeled 500 turns of speech using this classification, keeping 70% for training and 30% for evaluation.
+
+ The entire corpus is composed of the following examples:
+
+ ('DirectQuestion', 23), ('Probing', 15), ('Interpreting', 15), ('Specifying', 14), ('Structuring', 7), ('FollowUp', 6), ('Introductory', 5), ('Ending', 5)
+
+ (1) Brinkmann, S., & Kvale, S. (2015). InterViews: Learning the Craft of Qualitative Research Interviewing (3rd ed.). SAGE Publications.
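+
+ Since one turn of speech can carry several dialog acts at once, the training targets are multi-hot vectors over the eight labels in `label_dict` above. A minimal sketch of how such targets can be built from label names (the `make_targets` helper and the sample annotations are illustrative, not part of the released code):
+
+ ```python
+ label_dict = {'Introductory': 0, 'FollowUp': 1, 'Probing': 2, 'Specifying': 3, 'Structuring': 4, 'DirectQuestion': 5, 'Interpreting': 6, 'Ending': 7}
+
+ def make_targets(annotations):
+     """Turn per-utterance lists of label names into multi-hot vectors."""
+     targets = []
+     for names in annotations:
+         vec = [0] * len(label_dict)
+         for name in names:
+             vec[label_dict[name]] = 1
+         targets.append(vec)
+     return targets
+
+ # A turn annotated as both Probing and Specifying, then one as Ending
+ make_targets([["Probing", "Specifying"], ["Ending"]])
+ # [[0, 0, 1, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1]]
+ ```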
+
+ ## Training and Performance
+
+ We fine-tuned "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
+ using SetFit with CosineSimilarityLoss and these parameters: epochs = 20, batch_size = 32, num_iterations = 50.
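+
+ A minimal sketch of that training setup with the `SetFitTrainer` API (the toy dataset and the one-vs-rest multi-label head are illustrative assumptions, not a record of the exact training script):
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers.losses import CosineSimilarityLoss
+ from setfit import SetFitModel, SetFitTrainer
+
+ # Toy multi-label data: "text" is a turn of speech, "label" a multi-hot
+ # vector over the 8 dialog acts (same order as label_dict above)
+ train_ds = Dataset.from_dict({
+     "text": ["Vous pouvez continuer", "Merci. Passons à un autre sujet."],
+     "label": [[0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0]],
+ })
+
+ model = SetFitModel.from_pretrained(
+     "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
+     multi_target_strategy="one-vs-rest",  # one binary head per dialog act
+ )
+ trainer = SetFitTrainer(
+     model=model,
+     train_dataset=train_ds,
+     loss_class=CosineSimilarityLoss,
+     num_epochs=20,
+     batch_size=32,
+     num_iterations=50,  # contrastive text pairs generated per example
+ )
+ trainer.train()
+ ```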
+
+ The test dataset is composed of the following examples:
+
+ ('Probing', 146), ('Specifying', 135), ('FollowUp', 134), ('DirectQuestion', 125), ('Interpreting', 44), ('Structuring', 27), ('Introductory', 12), ('Ending', 12)
+
+ On this test dataset, we get the following results:
+
+ {'f1': 0.35005547563028, 'f1_micro': 0.3686131386861314, 'f1_sample': 0.3120075046904315, 'accuracy': 0.19887429643527205}
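+
+ For reference, a sketch of how such multi-label scores can be computed with scikit-learn (the toy arrays are illustrative, and we assume the unsuffixed 'f1' is the macro average; `accuracy` on multi-hot vectors is strict exact-match accuracy, consistent with it being much lower than the f1 scores):
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import accuracy_score, f1_score
+
+ # Illustrative multi-hot arrays of shape (n_turns, n_labels); in practice
+ # y_true comes from the gold annotations and y_pred from model(test_texts)
+ y_true = np.array([[0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0]])
+ y_pred = np.array([[0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]])
+
+ metrics = {
+     'f1': f1_score(y_true, y_pred, average='macro', zero_division=0),
+     'f1_micro': f1_score(y_true, y_pred, average='micro', zero_division=0),
+     'f1_sample': f1_score(y_true, y_pred, average='samples', zero_division=0),
+     'accuracy': accuracy_score(y_true, y_pred),  # exact match on all labels
+ }
+ ```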
+
  ## BibTeX entry and citation info
 
+
+ To cite the current study:
+ ```bibtex
+ @inproceedings{quillivic2024interview,
+   author = {Quillivic, Robin and Payet, Charles},
+   title = {Semi-Structured Interview Analysis: A French NLP Toolbox for Social Sciences},
+   keywords = {NLP, JADT},
+   publisher = {JADT},
+   year = {2024},
+   copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ To cite the SetFit paper:
 ```bibtex
 @article{https://doi.org/10.48550/arxiv.2209.11055,
 doi = {10.48550/ARXIV.2209.11055},