louisbrulenaudet committed
Commit a943bb2
1 Parent(s): ad8f765

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +125 -36
README.md CHANGED
@@ -1,13 +1,18 @@
  ---
+ license: apache-2.0
  language:
  - fr
- license: apache-2.0
  multilinguality:
  - monolingual
- size_categories:
- - 1K<n<10K
+ tags:
+ - finetuning
+ - legal
+ - french law
+ - droit français
+ - Code du tourisme
  source_datasets:
  - original
+ pretty_name: Code du tourisme
  task_categories:
  - text-generation
  - table-question-answering
@@ -15,40 +20,10 @@ task_categories:
  - text-retrieval
  - question-answering
  - text-classification
- pretty_name: Code du tourisme
- tags:
- - finetuning
- - legal
- - french law
- - droit français
- - Code du tourisme
- dataset_info:
-   features:
-   - name: instruction
-     dtype: string
-   - name: input
-     dtype: string
-   - name: output
-     dtype: string
-   - name: start
-     dtype: string
-   - name: expiration
-     dtype: string
-   - name: num
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 540503
-     num_examples: 647
-   download_size: 202948
-   dataset_size: 540503
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ size_categories:
+ - 1K<n<10K
  ---
- # Code du tourisme, non-instruct (2024-03-27)
+ # Code du tourisme, non-instruct (2024-04-01)

  This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.

@@ -64,6 +39,120 @@ Instruction-based fine-tuning significantly enhances the performance of LLMs in
  - Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
  - Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.

+ ## Concurrent reading of the LegalKit
+
+ To use all the legal data published on LegalKit, you can use this code snippet:
+ ```python
+ # -*- coding: utf-8 -*-
+ import concurrent.futures
+ import logging
+ import os
+
+ import datasets
+ from tqdm.notebook import tqdm
+
+ def dataset_loader(
+     name: str,
+     streaming: bool = True
+ ) -> datasets.Dataset:
+     """
+     Helper function to load a single dataset in parallel.
+
+     Parameters
+     ----------
+     name : str
+         Name of the dataset to be loaded.
+
+     streaming : bool, optional
+         Determines if datasets are streamed. Default is True.
+
+     Returns
+     -------
+     dataset : datasets.Dataset
+         Loaded dataset object.
+
+     Raises
+     ------
+     Exception
+         If an error occurs during dataset loading.
+     """
+     try:
+         return datasets.load_dataset(
+             name,
+             split="train",
+             streaming=streaming
+         )
+
+     except Exception as exc:
+         logging.error(f"Error loading dataset {name}: {exc}")
+
+         return None
+
+
+ def load_datasets(
+     req: list,
+     streaming: bool = True
+ ) -> list:
+     """
+     Downloads datasets specified in a list and creates a list of loaded datasets.
+
+     Parameters
+     ----------
+     req : list
+         A list containing the names of datasets to be downloaded.
+
+     streaming : bool, optional
+         Determines if datasets are streamed. Default is True.
+
+     Returns
+     -------
+     datasets_list : list
+         A list containing loaded datasets as per the requested names provided in 'req'.
+
+     Raises
+     ------
+     Exception
+         If an error occurs during dataset loading or processing.
+
+     Examples
+     --------
+     >>> datasets = load_datasets(["dataset1", "dataset2"], streaming=False)
+     """
+     datasets_list = []
+
+     with concurrent.futures.ThreadPoolExecutor() as executor:
+         # Pass the streaming flag through so it is not silently ignored by the workers.
+         future_to_dataset = {executor.submit(dataset_loader, name, streaming): name for name in req}
+
+         for future in tqdm(concurrent.futures.as_completed(future_to_dataset), total=len(req)):
+             name = future_to_dataset[future]
+
+             try:
+                 dataset = future.result()
+
+                 if dataset:
+                     datasets_list.append(dataset)
+
+             except Exception as exc:
+                 logging.error(f"Error processing dataset {name}: {exc}")
+
+     return datasets_list
+
+
+ req = [
+     "louisbrulenaudet/code-artisanat",
+     "louisbrulenaudet/code-action-sociale-familles",
+     # ...
+ ]
+
+ datasets_list = load_datasets(
+     req=req,
+     streaming=True
+ )
+
+ dataset = datasets.concatenate_datasets(
+     datasets_list
+ )
+ ```
+
  ## Dataset generation

  This JSON file is a list of dictionaries; each dictionary contains the following fields:
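As a quick complement to the snippet in the diff above, here is a minimal sketch of how one record of this dataset could be loaded and inspected with the `datasets` library. The repository id `louisbrulenaudet/code-tourisme` is an assumption inferred from the naming pattern of the other LegalKit repositories listed in the snippet, and the field names come from the `dataset_info` block removed by this commit.

```python
# -*- coding: utf-8 -*-
import datasets

# Assumed repository id, following the LegalKit naming pattern; adjust if it differs.
repo_id = "louisbrulenaudet/code-tourisme"

# The removed dataset_info block declared a single default config with a "train" split.
dataset = datasets.load_dataset(repo_id, split="train")

record = dataset[0]

# Fields declared in dataset_info: instruction, input, output, start, expiration, num.
for field in ("instruction", "input", "output", "start", "expiration", "num"):
    print(f"{field}: {str(record[field])[:100]}")
```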
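The `instruction`, `input` and `output` fields map onto the instruction-based fine-tuning setup described in the card. Below is a small sketch of how a record could be flattened into a single training prompt; the template is purely illustrative and is not prescribed by this commit.

```python
def build_prompt(record: dict) -> str:
    """Assemble one training example from a record.

    The section markers below are an illustrative convention, not part of
    the dataset; adapt them to the format expected by the model being tuned.
    """
    parts = [f"### Instruction:\n{record['instruction']}"]

    if record.get("input"):
        parts.append(f"### Input:\n{record['input']}")

    parts.append(f"### Response:\n{record['output']}")

    return "\n\n".join(parts)


# Continuing from the record loaded in the previous sketch:
# print(build_prompt(record))
```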