simonosgoode committed
Commit: d900076
Parent: 6363cdb

Update README.md

Files changed (1): README.md (+5 -11)
README.md CHANGED
@@ -10,26 +10,20 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Canadian Legal Judgement Generator (BLOOM)
+# Canadian Appellate Judgement Model
 
-This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on Canadian appellate decisions found in the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) dataset.
+This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on Canadian appellate decisions (Ontario Court of Appeal and the British Columbia Court of Appeal) found in the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 2.0135
 
-## Model description
-
-More information needed
-
 ## Intended uses & limitations
 
-More information needed
-
-## Training and evaluation data
-
-More information needed
+This model is intended to facilitate research into large language models and legal reasoning.
 
 ## Training procedure
 
+This model was trained using the methodology set out in this [notebook](https://huggingface.co/docs/transformers/training).
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training: