---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- text-classification
model_format: pickle
model_file: legalis-scikit.pkl
datasets:
- LennardZuendorf/legalis
language:
- de
metrics:
- accuracy
- f1
---
# Model description
This is a tuned random forest classifier, trained on a processed dataset of 2,800 German court cases (see the [legalis dataset](https://huggingface.co/datasets/LennardZuendorf/legalis)). It predicts the winning party of a court case, defendant ("Verklagt*r") or plaintiff ("Kläger*in"), from the facts of the case, which must be provided in German.
## Intended uses & limitations
- This model was created as part of a university project and should be considered highly experimental.
## How to get started with the model
Try out the hosted Inference UI or the [Hugging Face Space](https://huggingface.co/spaces/LennardZuendorf/legalis), or load the pickled model locally:
```python
import pickle

# Path to the downloaded model file (see `model_file` in the metadata above)
model_path = "legalis-scikit.pkl"

with open(model_path, "rb") as file:
    clf = pickle.load(file)
```
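The pickled object is the full pipeline (a CountVectorizer followed by a RandomForestClassifier, see the hyperparameters below), so it can be fed raw German text directly. A minimal usage sketch; the example sentence and variable names are illustrative, and the exact label encoding of the prediction depends on how the dataset was processed:

```python
# Raw German case facts; the pipeline's vectorizer handles tokenization.
facts = ["Die Klägerin verlangt von dem Beklagten Schadensersatz ..."]  # illustrative text

prediction = clf.predict(facts)            # predicted winning party
probabilities = clf.predict_proba(facts)   # class probabilities from the random forest
print(prediction, probabilities)
```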
### Model Hyperparameters
- The classifier was tuned with scikit-learn's cross-validated parameter search; the pipeline uses a CountVectorizer with common German stop words in front of a RandomForestClassifier. A reconstruction sketch follows the table below.
<details>
<summary> Click to expand </summary>

| Hyperparameter                | Value |
|-------------------------------|-------|
| memory | |
| steps | [('count', CountVectorizer(ngram_range=(1, 3),<br /> stop_words=['aber', 'alle', 'allem', 'allen', 'aller', 'alles',<br /> 'als', 'also', 'am', 'an', 'ander', 'andere',<br /> 'anderem', 'anderen', 'anderer', 'anderes',<br /> 'anderm', 'andern', 'anderr', 'anders', 'auch',<br /> 'auf', 'aus', 'bei', 'bin', 'bis', 'bist', 'da',<br /> 'damit', 'dann', ...])), ('clf', RandomForestClassifier(min_samples_split=5, random_state=0))] |
| verbose | False |
| count | CountVectorizer(ngram_range=(1, 3),<br /> stop_words=['aber', 'alle', 'allem', 'allen', 'aller', 'alles',<br /> 'als', 'also', 'am', 'an', 'ander', 'andere',<br /> 'anderem', 'anderen', 'anderer', 'anderes',<br /> 'anderm', 'andern', 'anderr', 'anders', 'auch',<br /> 'auf', 'aus', 'bei', 'bin', 'bis', 'bist', 'da',<br /> 'damit', 'dann', ...]) |
| clf | RandomForestClassifier(min_samples_split=5, random_state=0) |
| count__analyzer | word |
| count__binary | False |
| count__decode_error | strict |
| count__dtype | <class 'numpy.int64'> |
| count__encoding | utf-8 |
| count__input | content |
| count__lowercase | True |
| count__max_df | 1.0 |
| count__max_features | |
| count__min_df | 1 |
| count__ngram_range | (1, 3) |
| count__preprocessor | |
| count__stop_words | ['aber', 'alle', 'allem', 'allen', 'aller', 'alles', 'als', 'also', 'am', 'an', 'ander', 'andere', 'anderem', 'anderen', 'anderer', 'anderes', 'anderm', 'andern', 'anderr', 'anders', 'auch', 'auf', 'aus', 'bei', 'bin', 'bis', 'bist', 'da', 'damit', 'dann', 'der', 'den', 'des', 'dem', 'die', 'das', 'dass', 'daß', 'derselbe', 'derselben', 'denselben', 'desselben', 'demselben', 'dieselbe', 'dieselben', 'dasselbe', 'dazu', 'dein', 'deine', 'deinem', 'deinen', 'deiner', 'deines', 'denn', 'derer', 'dessen', 'dich', 'dir', 'du', 'dies', 'diese', 'diesem', 'diesen', 'dieser', 'dieses', 'doch', 'dort', 'durch', 'ein', 'eine', 'einem', 'einen', 'einer', 'eines', 'einig', 'einige', 'einigem', 'einigen', 'einiger', 'einiges', 'einmal', 'er', 'ihn', 'ihm', 'es', 'etwas', 'euer', 'eure', 'eurem', 'euren', 'eurer', 'eures', 'für', 'gegen', 'gewesen', 'hab', 'habe', 'haben', 'hat', 'hatte', 'hatten', 'hier', 'hin', 'hinter', 'ich', 'mich', 'mir', 'ihr', 'ihre', 'ihrem', 'ihren', 'ihrer', 'ihres', 'euch', 'im', 'in', 'indem', 'ins', 'ist', 'jede', 'jedem', 'jeden', 'jeder', 'jedes', 'jene', 'jenem', 'jenen', 'jener', 'jenes', 'jetzt', 'kann', 'kein', 'keine', 'keinem', 'keinen', 'keiner', 'keines', 'können', 'könnte', 'machen', 'man', 'manche', 'manchem', 'manchen', 'mancher', 'manches', 'mein', 'meine', 'meinem', 'meinen', 'meiner', 'meines', 'mit', 'muss', 'musste', 'nach', 'nicht', 'nichts', 'noch', 'nun', 'nur', 'ob', 'oder', 'ohne', 'sehr', 'sein', 'seine', 'seinem', 'seinen', 'seiner', 'seines', 'selbst', 'sich', 'sie', 'ihnen', 'sind', 'so', 'solche', 'solchem', 'solchen', 'solcher', 'solches', 'soll', 'sollte', 'sondern', 'sonst', 'über', 'um', 'und', 'uns', 'unsere', 'unserem', 'unseren', 'unser', 'unseres', 'unter', 'viel', 'vom', 'von', 'vor', 'während', 'war', 'waren', 'warst', 'was', 'weg', 'weil', 'weiter', 'welche', 'welchem', 'welchen', 'welcher', 'welches', 'wenn', 'werde', 'werden', 'wie', 'wieder', 'will', 'wir', 'wird', 'wirst', 'wo', 'wollen', 'wollte', 'würde', 'würden', 'zu', 'zum', 'zur', 'zwar', 'zwischen'] |
| count__strip_accents | |
| count__token_pattern | (?u)\b\w\w+\b |
| count__tokenizer | |
| count__vocabulary | |
| clf__bootstrap | True |
| clf__ccp_alpha | 0.0 |
| clf__class_weight | |
| clf__criterion | gini |
| clf__max_depth | |
| clf__max_features | sqrt |
| clf__max_leaf_nodes | |
| clf__max_samples | |
| clf__min_impurity_decrease | 0.0 |
| clf__min_samples_leaf | 1 |
| clf__min_samples_split | 5 |
| clf__min_weight_fraction_leaf | 0.0 |
| clf__n_estimators | 100 |
| clf__n_jobs | |
| clf__oob_score | False |
| clf__random_state | 0 |
| clf__verbose | 0 |
| clf__warm_start | False |
</details>
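For reference, the tuned pipeline listed above can be reconstructed roughly as follows. This is a sketch, not the original training script: the stop-word list is abbreviated (the full list is in the table), and the search grid is an assumption, since the card only states that a cross-validated parameter search was used.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Abbreviated German stop-word list; the full list is shown in the hyperparameter table.
german_stop_words = ["aber", "alle", "allem", "allen", "aller", "alles", "als", "also"]

# Pipeline matching the reported hyperparameters.
pipeline = Pipeline(steps=[
    ("count", CountVectorizer(ngram_range=(1, 3), stop_words=german_stop_words)),
    ("clf", RandomForestClassifier(min_samples_split=5, random_state=0)),
])

# Hypothetical search grid; the actual grid used for tuning is not documented here.
param_grid = {
    "count__ngram_range": [(1, 1), (1, 2), (1, 3)],
    "clf__min_samples_split": [2, 5, 10],
}
search = GridSearchCV(pipeline, param_grid, cv=5)
# search.fit(train_facts, train_labels)  # train_facts: list of str, train_labels: winner labels
```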
### Model Plot
<style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 5;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 
div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[(&#x27;count&#x27;,CountVectorizer(ngram_range=(1, 3),stop_words=[&#x27;aber&#x27;, &#x27;alle&#x27;, &#x27;allem&#x27;, &#x27;allen&#x27;,&#x27;aller&#x27;, &#x27;alles&#x27;, &#x27;als&#x27;, &#x27;also&#x27;,&#x27;am&#x27;, &#x27;an&#x27;, &#x27;ander&#x27;, &#x27;andere&#x27;,&#x27;anderem&#x27;, &#x27;anderen&#x27;, &#x27;anderer&#x27;,&#x27;anderes&#x27;, &#x27;anderm&#x27;, &#x27;andern&#x27;,&#x27;anderr&#x27;, &#x27;anders&#x27;, &#x27;auch&#x27;, &#x27;auf&#x27;,&#x27;aus&#x27;, &#x27;bei&#x27;, &#x27;bin&#x27;, &#x27;bis&#x27;, &#x27;bist&#x27;,&#x27;da&#x27;, &#x27;damit&#x27;, &#x27;dann&#x27;, ...])),(&#x27;clf&#x27;,RandomForestClassifier(min_samples_split=5, random_state=0))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[(&#x27;count&#x27;,CountVectorizer(ngram_range=(1, 3),stop_words=[&#x27;aber&#x27;, &#x27;alle&#x27;, &#x27;allem&#x27;, &#x27;allen&#x27;,&#x27;aller&#x27;, &#x27;alles&#x27;, &#x27;als&#x27;, &#x27;also&#x27;,&#x27;am&#x27;, &#x27;an&#x27;, &#x27;ander&#x27;, &#x27;andere&#x27;,&#x27;anderem&#x27;, &#x27;anderen&#x27;, &#x27;anderer&#x27;,&#x27;anderes&#x27;, &#x27;anderm&#x27;, &#x27;andern&#x27;,&#x27;anderr&#x27;, &#x27;anders&#x27;, &#x27;auch&#x27;, &#x27;auf&#x27;,&#x27;aus&#x27;, &#x27;bei&#x27;, &#x27;bin&#x27;, &#x27;bis&#x27;, &#x27;bist&#x27;,&#x27;da&#x27;, &#x27;damit&#x27;, &#x27;dann&#x27;, ...])),(&#x27;clf&#x27;,RandomForestClassifier(min_samples_split=5, random_state=0))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">CountVectorizer</label><div class="sk-toggleable__content"><pre>CountVectorizer(ngram_range=(1, 3),stop_words=[&#x27;aber&#x27;, &#x27;alle&#x27;, &#x27;allem&#x27;, &#x27;allen&#x27;, &#x27;aller&#x27;, &#x27;alles&#x27;,&#x27;als&#x27;, &#x27;also&#x27;, &#x27;am&#x27;, &#x27;an&#x27;, &#x27;ander&#x27;, &#x27;andere&#x27;,&#x27;anderem&#x27;, &#x27;anderen&#x27;, &#x27;anderer&#x27;, &#x27;anderes&#x27;,&#x27;anderm&#x27;, &#x27;andern&#x27;, &#x27;anderr&#x27;, &#x27;anders&#x27;, &#x27;auch&#x27;,&#x27;auf&#x27;, &#x27;aus&#x27;, &#x27;bei&#x27;, &#x27;bin&#x27;, &#x27;bis&#x27;, &#x27;bist&#x27;, 
&#x27;da&#x27;,&#x27;damit&#x27;, &#x27;dann&#x27;, ...])</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">RandomForestClassifier</label><div class="sk-toggleable__content"><pre>RandomForestClassifier(min_samples_split=5, random_state=0)</pre></div></div></div></div></div></div></div>
## Evaluation Results
| Metric | Value |
|----------|----------|
| accuracy | 0.664286 |
| f1 score | 0.664286 |
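These numbers could be reproduced along the following lines, assuming a held-out test split of the legalis dataset. The split and the F1 averaging mode are not documented here, so both are assumptions, and `X_test`/`y_test` are placeholders.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholders: X_test holds German case facts (list of str),
# y_test the corresponding winner labels from a held-out split.
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("f1 score:", f1_score(y_test, y_pred, average="weighted"))  # averaging mode is an assumption
```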
# Model Card Authors
This model card and the model itself were written by the following author:

- [@LennardZuendorf (Hugging Face)](https://huggingface.co/LennardZuendorf)
- [@LennardZuendorf (GitHub)](https://github.com/LennardZuendorf)
# Citation
See the [legalis dataset](https://huggingface.co/datasets/LennardZuendorf/legalis) for sources, and refer to [GitHub](https://github.com/LennardZuendorf/uniArchive-legalis) for a collection of all project files.