---
tags: autotrain
language: en
widget:
- text: "Why is the username the largest part of each card?"
datasets:
- Shenzy2/autotrain-data-NER4DesignTutor
co2_eq_emissions: 0.004032656988228696
---

# Model Trained Using AutoTrain

- Problem type: Entity Extraction
- Model ID: 1169643336
- CO2 Emissions (in grams): 0.004032656988228696

## Validation Metrics

- Loss: 0.677674412727356
- Accuracy: 0.8129095674967235
- Precision: 0.4424778761061947
- Recall: 0.4844961240310077
- F1: 0.4625346901017577

## Usage

You can use cURL to access this model through the hosted Inference API:

```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Why is the username the largest part of each card?"}' https://api-inference.huggingface.co/models/Shenzy2/NER4DesignTutor
```
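
The same call from Python, as a minimal sketch using the `requests` library (it simply mirrors the cURL command above; `YOUR_API_KEY` stands in for a real Hugging Face access token):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Shenzy2/NER4DesignTutor"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # replace with your token

# POST the input text and print the JSON list of predicted entities
response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Why is the username the largest part of each card?"},
)
print(response.json())
```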

Or use the Python API:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForTokenClassification.from_pretrained("Shenzy2/NER4DesignTutor")
tokenizer = AutoTokenizer.from_pretrained("Shenzy2/NER4DesignTutor")

# Tokenize the question and run a forward pass to get per-token logits
inputs = tokenizer("Why is the username the largest part of each card?", return_tensors="pt")
outputs = model(**inputs)
```
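
The `outputs` object holds per-token logits rather than entity labels. A minimal sketch of decoding them, assuming the model config carries the usual `id2label` mapping that Transformers token-classification models expose:

```python
# Pick the highest-scoring label for each token and print it
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])
```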