lvwerra committed
Commit 6d1687a
1 Parent(s): 505bd87

update lvwerra namespace

Files changed (1)
README.md +5 -5
README.md CHANGED
@@ -9,8 +9,8 @@ You can load the CodeParrot model and tokenizer directly in `transformers`:
  ```Python
  from transformers import AutoTokenizer, AutoModelWithLMHead

- tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot-small")
- model = AutoModelWithLMHead.from_pretrained("lvwerra/codeparrot-small")
+ tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
+ model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small")

  inputs = tokenizer("def hello_world():", return_tensors="pt")
  outputs = model(**inputs)
@@ -21,13 +21,13 @@ or with a `pipeline`:
  ```Python
  from transformers import pipeline

- pipe = pipeline("text-generation", model="lvwerra/codeparrot-small")
+ pipe = pipeline("text-generation", model="codeparrot/codeparrot-small")
  outputs = pipe("def hello_world():")
  ```

  ## Training

- The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/lvwerra/codeparrot-clean) with the following settings:
+ The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/codeparrot/codeparrot-clean) with the following settings:

  |Config|Value|
  |-------|-----|
@@ -57,6 +57,6 @@ The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probabil

  ## Resources

- - Dataset: [full](https://huggingface.co/datasets/lvwerra/codeparrot-clean), [train](https://huggingface.co/datasets/lvwerra/codeparrot-clean-train), [valid](https://huggingface.co/datasets/lvwerra/codeparrot-clean-valid)
+ - Dataset: [full](https://huggingface.co/datasets/codeparrot/codeparrot-clean), [train](https://huggingface.co/datasets/codeparrot/codeparrot-clean-train), [valid](https://huggingface.co/datasets/codeparrot/codeparrot-clean-valid)
  - Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
  - Spaces: [generation](), [highlighting]()
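
Note: the pass@k metric referenced in the last hunk is available on the Hub as `code_eval`. Below is a minimal sketch of how it could be computed, assuming the `evaluate` library; the problem, candidate completions, and test case are made up for illustration, and the metric has to be enabled explicitly because it executes model-generated code.

```Python
import os

import evaluate  # assumption: the `evaluate` library exposes the Hub's code_eval metric

# The metric executes untrusted, model-generated code, so it must be enabled explicitly.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

code_eval = evaluate.load("code_eval")

# One problem with two candidate completions; the reference is a test case
# that a candidate has to pass to count as correct.
candidates = [["def add(a, b):\n    return a + b", "def add(a, b):\n    return a - b"]]
tests = ["assert add(2, 3) == 5"]

pass_at_k, results = code_eval.compute(predictions=candidates, references=tests, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```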