mbrack committed
Commit
83714b8
1 Parent(s): a67a700

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -33,7 +33,7 @@ If you want to train a model for your own language or are working on evaluations
 - **Compute resources:** [DFKI cluster](https://www.dfki.de/en/web)
 - **Contributors:** Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
 - **Research labs:** [Occiglot](https://occiglot.github.io/occiglot/) with support from [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) and [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology)
-- **Contact:** [Discord](https://discord.gg/wUpvYs4XvM) [hello@occiglot.org](mailto:hello@occiglot.org)
+- **Contact:** [Discord](https://discord.gg/wUpvYs4XvM)

 ### How to use

@@ -43,8 +43,8 @@ set a seed for reproducibility:

 ```python
 >>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed
->>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
->>> model = MistralForCausalLM.from_pretrained('occiglot/occiglot-7b-de-en-instruct') # You may want to use bfloat16 and/or move to GPU here
+>>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-eu5-instruct")
+>>> model = MistralForCausalLM.from_pretrained('occiglot/occiglot-7b-eu5-instruct') # You may want to use bfloat16 and/or move to GPU here
 >>> set_seed(42)
 >>> messages = [
 >>>     {"role": "system", 'content': 'You are a helpful assistant. Please give short and concise answers.'},
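The README snippet in the diff is cut off after the start of the `messages` list. As a rough, self-contained illustration of what that list feeds into, here is a minimal sketch of flattening such a chat history into a single prompt string. The `<|im_start|>`/`<|im_end|>` (ChatML-style) markers and the `build_chatml_prompt` helper are assumptions for illustration only; in practice the model's actual template would be applied with `tokenizer.apply_chat_template(messages, tokenize=False)` from the `transformers` library, without any hand-written formatting.

```python
# Hypothetical sketch: flatten a chat history into a ChatML-style prompt.
# The <|im_start|>/<|im_end|> markers are an ASSUMPTION about the template;
# the authoritative version lives in the model's tokenizer config and is
# applied via tokenizer.apply_chat_template().
def build_chatml_prompt(messages):
    parts = []
    for msg in messages:
        # Each turn is wrapped in role markers, e.g. <|im_start|>user ... <|im_end|>
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Open an assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Please give short and concise answers."},
    {"role": "user", "content": "What is the capital of Germany?"},  # example turn, not from the README
]
print(build_chatml_prompt(messages))
```

The string this produces would then be tokenized and passed to `model.generate`; relying on `apply_chat_template` instead of a hand-rolled helper avoids prompt drift if the template changes between model revisions.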