TristanBehrens committed on
Commit 7422747
1 Parent(s): 3d52a00

Update README.md

Files changed (1)
  1. README.md +2 -22
README.md CHANGED
@@ -32,28 +32,8 @@ This model is just a proof of concept. It shows that HuggingFace can be used to
 
 ### How to use
 
-You can immediately start generating music running these lines of code:
-
-```
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-tokenizer = AutoTokenizer.from_pretrained("TristanBehrens/js-fakes-4bars")
-model = AutoModelForCausalLM.from_pretrained("TristanBehrens/js-fakes-4bars")
-
-input_ids = tokenizer.encode("PIECE_START", return_tensors="pt")
-print(input_ids)
-
-generated_ids = model.generate(input_ids, max_length=500)
-generated_sequence = tokenizer.decode(generated_ids[0])
-print(generated_sequence)
-```
-
-Note that this just generates music as a text. In order to actually listen to the generated music, you can use this [notebook](https://huggingface.co/TristanBehrens/js-fakes-4bars/blob/main/colab_jsfakes_generation.ipynb).
+There is a notebook in the repo that you can run on Google Colab.
 
 ### Limitations and bias
 
-Since this model has been trained on a very small corpus of music, it is overfitting heavily.
-
-## Training data
-
-The model has been trained on Omar Peracha's [JS Fake Chorales](https://github.com/omarperacha/js-fakes) dataset, which is a fine collection of 500 Bach-like chorales.
+Since this model has been trained on a very small corpus of music, it is overfitting heavily.