dep#0002: You could have just redirected the welcome to a new channel yk
JL#1976: Let's keep all the discussions in this general chat, so the #welcome channel serves as a landing page and rules page if we get more users.
ZestyLemonade#1012: **Welcome to #general**
This is the start of the #general channel.
JL#1976: You learn something new every day
【…】#3903: where's the language list for riffusion
【…】#3903: i want to know what genres it supports
【…】#3903: found it nvm
db0798#7460: Where is it then?
【…】#3903: https://huggingface.co/riffusion/riffusion-model-v1/blob/main/tokenizer/vocab.json
【…】#3903: here
db0798#7460: Thanks!
【…】#3903: it has a lot of non-musical words though
doesn't exactly help
【…】#3903: not too sure how it's supposed to work
db0798#7460: This is identical to the vocab.json of the Stable Diffusion 1.5 base model https://huggingface.co/runwayml/stable-diffusion-v1-5/raw/main/tokenizer/vocab.json
db0798#7460: I think the genre names probably come from somewhere else
dep#0002: @seth (sorry if too many pings) I am trying to make my own audio seed. Could spectrogram_from_waveform (from https://github.com/hmartiro/riffusion-inference/blob/6c99dba1c81b2126a2042712ab0c35d0668bd83c/riffusion/audio.py#L89) be used to transform a WAV tensor (I'm guessing from torchaudio.load) into a spectrogram object, and then do the reverse of spectrogram_from_image to basically have a custom seed?
dep#0002: I also see the following comment:
```
"""
Compute a spectrogram magnitude array from a spectrogram image.
TODO(hayk): Add image_from_spectrogram and call this out as the reverse.
"""
```
I could try doing it
alfredw#2036: Can we make it 10x better soon?
dep#0002: I was looking into converting it to TensorRT with Volta but it seems it has 1 more layer
【…】#3903: could we send some songs of different genres so that the ai can generate a wider array of genres?
i have a decent amount of obscure genres in my playlist
alfredw#2036: what's the training set?
【…】#3903: vaporwave, chopped and screwed, free folk, experimental rock, art pop, etc?
【…】#3903: don't think the ai knows those genres too well
dep#0002: gptchat:
```python
import numpy as np
from PIL import Image

def image_from_spectrogram(spectrogram: np.ndarray, max_volume: float = 50, power_for_image: float = 0.25) -> Image.Image:
    """
    Compute a spectrogram image from a spectrogram magnitude array.
    """
    # Reverse the power curve
    data = np.power(spectrogram, power_for_image)
    # Rescale to the range 0-255
    data = data * 255 / max_volume
    # Invert
    data = 255 - data
    # Flip Y so low frequencies sit at the bottom of the image
    data = data[::-1, :]
    # Convert to an 8-bit grayscale image
    return Image.fromarray(data.astype(np.uint8))
```
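The reverse direction (the spectrogram_from_image the TODO mentions) would just undo those steps in the opposite order. A hypothetical sketch written here for illustration, not the riffusion implementation:

```python
import numpy as np
from PIL import Image


def spectrogram_from_image(image: Image.Image, max_volume: float = 50,
                           power_for_image: float = 0.25) -> np.ndarray:
    """Hypothetical inverse of image_from_spectrogram:
    un-flip, un-invert, rescale, then undo the power curve."""
    data = np.asarray(image.convert("L"), dtype=np.float32)
    data = data[::-1, :]             # un-flip Y
    data = 255 - data                # un-invert
    data = data * max_volume / 255   # back to the magnitude scale
    return np.power(data, 1 / power_for_image)
```

The 8-bit quantization of the image means the round trip is lossy, so some distortion of the audio is expected.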
dep#0002: anyways I will see what I can do
dep#0002: this is basically audio2audio
【…】#3903: could we help increase the dataset
dep#0002: If they release the training code I will train it on the entirety of pandemic sound
db0798#7460: I would also like to have some way of adding things to the dataset
【…】#3903: i'd train it on my playlist + more albums that i somewhat like
Slynk#7009: omg I've been dying for something audio related to happen with all this AI hype.
【…】#3903: so that way the ai can endlessly churn out music i like >:D
my playlist would probably be too smoll for it though
Thistle Cat#9883: Hi what's happened?
Paulinux#8579: What am I doing wrong? I have a URL like this:
https://www.riffusion.com/?%20&prompt=folk&%20denoising=0.05&%20seedImageId=agile
And sometimes this AI can't produce anything for me
db0798#7460: Try the Colab (https://colab.research.google.com/drive/1FhH3HlN8Ps_Pr9OR6Qcfbfz7utDvICl0?usp=sharing) instead, perhaps? Colab seems to work consistently
Paulinux#8579: OK, thanks
【…】#3903: ooh wait i know what i'd do
i gather all the alternative/avant-garde genres i can muster, select some albums from those genres (with the genre names + other descriptions with them) and then i'd put those in the ai
dep#0002: overloaded
dep#0002: If anyone wants I can host a mirror
hayk#0058: We have this code and it's very simple, we just haven't added it to the inference repo
dep#0002: Could it be possible for you to send it here or push it to the repo?
hayk#0058: Yeah if you open an issue on github we will aim to get to it soon!
AmbientArtstyles#1406: Hey @ZestyLemonade, I'm writing an article about sound design (sfx for games/movies) and Riffusion, can I use your sentience.wav clip in it?
AmbientArtstyles#1406: I so want to collaborate on training the algorithm with my personal sound libraries.
Thistle Cat#9883: Has the website been fixed?
JL#1976: Works for me.
Thistle Cat#9883: Nice!
Thistle Cat#9883: I will have to check it again tonight
Tekh#3634: How do I make a continuous stream of interconnected clips with the colab?
Tekh#3634: also, is there any way to change tempo and such like on the webapp using the colab?
dep#0002: thanks pls do so asap I cant wait
dep#0002: in the meantime im getting things like these
dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053101779992186941/FINAL.png
Thistle Cat#9883: Anyone hearing a snapping noise when it goes to the next spectrogram?
yokento#6970: Can you seed riffusion with your own clip of audio?
April#5244: 14gb ckpt?
dep#0002: thats what I was trying
Jack Julian#8888: Crazy stuff yall
im a musician myself, and seeing this is both interesting and 'worrying'. Love how you thought this out and put it to work.
April#5244: was hoping to gen using automatic1111's sd webui and perhaps finetune the model using dreambooth. but I feel like I'm in a bit over my head lol
WereSloth#0312: you'll need to convert music to a spectrogram
WereSloth#0312: and then, yes, you should be able to dreambooth
db0798#7460: Is there a script for converting your own audio sample to the right kind of spectrogram already available anywhere? There seem to be functions that do this in the Riffusion codebase but I guess they don't work as a standalone script?
dep#0002: the image_from_spectrogram function is missing
dep#0002: https://github.com/hmartiro/riffusion-inference/issues/9
dep#0002: supposedly they have it but they haven't added it yet
dep#0002: Maybe tomorrow it could be ready
lxe#0001: 👋
lxe#0001: Just wanted to stop by and say how awesome this thing is
lxe#0001: Wonder if something like deforum for it is in the works.
justinethanmathews#7521: this is very interesting. i am mostly in the "afraid of AI" camp, but music is a field I understand and I can see how interesting this is.
this might be a stupid question. but what was this trained on?
April#5244: I'm also curious about the dataset tbh
April#5244: also managed a small success: converting from wav file to spectrogram and back is working perfectly, and I have a working ckpt that can generate the spectrogram images. Next is to make a finetuning dataset and run it through dreambooth
dep#0002: Can you share your converter?
April#5244: https://pastebin.com/raw/0ALzwee4
April#5244: just a word of warning @dep I have no idea what I'm doing and this code was generated with the help of an ai and my own tinkering. might have some serious stuff wrong with it lol
April#5244: I've only tested it on the generated 5-second wav files that are created from the sister script
April#5244: also trying to re-input the generated pics doesn't work right so I have to manually save in paint and then it works for some reason lol
April#5244: but from my testing it seems to work well enough
April#5244: currently seeing if I can get it to work from mp3 and clip like the first 5 seconds or something
dep#0002: time to train on the internet
dep#0002: 🔥
April#5244: okay so I think I got it working with mp3 so I threw a whole dang song in there and it generated an image but it's much wider in resolution, and cropping it down just results in junk lol
April#5244: might have to limit it to 5 seconds
April#5244: ```python
import io

import pydub

def spectrogram_image_from_mp3(mp3_bytes: io.BytesIO, max_volume: float = 50, power_for_image: float = 0.25) -> Image.Image:
    """
    Generate a spectrogram image from an MP3 file.
    """
    # Load the MP3 into an AudioSegment object
    audio = pydub.AudioSegment.from_mp3(mp3_bytes)
    # Convert to mono and set the frame rate
    audio = audio.set_channels(1)
    audio = audio.set_frame_rate(44100)
    # Extract the first 5 seconds of audio data (pydub slices in milliseconds)
    audio = audio[:5000]
    # Convert to WAV held in a BytesIO object
    wav_bytes = io.BytesIO()
    audio.export(wav_bytes, format="wav")
    wav_bytes.seek(0)
    # Generate the spectrogram image from the WAV data
    # (spectrogram_image_from_wav comes from the companion audio.py script)
    return spectrogram_image_from_wav(wav_bytes, max_volume=max_volume, power_for_image=power_for_image)
```
```python
# Open the MP3 file
with open('music.mp3', 'rb') as f:
    mp3_bytes = io.BytesIO(f.read())

# Generate the spectrogram image
image = spectrogram_image_from_mp3(mp3_bytes)
```
April#5244: this is the img2wav script I'm using https://cdn.discordapp.com/attachments/1053081177772261386/1053166079804981268/audio.py
April#5244: I'm actually not using any other code lol
April#5244: so idk why/how to fix any issues with the riffusion ui stuff
dep#0002: What error did you get
dep#0002: I didn't get any errors, but it took longer because it was larger than 512x512
dep#0002: after resizing it it worked as normal
dep#0002: however I can barely hear the seed
dep#0002: I had to set denoising to 0.01 to actually remember the tempo
dep#0002: Anyways
dep#0002: I have an A100 so I will see if I can finetune it
April#5244: I actually fixed the error by editing the img2wav script lol
April#5244: one of the things had an extra parameter for whatever reason which was messing it up
April#5244: https://cdn.discordapp.com/attachments/1053081177772261386/1053175675709837412/output.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053175675986649158/clip.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053175676334788689/outputspectro.png
April#5244: example conversion
April#5244: "clip.wav" is the 5 second clip from the original mp3 that's used for conversion. the image is the converted spectrum from the mp3. and output.wav is the reconverted song from the image
April#5244: scripts used https://cdn.discordapp.com/attachments/1053081177772261386/1053176008141963364/audio2spectro.py,https://cdn.discordapp.com/attachments/1053081177772261386/1053176008590770236/audio.py
April#5244: notably the script doesn't clip the first 5 seconds, but rather the next 5 after that
April#5244: since I wanted to avoid the sometimes slow intro that songs have lol
April#5244: still need to do some more work on the scripts before I can have it auto-generate some dataset images properly @.@
April#5244: might actually need to fetch later in the songs lol
April#5244: I noticed it still distorts the sound a bit...
April#5244: comparing: clip is the cropped audio, output.wav is the audio->img->audio convert https://cdn.discordapp.com/attachments/1053081177772261386/1053177485497475113/output.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053177485862387712/clip.wav
dep#0002: WoohH!!!!!
dep#0002: I dont know how to thank you
dep#0002: and its just been a day Lol
dep#0002: @JL may I suggest some emojis
April#5244: I'm sure someone smarter than me can figure out how to fix it lol
JL#1976: Yes, let me know if you have any cool ideas for the channel
dep#0002: Can I dm you the stickers and emojis
dep#0002: I prob will also make it into a bot
dep#0002: (riffusion)
dep#0002: although I've also heard that another dev is also working on one
dep#0002: u know lopho from sail?
dep#0002: the decentralized training server
dep#0002: I might ask him tomorrow
dep#0002: he knows a lot about this stuff
dep#0002: he rewrote the bucketing code himself lol
dep#0002: yet haru didn't merge it and he deleted it
JL#1976: Yup
dep#0002: hm.... I dont think we should mess with the channels....
dep#0002: but its worth the attempt
April#5244: honestly most of this code is ai generated. I don't know anything about music lol. removing that line of code seems to break the conversion entirely
April#5244: looking into the actual conversion a bit more it seems like the sample rate is getting changed which may be why the quality is decreasing
dep#0002: = ( https://cdn.discordapp.com/attachments/1053081177772261386/1053183348715032576/recompiled.mp3
dep#0002: og https://cdn.discordapp.com/attachments/1053081177772261386/1053183641699753984/invadercrop.wav
dep#0002: fixed https://cdn.discordapp.com/attachments/1053081177772261386/1053184852154916894/recompiled.mp3
April#5244: got it pretty close https://cdn.discordapp.com/attachments/1053081177772261386/1053189308154122391/reconstructed.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053189308531613707/clip.wav
April#5244: there's some clipping though
April#5244: basically just change max volume to 80 on both scripts to get this result
dep#0002: @April My image is 512x501, is there any way to fix that?
dep#0002: or just resize in paint
dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053191450768187392/agile.png
April#5244: > audio = audio[:5119]
April#5244: when you're converting
April#5244: the size of the image is based on the length of the audio
April#5244: clipping it to 5119 seems to work
April#5244: I'm currently using this to get the middle of the song:
> audio = audio[int(len(audio)/2):int(len(audio)/2)+5119]
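The 5119 ms figure lines up with the spectrogram geometry: each STFT hop contributes one image column, plus one extra column from the centered transform. Assuming riffusion's 44.1 kHz sample rate and a 441-sample hop (assumptions, but consistent with a 5000 ms clip producing the 501-pixel dimension seen above), the arithmetic works out:

```python
# Assumed conversion settings: 44.1 kHz audio, 441-sample STFT hop,
# centered STFT (which adds one extra column).
sample_rate = 44100
hop_length = 441

def image_width(clip_ms: int) -> int:
    """Spectrogram columns produced by a clip of the given length in ms."""
    n_samples = sample_rate * clip_ms // 1000
    return n_samples // hop_length + 1

print(image_width(5000))  # 501, the odd dimension seen above
print(image_width(5119))  # 512, exactly what the model expects
```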
dep#0002: kay I will investigate more tomorrow
dep#0002: maybe the first finetune
April#5244: I wonder if there's a way to just have the whole song in the image
April#5244: I guess it'd have to be a larger image...
vai#0872: any way to fine tune this model?
Milano#2460: hi All! are you aware of https://www.isik.dev/posts/Technoset.html ?
Milano#2460: Technoset is a data-set of 90,933 electronic music loops, totalling around 50 hours. Each loop has a length of 1.827-seconds and is at 128bpm. The loops are from 10,000 separate electronic music tracks.
Milano#2460: I'm wondering how best to preprocess it.
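Since each Technoset loop is only 1.827 s and the clip length discussed above is roughly five seconds, one simple preprocessing option is to tile each loop end to end until it fills a clip before converting it to a spectrogram. A sketch; the 5119 ms target and 44.1 kHz rate are assumptions carried over from the earlier conversion discussion:

```python
import numpy as np

SAMPLE_RATE = 44100   # assumed target rate
TARGET_MS = 5119      # assumed clip length for a 512-wide spectrogram


def loop_to_clip(loop: np.ndarray, sample_rate: int = SAMPLE_RATE,
                 target_ms: int = TARGET_MS) -> np.ndarray:
    """Tile a short loop (1-D sample array) until it fills one clip."""
    target = sample_rate * target_ms // 1000
    reps = -(-target // len(loop))   # ceiling division
    return np.tile(loop, reps)[:target]
```

Because the loops are seamless and tempo-locked at 128 bpm, tiling should not introduce audible joins the way it would with arbitrary song excerpts.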
db0798#7460: I think this one is pretty cool actually, sounds like Autechre
jacobresch#3699: here's an extension for auto1111 which automatically converts the images to audio again
https://github.com/enlyth/sd-webui-riffusion
JL#1976: Pinned a message.
HD#1311: this is exactly the kind of plugin I was looking for
HD#1311: thanks
HD#1311: I'll report if it works when I get home from work
April#5244: Worked for me but it messed up my sd install and python because it tried to install pytorch audio which I already had
JeniaJitsev#1332: Great work, folks, very impressive! I am scientific lead and co-founder of LAION, datasets of which are used to train original image based stable diffusion. Very nice to see such a cool twist for getting spectrogram based training running. We would be very much interested to cooperate on that and scale it further up - just join our LAION discord : https://discord.gg/we4DaujH
JL#1976: Welcome, happy to see you here!
HD#1311: just tested it out and it works
HD#1311: fun stuff
pnuts#1013: π
dep#0002: So, me and lopho have been experimenting with converting audio to spectrogram images
dep#0002: @April we got better results by increasing the n_mels but it would prob not be compatible with the current model
dep#0002: 512 (original) https://cdn.discordapp.com/attachments/1053081177772261386/1053353723319038012/invadercrop_nmels_512.png,https://cdn.discordapp.com/attachments/1053081177772261386/1053353723826552852/invader_nmels_512.wav
dep#0002: 768 https://cdn.discordapp.com/attachments/1053081177772261386/1053353776838365224/invadercrop_nmels_768.png,https://cdn.discordapp.com/attachments/1053081177772261386/1053353777136148602/invader_nmels_768.wav
dep#0002: 1024 https://cdn.discordapp.com/attachments/1053081177772261386/1053353811399409724/invadercrop_nmels_1024.png,https://cdn.discordapp.com/attachments/1053081177772261386/1053353811714002996/invader_nmels_1024.wav
dep#0002: original (nothing) https://cdn.discordapp.com/attachments/1053081177772261386/1053353879825305650/invadercrop.wav
dep#0002: you will probably need good headphones to hear the difference
dep#0002: but you can hear one of the beats more clearly compared to 512
JL#1976: https://www.futurepedia.io/tool/riffusion Riffusion added to futurepedia.io
dep#0002: I'll be updating these tools here:
https://github.com/chavinlo/riffusion-manipulation
JL#1976: Let's create a post and get that pinned.
JL#1976: **Official website:**
https://riffusion.com/
**Technical explanation:**
https://www.riffusion.com/about
**Riffusion App Github:**
https://github.com/hmartiro/riffusion-app
**Riffusion Inference Server Github:**
https://github.com/hmartiro/riffusion-inference/
**Developers:**
@seth
@hayk
**HackerNews thread:**
https://news.ycombinator.com/item?id=33999162
**Subreddit:**
https://reddit.com/r/riffusion
**Riffusion manipulation tools from @dep :**
https://github.com/chavinlo/riffusion-manipulation
**Riffusion extension for AUTOMATIC1111 Web UI**:
https://github.com/enlyth/sd-webui-riffusion