Philpax#0001: nah, it converts the riffusion spectrogram to audio and vice versa
nullerror#1387: it does audio to img and vice versa
nullerror#1387: ah dang same mind
Philpax#0001: it doesn't convert arbitrary images
Philpax#0001: for that you'd need something to map the image to the audio space
cravinadventure#7884: ah... |
nullerror#1387: sent u a dm
cravinadventure#7884: thanks 🙂
nullerror#1387: with a tut
DeadfoxX#0666: Will there be a way to create longer Songs and to add lyrics later?
JL#1976: Pinned a message.
matteo101man#6162: anyone have a way to do a batch file2img
matteo101man#6162: also this 4gb model appears to be broken (ht... |
Edenoide#0166: The painful process of installing Riffusion Inference Server on Windows10 (PART III)
Hi again. First of all, I'm not a programmer, so my Python knowledge is very basic. I've installed a clean version of anaconda and then git cloned the riffusion inference server repository:
conda create --name riffusion-in... |
Edenoide#0166: there's always a 404 error when opening the site. Any ideas?
Edenoide#0166: Thank you in advance!
matteo101man#6162: 🤷♂️
Edenoide#0166: My windows experience with riffusion is pain. So easy when running the colabs I've found. I assume it's optimized for linux?
matteo101man#6162: I'm not very knowledgea... |
Edenoide#0166: The thing I want to achieve is running the real time song with my custom model
Edenoide#0166: I'm training an electronic cumbia model that's starting to generate funny results
Edenoide#0166: https://cdn.discordapp.com/attachments/1053081177772261386/1055118609770364998/haunted_cumbias.mp3
Edenoide#0166:... |
0nion_man_LV#6572: the "soundfile" command didn't find any package, so i had to use the command the conda page suggested:
`https://anaconda.org/bricew/soundfile`
`conda install -c bricew soundfile`
PySoundFile didn't work either, so their suggested solution is the following:
`https://anaconda.org/conda-forge/pysoundfile`
`conda i... |
0nion_man_LV#6572: give money :^)
doomsboygaming#2550: college student here, broke as all get out
doomsboygaming#2550: But yeah, installing from a fresh env
0nion_man_LV#6572: SD worked perfectly fine, be it cpu or gpu generating the image. I refuse to believe that generating a black and white spectrum would be any mor... |
doomsboygaming#2550: I forgot how many packages were in there
doomsboygaming#2550: Thank goodness i have fast download and SSD
0nion_man_LV#6572: only minor cpu issues but it's bigtime bottlenecked by gpu anyways.
doomsboygaming#2550: Yeah the soundfile and that has issues just as stated by the other person
doomsboygam... |
0nion_man_LV#6572: lol
doomsboygaming#2550: just needed to restart the terminal
doomsboygaming#2550: Might be nice to add "you need to have node.js installed"
doomsboygaming#2550: with the link
doomsboygaming#2550: cause my dumb brain forgot thats what npm uses
doomsboygaming#2550: delicious download speeds https://cdn... |
doomsboygaming#2550: Now how does one train LMAO
0nion_man_LV#6572: there's plenty of guides online
doomsboygaming#2550: kk
0nion_man_LV#6572: have fun managing your storage if you wanna train anything actually worth your time
doomsboygaming#2550: I have 8 tb
Edenoide#0166: I'm using the fast dreambooth colab: https://... |
0nion_man_LV#6572: server files are kinda abstract to me 🥺
doomsboygaming#2550: God damn, this thing makes some decent vocals
Edenoide#0166: so it means if you want clean loops with no cuts or weird rhythm jumps, it only works at 94 bpm
Edenoide#0166: or half/doubles
Edenoide#0166: electroswing is about 128 beats pe... |
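The ~94 bpm figure follows directly from the clip length: a 512-px spectrogram covers about 5.12 s of audio, and a loop only wraps cleanly when a whole number of beats fits in that window. A quick sketch of the arithmetic (the 5.12 s figure is the usual riffusion default; everything else is just math):

```python
# Rough arithmetic behind the "~94 bpm" observation: a loop only wraps cleanly
# when a whole number of beats fits into the 5.12 s clip.
CLIP_SECONDS = 5.12

def bpm_for_whole_beats(beats: int, clip_seconds: float = CLIP_SECONDS) -> float:
    """BPM at which `beats` beats fill the clip exactly."""
    return beats * 60.0 / clip_seconds

for beats in (4, 8, 16):
    print(beats, "beats ->", round(bpm_for_whole_beats(beats), 2), "bpm")
# 8 beats -> 93.75 bpm, which is why ~94 bpm (and its halves/doubles) loops cleanly.
```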
Edenoide#0166: (turning your audios into images for training)
doomsboygaming#2550: Yeah, audio to spectrogram
Edenoide#0166: This colab works great https://discord.com/channels/1053034685590143047/1053081177772261386/1054726766129844224
doomsboygaming#2550: Did i just hear the AI make a "person" say Gangnam Style?
doom... |
doomsboygaming#2550: I see
doomsboygaming#2550: Yeah 4gb of Vram is kinda bad
0nion_man_LV#6572: i wonder what the minimum requirements are in that case
hulla#5846: hello, i just came into this discord channel and saw what you said. hmm, is it possible to use a cluster of more than one computer?
doomsboygaming#2550: I don... |
a_robot_kicker#7014: Which defines clip duration as 5000ms and proceeds from there, but using those numbers ends up producing a 502px spectrogram 🤔
a_robot_kicker#7014: I wonder if it's doing something like that to make space to loop the clips smoothly or something
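For what it's worth, the ~502 px width falls out of the hop count: a spectrogram has one column per STFT hop. A rough check, assuming a 44.1 kHz sample rate and a 10 ms (441-sample) hop — riffusion's actual values live in its spectrogram params:

```python
# Back-of-the-envelope check of why a 5000 ms clip yields a ~502 px wide spectrogram.
# Sample rate and hop size are assumptions; the point is width = number of STFT hops.
sample_rate = 44_100      # Hz (assumed)
hop_length = 441          # samples per column, i.e. 10 ms (assumed)
clip_ms = 5_000

n_samples = sample_rate * clip_ms // 1000
n_columns = n_samples // hop_length + 1   # one column per hop, plus the first frame
print(n_columns)  # ~501-502 depending on padding/centering
```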
db0798#7460: With riffusion-manipulation scripts you ... |
matteo101man#6162: Can you train models at irregular resolutions like 512x2048?
Edenoide#0166: I think so but the original model and the Riffusion app work with 512x512 chunks
Edenoide#0166: training 4 bars of a song per image would be great
monasterydreams#4709: I don't know if this plausible. But I was thinking about... |
Does this make sense based on how I understand riffusion works, or am I way off?
denny#1553: you got the basic principle down. You can use node to convert the base64 encoded URI into an mp3/wav file (I use wav with some code modifications). Yeah if you can create a plugin that can send a POST request out in ableton ... |
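A Python equivalent of that node step, for anyone not using node — decode the base64 data URI from the server response into a playable file (the exact data-URI prefix and the response field name are assumptions):

```python
# Decode a base64 audio data URI (as returned by the inference server) into a file.
import base64

def data_uri_to_file(uri: str, out_path: str) -> None:
    """Strip the 'data:audio/...;base64,' header and write the decoded bytes."""
    header, _, payload = uri.partition(",")
    if "base64" not in header:
        raise ValueError("expected a base64 data URI")
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(payload))

# data_uri_to_file(response_json["audio"], "riff.wav")  # field name is hypothetical
```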
MentalPistol#9423: how do you make this leave drums out
gorb#1295: you mean separating drums from a mix?
gorb#1295: i use demucs for that
MentalPistol#9423: good looking out
Meatfucker#1381: Heads up, the anti ai-art campaigners are out for blood recently. Got kickstarter to kick all AI stuff off it. Non-zero chance th... |
IgnizHerz#2097: It's also important we understand why said people react. Pointing fingers at each other will never solve anything. But thats my two cents.
Meatfucker#1381: framing it as AI gives it a mystical quality it simply doesnt have
Meatfucker#1381: Yeah, there are valid concerns from every context, mixed in alon... |
denny#1553: mhmm. Grading systems don't work. We're being taught arbitrary structure of society over anything else
denny#1553: I have hope that all the backlash strengthens the technology and allows people to see how to use it beyond making a quick dollar
denny#1553: because it's powerful.
IgnizHerz#2097: Humans are wo... |
Nikuson#6709: Anyone have a notebook for unconditional training of any model to generate 512*512 images?
Edenoide#0166: mmm I think it's still fetching it from huggingface
Edenoide#0166: There's something inside server.py for changing it for sure... But should be a shorter way to achieve it https://cdn.discordapp.com/attachm... |
denny#1553: more info here: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipeline_utils.py#L300-L355 you would have to have a directory with a pipeline.py file in it
hayk#0058: I think the issue is the traced UNet is still coming from huggingface. If you give the server.py script the --checkpoint to... |
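For reference, once a checkpoint is in diffusers format, loading it from a local directory instead of Hugging Face looks roughly like this — a minimal sketch with placeholder paths; the riffusion server wraps this in its own loading code, and the traced UNet is handled separately:

```python
# Minimal sketch: point diffusers at a local, converted checkpoint directory.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./my-riffusion-checkpoint",   # local directory in diffusers format (placeholder)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
```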
XIVV#9579: am i the only one
XIVV#9579: that constantly has this server's scaling error
XIVV#9579: like
XIVV#9579: i have no idea why its happening
Meatfucker#1381: you on mobile?
Meatfucker#1381: discords had scaling issues on mobile for months
Meatfucker#1381: itll just get bigger and bigger depending on how you switch... |
monasterydreams#4709: that is sick, would love to see how you accomplish it
a_robot_kicker#7014: JUCE, it turns out, is very magical. Almost as full-featured as Qt
XIVV#9579: oh
XIVV#9579: ok
monasterydreams#4709: Yeah I would love to get a peek if you already have headway on this. See how I could help in any programmin... |
hayk#0058: There's a colab linked here https://github.com/riffusion/riffusion-app#riffusion-app
hayk#0058: For sure, just some code needs to be written to handle varying resolutions and adapt spectrograms
doomsboygaming#2550: I’m surprised nobody thought of this kind of product earlier, if you are able to know what sou... |
tugen#7971: thx, that is enlightening 💡
IgnizHerz#2097: There's definitely some routes to be explored; on the LAION server uptightmoose explains some ideas for making the generation longer, in much finer detail than I can explain
April#5244: it's the same thing for sizes. you *can* use a 512 model to gen, say, 1024x102... |
hayk#0058: There's probably some. You can check the huggingface spaces. But I will also add something basic in the next few days
hayk#0058: Hey @Edenoide I fixed loading custom checkpoints with this commit: https://github.com/riffusion/riffusion/commit/8349ccff5957f42d8ae7838b6d8218e3060ad1ee
So now if you specify a n... |
I believe you can train a 1.5 model with the 768px images without much trouble, but ideally the 2.1 768 model would be best 🤔
undefined#3382: You get higher quality and also longer samples of almost 8s
Edenoide#0166: You can use dreambooth for training. I've been using the 4GB model: https://huggingface.co/ckpt/riffu... |
https://jukebox.openai.com/
Edenoide#0166: Yeeehaa! It works!! I've followed this post for the conversion from .ckpt to diffusers if anyone is interested: https://www.reddit.com/r/StableDiffusion/comments/xooavu/how_does_this_script_works_ckpt_to_diffusers/
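For anyone doing the same .ckpt → diffusers-folder conversion in code rather than via a script, a hedged sketch — note that `from_single_file` needs a fairly recent diffusers release, so it's an alternative to whichever conversion script the reddit post uses; file names are placeholders:

```python
# Convert a single-file .ckpt into a diffusers-format directory (recent diffusers only).
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file("riffusion-model.ckpt")  # placeholder path
pipe.save_pretrained("./riffusion-diffusers")   # directory usable with from_pretrained
```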
monasterydreams#4709: Gonna give this a shot, dope man
Nubsy#6... |
Meatfucker#1381: and its mostly visual artists, so since its audio based they may also be less interested in attacking this project
Jay#0152: https://colab.research.google.com/github/thx-pw/riffusion-music2music-colab/blob/main/riffusion_music2music.ipynb
April#5244: sounds like kickstarter will only allow ai projects ... |
XIVV#9579: what's the best (and easiest) place to use this
XIVV#9579: i wanna generate some sweet metal riffs
XIVV#9579: but i just cant with the site
Meatfucker#1381: You can run it on your own computer locally. Has similar requirements to stable diffusion
Meatfucker#1381: Im not sure how much the process has changed ... |
Meatfucker#1381: everyones about to get it in their pockets and its going to be transformative
0x4d#1101: arguably people already have it in their pockets
0x4d#1101: well, the compute is of course not run locally yet but internet access is a constant in the first world these days
0x4d#1101: but you can pretty easily us... |
Tivra#3760: Merry Christmas
Elconite#8348: after I make a spectrogram how can I convert it to a .wav?
denny#1553: you can use the https://github.com/chavinlo/riffusion-manipulation img2audio script here or use the auto1111 extension https://github.com/enlyth/sd-webui-riffusion with a denoise strength of 0 in img2im... |
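Under the hood, the image-to-audio step is roughly: undo the image's dynamic-range scaling to get mel magnitudes, invert the mel filterbank, then run Griffin-Lim. A sketch with torchaudio — every parameter value below is an assumption; the real ones live in riffusion's spectrogram params:

```python
# Rough sketch of "spectrogram image -> wav" using torchaudio (parameter values assumed).
import numpy as np
import torch
import torchaudio
from PIL import Image

sample_rate, n_fft, hop_length, n_mels = 44_100, 8192, 512, 512  # assumed values

img = Image.open("spectrogram.png").convert("L")
data = np.array(img).astype(np.float32) / 255.0
data = np.flipud(data)                         # images usually put low frequencies at the bottom
mel = torch.from_numpy(np.power(data, 4.0).copy())  # undo the (assumed) dynamic-range compression

inv_mel = torchaudio.transforms.InverseMelScale(
    n_stft=n_fft // 2 + 1, n_mels=n_mels, sample_rate=sample_rate
)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=n_fft, hop_length=hop_length, power=1.0)

waveform = griffin_lim(inv_mel(mel))
torchaudio.save("reconstructed.wav", waveform.unsqueeze(0), sample_rate)
```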
a_robot_kicker#7014: There are already rules on music sampling and licensing, but for AI sampling it's totally unclear what IP law that would fall under. I expect some kind of new IP regulation or court interpretation of existing regulation, but it's always slower than the technology development.
a_robot_kicker#7014: I... |
S.#2668: Also can anyone tell me what the other seed images are for? I assume for blending together segments of audio? How is the best way we can do this?
S.#2668: I’d like to build up a library of useful seed images
wedgeewoo#7793: heres an embedding of mine if anyone is interested
wedgeewoo#7793: https://civitai.com/... |
hayk#0058: 🔥 **Riffusion v0.3.0** 🔥
@everyone Hey everyone! I'm excited to announce the v0.3.0 code release of the Riffusion repo. This includes a full rewrite to go from a hack to a quality software project. It also includes a CLI tool and an interactive streamlit app for common tasks, MPS backend support, stereo s... |
joseph#9145: Huge thanks for considering Mac users!
Marcos | Meta Pool#2081: This is huge!
a_robot_kicker#7014: Excellent. Will rebase next week and try to get a vst running on the new version.
jp#4195: Very nice! I'll try to install this version on my laptop again (GeForce, RTX 3090, 16hn) and see. The previous versi... |
dent#5397: Like with the dream booth fine tuning notebook it would work?
dent#5397: Not sure how to use the PyTorch audio spectrogram converter but I guess i could figure it out
dent#5397: Just saying making a notebook that streamlines the process would be great
hayk#0058: Yes please see the streamlit app, the Text to ... |
Delayedchaos#3646: I just plugged in my info into chatgpt whenever I got confused. In terms of coding I'm just a smidge more functional than a normie lol
Delayedchaos#3646: it looks great though! I'm so excited. I'm running this on a 3060 so I won't get the fast live stuff but I intend to pull samples out into other th... |
gives an error, since riffusion.audio does not exist in the library (and I can confirm that searching the github repo). Does anyone have some sample code to do a test run of the library that is updated?
wedgeewoo#7793: haha oh boy i dont even know where to start
hayk#0058: Ah yes that module was refactored away. So eit... |
Robin🐦#8003: I found this online tool that can play them (and generate them, sorta) https://nsspot.herokuapp.com/imagetoaudio/ but for some reason it's not generating anything but noise
wedgeewoo#7793: https://github.com/chavinlo/riffusion-manipulation
Robin🐦#8003: thanks 😄
bread browser#3870: like this https://pyto... |
Robin🐦#8003: didn't help unfortunately, getting the same error - I downloaded the riffusion repo and extracted it to a fresh folder, created a new env in there and installed the requirements, then I got a fresh copy of riffusion-manipulation and moved it to a folder within my riffusion folder, and ran the scripts in t... |
wedgeewoo#7793: in curves i adjusted the curve levels in gimp to get rid of some frequencies, cool stuff
Edenoide#0166: Yess we can edit music with photoshop, copy paste just some frequencies or even stretch the beat when it's out of step! It's very cool when your eyes start to identify instruments only by its visual s... |
Nico-Flor#2315: When working with this updated notebook: https://colab.research.google.com/drive/1JOOqXLxXgvNmVwatb7UwHP-_wkYwjVAP?usp=sharing
April#5244: riffusion generates spectrograms at a default 512x512 resolution which gives you a 5.12s clip. You can increase the width of the generated images to generate longer ... |
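A minimal sketch of that, using the public riffusion-model-v1 weights — double the width and you get roughly double the clip length, though quality tends to drift the further you get from the 512x512 training resolution; the prompt is just an example:

```python
# Generate a wider spectrogram for a longer clip (1024 px wide ~= 10.24 s at the defaults).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "acoustic folk fiddle",   # example prompt
    height=512,
    width=1024,               # twice the width -> roughly twice the audio length
    num_inference_steps=50,
).images[0]
image.save("wide_spectrogram.png")
```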
Delayedchaos#3646: the idea I've been toying around with lately is to see how detailed of an image we could export into this sort of thing while still having it be somewhat musical
Delayedchaos#3646: here's an example of some oscilloscope music if you're not familiar:
https://youtu.be/jQjJZbgMw7E
0x4d#1101: someone cor... |
Delayedchaos#3646: "Cost-prohibitive?" oof
Yeah that makes sense, though ChatGPT could be making stuff up, idk (cross-referencing)
sperzieb00n#3903: thats smart; yeah, why not use all colors?
zanz#3084: I am still learning about this, some of the results have been very cool. I'm sure this or something similar has been tho... |
Delayedchaos#3646: due to overlapping concepts
Delayedchaos#3646: so if there's a sound map attached to each of these meanings it's almost like a whole hidden dimension to them
bread browser#3870: i was thinking of using FFmpeg to make different Hz's to make music. like https://www.youtube.com/@realwebdrivertorso but m... |
bread browser#3870: it doesn't, and it is super hard to find any answers to it.
Delayedchaos#3646: oh ok well that's good to confirm it. I try to keep that POV in mind just in case because ppl like to tell me that with my ideas a lot lol
Delayedchaos#3646: it might exist in parts though
bread browser#3870: i have made ... |
Delayedchaos#3646: I think we could have the two systems play together but I'd say the limiting factor would be the latent space maybe? idk
bread browser#3870: more like this https://cdn.discordapp.com/attachments/1053081177772261386/1058070464582393917/midi-hex.txt
Delayedchaos#3646: I'm just thinking out loud on how ... |
Delayedchaos#3646: Only way I've seen audio2img is directly in a conda environment. https://cdn.discordapp.com/attachments/1053081177772261386/1058165378141925426/image.png
Delayedchaos#3646: but that's still not technically batch. You could probably just code it to process a whole folder IDK. ChatGPT is a pretty decen... |
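A folder-level batch wrapper is only a few lines; here's a sketch where `audio_to_image.py` and its flags are hypothetical stand-ins for whichever single-file conversion script you actually use (e.g. one from riffusion-manipulation):

```python
# Run a single-file audio->image script over every wav in a folder.
import subprocess
from pathlib import Path

in_dir = Path("clips")
out_dir = Path("spectrograms")
out_dir.mkdir(exist_ok=True)

for wav in sorted(in_dir.glob("*.wav")):
    out_png = out_dir / (wav.stem + ".png")
    subprocess.run(
        ["python", "audio_to_image.py", "--input", str(wav), "--output", str(out_png)],
        check=True,  # script name and flags are hypothetical placeholders
    )
```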
AdaptivePath#4443: Anyone have a good method for extracting midi data from these clips?
AdaptivePath#4443: Right now I will record like 10 min of output, then listen through and pull out short phrases (manually clipping in audacity). Then I take that into reason and bounce down to midi. But it's sh1t for the most part ... |
bread browser#3870: m40 gpus
bread browser#3870: a 12 gig m40 gpu and a 16 gig m40 gpu
Delayedchaos#3646: You putting it into a computer or do you have some sort of special rig sorted out?
Delayedchaos#3646: I've heard it can be a pain to install those unless you have the right stuff.
bread browser#3870: computer
bread... |
bread browser#3870: then it must not use arm
Delayedchaos#3646: I can just make modified parts.
bread browser#3870: the best server in the world costs only $500,000
Delayedchaos#3646: I know a few ppl who've done modded racks so I'd have to pick their brain and go get a buddy to let me cut out some parts on his CNC
Del... |
Delayedchaos#3646: I didn't really have the $$$ at the time to pursue a lot of these goals but now that's all I can think of. I'd like to think if he could get it to function on a CNC there should be no reason it wouldn't work on a number of other tools. Any additive/subtractive manufacturing I'd say.
Delayedchaos#3646... |
ClayhillJammy#0563: How do I save like, 5 minutes of a song?
Aurora~#0001: is there a way to use riffusion-app with auto1111 webui, kinda struggling to get it to work normally
Aurora~#0001: alternatively is there a way to generate from only one prompt and get good results rather than using prompt travel
ClayhillJammy#0... |
OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 14.76 GiB total capacity; 9.26 GiB already allocated; 3.44 GiB free; 10.26 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management a... |
TemporAlyx#1181: I've gotten good piano riffs out of it
taste#0960: Hi all! Having a great time with riffusion!
A question- how can we convert our own music to the same looking image for the spectrogram?
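One way to do that conversion yourself is the forward version of the reconstruction sketch earlier: compute a mel spectrogram with torchaudio, compress its dynamic range, and save it as a grayscale image. Parameter values and the compression curve are assumptions; the riffusion streamlit tooling does this properly:

```python
# Hedged sketch: audio -> riffusion-style spectrogram image (parameter values assumed).
import numpy as np
import torch
import torchaudio
from PIL import Image

sample_rate, n_fft, hop_length, n_mels = 44_100, 8192, 512, 512  # assumed values

waveform, sr = torchaudio.load("my_song.wav")
waveform = torchaudio.functional.resample(waveform.mean(dim=0), sr, sample_rate)

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels, power=1.0
)(waveform)

img = np.power(mel.numpy() / mel.numpy().max(), 0.25)        # compress dynamic range (assumed curve)
img = np.flipud((img * 255).astype(np.uint8)).copy()         # low frequencies at the bottom
Image.fromarray(img, mode="L").save("my_song_spectrogram.png")
```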
MatrixMoney#5643: Hello guys how do I start to use Riffusion, is there a webapp ?
ClayhillJammy#0563: Yes, Look up R... |
TruBlu#6206: Im not seeing a channel for troubleshooting, but I am struggling with the hard transitions between images. Im using Riffusion in Automatic1111 via <https://github.com/enlyth/sd-webui-riffusion> + <https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel>. I have the 13.6gb model. The regular generat... |
TruBlu#6206: Youve gotta be doing better than I am...
Aurora~#0001: wdym
TruBlu#6206: I get clone images the second that I try to get away from the hard transitions problem.
TruBlu#6206: I mean, I am making EDM... it can be whatever. But, not with the jarring transitions every 5 seconds without the Prompt Traveling
Tru... |
Edenoide#0166: Check my guide: https://www.reddit.com/r/riffusion/comments/zrubc9/installation_guide_for_riffusion_app_inference/
Edenoide#0166: (It's for Windows)
Aurora~#0001: yea thats what i followed
Edenoide#0166: maybe something changed with the new version
Aurora~#0001: hold on lemme show what went wrong again
E... |
Edenoide#0166: *the file
Aurora~#0001: i did do that yeah
Aurora~#0001: the issue is the server itself doesnt work https://cdn.discordapp.com/attachments/1053081177772261386/1059844703232720896/image.png
Aurora~#0001: ill try installing all the requirements again ig
Edenoide#0166: good luck then! I know they've changed... |
Aurora~#0001: ya
Edenoide#0166: perfect
Edenoide#0166: try changing the name 'riffusion' to 'riffusion-inference', maybe it's a mess but who knows
Kama#1898: how do i get my settings to stick? (the seed image and denoising).
seems to work only very rarely; usually it just stays at whatever it was set to
Nubsy... |
tugen#7971: run it through some AI mastering? https://www.landr.com/en/online-audio-mastering/
Nubsy#6528: Oh I'll try that! I feel like it's less mastering I need and more like... upscaling?
Nubsy#6528: I'm also looking for stuff I can do locally
tugen#7971: yeah , what happens if you run the spectrogram through say..... |
Nubsy#6528: when I need a little more of a generated song, I open it in paint, drag it left or right, depending on whether I want more before it starts or after it finishes, and then do inpainting with the same prompt, but masking out the white space and setting the denoising strength to 1. It seems to work very well e... |
Nubsy#6528: oh preach brother, that's why I'm hoping someone writes a script lol
TemporAlyx#1181: I do think that an automated script that detects the bpm and adjusts the outpaint / tiled img2img would work wonders
Nubsy#6528: see you don't even need that though
Nubsy#6528: just shift it left, cut it in half, and inpai... |
Nubsy#6528: because in the end it works waaaay better if it's still working on 512
TemporAlyx#1181: right, its still 512 x 512, just using half of the first input
TemporAlyx#1181: although I have had some limited success doing tiled img2img, where I do a pass with ~0.20-0.35 denoising at 512x512, and then a pass at ~0.... |
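The manual paint step Nubsy describes is easy to automate with PIL: shift the spectrogram left by half its width, leave the right half blank, and build an inpaint mask that covers only the blank half (a sketch; the masked region is then inpainted at denoising strength 1 as described above):

```python
# Prepare the init image and mask for Nubsy's shift-left outpainting trick.
from PIL import Image

spec = Image.open("spectrogram.png")
w, h = spec.size
half = w // 2

shifted = Image.new("RGB", (w, h), "white")
shifted.paste(spec.crop((half, 0, w, h)), (0, 0))   # keep the second half on the left

mask = Image.new("L", (w, h), 0)                    # black = keep, white = repaint
mask.paste(255, (half, 0, w, h))

shifted.save("shifted.png")   # feed this as the init image
mask.save("mask.png")         # feed this as the inpaint mask
```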
Nubsy#6528: https://cdn.discordapp.com/attachments/1053081177772261386/1060349154129363014/example.wav
Nubsy#6528: (there's a little phasing effect just because I literally just stacked them on top of each other where they overlap)
Nubsy#6528: and I think it works waaaay better than the online example despite not being... |
evangeesman#8969: But I want to generate singular sounds, not loops of multiple instruments
TruBlu#6206: You think inpainting something like this would solve the transitions between images? https://cdn.discordapp.com/attachments/1053081177772261386/1060566020940640306/05667-3532982410-electro_funk.png
TruBlu#6206: Enabling ... |
TruBlu#6206: There was this timeline info I found as well. https://cdn.discordapp.com/attachments/1053081177772261386/1060639328344211586/image_1.png
TruBlu#6206: So little info around 😮
TruBlu#6206: Maybe some updates to Riffusion in a couple of months 🔥
mataz#8375: try just "birds", it's the best
TemporAlyx#1181: M... |
ALVARO#2720: how would copyright work if someone else use's the same prompt? i guess first to release gets it? lol
TruBlu#6206: lol I am. We spoke of this before. The Prompt Travel.
TemporAlyx#1181: Ai created works under current law are not eligible for copyright on their own I believe
TemporAlyx#1181: Also many open ... |
PeterGanunis#3634: I’ve been rocking with only 6gb vram on automatic1111
PeterGanunis#3634: I had no problems at all setting up anything
sperzieb00n#3903: pff... glad its still relatively quiet here when it comes to people caring about who owns what art and style... just like in image SD now, this place gonna be wild o... |
Fucius#3059: Hmmm I'll check it out.
Fucius#3059: Like do you think you could take normal song samples, condense them into 512x512 spectrograms.. Read those spectrograms into code and train an autoencoder against the original sample?
joachim#4676: I don’t know anything about how much gpu power it takes to get this tech w... |
prescience#0001: has anyone started working on a stems extractor `page` in the Streamlit app?
I'm about to dig in to add it to my workflow, but if a better one is likely to be merged soon I might wait
hayk#0058: Just merged a great one! https://github.com/riffusion/riffusion/blob/main/riffusion/streamlit/pages/split_au... |
https://www.gwern.net/GPT-2-music#generating-midi-with-10k30k-context-windows
teseting#9616: Or gpt neo
ALVARO#2720: midi always seems to be so tricky, i'd rather record in melodyne and re-draw the midi
teseting#9616: I guess that's just musenet though
Mandapoe#6608: Are the requirements to run this the same as normal ... |
Avant_Garde_1917#8538: the trick to midi is to tokenize the different values taking into account not just note but velocity, instrument and timing, represented as integers, and then to just feed it in and it learns to see the 5 integers as a single token
Avant_Garde_1917#8538: and that tokenization and conversion is al... |
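A toy illustration of that idea: quantize each field of a note event to an integer and pack the tuple into a single token id via mixed-radix encoding. The field choices and value ranges here are assumptions for illustration; real schemes often use compound embeddings rather than packing:

```python
# Toy MIDI-event tokenizer: pack (pitch, velocity, instrument, time, duration) into one id.
from dataclasses import dataclass

PITCHES, VELOCITIES, INSTRUMENTS, TIME_STEPS, DURATIONS = 128, 32, 16, 100, 100  # assumed ranges

@dataclass
class NoteEvent:
    pitch: int        # 0-127 MIDI pitch
    velocity: int     # quantized to 32 bins
    instrument: int   # 16 program buckets
    time_step: int    # onset, quantized to 100 steps
    duration: int     # length, quantized to 100 steps

def encode(e: NoteEvent) -> int:
    """Mixed-radix packing of the 5 fields into one token id."""
    token = e.pitch
    token = token * VELOCITIES + e.velocity
    token = token * INSTRUMENTS + e.instrument
    token = token * TIME_STEPS + e.time_step
    token = token * DURATIONS + e.duration
    return token

def decode(token: int) -> NoteEvent:
    token, duration = divmod(token, DURATIONS)
    token, time_step = divmod(token, TIME_STEPS)
    token, instrument = divmod(token, INSTRUMENTS)
    pitch, velocity = divmod(token, VELOCITIES)
    return NoteEvent(pitch, velocity, instrument, time_step, duration)

assert decode(encode(NoteEvent(60, 20, 0, 3, 10))) == NoteEvent(60, 20, 0, 3, 10)
```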
MintOctopus#8867: This is Unfilter's UI, check the aqua/teal curve to see the output after the settings are applied to the pink original signal. https://cdn.discordapp.com/attachments/1053081177772261386/1062752717648445532/unfilter.png
MintOctopus#8867: The EQ alone, even as extreme as shown here, won't get me the hig... |
This will let you run text to audio, audio to audio, stem splitting, interpolation, and more on GPU for free. Ping here or open an issue in the riffusion repo if you run into problems or have improvements.
To get more updates on Riffusion, throw in your email here: http://eepurl.com/ih9ZPz
Meatfucker#1381: Awesome
Me... |
shoeg#9037: Hello everyone. I used this in a project a couple years ago. https://github.com/bearpelican/musicautobot
hulla#5846: hello i have used " mubert " right now
hulla#5846: https://youtu.be/IpeDxWexzXI
hulla#5846: hello i have do another one https://youtu.be/jjLh-JCR8Nw
evangeesman#8969: I get an error on the fi... |
hulla#5846: hello i come back with another one https://youtu.be/6u75HuBPAys
Norgus#2992: ok I've just been playing about with the riffusion extension version in auto1111 with the relevant model, haven't quite settled on a way of producing coherent clip merging
Norgus#2992: outpainting was somewhat promising, but still... |
Norgus#2992: there's the settings I used on that clip anway https://cdn.discordapp.com/attachments/1053081177772261386/1064209470051328020/image.png
Norgus#2992: I think the 'alternate steps' sounded better than 'blend average'
Norgus#2992: I reckon this might be a nice way to make an underlying spectrogram to img2img... |
AVTV64#2335: I just wanna make the bootleg style transfer thing
vananaBanana#0866: Is anyone training a model specifically for normal everyday sounds?
vananaBanana#0866: If so, I'd love to help on such a project
Leon -#4657: https://flavioschneider.notion.site/flavioschneider/Audio-Generation-with-Diffusion-c4f29f39048... |
matteo101man#6162: The upsampler looks amazing
matteo101man#6162: Don’t particularly understand how to use it just yet but I think combining that with riffusion would yield interesting results
matteo101man#6162: if someone could explain how you'd go about running it from this to someone who doesn't understand python ve... |
COMEHU#2094: i still love it
COMEHU#2094: ooh the guitar solo at 8:46 is also fire
obelisk#1740: wooo
obelisk#1740: it then turned into some indian song xd
COMEHU#2094: i always liked the creativity of Jukebox
obelisk#1740: hm, but lets say i have 1 (or many) particular artist, who's style i want to replicate. What ste... |
obelisk#1740: ehhh paywall as expected. I like their outputs tho
obelisk#1740: this one in particular https://cdn.discordapp.com/attachments/1053081177772261386/1065447435960336534/2ec551fc9d3f4313b1fa8a455b6f00f2.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1065447436316835923/image.png
obelisk#1740:... |
teseting#9616: i could show you but unfortunately i can't run it because i have a 4090 so it's incompatible
matteo101man#6162: That’s tough
mataz#8375: it would be cool to make a thing like riffusion for emotional states (Valence-Arousal-Dominance) and call it "effusion"
mataz#8375: https://www.researchgate.net/figure/The... |
matteo101man#6162: I don’t use the playground but that’s dope
Draconiator#6375: For the most part it gets genres right holy crap. Still needs work on Trance though.
Draconiator#6375: This is getting dangerously close to my vision. Wanna hear what a farting dragon sounds like?
Leon -#4657: yeah i feel like lucille ball... |
vananaBanana#0866: No, I got a working diffusion example, BUT I do not have the diffusion model now
vananaBanana#0866: Of course the autoencoder and the vocoder are downloadable and do work
obelisk#1740: ok so he deleted this model
obelisk#1740: how about we directly ask him whats going on? (extremely politely)
obelisk#... |
vananaBanana#0866: It's mind blowing
vananaBanana#0866: And also mind blowing how flawless they managed to make the libraries and APIs
vananaBanana#0866: You don't even need to download the model urself u can just type AutoModel.load('identifier') and itll automatically download it
vananaBanana#0866: It's crazy
obelisk... |
vananaBanana#0866: lool thank god
vananaBanana#0866: I was literally trying to set it up when it got removed
vananaBanana#0866: I showed someone the model like "check this out"
vananaBanana#0866: and then the page 404'd like I was bullshitting them xD
obelisk#1740: keeping hand on pulse
MintOctopus#8867: HA that is gre... |
obelisk#1740: oh thats quite interesting
tugen#7971: i think colab was updated for audio-diffusion-pytorch? I see commits from only 2 days ago... the upsampler looks so wild!
tugen#7971: Just caved and bought colab Pro LOL
tugen#7971: oh nvm, unauthorized to download model from hugging face while running colab https://... |
RawrXD#3892: how come the riffs in this discord sound much better than the website?
norm#1888: Someone in the share-riffs channel mentioned using Ableton Live, so I'm guessing people process the website audio in some way, but I don't know for sure
kyemvy#0433: probs post processing
kyemvy#0433: thats what i do anyways
... |
ARTOMIA#8987: yo! new guy here, I wanna learn how to toy with this model but i cant figure out how Latent Space works, anyone that has patience with idiots to ELI5? I'm using auto1111
! Kami#0420: Automatic1111 has this in its extensions, in case anyone is using that. One-click install, super dope!
obelisk#1740: i've seen t... |
obelisk#1740: isnt there audio2audio?
Twigg#8481: The audio 2 audio is "text prompt to text prompt"
obelisk#1740: oh
Twigg#8481: or
"audio to text prompt"
Really nothing that interpolates between source /target
obelisk#1740: what about conventional morphing?
Twigg#8481: Point me somewhere?
mutant0#0319: Is there a way... |
Invisible Mending Music#8879: • Mel-scale – okay, I sort of understand what this is, but WHY is it used for the frequency bins in the spectrograms? A standard equal temperament tuning would make it more feasible to accompany the Riffusion output playing a “real” musical instrument.
• In that case, it would be hel... |
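For context on the mel question: the mel scale is a perceptual pitch scale rather than a musical (equal-temperament) one, and the conversion most audio libraries use is the HTK-style formula below:

```python
# Hz <-> mel conversion (the common HTK-style formula).
import math

def hz_to_mel(f_hz: float) -> float:
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Equal mel spacing packs bins densely at low frequencies, matching how we hear pitch:
print(round(hz_to_mel(440.0), 1))   # A4 -> ~550 mel
print(round(hz_to_mel(880.0), 1))   # A5 -> ~917 mel (an octave up is not double the mels)
```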
db0798#7460: I hadn't updated the Dreambooth plugin installation on my computer since December, updated it now and saw that it's all different indeed
Avant_Garde_1917#8538: people should migrate away from MSE loss based training and migrate towards CLIP multi modal trainings. dreambooth afaik is all MSE on pixel vs pre... |
audry#7777: you just have to redownload pedalboard i think
audry#7777: or import it in the webui python script
audry#7777: i just run the webui on colab though so it might be different if youre doing it locally
matteo101man#6162: locally is a no
audry#7777: wdym
audry#7777: im pretty sure you can run the webui locally
... |
Haycoat#4808: https://colab.research.google.com/github/thx-pw/riffusion-music2music-colab/blob/main/riffusion_music2music.ipynb#scrollTo=9SE80Grls13Z
Twigg#8481: I got as far as step 3.
```
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and... |
Kevin [RTX 3060]#1512: No, pretty much everything in the app runs with A1111.
TecnoWorld#3509: ok I need to try to understand I guess
[PRINCESS MISTY]#0003: heyy
[PRINCESS MISTY]#0003: how can i help, i am a chatgpt prompt engineer
joachim#4676: cool but we can't make music with it ourselves?
joachim#4676: @seth can we... |
arha#9740: i'm mega interested on how riffusion works for the purpose of IDing rf and ham signals (beyond how awesomely cool the concept is)
amisane#9173: just checking out the riffusion colab - very cool
question I've got - what is the "Negative prompt" input's purpose/use in text-to-audio?
is it "what you don't wan... |
Steak#5270: Hello, just stumbled upon riffusion, is there any guide to use the model from a local machine?
Steak#5270: will the automatic1111 gui work just fine?
matteo101man#6162: I don’t think it’s really there yet tbh, but it might give you an outline to follow so you could recreate something coherent if you’re good at t... |
MintOctopus#8867: No disrespect intended if I'm coming off as patronizing.
dholt24#6009: No it's good
MintOctopus#8867: @dholt24 - speaking of music in videogames, this is my all-time favorite musical thing in any game ever, it's from Kentucky Route Zero: https://www.youtube.com/watch?v=ufAUonsYhVU
[PRINCESS MISTY]#000... |