
SoundSlayerAI

SoundSlayerAI is an innovative project focused on music-related tasks. It aims to provide a range of functionalities for audio analysis and processing, making it easier to work with music datasets.

Datasets

SoundSlayerAI makes use of the following datasets:

  • Fhrozen/AudioSet2K22
  • Chr0my/Epidemic_sounds
  • ChristophSchuhmann/lyrics-index
  • Cropinky/rap_lyrics_english
  • tsterbak/eurovision-lyrics-1956-2023
  • brunokreiner/genius-lyrics
  • google/MusicCaps
  • ccmusic-database/music_genre
  • Hyeon2/riffusion-musiccaps-dataset
  • SamAct/autotrain-data-musicprompt
  • Chr0my/Epidemic_music
  • juliensimon/autonlp-data-song-lyrics
  • Datatang/North_American_English_Speech_Data_by_Mobile_Phone_and_PC
  • Chr0my/freesound.org
  • teticio/audio-diffusion-256
  • KELONMYOSA/dusha_emotion_audio
  • Ar4ikov/iemocap_audio_text_splitted
  • flexthink/ljspeech
  • mozilla-foundation/common_voice_13_0
  • facebook/voxpopuli
  • SocialGrep/one-million-reddit-jokes
  • breadlicker45/human-midi-rlhf
  • breadlicker45/midi-gpt-music-small
  • projectlosangeles/Los-Angeles-MIDI-Dataset
  • huggingartists/epic-rap-battles-of-history
  • SocialGrep/one-million-reddit-confessions
  • shahules786/prosocial-nsfw-reddit
  • Thewillonline/reddit-sarcasm
  • autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366606
  • lmsys/chatbot_arena_conversations
  • mozilla-foundation/common_voice_11_0
  • mozilla-foundation/common_voice_4_0
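
Any of these datasets can be loaded with the Hugging Face datasets library. Below is a minimal sketch for pulling a single example; the choice of facebook/voxpopuli, its "en" configuration, and the column names are illustrative assumptions, and some of the listed datasets are gated and require authentication:

    from datasets import load_dataset

    # Stream the English portion of VoxPopuli instead of downloading it in full.
    ds = load_dataset("facebook/voxpopuli", "en", split="train", streaming=True)

    # Inspect the first example: a decoded audio array plus text metadata.
    sample = next(iter(ds))
    print(sample["audio"]["sampling_rate"])
    print(sample.get("normalized_text", ""))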

Library

The core library used in this project is pyannote.audio. It provides a comprehensive set of tools and algorithms for audio analysis and processing, covering tasks such as audio segmentation, voice activity detection, and speaker diarization, which makes it well suited for working with the audio datasets listed above.
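
As an illustration, a pretrained pipeline from pyannote.audio can be loaded and applied to an audio file in a few lines. This is a minimal sketch rather than part of SoundSlayerAI itself; it assumes pyannote.audio 3.x is installed, that you have accepted the gated model's terms on the Hugging Face Hub, and that a local file named audio.wav exists:

    from pyannote.audio import Pipeline

    # Load a pretrained speaker-diarization pipeline from the Hugging Face Hub
    # (requires an access token with permission for the gated model).
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1",
        use_auth_token="YOUR_HF_TOKEN",  # placeholder token
    )

    # Run diarization on a local audio file.
    diarization = pipeline("audio.wav")

    # Print who spoke when.
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")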

Metrics

To evaluate the performance of SoundSlayerAI, several metrics are employed, including:

  • Accuracy
  • BERTScore
  • BLEU
  • BLEURT
  • Brier Score
  • Character

These metrics help assess the effectiveness and accuracy of the implemented algorithms and models.
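
These metric names correspond to metrics available in the Hugging Face evaluate library (for example bleu and bertscore). Below is a minimal sketch of computing two of them on text outputs, assuming evaluate and the metric dependencies are installed and using made-up predictions and references:

    import evaluate

    # Hypothetical model outputs and reference texts (e.g. transcriptions or lyrics).
    predictions = ["the night is young and the beat is loud"]
    references = [["the night is young and the beat is so loud"]]

    # BLEU expects one or more references per prediction.
    bleu = evaluate.load("bleu")
    print(bleu.compute(predictions=predictions, references=references))

    # BERTScore compares predictions and references with contextual embeddings.
    bertscore = evaluate.load("bertscore")
    print(bertscore.compute(predictions=predictions,
                            references=[r[0] for r in references],
                            lang="en"))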

Language

The SoundSlayerAI project primarily focuses on the English language. The datasets and models used in this project are optimized for English audio and text analysis tasks.

Usage

To use SoundSlayerAI, follow these steps:

  1. Install the required dependencies by running pip install pyannote.audio.

  2. Import the necessary modules from the "pyannote.audio" package to access the desired functionalities.

  3. Load the audio data or use the provided datasets to perform tasks such as audio segmentation, speaker diarization, music transcription, and more.

  4. Apply the appropriate algorithms and models from the "pyannote.audio" library to process and analyze the audio data (an end-to-end sketch follows these steps).

  5. Evaluate the results using the specified metrics, such as accuracy, BERTScore, BLEU, BLEURT, Brier score, and character.

  6. Iterate and refine your approach to achieve the desired outcomes for your music-related tasks.
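
Putting the steps together, the sketch below loads one audio example with datasets, runs a pyannote.audio pipeline on the in-memory waveform, and prints the resulting segments. The dataset, column names, and pipeline identifier are illustrative assumptions; adapt them to your own data:

    import torch
    from datasets import load_dataset
    from pyannote.audio import Pipeline

    # Steps 1-2: install pyannote.audio and import the pipeline API.
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1",
        use_auth_token="YOUR_HF_TOKEN",  # placeholder token
    )

    # Step 3: load a single audio example from one of the datasets listed above.
    ds = load_dataset("facebook/voxpopuli", "en", split="train", streaming=True)
    audio = next(iter(ds))["audio"]

    # Step 4: run the pipeline on a (channel, time) waveform tensor.
    waveform = torch.tensor(audio["array"], dtype=torch.float32).unsqueeze(0)
    diarization = pipeline({"waveform": waveform,
                            "sample_rate": audio["sampling_rate"]})

    # Steps 5-6: inspect the segments before scoring and iterating.
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{speaker}: {turn.start:.2f}s - {turn.end:.2f}s")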

License

SoundSlayerAI is released under the OpenRAIL license. Please refer to the LICENSE file for more details.

Contributions

Contributions to SoundSlayerAI are welcome! If you have any ideas, bug fixes, or enhancements, feel free to submit a pull request or open an issue on the GitHub repository.

Contact

For any inquiries or questions regarding SoundSlayerAI, please reach out to the project maintainer at [insert email address].

Thank you for your interest in SoundSlayerAI!
