Nam Nguyen

ngphuchoangnam

AI & ML interests

None yet

Recent Activity


Organizations

None yet

ngphuchoangnam's activity

Reacted to fdaudens's post with 🚀❤️👀 about 1 month ago
Exciting news for audio AI enthusiasts! 🎙️🌍

The Emilia dataset dropped last week, and it's a cool one:
- 101k+ hours of high-quality audio
- 6 languages: 🇨🇳 🇺🇸 🇯🇵 🇰🇷 🇩🇪 🇫🇷
- Diverse content: talk shows, interviews, debates, sports commentary, audiobooks

This dataset could improve multilingual speech generation and recognition. It opens up many possibilities for global media, language learning, and accessibility!

Explore it: amphion/Emilia

#AIAudio
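If you want to poke at it without downloading 101k+ hours of audio, streaming it with the datasets library is probably the easiest route. A minimal sketch, assuming the repo id written in the post; the exact config and split names on the Hub may differ:

```python
from itertools import islice
from datasets import load_dataset

# Stream instead of downloading the full dataset up front.
# Repo id taken from the post; split/config names may differ on the Hub.
ds = load_dataset("amphion/Emilia", split="train", streaming=True)

# Peek at a few examples to see the available fields.
for example in islice(ds, 3):
    print({k: type(v).__name__ for k, v in example.items()})
```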
Reacted to Undi95's post with 🔥 6 months ago
Hello!
The 8B/70B OG Llama-3 models made with the Orthogonal Activation Steering script have been pushed to private.

After multiple tests with an empty system prompt, I can confirm it's not uncensored enough, but I wanted to try all the GGUFs first (and it takes time to do lmao).

If you want to try that yourself, here is the script: https://gist.github.com/wassname/42aba7168bb83e278fcfea87e70fa3af
And here is the same script, modified so it can run on multiple GPUs for the 70B: https://files.catbox.moe/ya4rto.ipynb
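For readers who don't want to open the notebook: the core operation in orthogonal activation steering is a projection that removes each hidden state's component along an estimated "refusal" direction. This is a minimal sketch of that step only, not the linked script itself, and it assumes the direction has already been computed:

```python
import torch

def remove_refusal_component(hidden: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project out the component of each hidden state along `refusal_dir`.

    hidden:      (batch, seq_len, d_model) activations from one layer
    refusal_dir: (d_model,) estimated refusal direction
    """
    r = refusal_dir / refusal_dir.norm()       # unit vector
    coeff = hidden @ r                         # (batch, seq_len) projection coefficients
    return hidden - coeff.unsqueeze(-1) * r    # keep only the orthogonal remainder
```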

Llama3-Unholy-8B-OAS doesn't have this problem, since it was already trained to be less censored, but the OG one was really too censored.

I will try to redo this soon, as it seems to HAVE WORKED for some prompts (as seen in the logs, for example), but it's not enough.

32 entries in the dataset is clearly not enough, but that's okay; I really wanted to try it since it was something new.
I could take the Unholy route and retrain the 70B before applying OAS, but it should work without that, and that's not the goal.
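For context on why the size of that prompt dataset matters: the steering direction is usually estimated as the difference of mean activations between two small sets of contrasting prompts, so with only ~32 entries the estimate is noisy. This is a rough sketch of that estimation, not the linked script; the layer choice, token position, and prompt sets are assumptions:

```python
import torch

def last_token_hidden(model, tokenizer, prompt, layer):
    """Residual-stream activation at the last token of a given layer."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1, :]

def estimate_refusal_direction(model, tokenizer, harmful_prompts, harmless_prompts, layer):
    """Difference of mean activations between contrasting prompt sets,
    normalized to a unit vector. Small prompt sets give noisy estimates."""
    harmful = torch.stack(
        [last_token_hidden(model, tokenizer, p, layer) for p in harmful_prompts]
    ).mean(dim=0)
    harmless = torch.stack(
        [last_token_hidden(model, tokenizer, p, layer) for p in harmless_prompts]
    ).mean(dim=0)
    direction = harmful - harmless
    return direction / direction.norm()
```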