Algorithmic bias

fdaudens (HF staff) · Journalists on Hugging Face org

What tools do you need to deconstruct bias in algorithms? (You know, that thing that is becoming increasingly present in our lives.)

In the spirit of open-source AI, we are actively exploring starter tools to help the Journalists on Hugging Face community better understand AI under the hood.

My talented colleague Avijit has deep expertise in algorithmic biases. (Check out his investigation of colorism and gender imbalance in the Indian context—link in comments.)

Are there any specific Python notebooks that would be useful to you? Consider, for example, tools for data scraping and analysis.
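
To make this concrete, here is a minimal sketch of the kind of notebook cell we have in mind: measuring how often a model's decisions favor each demographic group. The column names and the toy data below are just placeholders for whatever dataset you end up investigating.

```python
# Minimal bias-audit sketch: compare favorable-outcome rates across groups.
# "group" and "prediction" are hypothetical column names standing in for
# scraped or exported model decisions.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0, 0],  # 1 = favorable outcome
})

# Rate of favorable outcomes per group.
rates = df.groupby("group")["prediction"].mean()
print(rates)

# Demographic parity gap: difference between the highest and lowest rate.
gap = rates.max() - rates.min()
print(f"Demographic parity gap: {gap:.2f}")
```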

Tell us what you think!

Journalists on Hugging Face org

I haven't implemented mitigation strategies yet, but I've been using tools to build risk-management assessment reports covering bias and other safety issues. I recommend checking out ps-fuzz and garak, although there are many others available.
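
For instance, a quick bias-oriented scan of a small Hugging Face model can be kicked off from a notebook roughly like this. The flags and the lmrc probe set are from memory of garak's CLI, so double-check them against `garak --help` and `garak --list_probes` before relying on this.

```python
# Sketch: drive garak from a notebook to probe a Hugging Face model for
# bias-related failure modes. Flags and probe names should be verified
# against the current garak documentation.
import subprocess

subprocess.run(
    [
        "python", "-m", "garak",
        "--model_type", "huggingface",
        "--model_name", "gpt2",   # small model, just for a quick test run
        "--probes", "lmrc",       # Language Model Risk Cards probe set
    ],
    check=True,
)
```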

The LLM Transparency Tool from Meta might also be a good experiment for this "under the hood" understanding of how an LLM chooses the most likely next word. I wish we had it for popular models like Mistral and Llama.
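
If you just want a quick taste of that next-word step without the full tool, a few lines with transformers already show the probability distribution directly. gpt2 is used below purely because it is small; this is only a sketch, not a substitute for the Transparency Tool's deeper attribution views.

```python
# Peek at how a causal LM ranks candidate next words: one forward pass,
# then inspect the probabilities over the vocabulary at the last position.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The journalist asked the model about"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution for the token right after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>15}  {prob.item():.3f}")
```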
