Concern Regarding Flux AI's Prompt Alterations Affecting Cultural Representation

#94
by kinglifer - opened

Dear Black Forest Labs Team,

I hope this message finds you well. I am writing to express my concern regarding an issue I have encountered while using your product, Flux AI. As an advocate for diversity and accurate cultural representation in AI-generated content, I believe it is crucial to bring this matter to your attention.

On multiple occasions, I have observed that the Flux AI Pro website (https://fluxpro.art/) injects specific words into prompts that appear to inhibit lifelike renderings when cultural descriptions, particularly those involving non-white races, are included. To illustrate, I ran a comparison, generating images from the same prompt with and without race-specific terms. The results were markedly different (keep in mind I repeated this more than 10 times to verify my suspicions):

Without any race terms added:
Screenshot 2024-08-21 204619.png
With race terms added:

Screenshot 2024-08-21 205939.png
Screenshot 2024-08-21 205958.png
Screenshot 2024-08-21 210011.png

As you can see, the inclusion of race-specific terms resulted in less realistic and less accurate depictions. This pattern suggests an underlying bias in how the prompts are processed, potentially limiting the ability to create diverse and culturally accurate representations.

I understand that Black Forest Labs likely values diversity and the inclusion of various cultures. However, this issue raises concerns about the fairness and inclusivity of your AI tools. Accurate representation should be consistent across all cultural and racial descriptions, ensuring that all users can create lifelike images regardless of the cultural context they are working with.

The original poster did not wish to report it because she felt she would either be banned or restricted further. She expressed this in my group (https://www.facebook.com/groups/blackai), so I tested it myself and came here immediately.
Attached is my attempt to ask for this to be looked into, but the damage is already done because the behavior was implemented.

Screenshot 2024-08-21 204122.png
Screenshot 2024-08-21 204130.png

Given the importance of this issue, I felt it was pertinent to bring it directly to your attention. I hope that your team can address this matter and ensure that Flux AI upholds the values of fairness, inclusivity, and accurate representation for all users, regardless of cultural background.

Even if this does not happen again in the future, it did occur, and that does not change the effect it had on me while I was using the tool to create something I could identify with.

Thank you for your time and consideration. I look forward to your response.

Best regards,

Anyway, just use the model locally (Flux Dev) so you don't have to deal with prompt injections.
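For context on what "prompt injection" means in this thread: a hosted frontend can rewrite the user's prompt before it ever reaches the model, while local inference passes the prompt through verbatim. A minimal sketch of the difference, with all names and the appended tokens purely hypothetical (the actual rewriting logic of any given site is not public):

```python
def hosted_generate(prompt: str, model) -> str:
    # Hypothetical frontend behavior: style tokens are appended
    # to the prompt before the model ever sees it.
    rewritten = prompt + ", illustration, stylized"
    return model(rewritten)

def local_generate(prompt: str, model) -> str:
    # Local inference (e.g. running Flux Dev yourself):
    # the prompt reaches the model unchanged.
    return model(prompt)

# Stub "model" that simply echoes the text it received,
# so we can inspect what each path actually sends.
echo = lambda p: p

print(hosted_generate("portrait of a Black woman", echo))
# portrait of a Black woman, illustration, stylized
print(local_generate("portrait of a Black woman", echo))
# portrait of a Black woman
```

Running the weights locally removes any such intermediate rewriting layer, which is the point the comment above is making; it does not, of course, change whatever biases are in the model itself.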

I must point out the inadequacies and harmful implications within your response. Suggesting that I simply use the model locally (Flux Dev) to avoid the issue of prompt injections entirely misses the point and demonstrates a failure to address the core problem at hand.

Firstly, your response is a non sequitur: my concern was about the online model being manipulated to inject prompt terms that distort diverse representations into cartoonish stereotypes. By shifting the responsibility onto me to "just use the model locally," you are not only sidestepping accountability but also enabling the harmful practices that prompted my original complaint. This is a textbook example of a microaggression: a subtle dismissal of a legitimate concern about racial bias, in this case in AI models.

Secondly, this kind of response perpetuates the very issue I’m raising: rather than addressing the root cause of the racial bias in the platform, you're suggesting that I take action to avoid it. This ignores the systemic issue and ensures that those injecting these harmful biases can continue unchecked. The impact is felt not only by me but by every user who expects a fair and equitable experience on the platform. It’s not about my ability to use the model locally; it’s about the platform itself perpetuating racial bias.

Lastly, the lack of response from Black Forest Labs on this issue only deepens the problem. When companies fail to address these serious concerns, they signal to marginalized users that their voices don't matter. This silence enables the continuation of practices that dehumanize and reduce people of color to exaggerated, cartoonish representations, and it sets a precedent for further racial microaggressions in the AI space.

Black Forest Labs, I urge you to reconsider the gravity of this issue and to respond with actionable steps to ensure that this platform and your models become a space where diverse representations are treated with the dignity and accuracy they deserve. The responsibility for change lies with those developing and moderating these platforms, not with the users affected by these discriminatory practices.

Hahaha, you just used GPT to type up a three-paragraph nonsense response. Good job.

I'll say it again: who cares? Don't like it? Use another model... like SD3.

Is anyone monitoring these posts?

I have not said anything offensive or anything that would go against Hugging Face's rules. I am simply saying you are being quite annoying and are using GPT-generated paragraphs to support your point.

If the site is injecting prompts into your image generations, then use the model locally. Black Forest Labs owns the GPUs the model is generating on; you have no say in what they do or do not do.

Are you part of or associated with Black Forest Labs?

That is not relevant, lol. I am just saying: don't like what a company is doing with its model? Use it locally.

Please do not communicate with me further. Thank you.

Lol. Stay mad.
