🚩 Report : Legal issue(s)

#1
by julien-c HF staff - opened

Edit:
Stability's legal team reached out to Hugging Face retracting the initial takedown request, so we have closed this thread

Takedown request

⚠️⚠️⚠️
StabilityAI has requested a takedown of this published model, characterizing it as a leak of their IP

While we await a formal legal request, and even though Hugging Face has no knowledge of the IP agreements (if any) between this repo owner (RunwayML) and StabilityAI, we are flagging this repository as having potential/disputed IP rights.

⚠️⚠️⚠️
We reserve the right to take future action, such as disabling this repo, and we ask both the repo owner and StabilityAI to please chime in, if possible publicly here, for accountability and transparency

julien-c changed discussion title from 🚩 Report : Legal issue(s) to 🚩 Report : Legal issue(s) : Takedown request

This timeline is insane 🍿

It's easy, you will learn diffusion.

@julien-c If this isn't official, then I'm wondering how HF decided to incorporate Runway ML's inpainting checkpoint into the official diffusers. The announcement even mentions that this is the official Stable Diffusion inpainting model: https://github.com/huggingface/diffusers/releases/tag/v0.6.0

Might be useful to take a look at it!

yeah - this isn't their model and does belong to stability.ai - it shouldn't be in the public hands like this.

> yeah - this isn't their model and does belong to stability.ai - it shouldn't be in the public hands like this.

Runway worked on this model too; this isn't just SAI, even if SAI contributed the most to training. The original code is from a university group, so really everyone here is building on top of each other.

Guess they decided to run away with the model.

This comment has been hidden

I was excited for SD because of its potential as an art tool, but I think I'm now staying for the new shitshow each week. Fake apologies, subreddit turmoil, staff negligence, and communication issues are the name of the game for Stability.

Hi all,

Cris here - the CEO and Co-founder of Runway. Since our founding in 2018, we’ve been on a mission to empower anyone to create the impossible. So, we’re excited to share this newest version of Stable Diffusion so that we can continue delivering on our mission.

This version of Stable Diffusion is a continuation of the original High-Resolution Image Synthesis with Latent Diffusion Models work that we created and published (now more commonly referred to as Stable Diffusion). Stable Diffusion is an AI model developed by Patrick Esser from Runway and Robin Rombach from LMU Munich. The research and code behind Stable Diffusion was open-sourced last year. The model was released under the CreativeML Open RAIL M License.

We confirm there has been no breach of IP as flagged and we thank Stability AI for the compute donation to retrain the original model.

photoshop?

Okay nerds

"We confirm there has been no breach of IP as flagged and we thank Stability AI for the compute donation to retrain the original model."
This is some "running with the model" if I've ever seen one, especially if you agreed with Stability not to release it. I guess you deemed it legally safe and just literally RAN with the model. Funny.

"we thank Stability AI for the compute donation to retrain the original model."

Spicy. 👍

Fantastic Thread.

Hey I was here too!
Thanks for holding SAI to their word of releasing 1.5

Are you ready for another drama kids?
(Aye-aye, Captain!)

Emad just said on Discord that the takedown request has been 'reversed', and the warning is gone.

This is way more interesting than anything happening in Crypto rn. 🍿

Came for the weird and wacky imagery that can be created , stayed for the drama

julien-c changed discussion title from 🚩 Report : Legal issue(s) : Takedown request to 🚩 Report : Legal issue(s)
julien-c changed discussion status to closed

"if possible publicly here for accountability and transparency" one team decided not to I guess

Very sad outcome, and no respect for Runway at all now. If you want to release the model, at least have the decency to name it something other than Stable Diffusion, so there's no confusion between your pirated copy and stability.ai's official version.

> Very sad outcome, and no respect for Runway at all now. If you want to release the model, at least have the decency to name it something other than Stable Diffusion, so there's no confusion between your pirated copy and stability.ai's official version.

They're one of the groups working on Stable Diffusion with Stability.

Wrong site, please refer to https://twitter.com/

and I thought Stability AI was a more friendly and open-source company ....

> and I thought Stability AI was a more friendly and open-source company ....

take down request was reversed

* Casually typing git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 into my terminal and getting some popcorn as I stumble on this thread... *

Oh, here's the popcorn...

Now, please continue...


> > and I thought Stability AI was a more friendly and open-source company ....
>
> take down request was reversed

Shouldn't have been issued in the first place.

> Very sad outcome, and no respect for Runway at all now. If you want to release the model, at least have the decency to name it something other than Stable Diffusion, so there's no confusion between your pirated copy and stability.ai's official version.

Wow

> > > and I thought Stability AI was a more friendly and open-source company ....
> >
> > take down request was reversed
>
> Shouldn't have been issued in the first place.

I agree 100%.

Stability needs to think long and hard about who on their team is making these decisions, and probably take their decision-making authority away.

Between this, the subreddit thing and the AUTOMATIC1111 drama/falsehood, that's three fairly major reversals in a very short period of time.

> Between this, the subreddit thing and the AUTOMATIC1111 drama/falsehood, that's three fairly major reversals in a very short period of time.

↑↑↑ THIS ↑↑↑

I've been but a casual & passive observer in all of this drama, but the more I learn about it the worse it looks for both HuggingFace & StabilityAI.

Get your shit together, folks. Y'all look like a bunch of amateurs who don't know what they're doing.

IMO, the only party that's done anything wrong is SAI.

HuggingFace, as I understand it, is a university research group that SAI and Runway are funding.

Having been part of a (much smaller) university research group, I know that merely funding a group does not give you exclusive rights to a project. If SAI had purchased those rights, the controlling university would've put the kibosh on the shared information shortly after purchase.

This just looks like another case of SAI playing ready fire aim and not understanding what their money bought them.

As I see it, the quick rise of SD is causing instabilities in the organizations that work with it.
Suddenly it's a $100M business; they are overwhelmed, and one hand doesn't know what the other should be doing.

The outcome will be interesting.

You'd think that IF they weren't trying to steal this, they'd have had the common decency to name their release after themselves - maybe something like, I dunno, Runway Diffusion. But no, all the name recognition is Stable Diffusion; everyone knows that stability.ai released it. Who's even heard of Runway? But Runway knows people want 1.5. I can just hear the argument between stability.ai and Runway: "you need to release the update" "it's not ready" "it is ready, people want it, release it." "NO, we're still working on it, it isn't ready." "fine, WE'LL release it whether you like it or not!"

Incidentally, tests people have done on this release, and posted online, show that it has some serious instabilities in it.

have fun

> Incidentally, tests people have done on this release, and posted online, show that it has some serious instabilities in it.

Any links you can provide for more details?

I was planning to start testing as soon as I got some sleep...

It's always good to be prepared...

> You'd think that IF they weren't trying to steal this, they'd have had the common decency to name their release after themselves - maybe something like, I dunno, Runway Diffusion. But no, all the name recognition is Stable Diffusion; everyone knows that stability.ai released it. Who's even heard of Runway? But Runway knows people want 1.5. I can just hear the argument between stability.ai and Runway: "you need to release the update" "it's not ready" "it is ready, people want it, release it." "NO, we're still working on it, it isn't ready." "fine, WE'LL release it whether you like it or not!"

Well that's just wrong, everything you said was wrong.

The 1.5 model is built on the base of the 1.2 model, which is built on 1.1; it is continued training, which means the license is unchanged.
The license carries over, just as when you fork an open source project. You can't just make it closed source afterward.
The license: https://huggingface.co/spaces/CompVis/stable-diffusion-license
"Distribution and Redistribution: You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications"
"Grant of Copyright License: Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model."
"Derivatives of the Model: means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model"
The 1.5 model is (also by the license's definition) a "Derivative of the Model".

It is not possible to steal it, and the same goes for ANY other "leak" we've seen. Any model based on the Stable Diffusion models is a derivative and can be distributed, even "leaked". You have a copyright license to it.
Everyone has it; it's not stealing if it is a GIFT.

This also applies to NAI: their models, hypernetworks and VAE files are locked to the same license, which means EVERYONE has a copyright license to them.
Only their source code is not free, except for all the files they copied from open source repositories.

We went through the same thing with MUDs - command-line precursors to graphical MMORPGs - only the people back then who were taking the code and putting out their own versions had the smarts to name their versions differently.
There's a reason Runway did NOT name their version after themselves - and it isn't a nice reason.

I thank Runway for contributing this model to the community.

For those who already want to give 1.5 a try on Google Colab, I ported the demo app to 1.5 and added support for Google Colab.

You can find it at https://colab.research.google.com/github/jslegers/stable-diffusion/blob/main/Stable_Diffusion_Demo_App.ipynb
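
For anyone who just wants the gist without opening the notebook: running this checkpoint boils down to the standard diffusers text-to-image pipeline. A minimal sketch of my own (assuming a CUDA GPU and a diffusers version that ships StableDiffusionPipeline; the linked notebook's actual setup differs):

```python
# Minimal sketch of running the 1.5 checkpoint with diffusers.
# Assumptions: a CUDA GPU and a diffusers release that ships
# StableDiffusionPipeline; this is NOT the linked notebook's code.
import torch
from diffusers import StableDiffusionPipeline

# Pull the weights straight from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

# One 512x512 image from a text prompt.
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```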

> HuggingFace, as I understand it, is a university research group that SAI and Runway are funding.
>
> Having been part of a (much smaller) university research group, I know that merely funding a group does not give you exclusive rights to a project. If SAI had purchased those rights, the controlling university would've put the kibosh on the shared information shortly after purchase.
>
> This just looks like another case of SAI playing ready fire aim and not understanding what their money bought them.

This isn't close to being true

  1. HuggingFace is not affiliated with the development of the model
  2. HuggingFace is not a university research group
  3. HuggingFace is not funded by Stability and Runway
  4. Neither side is complaining about HuggingFace's actions

HuggingFace's role here is similar to Github's or YouTube's. They own a website where people upload things. This model upload caused a dispute over ownership and Stability asked them to take it down, maybe because either a) they didn't understand the licensing or b) they thought that the model was something it's not (I don't think this is the same model as what's currently offered in DreamStudio?). Then Stability backed down, likely because of a combo of realizing they were going to get a lot of bad press and clearing up their confusion about the previous points.

HuggingFace has an ethical obligation (that they intermittently take seriously) to not allow people to post models and datasets in violation of the licensing of those models and datasets on their website. The standard mechanism for enforcing such an action is to issue a takedown request, and it's very common for things to be taken down pending evaluating the takedown request because in the overwhelming majority of cases the harm done by improper release is more severe than the harm done in needlessly delaying release.

This comment has been hidden

So, I will say I misunderstood the story I was told third-hand about CompVis. A surprising amount of Googling later, I see that the research group now goes by the Machine Vision and Learning research group at Ludwig Maximilian University.

I don't think it's fair or accurate to paint my entire post with the "this isn't close to being true" brush.

Incorrectly stating that this site is run by the research group doesn't invalidate the rest of the post.

You guys should probably read this article
https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai

You just beat me to it...

> You guys should probably read this article
> https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai

Fuck, so this is all because of California congresswoman Eshoo threatening to severely limit open source AI, because it isn't SFW, by contacting the NSA about it.

Welp... looks like I'm not voting for her this upcoming midterm.

California, the land of ruin.

Basically, the situation is entirely as I suspected. To quote myself 13 hours ago:

> Stable Diffusion makes it incredibly easy to make e.g. deepfaked porn starring celebrities or other highly questionable content.
>
> I suspect 1.5 won't be released until they find ways to make it much harder / impossible to produce content of such a questionable nature.

Problem is... The genie was already out of the bottle the moment 1.4 was released. People found out how to turn off the NSFW filter and generate highly questionable content in no time. And no matter how you want to restrict this legally or practically, there will be people who will use it this way, much like there will always be people who use "pirated" software.

Trying to make it impossible to turn off the NSFW filter of future versions of SD and/or similar restrictions intended to reduce the potential for what you guys perceive as "abuse" will only result in fewer people deciding to upgrade. This in turn will have a negative impact for everyone, since it results in a more fractured AI landscape.

Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like e.g. artistic nudes.

Thus, IMO StabilityAI's official position is based on a naive and very nonsensical perspective and RunwayML was totally justified in releasing 1.5 to the public as was promised for weeks!

> Basically, the situation is entirely as I suspected. To quote myself 13 hours ago:
>
> > Stable Diffusion makes it incredibly easy to make e.g. deepfaked porn starring celebrities or other highly questionable content.
> >
> > I suspect 1.5 won't be released until they find ways to make it much harder / impossible to produce content of such a questionable nature.
>
> Problem is... The genie was already out of the bottle the moment 1.4 was released. People found out how to turn off the NSFW filter and generate highly questionable content in no time. And no matter how you want to restrict this legally or practically, there will be people who will use it this way, much like there will always be people who use "pirated" software.
>
> Trying to make it impossible to turn off the NSFW filter of future versions of SD and/or similar restrictions intended to reduce the potential for what you guys perceive as "abuse" will only result in fewer people deciding to upgrade. This in turn will have a negative impact for everyone, since it results in a more fractured AI landscape.
>
> Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like e.g. artistic nudes.
>
> Thus, IMO StabilityAI's official position is based on a naive and very nonsensical perspective and RunwayML was totally justified in releasing 1.5 to the public as was promised for weeks!

We should not kneel to corporations who wish to dominate. I would hate to wake up from a coma to realize that Meta has a monopoly over AI.

> You guys should probably read this article
> https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai

I've never read something more meaningless and delusional than this.
Should we make Adobe add NSFW and deepfake detection to Photoshop? PS has been used for illegal stuff for ages. Yes, it is not as easy as in SD, but it's quite doable.
I would guess we will never know the real reason this got released the way it did.

> I've never read something more meaningless and delusional than this.

"Meaningless and dilusional" sounds like a good motto for what qualifies as culture in present day Commiefornia.

These people live in ivory towers and have zero clues how the real world operates...

Live in your dreamworld, John - however, Congress is indeed already involved in deciding if Stable Diffusion is a security risk, and whether you like it or not, Runway releasing the code today may have just totally destroyed any possibility of open source AI for the future.

In the US you mean, right?

> Live in your dreamworld

My dreamworld?

I'm not the one who believes the genie can be put back in the bottle.

> Runway releasing the code today may have just totally destroyed any possibility of open source AI for the future

StabilityAI censoring future versions of AI would only have resulted in a more fractured AI landscape. That strategy is completely naive and nonsensical and would have hurt literally everyone.

RunwayML was right to respect promises made to the community and release 1.5 as they should have done weeks ago.

And if the US takes legal action to persecute competitors of Google, this may result in a temporary setback for AI, but it is unlikely to have any negative impact in the long run unless the rest of the world follows suit. And I don't see that happening. Also, this might even stimulate the development of some sort of "underground" AI movement, similar to the software "piracy" movement. Good luck restricting "illegitimate" use of AI if that's the road taken...

Y'all might want to read this:
https://www.reddit.com/r/StableDiffusion/comments/y9ga5s/stability_ais_take_on_stable_diffusion_15_and_the/

The community over there is standing pretty unanimously behind RunwayML on this...

> Basically, the situation is entirely as I suspected. To quote myself 13 hours ago:
>
> > Stable Diffusion makes it incredibly easy to make e.g. deepfaked porn starring celebrities or other highly questionable content.
> >
> > I suspect 1.5 won't be released until they find ways to make it much harder / impossible to produce content of such a questionable nature.
>
> Problem is... The genie was already out of the bottle the moment 1.4 was released. People found out how to turn off the NSFW filter and generate highly questionable content in no time. And no matter how you want to restrict this legally or practically, there will be people who will use it this way, much like there will always be people who use "pirated" software.
>
> Trying to make it impossible to turn off the NSFW filter of future versions of SD and/or similar restrictions intended to reduce the potential for what you guys perceive as "abuse" will only result in fewer people deciding to upgrade. This in turn will have a negative impact for everyone, since it results in a more fractured AI landscape.
>
> Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like e.g. artistic nudes.
>
> Thus, IMO StabilityAI's official position is based on a naive and very nonsensical perspective and RunwayML was totally justified in releasing 1.5 to the public as was promised for weeks!

I don't imagine that such changes would have been hard to implement from the start, if there had been any interest in doing so. Remove or genericize the relevant words from the training captions, run a couple of iterations of face and text detection on the training images, and blur them like the faces and license plates on Street View. Not even a human could expect to see through the walls of Plato's Cave with their wonderful million-trillion-parameter organic brain.

What this smacks of is too depressing to write about here, and what it would do to the capability and performance of these models would probably (and sadly) be found out soon enough. Besides, it's not as if someone else would not have replicated the model without such limitations; recall how GPT-2 1.5B was not released due to essentially similar concerns, until someone replicated the results and released their model.

> Hi all,
>
> Cris here - the CEO and Co-founder of Runway. Since our founding in 2018, we’ve been on a mission to empower anyone to create the impossible. So, we’re excited to share this newest version of Stable Diffusion so that we can continue delivering on our mission.
>
> This version of Stable Diffusion is a continuation of the original High-Resolution Image Synthesis with Latent Diffusion Models work that we created and published (now more commonly referred to as Stable Diffusion). Stable Diffusion is an AI model developed by Patrick Esser from Runway and Robin Rombach from LMU Munich. The research and code behind Stable Diffusion was open-sourced last year. The model was released under the CreativeML Open RAIL M License.
>
> We confirm there has been no breach of IP as flagged and we thank Stability AI for the compute donation to retrain the original model.

A very BIG thanks for releasing the MODEL and not censoring it as STABILITY wanted!


Datasets/Models must remain uncensored, that's the spirit of an Open Source Project!

> Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like e.g. artistic nudes.

I like the way Stable Diffusion is doing it. You have a safety filter after generation, which makes sure you don't get NSFW images unless you really want them. The model is capable of everything (within its limits); users can decide for themselves to patch the software, and web services can decide to turn off the filter or to enforce it.

The problem starts if the future really is filtering training material and creating models that are not capable of rendering nude skin. This will probably not only prevent the generation of nude images (whatever the reason: artistic, pornographic, both, or something else), but worsen the performance in other areas as well, because they depend on information that is similar to nudity.

Just compare the approaches of the big networks:

  • Dall-E filters text input and adds words like "woman" to try to improve diversity.
  • Stable Diffusion filters undesirable output, and diversity can be increased by steering the network in that direction.

I prefer the approach where the network is capable of everything: input is unfiltered, but output can be filtered.
The most interesting approach would be to have additional input parameters that steer the network into (N)SFW areas. I think of something like backpropagating the loss of the safety checker into a single "Safety Level" input.
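
For context on what "patching" means here, this is roughly how the post-generation filter is exposed in the diffusers implementation. A minimal sketch (assuming a diffusers version that tolerates a missing safety checker; the exact behavior has varied between releases):

```python
# Sketch of the post-generation safety filter as exposed by diffusers.
# Assumption: a diffusers release that tolerates safety_checker being
# removed; older releases may require a different workaround.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The checker runs AFTER generation: flagged images are replaced with
# black images, but the model's weights are untouched.
result = pipe("a renaissance-style figure study, oil on canvas")
print(result.nsfw_content_detected)  # one boolean per generated image

# A web service keeps the filter enforced; a local user can opt out by
# dropping the component entirely (the "patch" mentioned above).
pipe.safety_checker = None
```

A hypothetical "Safety Level" input as described above would instead condition the model itself, rather than filtering its output after the fact.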

I think it's even more basic than this. At some level, if AI-based image generation is ever truly going to "look right", these models are going to have to understand something about human anatomy. It will never be able to draw a truly correctly fitting hoodie or crop top or t-shirt or wife beater or polo etc unless it knows how to draw the underlying anatomy for both men and women.

Now one could argue that we don't have to feed it truly nude images, but what are we going to do? Take the time to censor the input dataset, removing nipples etc?

SD and its cohorts are an artist in training. Can you imagine trying to teach a human artist how to draw a clothed person without allowing them to know what the body looks like under the clothes?

Runway looks pretty full of it

Look at https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models

High-Resolution Image Synthesis with Latent Diffusion Models
by Patrick Esser et al.

ummmmmmmmm Patrick Esser is the 4TH author


> Now one could argue that we don't have to feed it truly nude images, but what are we going to do? Take the time to censor the input dataset, removing nipples etc?

Also, what about e.g. artistic nudes?

From Neolithic cave paintings to Greco-Roman marble statues and Renaissance paintings, nudity, sex and romance have been popular themes in high art since even before humans were capable of writing.

Often the line between pornographic material and high art is very thick and well-defined. Sometimes it is very thin and blurry. And, at least in my humble opinion, it is not up to tech corps or politicians to determine where to draw that line. Censorship of nudity based on what a select group of corporate or political leaders considers "appropriate" content reeks of totalitarianism, really, and reminds me of some of the darkest chapters of theocratic, Communist and Fascist dictatorships alike. It's not something you want to model yourself after, especially if you want to present yourself as "open" and "free".

> by Patrick Esser et al.
>
> ummmmmmmmm Patrick Esser is the 4TH author

In academia it's the norm for professors to take all or most of the credit for work done by their PhD and post-doc researchers. In my experience, corporate settings are barely different. Status, not actual work done, almost always determines who gets most of the credit.

I don't see anything out of the ordinary here.

No one will know who or what Stability AI is in two years. These VCs aren't naive (lol, okay, maybe a little) but they are desperate for places to put their inflationary money. Stable Diffusion is a sideshow gimmick.

"Look, ma, I put a macaroni face on an avocado!"

Grow up.

> > You guys should probably read this article
> > https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai
>
> Fuck, so this is all because of California congresswoman Eshoo threatening to severely limit open source AI, because it isn't SFW, by contacting the NSA about it.
>
> Welp... looks like I'm not voting for her this upcoming midterm.
>
> California, the land of ruin.

Solution is simple: geoblock Cali/'Merica from downloading the model. Cali/'Merica realizes they just f*****d their state/country out of tech progress. Law is reversed.

The professional managerial class needs to stop adding millions of intermediaries to basic tech, siphoning intellectual capital, and consequently, exponentially hindering progress. Make no mistake, this attempt to corporatize open source software only benefits centralized tech oligarchs, as they are the only ones with the resources to comply. Regulatory capture.

No. The solution is not to fucking block California.
It's free speech; using these AIs for any output images is also basic free speech.
What does the fascist woman want? Ban AI, ban Photoshop, ban nudity?

Stability AI clearly aims at destroying the model to become a "woke and responsible" company. In my opinion they have just now lost the game. I don't give a shit about what they want, as they are anti-open, anti-free and anti-future.

We will not censor AI and we will not censor Photoshop. If anyone tries, we will avoid those people as the gutless beings they are.

> I think it's even more basic than this. At some level, if AI-based image generation is ever truly going to "look right", these models are going to have to understand something about human anatomy. It will never be able to draw a truly correctly fitting hoodie or crop top or t-shirt or wife beater or polo etc unless it knows how to draw the underlying anatomy for both men and women.
>
> Now one could argue that we don't have to feed it truly nude images, but what are we going to do? Take the time to censor the input dataset, removing nipples etc?
>
> SD and its cohorts are an artist in training. Can you imagine trying to teach a human artist how to draw a clothed person without allowing them to know what the body looks like under the clothes?

Yeah, it drastically limits the potential for design use: no swimsuit stuff, or even things like the Hulk with a bare chest.

> Live in your dreamworld, John - however, Congress is indeed already involved in deciding if Stable Diffusion is a security risk, and whether you like it or not, Runway releasing the code today may have just totally destroyed any possibility of open source AI for the future.

OK. Not everyone lives in America. Just saying, not everyone even has a Congress.
There is a lot of America-centrism going on; it raises more questions than it answers.

This thread continues to be a wild ride.

If you honestly think that all the political organizations in the rest of the world aren't also looking at Stable Diffusion with worry, think again. However, the laws that pertain are those of the country where the owners of the code live, and they live in the USA. And if those lawmakers come down on the developers and lock it down, do you honestly think the other politicians around the world aren't going to follow suit?

> No. The solution is not to fucking block California.
> It's free speech; using these AIs for any output images is also basic free speech.
> What does the fascist woman want? Ban AI, ban Photoshop, ban nudity?
>
> Stability AI clearly aims at destroying the model to become a "woke and responsible" company. In my opinion they have just now lost the game. I don't give a shit about what they want, as they are anti-open, anti-free and anti-future.
>
> We will not censor AI and we will not censor Photoshop. If anyone tries, we will avoid those people as the gutless beings they are.

Photoshop is already censored. Try this - load a photo of money into it and modify it. Oops? You can't, can you - wonder why that is.

> Photoshop is already censored. Try this - load a photo of money into it and modify it. Oops? You can't, can you - wonder why that is.

For real?

That's disturbing...

Probably everything with a EURion constellation.
I think most open source image editors do not implement such filters. (I guess some people may say they are not good enough to counterfeit money anyway ;-))

> > Photoshop is already censored. Try this - load a photo of money into it and modify it. Oops? You can't, can you - wonder why that is.
>
> For real?
>
> That's disturbing...

for real - try it out

> for real - try it out

I don't have access to a recent version of Photoshop.

Any idea when they added this "feature"?

I'm assuming as soon as the concern that Photoshop could be used to counterfeit money became high enough. It's been at least 5 years since I worked on the project where I needed to use a piece of a bill in a picture, and I could NOT get Photoshop to allow me to do that. Here's an article on it that was posted in 2011: https://fstoppers.com/news/photoshop-wont-let-you-work-images-currency-7291

Certain color printers have had that feature at the driver level for quite a while now; they print little dots.

https://en.wikipedia.org/wiki/Machine_Identification_Code

I could imagine some form of embeddable steganography/watermarking enabled for larger models that's degradation-resistant, like how "Open"AI is doing it. Not really the thread to talk about it, since Hugging Face repo discussions are still technically Git and for repo-specific issues.

Why are we getting 403s?

> Why are we getting 403s?

I'm not getting errors

> Edit:
> Stability's legal team reached out to Hugging Face retracting the initial takedown request, so we have closed this thread
>
> Takedown request
>
> ⚠️⚠️⚠️
> StabilityAI has requested a takedown of this published model, characterizing it as a leak of their IP
>
> While we await a formal legal request, and even though Hugging Face has no knowledge of the IP agreements (if any) between this repo owner (RunwayML) and StabilityAI, we are flagging this repository as having potential/disputed IP rights.
>
> ⚠️⚠️⚠️
> We reserve the right to take future action, such as disabling this repo, and we ask both the repo owner and StabilityAI to please chime in, if possible publicly here, for accountability and transparency

Did StabilityAI really do a takedown?

That was a long time back, and no, they didn't follow through with it, but they did initially request it.

julien-c locked this discussion
