🚩 Report : Legal issue(s) #1

by julien-c HF staff - opened

Edit:
Stability's legal team reached out to Hugging Face to retract the initial takedown request, so we have closed this thread

Takedown request

⚠️⚠️⚠️
The company StabilityAI has requested a takedown of this published model, characterizing it as a leak of their IP

While we await a formal legal request, and even though Hugging Face has no knowledge of the IP agreements (if any) between this repo owner (RunwayML) and StabilityAI, we are flagging this repository as having potential/disputed IP rights.

⚠️⚠️⚠️
We reserve the right to take future action, such as disabling this repo, and we ask both the repo owner and StabilityAI to chime in, if possible publicly here, for accountability and transparency.

julien-c changed discussion title from 🚩 Report : Legal issue(s) to 🚩 Report : Legal issue(s) : Takedown request

This timeline is insane 🍿

It's easy, you will learn diffusion.

@julien-c If this isn't official, then I'm wondering how HF decided to incorporate Runway ML's inpainting checkpoint into the official diffusers. The announcement even mentioned that this is the official Stable Diffusion inpainting model: https://github.com/huggingface/diffusers/releases/tag/v0.6.0

Might be useful to take a look at it!

yeah - this isn't their model and does belong to stability.ai - it shouldn't be in the public hands like this.

> yeah - this isn't their model and does belong to stability.ai - it shouldn't be in the public hands like this.

Runway worked on this model too, this isn't just SAI even if they contributed the most to training. The original code is from a university group so really everyone here is building on top of each other.

Guess they decided to run away with the model.


I was excited for SD because of its potential as an art tool, but I think I'm now staying for the new shitshow each week. Fake apologies, subreddit turmoil, staff negligence, and communication issues are the name of the game for Stability.

Hi all,

Cris here - the CEO and Co-founder of Runway. Since our founding in 2018, we’ve been on a mission to empower anyone to create the impossible. So, we’re excited to share this newest version of Stable Diffusion so that we can continue delivering on our mission.

This version of Stable Diffusion is a continuation of the original High-Resolution Image Synthesis with Latent Diffusion Models work that we created and published (now more commonly referred to as Stable Diffusion). Stable Diffusion is an AI model developed by Patrick Esser from Runway and Robin Rombach from LMU Munich. The research and code behind Stable Diffusion was open-sourced last year. The model was released under the CreativeML Open RAIL M License.

We confirm there has been no breach of IP as flagged and we thank Stability AI for the compute donation to retrain the original model.

photoshop?

Okay nerds

"We confirm there has been no breach of IP as flagged and we thank Stability AI for the compute donation to retrain the original model."
This is some running-with-the-model if I've ever seen one, especially if you agreed with Stability not to release it. I guess you deemed it legally safe and just literally RAN with the model. Funny.

"we thank Stability AI for the compute donation to retrain the original model."

Spicy. 👍

Fantastic Thread.

Hey I was here too!
Thanks for holding SAI to their word on releasing 1.5

Are you ready for another drama kids?
(Aye-aye, Captain!)

Emad just said on Discord that the takedown request has been 'reversed', and the warning is gone.

This is way more interesting than anything happening in Crypto rn. 🍿

Came for the weird and wacky imagery that can be created , stayed for the drama

julien-c changed discussion title from 🚩 Report : Legal issue(s) : Takedown request to 🚩 Report : Legal issue(s)
julien-c changed discussion status to closed

"if possible publicly here for accountability and transparency" one team decided not to I guess

Very sad outcome, and no respect for Runway at all now. If you want to release the model, at least have the decency to name it something other than Stable Diffusion so there's no confusion between your pirated copy and stability.ai's official version.

> very sad outcome and no respect for runway at all now. you want to release the model, at least have the decency to name it something other than stable diffusion so there's no confusion between your pirated copy and stability.ai's official version.

They're one of the groups working on Stable Diffusion with Stability.

Wrong site, please refer to https://twitter.com/

and I thought Stability AI was a more friendly and open-source company ....

> and I thought Stability AI was a more friendly and open-source company ....

take down request was reversed

*Casually typing git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 into my terminal and grabbing some popcorn as I stumble on this thread...*

Oh, here's the popcorn...

Now, please continue...


> and I thought Stability AI was a more friendly and open-source company ....

> take down request was reversed

Shouldn't have been issued in the first place.

> very sad outcome and no respect for runway at all now. you want to release the model, at least have the decency to name it something other than stable diffusion so there's no confusion between your pirated copy and stability.ai's official version.

Wow

> and I thought Stability AI was a more friendly and open-source company ....

> take down request was reversed

> Shouldn't have been issued in the first place.

I agree 100%.

Stability needs to think long and hard about who on their team is making these decisions, and probably take their decision making authority away.

Between this, the subreddit thing and the AUTOMATIC1111 drama/falsehood, that's three fairly major reversals in a very short period of time.

> Between this, the subreddit thing and the AUTOMATIC1111 drama/falsehood, that's three fairly major reversals in a very short period of time.

↑↑↑ THIS ↑↑↑

I've been but a casual & passive observer in all of this drama, but the more I learn about it the worse it looks for both HuggingFace & StabilityAI.

Get your shit together, folks. Y'all look like a bunch of amateurs who don't know what they're doing.

IMO, the only party that's done anything wrong is SAI.

HuggingFace, as I understand it, is a university research group that SAI and Runway are funding.

Having been part of a (much smaller) university research group, I know that merely funding a group does not give you exclusive rights to a project. If SAI had purchased those rights, the controlling university would've put the kibosh on the shared information shortly after purchase.

This just looks like another case of SAI playing ready fire aim and not understanding what their money bought them.

As I see it, the quick rise of SD is causing instabilities in the structures that work with it.
Suddenly it's a $100M business; they are overwhelmed, and one hand doesn't know what the other should be doing.

The outcome will be interesting.

You'd think that IF they weren't trying to steal this, they'd have had the common decency to name their release after themselves - maybe something like, I dunno, Runway Diffusion. But no, all the name recognition is in Stable Diffusion; everyone knows that stability.ai released it. Who's even heard of Runway? But Runway knows people want 1.5. I can just hear the argument between stability.ai and Runway: "You need to release the update." "It's not ready." "It is ready, people want it, release it." "NO, we're still working on it, it isn't ready." "Fine, WE'LL release it whether you like it or not!"

Incidentally, tests people have done on this release and posted online show that it has some serious instabilities.

have fun

> incidently - tests people have done on this release, and posted online, show that it has some serious instabilities in it

Any links you can provide for more details?

I was planning to start testing as soon as I got some sleep...

It's always good to be prepared...

> you'd think that IF they weren't trying to steal this, they'd have had the common decency to name their release after themselves - maybe something like, I dunno - runway diffusion. But no, all the name recognition is stable diffusion, everyone knows that stability.ai released it. Who's even heard of runway? But runway knows people want 1.5. i can just hear the argument between stability.ai and runway "you need to release the update" "it's not ready" "it is ready, people want it, release it." "NO, we're still working on it, it isnt' ready." "fine, WE'LL release it whether you like it or not!"

Well that's just wrong, everything you said was wrong.

The 1.5 model is built on base of the 1.2 model which is built on 1.1, it is a continued training which means the license is unchanged.
The license carries over, just as when you fork an open source project. You can't just make it private source afterward.
The license: https://huggingface.co/spaces/CompVis/stable-diffusion-license
"Distribution and Redistribution: You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications"
"Grant of Copyright License: Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform,
sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model."
"Derivatives of the Model: means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model."
The 1.5 model is (also by license definition) a "Derivative of the Model".

It is not possible to steal it, and the same goes for ANY other "leak" we've seen. Any model based on the Stable Diffusion models is a derivative and can be distributed or "leaked": you have a copyright license to it.
Anyone can have it; it's not stealing if it is a GIFT.

This also applies to NAI: their models, hypernetworks and VAE files are bound by the same license, which means EVERYONE has a copyright license to them.
Only their source code is not free, except for all the files they copied from open-source repositories.

We went through the same thing with MUDs - command-line precursors to graphical MMORPGs - only the people back then who were taking the code and putting out their own versions had the smarts to name their versions differently.
There's a reason Runway did NOT name their version after themselves - and it isn't a nice reason.

I thank Runway for contributing this model to the community.

For those who already want to give 1.5 a try on Google Colab, I ported the demo app to 1.5 and added support for Google Colab.

You can find it at https://colab.research.google.com/github/jslegers/stable-diffusion/blob/main/Stable_Diffusion_Demo_App.ipynb
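For anyone who'd rather script it than use the notebook: assuming the diffusers library (the format this repo ships in), loading 1.5 looks roughly like the sketch below. The helper name and prompt are my own, not from the Colab demo app linked above.

```python
# Sketch of loading runwayml/stable-diffusion-v1-5 with the diffusers
# library. Requires: pip install diffusers transformers torch
# (and a GPU for reasonable generation speed).

def load_sd15(device: str = "cuda"):
    """Download (or reuse the cached copy of) the 1.5 weights and move them to `device`."""
    from diffusers import StableDiffusionPipeline  # heavy import, done lazily
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return pipe.to(device)

# Usage (downloads several GB of weights on first run):
#   pipe = load_sd15()
#   image = pipe("a photo of an astronaut riding a horse").images[0]
#   image.save("astronaut.png")
```

The download and generation are left as comments so the module imports cheaply; call `load_sd15()` only when you actually want the weights.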

> HuggingFace, as I understand it, is a university research group that SAI and Runway are funding.

> Having been part of a (much smaller) university research group, I know that merely funding a group does not give you exclusive rights to a project. If SAI had purchased those rights, the controlling university would've put the kibosh on the shared information shortly after purchase.

> This just looks like another case of SAI playing ready fire aim and not understanding what their money bought them.

This isn't close to being true

  1. HuggingFace is not affiliated with the development of the model
  2. HuggingFace is not a university research group
  3. HuggingFace is not funded by Stability and Runway
  4. Neither side is complaining about HuggingFace's actions

HuggingFace's role here is similar to Github's or YouTube's. They own a website where people upload things. This model upload caused a dispute over ownership and Stability asked them to take it down, maybe because either a) they didn't understand the licensing or b) they thought that the model was something it's not (I don't think this is the same model as what's currently offered in DreamStudio?). Then Stability backed down, likely because of a combo of realizing they were going to get a lot of bad press and clearing up their confusion about the previous points.

HuggingFace has an ethical obligation (that they intermittently take seriously) to not allow people to post models and datasets in violation of the licensing of those models and datasets on their website. The standard mechanism for enforcing such an action is to issue a takedown request, and it's very common for things to be taken down pending evaluating the takedown request because in the overwhelming majority of cases the harm done by improper release is more severe than the harm done in needlessly delaying release.


So, I will say I misunderstood the story I was told third-hand about CompVis. A surprising amount of Googling later, I see that the research group now goes by the Machine Vision and Learning research group at Ludwig Maximilian University.

I don't think it's fair or accurate to paint my entire post with the "this isn't close to being true" brush.

Incorrectly stating that this site is run by the research group doesn't invalidate the rest of the post.

You guys should probably read this article
https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai

You just beat me to it...

> You guys should probably read this article
> https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai

Fuck, so this is all because of California Congresswoman Eshoo threatening to severely limit open-source AI because it isn't SFW, by contacting the NSA about it.

Welp... looks like I'm not voting for her this upcoming midterm.

california, the land of ruin.

Basically, the situation is entirely as I suspected. To quote myself 13 hours ago:

> Stable Diffusion makes it incredibly easy to make eg. deepfaked porn starring celebrities or other highly questionable content.

> I suspect 1.5 won't be released until they find ways to make it much harder / impossible to produce content of such a questionable nature.

Problem is... The genie was already out of the bottle the moment 1.4 was released. People found out how to turn off the NSFW filter and generate highly questionable content in no time. And no matter how you want to restrict this legally or practically, there will be people who will use it this way, much like there will always be people who use "pirated" software.

Trying to make it impossible to turn off the NSFW filter of future versions of SD and/or similar restrictions intended to reduce the potential for what you guys perceive as "abuse" will only result in fewer people deciding to upgrade. This in turn will have a negative impact for everyone, since it results in a more fractured AI landscape.

Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like eg. artistic nudes.

Thus, IMO StabilityAI's official position is based on a naive and very nonsensical perspective and RunwayML was totally justified in releasing 1.5 to the public as was promised for weeks!

> Basically, the situation is entirely as I suspected. To quote myself 13 hours ago:

> Stable Diffusion makes it incredibly easy to make eg. deepfaked porn starring celebrities or other highly questionable content.

> I suspect 1.5 won't be released until they find ways to make it much harder / impossible to produce content of such a questionable nature.

> Problem is... The genie was already out of the bottle the moment 1.4 was released. People found out how to turn off the NSFW filter and generate highly questionable content in no time. And no matter how you want to restrict this legally or practically, there will be people who will use it this way, much like there will always be people who use "pirated" software.

> Trying to make it impossible to turn off the NSFW filter of future versions of SD and/or similar restrictions intended to reduce the potential for what you guys perceive as "abuse" will only result in fewer people deciding to upgrade. This in turn will have a negative impact for everyone, since it results in a more fractured AI landscape.

> Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like eg. artistic nudes.

> Thus, IMO StabilityAI's official position is based on a naive and very nonsensical perspective and RunwayML was totally justified in releasing 1.5 to the public as was promised for weeks!

We should not kneel to corporations who wish to dominate. I would hate to wake up from a coma to realize that Meta has a monopoly over AI.

> You guys should probably read this article
> https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai

I've never read anything more meaningless and delusional than this.
Should we make Adobe build NSFW and deepfake detection into Photoshop? PS has been used for illegal stuff for ages. Yes, it is not as easy as in SD, but it's quite doable.
I guess we will never know the real reason this got released the way it did.

> I've never read something more meaningless and delusional than this.

"Meaningless and delusional" sounds like a good motto for what qualifies as culture in present-day Commiefornia.

These people live in ivory towers and have zero clues how the real world operates...

Live in your dreamworld, John - however, Congress is indeed already involved in deciding whether Stable Diffusion is a security risk, and whether you like it or not, Runway releasing the code today may have just totally destroyed any possibility of open-source AI for the future.

In the US you mean, right?

> live in your dreamworld

My dreamworld?

I'm not the one who believes the genie can be put back in the bottle.

> runway releasing the code today may have just totally destroyed any possibility of open source AI for the future

StabilityAI censoring future versions of AI would only have resulted in a more fractured AI landscape. That strategy is completely naive and nonsensical and would have hurt literally everyone.

RunwayML was right to respect promises made to the community and release 1.5 as they should have done weeks ago.

And if the US takes legal action against competitors of Google, this may result in a temporary setback for AI, but it is unlikely to have any negative impact in the long run unless the rest of the world follows suit. And I don't see that happening. Also, this might even stimulate the development of some sort of "underground" AI movement, similar to the software "piracy" movement. Good luck restricting "illegitimate" use of AI if that's the road taken...

Y'all might want to read this :
https://www.reddit.com/r/StableDiffusion/comments/y9ga5s/stability_ais_take_on_stable_diffusion_15_and_the/

The community over there is standing pretty unanimously behind RunwayML on this...

> Basically, the situation is entirely as I suspected. To quote myself 13 hours ago:

> Stable Diffusion makes it incredibly easy to make eg. deepfaked porn starring celebrities or other highly questionable content.

> I suspect 1.5 won't be released until they find ways to make it much harder / impossible to produce content of such a questionable nature.

> Problem is... The genie was already out of the bottle the moment 1.4 was released. People found out how to turn off the NSFW filter and generate highly questionable content in no time. And no matter how you want to restrict this legally or practically, there will be people who will use it this way, much like there will always be people who use "pirated" software.

> Trying to make it impossible to turn off the NSFW filter of future versions of SD and/or similar restrictions intended to reduce the potential for what you guys perceive as "abuse" will only result in fewer people deciding to upgrade. This in turn will have a negative impact for everyone, since it results in a more fractured AI landscape.

> Because of this, it's better to embrace the situation as it is and acknowledge that most people outside the puritanical USA will turn off the NSFW filter by default anyway. Those who abuse this are a minority anyway. And, considering the genie is already out of the bottle, it makes far more sense to focus on detecting illegal and/or immoral usage of AI and providing a legal framework that allows prosecution thereof than to focus on restricting what's possible with SD and therewith limit not just illegitimate usages thereof but also many legitimate uses, like eg. artistic nudes.

> Thus, IMO StabilityAI's official position is based on a naive and very nonsensical perspective and RunwayML was totally justified in releasing 1.5 to the public as was promised for weeks!

I don't imagine that such changes would have been hard to implement from the start, if there had been any interest in doing so. Remove or genericize the relevant words from the training captions, run a couple of iterations of face and text detection on the training images, and blur them like the faces and license plates on Street View. Not even a human could expect to see through the walls of Plato's Cave with their wonderful million-trillion-parameter organic brain.

What this smacks of is too depressing to write about here, and what it would do to the capability and performance of these models would probably (and sadly) be found out soon enough. And it's not as if someone else wouldn't have replicated the model without such limitations, much like how GPT-2 1.5B was initially withheld over essentially similar concerns, until someone replicated the results and released their model.
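The preprocessing idea above (detect a sensitive region, then blur it out before training, Street View style) is simple enough to sketch without any ML at all. The function and the toy 4x4 "image" below are purely illustrative; a real pipeline would run a face/text detector and operate on full-size RGB images.

```python
# Illustrative sketch: apply a simple box blur to a detected region of an
# "image" (here, a plain 2D grid of grayscale values) before it enters a
# training set. The region coordinates are made up for the example; a real
# pipeline would get them from a face/text detector.

def box_blur_region(img, top, left, bottom, right, radius=1):
    """Return a copy of img with a box blur applied inside [top:bottom, left:right]."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # leave the original untouched
    for y in range(top, bottom):
        for x in range(left, right):
            total, count = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total // count  # average of the in-bounds neighborhood
    return out

# A tiny 4x4 "image" with one bright pixel inside the region to be blurred.
img = [
    [0, 0, 0, 0],
    [0, 90, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
blurred = box_blur_region(img, 0, 0, 3, 3)
```

The bright pixel gets smeared across its neighborhood while pixels outside the region are left alone, which is exactly the property that makes this kind of redaction cheap to bolt onto a data pipeline.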

> Hi all,

> Cris here - the CEO and Co-founder of Runway. Since our founding in 2018, we’ve been on a mission to empower anyone to create the impossible. So, we’re excited to share this newest version of Stable Diffusion so that we can continue delivering on our mission.

> This version of Stable Diffusion is a continuation of the original High-Resolution Image Synthesis with Latent Diffusion Models work that we created and published (now more commonly referred to as Stable Diffusion). Stable Diffusion is an AI model developed by Patrick Esser from Runway and Robin Rombach from LMU Munich. The research and code behind Stable Diffusion was open-sourced last year. The model was released under the CreativeML Open RAIL M License.

> We confirm there has been no breach of IP as flagged and we thank Stability AI for the compute donation to retrain the original model.

A very BIG thx for releasing the MODEL and not censoring it as STABILITY wanted!


Datasets/Models must remain uncensored, that's the spirit of an Open Source Project!