captjt committed
Commit aa01589
1 Parent(s): 6f6ca6a

pin space to org

Files changed (1):
README.md +15 -15
README.md CHANGED
@@ -4,20 +4,20 @@ emoji: 📉
colorFrom: blue
colorTo: green
sdk: docker
- pinned: false
+ pinned: true
---

# AI Lineage Explorer: A Step Towards AI Integrity.

We're [EQTY Lab](https://eqtylab.io), a team passionate about building trust in open-source AI.

Our years in the thick of AI's evolving landscape have led us to focus on a crucial element: the lineage of AI models. Things move fast when you're developing AI. You experiment. You collaborate with many people. Keeping track of all the moving parts is tough.

But we need to get it right. Transparent and safe open research plays a vital role in democratizing AI technology and in keeping it open.

# Today we are introducing an early version of our first product on Hugging Face Spaces: the AI Lineage Explorer.

The tool is a practical solution designed to enhance transparency throughout the AI engineering process without getting in your way. It's free and launches right from your model page on Hugging Face.

We worked with our [favorite designers](https://www.gladeye.com/) to make this happen and brought some really cool new cryptographic methods to the table, making this a reliable tool that we hope anyone will want to use.
 
@@ -25,7 +25,7 @@ We built the [AI Lineage Explorer](https://huggingface.co/spaces/EQTYLab/lineage

1. Create a tamper-proof manifest that tags along when you ship, bolstering trust in your creation. (A minimal sketch of what such a manifest could look like follows the screenshot below.)

2. Transform boring attestations into an immersive blueprint that provides a real-time visual of your model's entire lifecycle, now just one click away on Hugging Face.

![A screenshot of the AI Lineage Explorer](./screenshot.png)
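
To make "tamper-proof" a little more concrete, here is a minimal sketch of the general idea rather than our actual manifest schema: each artifact a model ships with is bound to a content hash, so any later modification is detectable. The file names and field names below are illustrative assumptions.

```python
import hashlib
import json

def manifest_entry(path: str, role: str) -> dict:
    """Bind a file to its SHA-256 digest so any later change to it is detectable."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"role": role, "path": path, "sha256": digest}

# Toy manifest for a model checkpoint and its training data; placeholder names,
# not a real schema.
manifest = {
    "model": "my-model-v1",
    "artifacts": [
        manifest_entry("weights.safetensors", "model-weights"),
        manifest_entry("train.csv", "training-data"),
    ],
}
print(json.dumps(manifest, indent=2))
```

Re-hashing the shipped files and comparing against the manifest is then enough to detect tampering.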
@@ -40,9 +40,9 @@ We built the [AI Lineage Explorer](https://huggingface.co/spaces/EQTYLab/lineage

The backbone of the AI Lineage Explorer is what we call an Integrity Graph, a concept rooted in cryptographic attestations based on digital signatures and verifiable computing. This approach draws inspiration from our collaborations with organizations like the [Content Authenticity Initiative](https://contentauthenticity.org/) and [Creative Commons](https://creativecommons.org/).
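
To ground what a cryptographic attestation means in practice, here is a minimal sketch of the general pattern rather than the Integrity Graph's actual format: a statement about an artifact is serialized and digitally signed, so anyone with the public key can check both who made the claim and that it has not been altered. The Ed25519 keys and the `cryptography` package below are illustrative choices, not a description of our implementation.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative attestation: the fields and the signature scheme are assumptions.
signing_key = Ed25519PrivateKey.generate()
statement = {
    "artifact": "weights.safetensors",
    "claim": "trained on dataset X under license Y",
}
payload = json.dumps(statement, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Verification succeeds only if the payload is byte-for-byte unchanged;
# otherwise InvalidSignature is raised.
try:
    signing_key.public_key().verify(signature, payload)
    print("attestation verified")
except InvalidSignature:
    print("attestation has been tampered with")
```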

It's not just another theoretical framework for Responsible AI; it's a practical foundation for bolstering the accountability of models. From there, you can do all sorts of awesome things, like designing for compliance, triggering automated reviews, and even paying people their fair share. There's lots more to come.

# But first, we prioritized user experience.

We focused on creating a simple manifest format and an interface that aligns with typical ML engineering workflows, minimizing disruption. A single line of code is all you should need to kick off a smart tool that quietly works in the background; a rough sketch of what that could look like is below.
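
Purely as an illustration of that goal (the package and function names below are hypothetical placeholders, not our published API), the experience we are aiming for looks roughly like this:

```python
# Hypothetical usage sketch: "eqty_lineage" and "track" are placeholder names
# for illustration only, not the actual EQTY Lab API.
import eqty_lineage

# One line at the top of a training script starts background lineage capture;
# the rest of the workflow stays untouched.
eqty_lineage.track(project="my-model", manifest_path="lineage-manifest.json")
```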
@@ -72,7 +72,7 @@ The problem is that AI consists of really complex engineering processes. You mus

That's just the beginning. The reality is that most models are not just one model but an ensemble of models and tasks. Take Stable Diffusion 2's typical architecture. It consists of at least four core models (an image encoder such as a VAE, a CLIP text encoder, the U-Net denoising model, and an image decoder), each with its own intensive training history across multiple datasets, not including Stable Diffusion's own training data. The cool thing is that a bunch of people figured out how to make this model work, but there are lingering uncertainties about the model's actual provenance.

So it's going to take time and a determined community (like the one we have here) to figure it out.

*So how?*
@@ -80,11 +80,11 @@ Our design philosophy revolves around meeting people where they are and keeping

We're working to integrate a set of new tools into each step of ML workflows. But we don't want you to focus on them. Just build AI. The steps should fall into place (a minimal sketch of how they could compose follows the list):

1. **Registration & Documentation**: Capture critical statements about data, model, compute, governance, and transformations.

2. **Integrity & Trust**: Digitally sign each statement to cryptographically establish attributability and tamper-resistance of all inputs, compute, and outputs.

3. **Verifiability**: Use graph data structures to establish chains of lineage for added transparency.

4. **Federated Collaborations**: Registration statements can be produced by many parties and composed together to describe collaborative processes. These statements can also optionally be anchored on public ledgers, fostering a decentralized approach to AI advancement and adding further dimensions of transparency and trust.
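
As the minimal sketch promised above (our own illustration of how steps 1 to 3 could compose, not the product's internals): each statement is content-addressed by its hash and references the statements it builds on, which is what turns individual attestations into a verifiable chain of lineage. Signing each statement, as sketched earlier, adds attributability on top.

```python
import hashlib
import json

def statement_id(statement: dict) -> str:
    """Content-address a statement by hashing its canonical JSON form."""
    return hashlib.sha256(json.dumps(statement, sort_keys=True).encode()).hexdigest()

# Illustrative, hash-linked lineage: field names and values are placeholders.
dataset_stmt = {"type": "dataset", "name": "train.csv", "parents": []}
training_stmt = {
    "type": "training-run",
    "compute": "gpu-cluster, 12h",
    "parents": [statement_id(dataset_stmt)],  # links the run back to its data
}
model_stmt = {
    "type": "model",
    "name": "my-model-v1",
    "parents": [statement_id(training_stmt)],
}

# Walking "parents" reproduces the lineage; changing any ancestor changes every
# downstream id, which is what makes the chain tamper-evident.
print(statement_id(model_stmt))
```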
@@ -95,12 +95,12 @@ Afterward, show your work and easily bring in others to help make your models ev
The bottom line: this isn't an empty promise. While developing the AI Lineage Explorer, we've found ourselves becoming even more committed to the broader movement towards Responsible AI. Here's a sketch of our emerging convictions:

1. **Clarity in AI Creation**: By labeling AI-generated content and providing metadata for authentic content, we're fostering an environment where origin and authenticity are clear, while how to act on that information always remains up to the user.

2. **Ethical Foundation in Training Data**: By helping you steer clear of problematic data, our tool encourages transparency and ethical practices in AI training.

3. **Streamlining Verification**: In line with emerging standards, our tool simplifies the process of verifying credentials, which is especially crucial as more people abuse AI tooling.

4. **Empowering Content Creators**: By automating digital signatures, we make it easier for creators to claim and protect their work without having to go through a centralized registry.

5. **Opening Up AI Access**: We believe in democratizing AI, not centralizing it. Our tool is a step towards a future where AI is accessible, secure, and utilized for the greater good.