Added credits
README.md CHANGED
@@ -86,4 +86,18 @@ Running tokenizer on train dataset: 100%|█████████████
 
 ---
 license: gpl-3.0
----
+---
+
+
+
+Credit / Acknowledgements / Blame
+---------------------------------
+
+Shawn Graham (https://huggingface.co/sgraham https://carleton.ca/history/people/shawn-graham/) figured out how to use
+available open source tooling to fine-tune the CLIP model.
+
+Eric Kansa (https://github.com/ekansa) put together the captioned training data and further streamlined and debugged the
+open source tooling to fine-tune the CLIP model.
+
+Many, many data contributors to Open Context did the hard work of excavating, surveying, and documenting archaeology
+materials used in this training.