value: 0.8888888888888888
---
# vit-dunham-carbonate-classifier

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on Data S1 of [Lokier & Al Junaibi (2016)](https://onlinelibrary.wiley.com/doi/10.1111/sed.12293).
It achieves the following results on the evaluation set:
- Loss: 0.7676
- Accuracy: 0.8889

## Model description

The model captures the expertise of 177 volunteers from 33 countries, with a combined 3270 years of academic and industry experience, who classified 14 carbonate thin-section samples using the classical Dunham (1962) carbonate classification. In the original paper, the authors set out to assess objectively whether these volunteers apply the Dunham classification consistently.

## Intended uses & limitations

- Input: a carbonate thin-section image, either plane-polarized (PPL) or cross-polarized (XPL)
- Output: a Dunham class (Mudstone/Wackestone/Packstone/Grainstone/Boundstone/Crystalline) together with its probability

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64ff0bce56243ce8cb6df456/r4aBwewYuL-WLfTdqqFL-.png)
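The output side of this interface can be sketched as a plain softmax over the six class logits. The logit values below are made up for illustration, and the class ordering is an assumption, not the model's actual `id2label` mapping:

```python
import math

# The six Dunham classes named on this card; this ordering is arbitrary.
classes = ["Mudstone", "Wackestone", "Packstone", "Grainstone", "Boundstone", "Crystalline"]

# Hypothetical raw logits from the ViT classification head for one image.
logits = [0.1, 0.3, 2.5, 1.2, -1.0, -0.5]

# Softmax turns logits into a probability distribution over the classes.
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]
best = max(range(len(classes)), key=lambda i: probs[i])
print(classes[best], round(probs[best], 3))  # the predicted class and its probability
```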

## Training and evaluation data

Source: [Lokier & Al Junaibi (2016), Data S1](https://onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1111%2Fsed.12293&file=sed12293-sup-0001-SupInfo.zip)

The data consist of 14 samples. Each sample was photographed at three magnifications (x2, x4, and x10), each in both PPL and XPL, so there are 14 * 3 * 2 = 84 images in the training dataset.
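The image count follows directly from enumerating the combinations; a quick sketch (the file-naming scheme is hypothetical):

```python
from itertools import product

samples = range(1, 15)                # 14 thin-section samples
magnifications = ["x2", "x4", "x10"]  # three magnifications per sample
polarizations = ["PPL", "XPL"]        # each photographed under both polarizations

# One hypothetical file name per (sample, magnification, polarization) combination.
images = [
    f"sample{s:02d}_{m}_{p}.png"
    for s, m, p in product(samples, magnifications, polarizations)
]
print(len(images))  # 84
```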

The classification for each sample is the most popular response among the respondents in Table 7:
- Sample 1: Packstone
- Sample 2: Grainstone
- Sample 3: Wackestone
- Sample 4: Packstone
- Sample 5: Wackestone
- Sample 6: Packstone
- Sample 7: Packstone
- Sample 8: Mudstone
- Sample 9: Crystalline
- Sample 10: Grainstone
- Sample 11: Wackestone
- Sample 12: Grainstone
- Sample 13: Grainstone
- Sample 14: Mudstone
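Tallying the classes above shows how small and imbalanced the 14-sample training set is:

```python
from collections import Counter

# Majority classes per sample, transcribed from the list above (Table 7 of the paper).
sample_classes = [
    "Packstone", "Grainstone", "Wackestone", "Packstone", "Wackestone",
    "Packstone", "Packstone", "Mudstone", "Crystalline", "Grainstone",
    "Wackestone", "Grainstone", "Grainstone", "Mudstone",
]

counts = Counter(sample_classes)
print(counts)  # Packstone and Grainstone: 4 each; Wackestone: 3; Mudstone: 2; Crystalline: 1
print("Boundstone" in counts)  # False: no training example for that class
```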

Note that the original dataset is missing a Boundstone sample.

## Training procedure