kenobi committed
Commit 29d578b
1 Parent(s): f65c079

Update README.md

Files changed (1):
README.md (+58 -9)
README.md CHANGED
@@ -41,7 +41,7 @@ For the SDO team: this model is the first version for demonstration purposes. It
 We will include more technical details here soon.

 ## Example Images
- --> Use one of the images below for the inference API field on the upper right.

 #### High_Energy_Ion_Fe_Nuclei

@@ -58,26 +58,75 @@ The ViT model was pretrained on a dataset consisting of 14 million images and 21
 More information on the base model used can be found here: (https://huggingface.co/google/vit-base-patch16-224-in21k)

 ## How to use this Model
- (quick snippet to work on Google Colab - comment the pip install for local use if you have transformers already installed)

 ```python
- !pip install transformers --quiet
 from transformers import AutoFeatureExtractor, AutoModelForImageClassification
 from PIL import Image
- import requests

- url = 'https://roosevelt.devron-systems.com/HF/P242_73665006707-A6_002_008_proj.tif'
- image = Image.open(requests.get(url, stream=True).raw)

 feature_extractor = AutoFeatureExtractor.from_pretrained("kenobi/NASA_GeneLab_MBT")
 model = AutoModelForImageClassification.from_pretrained("kenobi/NASA_GeneLab_MBT")
- inputs = feature_extractor(images=image, return_tensors="pt")

 outputs = model(**inputs)
 logits = outputs.logits
- # model predicts one of the two fine-tuned classes (High_Energy_Ion_Fe_Nuclei or XRay_irradiated_Nuclei)
 predicted_class_idx = logits.argmax(-1).item()
- print("Predicted class:", model.config.id2label[predicted_class_idx])
 ```

 ## BibTeX & References

 We will include more technical details here soon.

 ## Example Images
+ >>> Use one of the images below for the inference API field on the upper right.

 #### High_Energy_Ion_Fe_Nuclei

 
 More information on the base model used can be found here: (https://huggingface.co/google/vit-base-patch16-224-in21k)

 ## How to use this Model
+ (quick snippets to run on Google Colab)

+ First, a snippet to download test images from an online repository:
 ```python
+ import urllib.request
+
+ def download_image(url, filename):
+     try:
+         # Define custom headers
+         headers = {
+             'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
+         }
+
+         # Create a request with custom headers
+         req = urllib.request.Request(url, headers=headers)
+
+         # Open the URL and read the content
+         with urllib.request.urlopen(req) as response:
+             img_data = response.read()
+
+         # Write the content to a file
+         with open(filename, 'wb') as handler:
+             handler.write(img_data)
+
+         print(f"Image '{filename}' downloaded successfully")
+     except Exception as e:
+         print(f"Error downloading the image '{filename}':", e)
+
+ # List of URLs and corresponding filenames
+ urls = [
+     ('https://roosevelt.devron-systems.com/HF/P242_73665006707-A6_002_008_proj.tif', 'P242_73665006707-A6_002_008_proj.tif'),
+     ('https://roosevelt.devron-systems.com/HF/P278_73668090728-A7_003_027_proj.tif', 'P278_73668090728-A7_003_027_proj.tif')
+ ]
+
+ # Download each image
+ for url, filename in urls:
+     download_image(url, filename)
+ ```
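A server error can silently produce an HTML page saved under a `.tif` name, so it can be worth sanity-checking each download before inference. A minimal standard-library sketch (the `looks_like_tiff` helper name is ours, not part of the original snippet); it inspects the four-byte magic number that every TIFF file starts with:

```python
def looks_like_tiff(path):
    """Return True if the file begins with a valid TIFF header."""
    with open(path, 'rb') as f:
        magic = f.read(4)
    # Little-endian ('II*\0') or big-endian ('MM\0*') TIFF byte order marks
    return magic in (b'II*\x00', b'MM\x00*')
```

Running this on each downloaded filename fails fast, instead of surfacing a confusing PIL error later.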
+
+ Then use the images for inference:
+
+ ```python
+ #!pip install transformers --quiet  # uncomment this pip install for local use if you do not have transformers installed
 from transformers import AutoFeatureExtractor, AutoModelForImageClassification
 from PIL import Image

+ # Load the image
+ #image = Image.open('P242_73665006707-A6_002_008_proj.tif')  # First Image
+ image = Image.open('P278_73668090728-A7_003_027_proj.tif')  # Second Image

+ # Convert grayscale image to RGB
+ image_rgb = image.convert("RGB")
+
+ # Load the pre-trained feature extractor and classification model
 feature_extractor = AutoFeatureExtractor.from_pretrained("kenobi/NASA_GeneLab_MBT")
 model = AutoModelForImageClassification.from_pretrained("kenobi/NASA_GeneLab_MBT")

+ # Extract features from the image
+ inputs = feature_extractor(images=image_rgb, return_tensors="pt")
+
+ # Perform classification
 outputs = model(**inputs)
 logits = outputs.logits
+
+ # Obtain the predicted class index and label
 predicted_class_idx = logits.argmax(-1).item()
+ predicted_class_label = model.config.id2label[predicted_class_idx]
+
+ print("Predicted class:", predicted_class_label)
 ```
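The snippet above reports only the arg-max class. If a confidence score is also wanted, the logits can be passed through a softmax. A plain-Python sketch of that post-processing step — the two label names are the model's fine-tuned classes from this README, but the index-to-label order and the logit values below are illustrative only (the real mapping comes from `model.config.id2label` and the real values from `logits[0].tolist()`):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative stand-ins for model.config.id2label and logits[0].tolist()
id2label = {0: "High_Energy_Ion_Fe_Nuclei", 1: "XRay_irradiated_Nuclei"}
logits = [2.0, -1.0]

probs = softmax(logits)
predicted_idx = probs.index(max(probs))
print(id2label[predicted_idx], round(probs[predicted_idx], 3))
```

With two classes this reduces to a sigmoid of the logit difference, so the printed probability is directly interpretable as the model's confidence in the winning class.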
  ## BibTeX & References