EdgeFM committed on
Commit
f27550f
1 Parent(s): a17caa3

Upload 2 files

Files changed (2)
  1. index.html +3 -2
  2. workloads.png +0 -0
index.html CHANGED
@@ -541,9 +541,10 @@
541
  <h2 id="evaluation">8. Experimental evaluation in ICDE 2024 submission</h2>
542
  <h3 id="81">8.1 Basic settings</h3>
543
  <p><b>Testbeds</b>. We evaluate EdgeFM on four heterogeneous edge devices: NVIDIA Jetson TX2 (8GB memory), NVIDIA Xavier NX (16GB memory), NVIDIA AGX Xavier (32GB memory), and NVIDIA AGX Orin (32GB memory).</p>
544
- <p><b>Models</b>. We evaluate EdgeFM on three representative FMs: ViT-B/16 (CV), BERT_base (NLP), and ViLT (multimodal). ViT-B/16 is given three different application heads to perform image classification, object detection, and semantic segmentation, respectively. BERT_base is given two different application heads to perform sentence classification and part-of-speech tagging, respectively. ViLT performs visual question answering. Finally, GPT-Neo is evaluated in the discussion of PEFT techniques.</p>
545
- <p><b>Datasets</b>. We evaluate EdgeFM on 11 different datasets: GTA5, SuperviselyPerson, MSCOCO2017, Cityscapes, BaiduPerson, HL5Domains, Liu3Domains, Ding9Domains, SemEval14, 20Newsgroups, and VQAv2.</p>
546
  <p><b>Baselines</b>. We compare EdgeFM with 13 adaptation methods, including 5 supervised continual learning methods and 8 unsupervised domain adaptation methods.</p>
 
 
 
547
 
548
  <h3 id="82">8.2 Additional details</h3>
549
 <p><b>Online adaptation</b>. For evolving domain shifts, EdgeFM retrains the proxy model with naive feature alignment, a classical method for unsupervised domain adaptation. For evolving new tasks, EdgeFM retrains the proxy model with standard supervised learning.</p>
 
541
  <h2 id="evaluation">8. Experimental evaluation in ICDE 2024 submission</h2>
542
  <h3 id="81">8.1 Basic settings</h3>
543
  <p><b>Testbeds</b>. We evaluate EdgeFM on four heterogeneous edge devices: NVIDIA Jetson TX2 (8GB memory), NVIDIA Xavier NX (16GB memory), NVIDIA AGX Xavier (32GB memory), and NVIDIA AGX Orin (32GB memory).</p>
 
 
544
  <p><b>Baselines</b>. We compare EdgeFM with 13 adaptation methods, including 5 supervised continual learning methods and 8 unsupervised domain adaptation methods.</p>
545
+ <p><b>Workloads</b>. We evaluate EdgeFM on three representative FMs: ViT-B/16 (CV), BERT_base (NLP), and ViLT (multimodal). ViT-B/16 is given three different application heads to perform image classification, object detection, and semantic segmentation, respectively. BERT_base is given two different application heads to perform sentence classification and part-of-speech tagging, respectively. ViLT performs visual question answering. Finally, GPT-Neo is evaluated in the discussion of PEFT techniques. We evaluate EdgeFM on 11 different datasets: GTA5, SuperviselyPerson, MSCOCO2017, Cityscapes, BaiduPerson, HL5Domains, Liu3Domains, Ding9Domains, SemEval14, 20Newsgroups, and VQAv2. More details are given in the table below.</p>
546
+ <img style="width: 80%; margin: 0 auto; margin-bottom: 10px;" src="workloads.png">
547
+
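To make the head-attachment setup in the Workloads paragraph above concrete, here is a minimal PyTorch sketch of putting a task-specific classification head on a frozen ViT-B/16 backbone. It uses torchvision's ViT implementation and an illustrative class count; it shows the general backbone-plus-head pattern only and is not EdgeFM's actual code (the detection and segmentation heads in particular are more involved).

import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# Foundation-model backbone; pass torchvision's ViT_B_16_Weights.DEFAULT
# for pretrained parameters (omitted here so the sketch runs offline).
backbone = vit_b_16(weights=None)
backbone.heads = nn.Identity()         # strip the default ImageNet classifier
for p in backbone.parameters():        # keep the shared backbone frozen
    p.requires_grad = False

num_classes = 10                       # illustrative label count for one application
cls_head = nn.Linear(768, num_classes) # ViT-B/16 class-token dimension is 768

x = torch.randn(2, 3, 224, 224)        # dummy image batch
with torch.no_grad():
    feats = backbone(x)                # (2, 768) class-token features
logits = cls_head(feats)               # (2, num_classes) task-specific predictions

The same pattern extends to BERT_base and ViLT by swapping the backbone and the head; only the lightweight heads differ per application.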
548
 
549
  <h3 id="82">8.2 Additional details</h3>
550
 <p><b>Online adaptation</b>. For evolving domain shifts, EdgeFM retrains the proxy model with naive feature alignment, a classical method for unsupervised domain adaptation. For evolving new tasks, EdgeFM retrains the proxy model with standard supervised learning.</p>
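As a rough illustration of the online adaptation described above, the sketch below implements naive feature alignment as a mean-feature matching term added to a supervised loss on labeled source data; `proxy_model`, `head`, and the weighting `lam` are illustrative placeholders, not EdgeFM's released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_alignment_loss(src_feats: torch.Tensor, tgt_feats: torch.Tensor) -> torch.Tensor:
    # Naive alignment: match the mean feature of the labeled source batch
    # against that of the unlabeled target batch.
    return (src_feats.mean(dim=0) - tgt_feats.mean(dim=0)).pow(2).sum()

def adapt_step(proxy_model: nn.Module, head: nn.Module, optimizer,
               src_x, src_y, tgt_x, lam: float = 0.1) -> float:
    # One proxy-model retraining step under an evolving domain shift:
    # supervised loss on source data plus the unsupervised alignment term.
    src_feats = proxy_model(src_x)
    tgt_feats = proxy_model(tgt_x)
    loss = (F.cross_entropy(head(src_feats), src_y)
            + lam * feature_alignment_loss(src_feats, tgt_feats))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

For evolving new tasks, the same step reduces to standard supervised retraining by dropping the alignment term and training on the labeled data of the new task.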
workloads.png ADDED