Chelsea707 committed on
Commit 4d913d1 · verified · 1 Parent(s): 6c300fa

Add Batch 6c17c774-667b-4e48-b5e3-743d3ffd38f7

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/c36592bf-d5fb-4809-b5a5-c95787696dbf_content_list.json +3 -0
  2. NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/c36592bf-d5fb-4809-b5a5-c95787696dbf_model.json +3 -0
  3. NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/c36592bf-d5fb-4809-b5a5-c95787696dbf_origin.pdf +3 -0
  4. NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/full.md +0 -0
  5. NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/images.zip +3 -0
  6. NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/layout.json +3 -0
  7. NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/8833d9e0-1854-4f4d-bdfe-51f3284cf8e9_content_list.json +3 -0
  8. NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/8833d9e0-1854-4f4d-bdfe-51f3284cf8e9_model.json +3 -0
  9. NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/8833d9e0-1854-4f4d-bdfe-51f3284cf8e9_origin.pdf +3 -0
  10. NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/full.md +719 -0
  11. NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/images.zip +3 -0
  12. NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/layout.json +3 -0
  13. NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/bcd9cdba-c978-4228-b2cb-d5da71ce3406_content_list.json +3 -0
  14. NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/bcd9cdba-c978-4228-b2cb-d5da71ce3406_model.json +3 -0
  15. NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/bcd9cdba-c978-4228-b2cb-d5da71ce3406_origin.pdf +3 -0
  16. NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/full.md +0 -0
  17. NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/images.zip +3 -0
  18. NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/layout.json +3 -0
  19. NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/fbf329ca-83e7-4d17-ac37-18f99b180880_content_list.json +3 -0
  20. NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/fbf329ca-83e7-4d17-ac37-18f99b180880_model.json +3 -0
  21. NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/fbf329ca-83e7-4d17-ac37-18f99b180880_origin.pdf +3 -0
  22. NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/full.md +0 -0
  23. NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/images.zip +3 -0
  24. NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/layout.json +3 -0
  25. NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/1c795ae1-a64d-41eb-8ed5-e0f5ebeac451_content_list.json +3 -0
  26. NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/1c795ae1-a64d-41eb-8ed5-e0f5ebeac451_model.json +3 -0
  27. NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/1c795ae1-a64d-41eb-8ed5-e0f5ebeac451_origin.pdf +3 -0
  28. NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/full.md +0 -0
  29. NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/images.zip +3 -0
  30. NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/layout.json +3 -0
  31. NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/c7074468-e0ed-4a67-900c-b1a066f37372_content_list.json +3 -0
  32. NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/c7074468-e0ed-4a67-900c-b1a066f37372_model.json +3 -0
  33. NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/c7074468-e0ed-4a67-900c-b1a066f37372_origin.pdf +3 -0
  34. NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/full.md +0 -0
  35. NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/images.zip +3 -0
  36. NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/layout.json +3 -0
  37. NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/ec15ba4b-643b-443b-a5cc-470c3ec49ae5_content_list.json +3 -0
  38. NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/ec15ba4b-643b-443b-a5cc-470c3ec49ae5_model.json +3 -0
  39. NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/ec15ba4b-643b-443b-a5cc-470c3ec49ae5_origin.pdf +3 -0
  40. NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/full.md +0 -0
  41. NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/images.zip +3 -0
  42. NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/layout.json +3 -0
  43. NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/34acc6fe-738e-4fc8-bf3b-dc455aa6102a_content_list.json +3 -0
  44. NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/34acc6fe-738e-4fc8-bf3b-dc455aa6102a_model.json +3 -0
  45. NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/34acc6fe-738e-4fc8-bf3b-dc455aa6102a_origin.pdf +3 -0
  46. NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/full.md +0 -0
  47. NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/images.zip +3 -0
  48. NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/layout.json +3 -0
  49. NeurIPS/2025/$_texttt{STRCMP}$_ Integrating Graph Structural Priors with Language Models for Combinatorial Optimization/3e772e3d-1de4-44c2-a5f8-ed39ed045157_content_list.json +3 -0
  50. NeurIPS/2025/$_texttt{STRCMP}$_ Integrating Graph Structural Priors with Language Models for Combinatorial Optimization/3e772e3d-1de4-44c2-a5f8-ed39ed045157_model.json +3 -0
NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/c36592bf-d5fb-4809-b5a5-c95787696dbf_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19cef4a0ea991f419012ce9cc650d7a31e890ad2ac6f430582aed5c9de187baa
+ size 174916
NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/c36592bf-d5fb-4809-b5a5-c95787696dbf_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4715a381048ebf4709c5d484ff10c165be84f96498b3c358d4bfeb5b04d2a6b7
+ size 222595
NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/c36592bf-d5fb-4809-b5a5-c95787696dbf_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:377397f1af6409e83afb2fddf9d752a809e60a71b6eb887d34d9ade64186dcc6
+ size 2771191
NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/full.md ADDED
The diff for this file is too large to render.
NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b58fc1c19561fabd2f762982c965096450c4e6a4ecde227372f8ba797c24b64
+ size 824468
NeurIPS/2025/$_boldsymbol{_lambda}$-Orthogonality Regularization for Compatible Representation Learning/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f1ce93aa69462dd0b482e08b7616cec6354fe6bd0e0b50f00b005ada3d226a5
+ size 976727
NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/8833d9e0-1854-4f4d-bdfe-51f3284cf8e9_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54fe8cc91d589f4a9b5863fd5046277cd0d0635dc2342cc37aa8afe64f8930ec
+ size 145745
NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/8833d9e0-1854-4f4d-bdfe-51f3284cf8e9_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c39a43aa11050c425dda0e12d9e8feadc8ec7ed620097d300c531fffaf340a99
+ size 192630
NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/8833d9e0-1854-4f4d-bdfe-51f3284cf8e9_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:548e5e023cea0b6b29cc2aaa42fd170b157dd35fb4506d74de4edd0f88fc6fe0
+ size 6366888
NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/full.md ADDED
@@ -0,0 +1,719 @@
+ # $\epsilon$-Seg: Sparsely Supervised Semantic Segmentation of Microscopy Data
+
+ Sheida Rahnamai Kordasiabi$^{1,2}$, Damian Dalle Nogare$^{1}$, Florian Jug$^{1}$
+
+ $^{1}$ Human Technopole, Milan, Italy
+
+ $^{2}$ Technical University of Dresden, Germany
+
+ # Abstract
+
+ Semantic segmentation of electron microscopy (EM) images of biological samples remains a challenge in the life sciences. EM data captures details of biological structures, sometimes with such complexity that even human observers can find it overwhelming. We introduce $\epsilon$-Seg, a method based on hierarchical variational autoencoders (HVAEs), employing center-region masking, sparse-label contrastive learning (CL), a Gaussian mixture model (GMM) prior, and clustering-free label prediction. Center-region masking and the inpainting loss encourage the model to learn robust and representative embeddings that distinguish the desired classes, even if training labels are sparse (0.05% of the total image data or less). For optimal performance, we employ CL and a GMM prior to shape the latent space of the HVAE such that encoded input patches tend to cluster w.r.t. the semantic classes we wish to distinguish. Finally, instead of clustering latent embeddings for semantic segmentation, we propose an MLP semantic segmentation head that directly predicts class labels from latent embeddings. We show empirical results of $\epsilon$-Seg and baseline methods on 2 dense EM datasets of biological tissues and demonstrate the applicability of our method also on fluorescence microscopy data. Our results show that $\epsilon$-Seg is capable of achieving competitive sparsely supervised segmentation results on complex biological image data, even if only limited amounts of training labels are available. Code available at https://github.com/juglab/eps-Seg.
+
+ # 1 Introduction
+
+ Electron Microscopy (EM) comes in multiple flavors and is without doubt the tool of choice for high-resolution investigations of biological samples [12]. Today, microscopists can capture fine cellular structures at nanometer resolution [22, 3]. Although this opens unprecedented possibilities for studying the very fabric of life, it also means that such microscopes produce an unfathomable amount of raw image data that is then available to be analyzed [36].
+
+ A key module of nearly every analysis pipeline is the segmentation step, where specific structures of interest must be found in the entire body of captured image data. Performing this step manually is typically not feasible, as it takes an impossibly long time [16, 36, 22]. Unfortunately, even semantic segmentation of EM data of biological samples remains a challenge [3, 31].
+
+ Ideally, methods for segmenting EM data should $(i)$ lead to sufficiently good segmentation results for the downstream analysis tasks at hand with as few training labels as possible, $(ii)$ generalize well to different imaging conditions and imaged tissue types and/or be able to fine-tune on moderate amounts of new training data [9], $(iii)$ be able to benefit from sparse labeled data via supervised contrastive learning approaches, and, if possible, $(iv)$ operate on a hierarchy of spatial scales to distinguish objects not only by either detailed textures or larger-scale shapes, but both.
+
+ With this in mind, we introduce $\epsilon$-Seg, a novel and sparsely supervised semantic segmentation framework for EM images that reduces the 'hunger' for labeled data by using a powerful hierarchical VAE (HVAE) [28, 21] with a GMM prior instead of a regular Gaussian one. Furthermore, our method uses center-region inpainting and contrastive learning to enhance feature consistency and segmentation robustness, even when training data is scarce. Hence, $\epsilon$-Seg learns structured latent space representations with effective feature separation for the semantic classes of interest. Once such features are learned, they can be clustered to obtain meaningful semantic segmentations. However, since this process is computationally intensive, we integrate a dedicated semantic segmentation head that directly produces segmentation labels, improving both accuracy and runtime.
+
+ # 2 Related Work
+
+ Sparse Supervision. Deep learning has transformed microscopy image segmentation. The U-Net [26] has long been a standard architecture, achieving strong results when trained in a fully supervised setting. However, such approaches rely on dense annotations, which are costly and time-consuming to obtain. At the other extreme, self-supervised methods such as MAESTER [34] learn directly from raw data without labels, offering excellent scalability but typically at the cost of reduced segmentation accuracy compared to fully supervised approaches. Between these extremes lies a growing body of work on sparse or weak supervision, which seeks to achieve label efficiency while maintaining good performance. We aim to surpass self-supervised methods in accuracy while requiring only a fraction of the annotations needed by fully supervised methods. Comprehensive reviews on segmentation methods in large-scale EM with deep learning are available [3], with representative examples including slice-wise pseudo-label propagation for neuronal membranes (4S) [30], or domain adaptation variants of U-Net designed for limited-annotation settings [4].
+
+ Hierarchical Variational Autoencoders. Hierarchical architectures, like HVAEs [28, 21, 32, 7, 24], appear to be an interesting choice for segmenting biological microscopy data. Based on variational autoencoders [20], these powerful models learn a full approximate posterior, but are limited by the typically used Gaussian prior, making us wonder whether a Gaussian mixture would not be a more suitable choice for the semantic segmentation task at hand. While the above-mentioned methods pursue label efficiency through different strategies, they do not explicitly enforce semantically disentangled latent representations. In contrast, we explicitly enforce semantically disentangled latent representations by combining a GMM prior with contrastive learning, ensuring that each latent component aligns with a distinct object class. This motivates our focus on HVAEs, which progressively encode features from fine to coarse across network layers. As higher-level semantic structure emerges in deeper layers, the latent space can be disentangled and aligned with semantic classes, enabling efficient segmentation and downstream biological analysis.
+
+ Gaussian Mixture Models (GMMs). GMMs have been extensively used to model multimodal distributions and are a key component of many clustering methods [8, 27, 5, 10]. Many approaches integrate GMMs within autoencoder-based architectures, either explicitly as a clustering module [5] or by enforcing multimodal latent structure through a GMM prior [8, 10]. In VAEs, GMM priors enable structured latent spaces where each mixture component represents a distinct cluster or class [10, 8]. Some methods employ direct optimization of GMM objectives alongside autoencoders [5], while others leverage categorical latent variables within GMVAE frameworks, using discrete reparameterization techniques such as the Gumbel-Softmax [19] relaxation to improve scalability [8]. These techniques effectively combine deep generative models with Gaussian mixture priors, enhancing unsupervised representation learning and clustering performance in high-dimensional data spaces.
+
+ Contrastive Learning (CL). CL has gained attention for its ability to refine feature representations by maximizing similarities between related samples and minimizing them between unrelated ones. Methods like SimCLR [6] and MoCo [15] demonstrated their effectiveness in many applications. In the context of EM segmentation, CL enables better alignment of latent representations with subcellular structures. We will use CL to ensure that each GMM component corresponds to a distinct semantic class, not only at the highest level of the learned hierarchy.
+
+ Next, we present our proposed method, which integrates hierarchical variational autoencoders with GMM-based priors and contrastive learning to achieve accurate and label-efficient EM segmentation.
+
+ ![](images/3d699add13f898d834320a520deac9daea279692777e2b5c83b936ac3e031a2f.jpg)
+ Figure 1: The overall pipeline of $\epsilon$-Seg, which is trained on an inpainting task (of center-region masked inputs). $\phi$ and $\theta$ are the encoder and decoder of the network, respectively. Dotted arrows show sampling from a distribution (Gumbel-Softmax, a categorical-like distribution, for the segmentation head and a Normal distribution for the conditional posterior). $h$ is an intermediate feature embedding of input $x$ coming from the encoder $\phi$. $f(h)$ is a logit vector with $|f(h)| = C$, where $C$ is the number of different classes/GMM prior components (equal to 4 for "BetaSeg" [22]). $\beta$ and $\gamma$ are feature-wise linear modulation (FiLM [23]) parameters (shifting and scaling factors) of the features $h$. $h'$ are the posterior distribution's parameters and are divided into two chunks, shown as $\mu_L(x)$ and $\sigma_L(x)$, with $c$ being the corresponding label of the masked center region of each input patch $x$ in the batch. $z_L$ is a sample from $\mathcal{N}(\mu_L(x), \sigma_L^2(x))$. $\pmb{y}'$ is a differentiable sample from a Gumbel-Softmax [19] distribution. The green arrow shows a positive pair of patches with similar labels; red arrows show negative pairs of patches with dissimilar labels. $\mathcal{L}_{CL}$ is then computed on the $\pmb{\mu}$s (further explanation can be found in Section 3). For the inpainting loss $\mathcal{L}_I$, contrastive loss $\mathcal{L}_{CL}$, cross-entropy loss $\mathcal{L}_{CE}$, and KL loss $\mathcal{L}_{KL}$, refer to Equations 1, 16, 14, and 15, respectively.
+
+ # 3 Methods
+
+ The method we propose is based on a Hierarchical VAE (HVAE) backbone similar to the ones described in [28, 24]. We modify the standard HVAE setup by $(i)$ using a Gaussian mixture model (GMM) instead of the default Gaussian prior, so that every semantic class we want to distinguish has its own predetermined Gaussian region, and by $(ii)$ adding a contrastive loss (CL) to further ensure that latent encodings are grouped by their semantic similarity across all hierarchy levels.
+
+ As the basis for our work, we used the openly available HVAE backbone of Hierarchical DivNoising (HDN) [24]. HVAEs, as introduced elsewhere [28, 32, 21, 24], consist of a bottom-up path (encoder) and a top-down path (decoder) with trainable parameters $\phi$ and $\theta$, respectively. The encoder extracts features from a given input $x$ at progressively coarser scales, creating a hierarchical latent encoding $z$ that splits into sub-spaces $z_{i}, i = 1 \dots L$, with $L$ being the number of hierarchy levels, or latent layers, in the HVAE. The decoder network in regular HVAEs reconstructs $x$, starting from the topmost latent variables $z_{L}$. Here, we first switch from reconstructing $x$ to inpainting a masked central region in $x$, as described next.
+
+ Autoencoding vs. Inpainting. In contrast to regular VAEs and HVAEs that use a reconstruction loss on full input patches $\mathbf{x}$, we use masked autoencoding instead [18]. Since our aim is to learn semantic features that can be used for pixel-level semantic segmentation, the zero-masking we employed asks the network to only reconstruct the masked region, effectively learning features that best represent the masked semantic class. We conducted experiments with masked regions of various sizes and have always ensured that all masked pixels were from the same semantic class, see Table 8.
+
+ The model is trained to reconstruct the masked center pixel(s) using an MSE-based inpainting loss on $X$, a training batch of inputs of size $B$, as
+
+ $$
+ \mathcal{L}_{\mathrm{I}} = \frac{1}{B} \sum_{\boldsymbol{x} \in X} \left(\boldsymbol{x}^{\text{mask}} - \hat{\boldsymbol{x}}^{\text{mask}}\right)^{2}, \tag{1}
+ $$
+
+ where $\hat{x}^{\mathrm{mask}}$ is the inpainted masked region the decoder predicted, and $x^{\mathrm{mask}}$ is the masked region of the respective input patch prior to zero-masking.
+
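As a concrete illustration, the masked-region MSE of Equation 1 can be sketched as follows. This is a minimal NumPy sketch under assumed tensor shapes (batch of 2D patches with a binary center mask); the paper's actual implementation is not shown here.

```python
import numpy as np

def inpainting_loss(x, x_hat, mask):
    """MSE inpainting loss in the spirit of Eq. (1): the squared error is
    computed only over the masked center region and averaged over the batch.

    x, x_hat : (B, H, W) input patches and their reconstructions
    mask     : (B, H, W) binary mask, 1 on the masked center region
    """
    B = x.shape[0]
    sq_err = ((x - x_hat) * mask) ** 2                 # zero outside the mask
    # mean squared error over masked pixels, per sample
    per_sample = sq_err.reshape(B, -1).sum(1) / mask.reshape(B, -1).sum(1)
    return per_sample.mean()                           # average over the batch
```

Only masked pixels contribute to the loss, so the network is pushed to encode what class occupies the (hidden) center region rather than to copy its input.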
+ HVAEs with Gaussian Priors. The Gaussian prior of regular VAEs only applies to the topmost hierarchy level in HVAEs, where it remains $\mathcal{N}(0,I)$ as depicted in Figure S1.
+
+ The latent variables $\mathbf{z}$ of an HVAE are split into $L$ layers $\mathbf{z}_i, i \in [1, \dots, L]$ so that
+
+ $$
+ p_{\theta}(\boldsymbol{z}) = p_{\theta}\left(\boldsymbol{z}_{L}\right) \prod_{i=1}^{L-1} p_{\theta}\left(\boldsymbol{z}_{i} \mid \boldsymbol{z}_{i+1}\right), \tag{2}
+ $$
+
+ $$
+ p_{\theta}\left(\boldsymbol{z}_{L}\right) = \mathcal{N}\left(\boldsymbol{z}_{L} \mid \boldsymbol{0}, \boldsymbol{I}\right), \tag{3}
+ $$
+
+ $$
+ p_{\theta}\left(\boldsymbol{z}_{i} \mid \boldsymbol{z}_{i+1}\right) = \mathcal{N}\left(\boldsymbol{z}_{i} \mid \mu_{p,i}\left(\boldsymbol{z}_{i+1}\right), \sigma_{p,i}^{2}\left(\boldsymbol{z}_{i+1}\right)\right) \quad \text{and} \tag{4}
+ $$
+
+ $$
+ p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}_{1}) = \mathcal{N}(\boldsymbol{x} \mid \mu_{p,0}(\boldsymbol{z}_{1}), \sigma_{p,0}^{2}(\boldsymbol{z}_{1})), \tag{5}
+ $$
+
+ where $\mu_{p,i}(\pmb{z}_{i+1})$ and $\sigma_{p,i}^{2}(\pmb{z}_{i+1})$ represent the mean and the variance of the latent encoding at level $i$, parameterized by $\theta$.
+
+ For each layer $i$, the approximate posterior $q_{\phi}(z_i|\boldsymbol{x}, \boldsymbol{z}_{<i})$, computed by the encoder, is defined as
+
+ $$
+ q_{\phi}\left(\boldsymbol{z}_{i} \mid \boldsymbol{x}, \boldsymbol{z}_{<i}\right) = \mathcal{N}\left(\boldsymbol{z}_{i}; \mu_{\phi}(\boldsymbol{x}, \boldsymbol{z}_{<i}), \sigma_{\phi}^{2}(\boldsymbol{x}, \boldsymbol{z}_{<i})\right), \tag{6}
+ $$
+
+ where $\mu_{\phi}(\pmb{x},\pmb{z}_{<i})$ and $\sigma_{\phi}^{2}(\pmb{x},\pmb{z}_{<i})$ are functions parameterized by $\phi$, and are the mean and variance conditioned on the input $\pmb{x}$ and the latent variables from lower layers $j < i$, denoted by $\pmb{z}_{<i}$.
+
+ The KL divergence term for each layer in the Evidence Lower Bound (ELBO) is
+
+ $$
+ \mathbb{E}_{q_{\phi}\left(\boldsymbol{z}_{>i} \mid \boldsymbol{x}\right)}\left[\mathrm{KL}\left(q_{\phi}\left(\boldsymbol{z}_{i} \mid \boldsymbol{x}, \boldsymbol{z}_{<i}\right) \,\|\, p_{\theta}\left(\boldsymbol{z}_{i} \mid \boldsymbol{z}_{i+1}\right)\right)\right], \tag{7}
+ $$
+
+ where $\boldsymbol{z}_{>i}$ denotes all $\boldsymbol{z}_{j}$ with $j > i$.
+
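Since both distributions inside the KL of Equation 7 are diagonal Gaussians (Eqs. 4 and 6), the per-layer term has a closed form. A minimal NumPy sketch of that closed form (function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    summed over latent dimensions -- the closed form of the per-layer
    KL term inside Eq. (7)."""
    return 0.5 * np.sum(
        np.log(var_p / var_q)                      # log-variance ratio
        + (var_q + (mu_q - mu_p) ** 2) / var_p     # scaled variance + mean gap
        - 1.0
    )
```

When the approximate posterior matches the prior, the term is zero; each mismatched dimension adds a positive penalty.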
+ HVAEs with a GMM Prior. When replacing the topmost prior $p_{\theta}(\boldsymbol{z}_L)$ in an HVAE with a Gaussian mixture model (GMM), the prior becomes a weighted sum of Gaussians
+
+ $$
+ p_{\theta}\left(\boldsymbol{z}_{L}\right) = \sum_{c=1}^{C} \pi_{c} \mathcal{N}\left(\boldsymbol{z}_{L}; \mu_{c}, \sigma_{c}^{2}\right), \tag{8}
+ $$
+
+ where $C$ is the total number of Gaussian components and also the number of semantic classes we want to distinguish, $\pi_c$ are the mixing coefficients of the GMM with $\sum_{c=1}^{C} \pi_c = 1$, and $\mathcal{N}(z_L; \mu_c, \sigma_c^2)$ is a Gaussian component with mean $\mu_c$ and standard deviation $\sigma_c$.
+
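Equation 8 can be evaluated stably in log space with a log-sum-exp over components. A minimal NumPy sketch with diagonal per-component covariances (shapes and the function name are assumptions for illustration):

```python
import numpy as np

def gmm_log_prior(z, pi, mus, vars_):
    """log p(z_L) under the GMM prior of Eq. (8), assuming diagonal
    per-component Gaussians; uses log-sum-exp for numerical stability.

    z    : (D,)   latent sample
    pi   : (C,)   mixing coefficients, summing to 1
    mus  : (C, D) component means
    vars_: (C, D) component variances
    """
    # per-component diagonal Gaussian log-densities, one per class c
    log_comp = -0.5 * np.sum(np.log(2 * np.pi * vars_) + (z - mus) ** 2 / vars_, axis=1)
    log_w = np.log(pi) + log_comp          # log( pi_c * N_c(z) )
    m = log_w.max()                        # log-sum-exp trick
    return m + np.log(np.exp(log_w - m).sum())
```

With one component per semantic class, maximizing this prior density pulls each latent toward the Gaussian region reserved for its class.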
100
+ Note that there is a one-to-one correspondence between Gaussian components of the GMM and the semantic classes $\epsilon$ -Seg is supposed to distinguish. This would ensure that the latent variable follows a categorical distribution over the semantic classes; we ideally want the mixture assignment $\pi = (\pi_1, \dots, \pi_C)$ to act as a one-hot vector, i.e. one $\pi_c$ should be 1, and the rest should be 0.
101
+
102
+ However, in practice, learning a fully discrete $\pi$ is challenging because the standard VAE framework with a GMM prior typically results in soft assignments [10]. To encourage hard assignments, one could $(i)$ use a Gumbel-Softmax [19] trick to approximate categorical sampling while maintaining differentiability [8], $(ii)$ introduce an entropy loss to encourage $\pi_c$ values to be closer to either 0 or 1. In our experiments, we used the Gumbel-Softmax during training, while reverting to the standard softmax at inference time. We also introduced an entropy loss term as a form of self-supervision, which yielded moderate improvements in the Gumbel-Softmax-based results (see Supplementary Material), but did not lead to significant gains w.r.t. the best-performing softmax configuration. We therefore report the softmax-based results as our main findings, without the additional training phase using the entropy loss. In future work, we plan to investigate alternative self-supervision strategies to
103
+
104
+ further enhance the segmentation performance, leveraging the vast amount of available unlabeled data, within the proposed framework.
105
+
+ The approximate posterior for the topmost latent $z_{L}$ can now be expressed as
+
+ $$
+ q_{\phi}\left(\boldsymbol{z}_{L} \mid \boldsymbol{x}\right) = \sum_{l=1}^{C} q_{\phi}(c = l \mid \boldsymbol{x})\, q_{\phi}\left(\boldsymbol{z}_{L} \mid \boldsymbol{x}, c = l\right), \tag{9}
+ $$
+
+ where $q_{\phi}(c|\pmb{x})$ is the approximate posterior probability of the GMM component $c$ being set to label $l$ given input $\pmb{x}$, and $q_{\phi}(\pmb{z}_L|\pmb{x},c)$ is the topmost approximate posterior conditioned on $\pmb{x}$ and component $c$. We model $q_{\phi}(\pmb{z}_L|\pmb{x},c)$, for all possible labels, with a Gaussian
+
+ $$
+ q_{\phi}\left(\boldsymbol{z}_{L} \mid \boldsymbol{x}, c\right) = \mathcal{N}\left(\boldsymbol{z}_{L}; \mu_{L}(\boldsymbol{x}), \sigma_{L}^{2}(\boldsymbol{x})\right), \tag{10}
+ $$
+
+ by predicting $\mu_{L}(\pmb{x})$ and $\sigma_{L}(\pmb{x})$ (see boxes labeled with "posterior" in Figure 1). In practice, the parameters $\mu_{L}(\pmb{x})$ and $\sigma_{L}(\pmb{x})$ are computed once from the FiLM-conditioned encoder output and are shared across all components $l$. As a result, the mixture in Equation 9 reduces to
+
+ $$
+ q_{\phi}\left(\boldsymbol{z}_{L} \mid \boldsymbol{x}\right) = \mathcal{N}\left(\boldsymbol{z}_{L}; \mu_{L}(\boldsymbol{x}), \sigma_{L}^{2}(\boldsymbol{x})\right), \tag{11}
+ $$
+
+ as depicted in Figure 1. In order to predict $\mu_{L}(\pmb{x})$ and $\sigma_L(x)$, we must compute the conditional posterior.
+
+ Computing the Conditional Posterior. In this section, we describe the main backbone of our method, leading from a given input patch $\pmb{x} \in \pmb{X}$ to the computed posteriors $q_{\phi} = \mathcal{N}(\pmb{\mu}(\pmb{x}), \pmb{\sigma}^2(\pmb{x}))$. Figure 1 illustrates the overall pipeline of $\epsilon$-Seg.
+
+ The encoder, parametrized by $\phi$, processes $x$, producing intermediate features $h$ at the topmost hierarchy level $L$. These features are then passed through an MLP classifier (red box in Figure 1), producing a vector of logits $f(h)$ with dimensionality $C$, coinciding with the number of classes $\epsilon$-Seg is tasked to distinguish.
+
+ Instead of directly using $h$ as our posterior distribution parameters, as done in our Vanilla HVAE baseline, we use $f(h)$, fed through two additional MLPs, $g_{\gamma}$ and $g_{\beta}$ (see violet boxes in Figure 1), to compute parameters $\gamma$ and $\beta$ such that $\gamma = g_{\gamma}(f(h))$ and $\beta = g_{\beta}(f(h))$.
+
+ These MLPs map the logits $f(h)$ into feature-wise scaling and shifting factors. In this way, the encoded features $h$ are modulated via the FiLM [23] parameters $\gamma$ and $\beta$ into $h'$ by computing $h' = \gamma \odot h + \beta$, where $\odot$ denotes the Hadamard product (element-wise multiplication). The modulated feature representation $h'$ is then chunked into two parts, $\pmb{\mu}_L(\pmb{x})$ and $\pmb{\sigma}_L(\pmb{x})$, and used to parameterize the conditional Gaussian posterior in Equation 11.
+
134
+ The Latent Semantic Segmentation Head. To avoid computationally costly downstream latent space clustering for the semantic segmentation task (as done in Xie et al. [34] and Han et al. [14] using K-Means clustering), we introduce a segmentation head that performs semantic pixel classification directly from the computed logits $f(h)$ .
135
+
136
+ To compute $q_{\phi}(c|\pmb{x})$ of Equation 9, we use a categorical reparameterization trick via Gumbel-Softmax [19].
137
+
138
+ The standard Gumbel-Softmax formula using the class probabilities $\pi_{i}$ is
139
+
140
+ $$
141
+ y _ {i} ^ {\prime} = \frac {\exp \left(\left(\log \pi_ {i} + g _ {i}\right) / \tau\right)}{\sum_ {j = 1} ^ {C} \exp \left(\left(\log \pi_ {j} + g _ {j}\right) / \tau\right)}, \tag {12}
142
+ $$
143
+
144
+ where $g_{i} \sim \mathrm{Gumbel}(0,1)$ are Gumbel noise samples. Instead of probabilities $\pi_{i}$ , we work with logits $f(h)$ (raw scores before softmax). The equivalent formula becomes
145
+
146
+ $$
147
+ y _ {i} ^ {\prime} = \frac {\exp \left(\left(f _ {i} (h) + g _ {i}\right) / \tau\right)}{\sum_ {j = 1} ^ {C} \exp \left(\left(f _ {j} (h) + g _ {j}\right) / \tau\right)}. \tag {13}
148
+ $$
149
+
150
+ The temperature parameter $\tau$ in the Gumbel-Softmax distribution plays a crucial role in controlling the degree of discreteness in the sampled values. During training, $\tau$ is often annealed from a higher value to a lower one, gradually transitioning from a smooth approximation to a discrete categorical distribution.
151
+
152
+ In $\epsilon$ -Seg, we use a typical annealing schedule $\tau = \max(\tau_{\min}, \exp(-rt))$ , where $r = 0.999$ is the decay rate, $\tau_{\min} = 0.5$ , and $t$ is the training step. The Gumbel-Softmax thus enables differentiable sampling of categorical variables, improving gradient estimation and enabling semi-supervised classification [19].
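Equation 13 and the annealing schedule can be sketched as follows. The example logits are hypothetical, and a real implementation would use a deep-learning framework so that gradients flow through the soft sample:

```python
import numpy as np

rng = np.random.default_rng(42)

def gumbel_softmax(logits, tau, rng):
    """Draw a soft sample y' from a categorical parameterized by logits (Eq. 13)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def tau_schedule(t, r=0.999, tau_min=0.5):
    # Annealing schedule exactly as stated in the text: tau = max(tau_min, exp(-r t)).
    return max(tau_min, np.exp(-r * t))

f_h = np.array([2.0, 0.5, -1.0, 0.1])  # hypothetical logits f(h) for C = 4 classes
y = gumbel_softmax(f_h, tau_schedule(0), rng)

# y' is a proper probability vector; as tau shrinks it approaches a one-hot vector.
assert np.isclose(y.sum(), 1.0)
```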
153
+
154
+ Next, we draw a vector $\mathbf{y}'$ , representing the class assignment (segmentation prediction) for an input patch $\mathbf{x}^{(i)}$ in the batch $\mathbf{X}$ , by sampling from the Gumbel-Softmax distribution parameterized by logits $f(h)$ with temperature $\tau$ .
155
+
156
+ For input patches $\pmb{x}^{(i)}\in \pmb{X}$ for which we know the class label $l_{i}$ , we want to ensure that $y_l^{\prime (i)}\in \pmb{y}^{\prime (i)}$ is the largest entry. We do so using the cross-entropy loss
157
+
158
+ $$
159
+ \mathcal {L} _ {C E} = - \sum_ {\boldsymbol {x} ^ {(i)} \in \boldsymbol {X}} \log y _ {l} ^ {\prime (i)}. \tag {14}
160
+ $$
161
+
162
+ Computing the Kullback Leibler Divergence. As is commonly done in VAEs [20], the KL-divergence term regularizes the parameters of our encoder, $\phi$ , such that the approximate posterior stays close to our prior $p_{\theta}(z)$ . In HVAEs, the KL is computed at each hierarchy level. Changing from a standard Gaussian prior at the highest hierarchy level $L$ to a GMM prior, as described earlier in this section, requires us to define a strategy to compute the KL-divergence appropriately.
163
+
164
+ Hershey and Olsen [17] address the challenge of efficiently approximating the KL divergence between two GMMs, and Durrieu et al. [13] propose lower and upper bounds to estimate this divergence. While such approximations are needed in some practical setups [10, 8], we only need to compute the KL divergence between the posterior $q_{\phi}(z_L|\pmb{x})$ (Equation 11) and the $l$ -th GMM component, where $l$ is either the known class label for an input patch $\pmb{x}^{(i)}$ , or $l = \arg \max_{i} y_{i}^{\prime (j)}$ for a patch $\pmb{x}^{(j)}$ for which we do not have a ground truth class label.
165
+
166
+ Hence, Equation 8 becomes $p_{\theta,c}(z_L) = \mathcal{N}(z_L; \mu_l, \sigma_l^2)$ , and $\mathcal{L}_{KL}$ is therefore still computed as the divergence between two normal distributions. The KL loss over all hierarchy levels is therefore
167
+
168
+ $$
169
+ \mathcal {L} _ {K L} = - \left(\mathrm {K L} \left(q _ {\phi} \left(z _ {1} \mid \boldsymbol {x}\right) \| p _ {\theta} \left(z _ {1} \mid z _ {2}\right)\right) + \sum_ {i = 2} ^ {L - 1} \mathrm {K L} \left(q _ {\phi} \left(z _ {i} \mid z _ {i - 1}\right) \| p _ {\theta} \left(z _ {i} \mid z _ {i + 1}\right)\right) + \mathrm {K L} \left(q _ {\phi} \left(z _ {L} \mid z _ {L - 1}, c\right) \| p _ {\theta , c} \left(z _ {L}\right)\right)\right). \tag {15}
170
+ $$
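Because both distributions in the top-level term are diagonal Gaussians, that KL has a simple closed form. A sketch with illustrative values (not from the paper):

```python
import numpy as np

def kl_diag_gauss(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, diag sig_q^2) || N(mu_p, diag sig_p^2) ), summed over dimensions."""
    var_q, var_p = sig_q**2, sig_p**2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p)**2) / var_p - 1.0
    )

# Posterior parameters and the selected GMM component l (identical here, so KL = 0).
mu_q, sig_q = np.array([1.0, -1.0]), np.array([1.0, 1.0])
mu_l, sig_l = np.array([1.0, -1.0]), np.array([1.0, 1.0])
assert kl_diag_gauss(mu_q, sig_q, mu_l, sig_l) == 0.0
```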
171
+
172
+ Contrastive Loss. The contrastive loss consists of two terms: the positive pair loss $\mathcal{L}_{+}$ , which encourages proximity between samples belonging to the same class, and the negative pair loss $\mathcal{L}_{-}$ , which penalizes proximity between samples of different classes, ensuring inter-class separation. We define boolean matrices $P$ and $N$ for positive pairs and negative pairs, respectively, as $P_{ij} = \left\{ \begin{array}{ll}1 & \text{if } l_i = l_j \text{ and } i \neq j, \\ 0 & \text{otherwise} \end{array} \right.$ and $N_{ij} = \left\{ \begin{array}{ll}1 & \text{if } l_i \neq l_j, \\ 0 & \text{otherwise,} \end{array} \right.$ with $l_i$ and $l_j$ being the labels of patches $i$ and $j$ , respectively. These loss terms then become $\mathcal{L}_{+} = \frac{1}{\sum_{i,j} P_{ij}} \sum_{i,j} P_{ij} \cdot \mathcal{D}(\boldsymbol{\mu}^{(i)}, \boldsymbol{\mu}^{(j)})$ and $\mathcal{L}_{-} = \sum_{i,j} N_{ij} \cdot \ell_{-}(\mathcal{D}(\boldsymbol{\mu}^{(i)}, \boldsymbol{\mu}^{(j)}))$ , with $\boldsymbol{\mu}^{(i)}$ being the predicted means of the posterior distribution over all hierarchy levels for a patch $i$ in batch $\mathbf{X}$ , and $\mathcal{D}(\boldsymbol{\mu}^{(i)}, \boldsymbol{\mu}^{(j)})$ a distance function; in our experiments, we used the Euclidean distance. Note that for $\mathcal{L}_{-}$ we define the penalty function $\ell_{-}(d) = \left\{ \begin{array}{ll}0 & \text{if } d \geq m, \\ (m - d)^2 & \text{otherwise}, \end{array} \right.$ with $m$ being the so-called margin, a hyperparameter that must be set appropriately, e.g. using grid search.
173
+
174
+ The full contrastive loss term is finally defined as
175
+
176
+ $$
177
+ \mathcal {L} _ {C L} = \lambda \mathcal {L} _ {+} + (1 - \lambda) \mathcal {L} _ {-}, \tag {16}
178
+ $$
179
+
180
+ with $\lambda$ being a hyperparameter that balances the positive and negative pair loss with each other.
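A minimal NumPy sketch of $\mathcal{L}_{CL}$ over a batch of latent means might look as follows. The margin $m=2$ and $\lambda=0.5$ are placeholder values, and only a single hierarchy level is shown:

```python
import numpy as np

def contrastive_loss(mu, labels, margin=2.0, lam=0.5):
    """L_CL = lam * L_+ + (1 - lam) * L_- for latent means mu of one batch."""
    n = len(labels)
    # Pairwise Euclidean distances D(mu_i, mu_j).
    d = np.linalg.norm(mu[:, None, :] - mu[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    P = same & ~np.eye(n, dtype=bool)   # positive pairs: same label, i != j
    N = ~same                           # negative pairs: different labels
    l_pos = d[P].sum() / max(P.sum(), 1)               # mean intra-class distance
    l_neg = (np.maximum(margin - d[N], 0.0) ** 2).sum()  # hinge penalty ell_-
    return lam * l_pos + (1 - lam) * l_neg

mu = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])  # two class-0 patches, one class-1
labels = np.array([0, 0, 1])
loss = contrastive_loss(mu, labels)
# Here the inter-class distances exceed the margin, so only L_+ contributes:
# loss = 0.5 * 0.1 = 0.05.
```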
181
+
182
+ Readers might wonder why a contrastive loss is useful when a GMM prior is used, given that each structure to be classified (i.e. each label) has its own Gaussian component. The main reason is that the GMM prior only takes effect at the uppermost hierarchy level $L$ . At all levels $i < L$ , $\mathcal{L}_{CL}$ takes care of the desired label-wise segregation of latent encodings.
183
+
184
+ The Overall Loss of $\epsilon$ -Seg. Taken together, the overall loss of $\epsilon$ -Seg is
185
+
186
+ $$
187
+ \mathcal {L} = \mathcal {L} _ {I} + \alpha_ {1} \mathcal {L} _ {C E} + \alpha_ {2} \mathcal {L} _ {K L} + \alpha_ {3} \mathcal {L} _ {C L}, \tag {17}
188
+ $$
189
+
190
+ <table><tr><td>Learning Paradigm</td><td>Model</td><td>U</td><td>N</td><td>G</td><td>M</td><td>Avg DSC</td></tr><tr><td rowspan="3">Self-Supervised</td><td>Vanilla HVAE* [24]</td><td>0.44</td><td>0.55</td><td>0.34</td><td>0.13</td><td>0.37</td></tr><tr><td>Han et al.* [14]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.66</td></tr><tr><td>MAESTER* [34]</td><td>0.84</td><td>0.95</td><td>0.56</td><td>0.79</td><td>0.79</td></tr><tr><td rowspan="3">Sparsely Supervised</td><td>Labkit [2]</td><td>0.85</td><td>0.44</td><td>0.68</td><td>0.61</td><td>0.65</td></tr><tr><td>U-Net</td><td>0.90</td><td>0.96</td><td>0.78</td><td>0.66</td><td>0.83</td></tr><tr><td>ε-Seg (ours)</td><td>0.91</td><td>0.96</td><td>0.82</td><td>0.86</td><td>0.89</td></tr><tr><td rowspan="3">Fully Supervised</td><td>Vanilla ViT [11]</td><td>0.91</td><td>0.98</td><td>0.77</td><td>0.87</td><td>0.88</td></tr><tr><td>Segmenter [29]</td><td>0.91</td><td>0.99</td><td>0.86</td><td>0.90</td><td>0.92</td></tr><tr><td>U-Net [26]</td><td>0.94</td><td>0.99</td><td>0.90</td><td>0.87</td><td>0.93</td></tr></table>
191
+
192
+ Table 1: Dice similarity coefficient per class and average across all classes on the "BetaSeg" dataset [22]. Methods marked with an asterisk use K-Means clustering on latent features to conduct semantic segmentation (see Section 3). U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.
193
+
194
+ ![](images/5232133b04ad03391a4961cee3766cb5f89d6331df314aef3fadf1168ff2aec9.jpg)
195
+ Image
196
+
197
+ ![](images/1f0ed72da65651040afa5ad57eaa286754b846b6de4f15c6c030fb471fc14964.jpg)
198
+ MAESTER
199
+
200
+ ![](images/76021068a94dc3489052a82931f383cf5a3d12a2d3908f35d8c7fac08e13bc5e.jpg)
201
+ Labkit
202
+
203
+ ![](images/b9c95aeaae8e9781dd5b6ed1e704291e29a9a5bbd1f61fa18b859380f53fd132.jpg)
204
+ GT Labels
205
+
206
+ ![](images/e230907205e441c749bbda542b8657bc13b8987be76103237a6e61a443fa16d5.jpg)
207
+ $\epsilon$ -Seg (ours)
+
+ ![](images/0e39160a3805be343a6d816795955e204e9bf2d63b31a5c8f61a32a59ff85bc7.jpg)
+ U-Net
+
+ Figure 2: Qualitative segmentation result on part of the test image stack (here we show section 627 of high_c4 of the "BetaSeg" dataset [22]). Color legend: nucleus, granules, mitochondria, unrecognized.
215
+
216
+ where the $\alpha_{i}$ are hyperparameters balancing the contributions of the individual loss terms. We tuned these hyperparameters using grid search followed by manual refinement.
217
+
218
+ Next, we show empirical results we obtained using $\epsilon$ -Seg and comparisons to several baseline methods on two dense EM datasets and one fluorescence microscopy dataset.
219
+
220
+ # 4 Experiments and Results
221
+
222
+ Datasets. One of the datasets used in this study is the "BetaSeg" [22] dataset from OpenOrganelle [16], a public repository of high-resolution cellular imaging data. Acquired via Focused Ion Beam Scanning Electron Microscopy (FIB-SEM), the dataset focuses on primary mouse pancreatic islet $\beta$ cells from a high-glucose-dosage group, chosen for comparison with prior works. It underwent preprocessing, including rescaling each stack to $4\times 4\times 4$ nm isotropic voxels, which can be viewed in arbitrary orientations, and generating reference segmentations through human annotation or manually corrected deep learning models. The final dataset consists of four cell volumes with binary segmentation masks for seven subcellular structures (centrioles, nucleus, plasma membrane, microtubules, Golgi body, granules, and mitochondria), along with an eighth "unrecognized" category. Notably, the nucleus, granules, mitochondria, and unrecognized regions dominate the dataset. For evaluation, cells 1, 2, and 3 were used for training, while cell 4 served as an independent test set.
223
+
224
+ Next, we used the "liver FIBSEM" dataset, whose samples were fresh needle biopsies fixed with $4\%$ PFA and $2\%$ GA in phosphate buffer. High-contrast staining was performed with reduced osmium and Walton's lead aspartate stain [33], and samples were embedded in Epon. Sample preparation and imaging were done on a ZEISS GeminiSEM according to prior reports [35]. The final dataset consists of one cell volume with 11 crops that were extracted from it, annotated manually, and used for training, validation, and testing. The segmentation masks consist of six subcellular structures, mitochondria,
225
+
226
+ ![](images/d0e8f4829cb9470fbe194d444ede758b9f07da8991f6389b7ee0c17c6626fb6c.jpg)
227
+ (a)
228
+
229
+ ![](images/10c17108a0e4427c3b6392aeeec3d955bdf5afded9c511d81d50fd8419e37635.jpg)
230
+ Image
231
+
232
+ ![](images/97772fb136b5277bdbfe0e2301cd31572fefa651524629c44325cb8ea4f8790f.jpg)
233
+ GT Labels
234
+
235
+ ![](images/f149662bc7c8f4d4fdcc5d0304f7915980048488d1d3a870197a0b1644ba0f8c.jpg)
236
+ $\epsilon$ -Seg (ours)
237
+
238
+ ![](images/70bd60dca3139bdc03e06882d3607ba605247c7cdc2bca08218bfa8a08d4e41b.jpg)
239
+ U-Net
240
+ Figure 3: Qualitative segmentation result on two crops of the whole 3D volume. (a) and (b) are section 80 and 26 of crop00 and crop10 in "liver FIBSEM" dataset respectively. The U-Net is sparsely-supervised (for the fully-supervised U-Net result, see Figure S4).
241
+
242
+ <table><tr><td>Model</td><td>B</td><td>M</td><td>P</td><td>L</td><td>BM</td><td>OBC</td><td>CBC</td><td>Avg DSC</td></tr><tr><td>U-net [26]-Fully Supervised</td><td>0.97</td><td>0.95</td><td>0.85</td><td>0.79</td><td>0.52</td><td>0.87</td><td>0.90</td><td>0.84</td></tr><tr><td>U-net-Sparsely Supervised</td><td>0.94</td><td>0.81</td><td>0.68</td><td>0.81</td><td>0.49</td><td>0.39</td><td>0.00</td><td>0.59</td></tr><tr><td>ε-Seg-Sparsely Supervised</td><td>0.91</td><td>0.82</td><td>0.63</td><td>0.81</td><td>0.39</td><td>0.70</td><td>0.46</td><td>0.67</td></tr></table>
243
+
244
+ peroxisomes, lipofuscin, basolateral membrane, open bile canaliculus and closed bile canaliculus, along with a seventh "background" category.
245
+
246
+ Table 4: Dice similarity coefficient per class and average across all classes comparing our model with baselines for "liver FIBSEM" dataset. B: Background, M: Mitochondria, P: Peroxisomes, L: Lipofuscin, BM: Basolateral Membrane, OBC: Open Bile Canaliculus, CBC: Closed Bile Canaliculus.
247
+
248
+ <table><tr><td colspan="3">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>Background</td><td>Cytoplasm</td><td>Nuclei</td></tr><tr><td>0.94</td><td>0.86</td><td>0.90</td><td>0.90</td></tr></table>
249
+
250
+ Table 2: Dice similarity coefficient per class and average for "Aitslab-bioimaging" datasets.
251
+
252
+ <table><tr><td rowspan="2">RLF</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>U</td><td>N</td><td>G</td><td>M</td></tr><tr><td>20</td><td>0.89</td><td>0.98</td><td>0.81</td><td>0.83</td><td>0.88</td></tr><tr><td>15</td><td>0.88</td><td>0.98</td><td>0.81</td><td>0.78</td><td>0.86</td></tr><tr><td>10</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr><tr><td>5</td><td>0.85</td><td>0.96</td><td>0.77</td><td>0.76</td><td>0.84</td></tr><tr><td>1</td><td>0.79</td><td>0.95</td><td>0.69</td><td>0.69</td><td>0.78</td></tr></table>
253
+
254
+ Table 5: DSC per class and average across all classes. The "RLF" column (Relative Labeling Factor) specifies a scaling factor, where 20 corresponds to $0.05\%$ and 1 to as little as $0.0025\%$ of the total labels available. U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.
255
+
256
+ <table><tr><td rowspan="2">Trained on</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>U</td><td>N</td><td>G</td><td>M</td></tr><tr><td>high_c1</td><td>0.85</td><td>0.38</td><td>0.68</td><td>0.61</td><td>0.63</td></tr><tr><td>high_c2</td><td>0.80</td><td>0.33</td><td>0.58</td><td>0.56</td><td>0.57</td></tr><tr><td>high_c3</td><td>0.82</td><td>0.44</td><td>0.63</td><td>0.42</td><td>0.58</td></tr></table>
257
+
258
+ Table 3: Labkit results. Due to different image sizes, Labkit was trained on individual volumes. U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.
259
+
260
+ <table><tr><td rowspan="2">Entropy Loss</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>U</td><td>N</td><td>G</td><td>M</td></tr><tr><td>X</td><td>0.81</td><td>0.97</td><td>0.74</td><td>0.71</td><td>0.81</td></tr><tr><td>✓</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr></table>
261
+
262
+ Table 6: Effect of entropy loss: The best checkpoint of a sparsely supervised model was further trained using batches with $50\%$ unlabeled data. U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.
263
+
264
+ While it is true that FIB-SEM datasets like "BetaSeg" [22] offer isotropic resolution suitable for 3D processing, this is not always the case in EM imaging, where data often comes in 2D slices (especially in higher-throughput screens).
265
+
266
+ ![](images/8d50b85d1933e80ff1a785c533cbdb51bde362ebc82470a928e8947bc5a879e6.jpg)
267
+ Cytoplasm
268
+
269
+ ![](images/5ddec6dd6c88f3284af2072fa4a5f6abcd691f77b9a1149fe7fac9fb7bdda3f8.jpg)
270
+ Nuclei
271
+
272
+ ![](images/3ed07d152fa9841a2b682fd8e85e501eb392496b0d4b8819aa6cfa1e311bed60.jpg)
273
+ GT Labels
+
+ ![](images/8601e794dfa96e52a38fef9d406da20182c25899632b125190b09caaf7247465.jpg)
+ $\epsilon$ -Seg
+
+ Figure 4: Qualitative results on a representative 2-channel image from the overlapping subset of the "Aitslab-bioimaging1" and "Aitslab-bioimaging2" datasets. The first two panels show the fluorescence microscopy channels: EGFP-Galectin-3-labeled cytoplasm (left) and Hoechst 33342-stained nuclei (center-left). The center-right panel (GT) displays the ground truth semantic segmentation with nuclei (cyan) and cytoplasm (magenta). The rightmost panel ($\epsilon$ -Seg) shows the prediction from our method.
278
+
279
+ Furthermore, we conducted an experiment on the overlapping subset of two datasets, Aitslab-bioimaging1 [1] and Aitslab-bioimaging2 [25]. Aitslab-bioimaging1 is a benchmarking fluorescence microscopy dataset containing 50 images of Hoechst 33342-stained U2OS osteosarcoma cell nuclei, with annotations for nuclei, nuclear fragments, and micronuclei, designed for training and evaluating neural networks for instance and semantic segmentation. Aitslab-bioimaging2 is a fluorescence microscopy dataset containing 60 images of EGFP-Galectin-3-labeled U2OS osteosarcoma cells with hand-annotated cell outlines (over 2200 annotated cell objects), designed for training and benchmarking neural networks for instance and semantic segmentation and compatible with object detection tasks. The overlapping subset contains 30 2-channel images for training and 10 for testing.
280
+
281
+ Evaluation Metrics. We used the Dice Similarity Coefficient (DSC) to evaluate segmentation performance. DSC is a widely used metric in image segmentation and measures the similarity between the predicted and ground truth segmentation masks.
282
+
283
+ Let $A$ and $B$ be two sets representing the binary segmentation masks of the ground truth and the predicted segmentation. The Dice coefficient is defined as $\text{Dice}(A, B) = \frac{2|A \cap B|}{|A| + |B|}$ , where $|A \cap B|$ is the number of overlapping pixels between the predicted and ground truth masks, $|A|$ , the number of pixels in the ground truth mask, and $|B|$ , the number of pixels in the predicted mask.
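For reference, the metric can be computed as follows; the handling of two empty masks is our convention, not specified in the text:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks match perfectly
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
# |A ∩ B| = 2, |A| = 3, |B| = 3  →  Dice = 2*2 / 6 = 2/3
```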
284
+
285
+ Experiments. We use an architecture similar to the one used in the HDN work [24]. For all hyperparameters we introduced, we used grid searches to find a good balance between performance and stability. We first evaluate our method on the "BetaSeg" dataset [22] and compare its performance against the baseline methods shown in Table S1. The results demonstrate that our approach outperforms existing baselines in terms of DSC (F1-score). For the Labkit baseline, we trained per cell, show the results in Table 3, and report the best class-wise performance in Table S1. Qualitative segmentation results are shown in Figure 2 (complete results in Figure S3).
286
+
287
+ To further validate the robustness of our method, we conduct experiments on the "liver FIBSEM" dataset, comparing it with U-Net baselines (fully and sparsely-supervised). Quantitative and qualitative results are shown in Table 4 and Figure 3, respectively (complete Figure S4). Additionally, we show $\epsilon$ -Seg results also on a fluorescent microscopy dataset (see Table 2 and Figure 4).
288
+
289
+ Model Ablations. We strip our model down to a vanilla HVAE and then re-introduce one component at a time, showing how each of the modules we have introduced above contributes to the overall performance we report. These results on the "BetaSeg" dataset are shown in Table 7.
290
+
291
+ Additionally, we evaluate how the quality of the results depends on the amount of available training labels. To this end, we start from $0.05\%$ of the total image data available in the "BetaSeg" dataset and gradually decrease the amount of training labels used, down to $0.0025\%$ . The results of these experiments can be found in Table 5. As discussed in Section 3, $\mathcal{L}_H$ helps us to gain additional performance also from the unlabeled data, which we measure and report in Table 6. Finally, we measured the effect of differently sized masking regions in Table 8.
292
+
293
+ <table><tr><td colspan="3">Loss</td><td rowspan="2">Prior Distribution</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>KL</td><td>CL</td><td>CE</td><td>U</td><td>N</td><td>G</td><td>M</td></tr><tr><td>✓</td><td>✗</td><td>✗</td><td>N</td><td>0.44</td><td>0.55</td><td>0.34</td><td>0.13</td><td>0.37</td></tr><tr><td>✓</td><td>✓</td><td>✗</td><td>N</td><td>0.83</td><td>0.95</td><td>0.69</td><td>0.76</td><td>0.81</td></tr><tr><td>✓</td><td>✗</td><td>✓</td><td>N</td><td>0.81</td><td>0.97</td><td>0.80</td><td>0.75</td><td>0.83</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>N</td><td>0.81</td><td>0.97</td><td>0.73</td><td>0.72</td><td>0.81</td></tr><tr><td>✓</td><td>✗</td><td>✓</td><td>GMM</td><td>0.82</td><td>0.97</td><td>0.72</td><td>0.75</td><td>0.82</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>GMM</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr></table>
294
+
295
+ Table 7: Loss components and prior distribution ablation on "BetaSeg" dataset. U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.
296
+
297
+ <table><tr><td rowspan="2">Mask Size</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>Unrecognized</td><td>Nucleus</td><td>Granules</td><td>Mitochondria</td></tr><tr><td>9x9</td><td>0.83</td><td>0.95</td><td>0.65</td><td>0.73</td><td>0.79</td></tr><tr><td>7x7</td><td>0.84</td><td>0.97</td><td>0.72</td><td>0.75</td><td>0.82</td></tr><tr><td>5x5</td><td>0.87</td><td>0.94</td><td>0.78</td><td>0.80</td><td>0.85</td></tr><tr><td>3x3</td><td>0.88</td><td>0.97</td><td>0.81</td><td>0.80</td><td>0.87</td></tr><tr><td>1x1</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr></table>
298
+
299
+ Table 8: Label consistency ablation on "BetaSeg" dataset. The "Mask Size" column indicates the size of the center-region mask, within which the pixel-wise ground truth labels are consistent.
300
+
301
+ Limitations. While $\epsilon$ -Seg achieves competitive segmentation results using only sparse supervision, several limitations do remain. First, all experiments we present are conducted on 2D images. Extending the presented framework to operate in full 3D is an important next step, especially for volume EM data analysis. Second, we observed that the effectiveness of our entropy-based loss is limited and should be improved, e.g. by replacing it with a more adaptive or data-driven strategy. Finally, in the presented form, hyperparameters such as the contrastive loss margin still require manual tuning, which is not ideal for ease of use by biological experts.
302
+
303
+ # 5 Discussion
304
+
305
+ Here we presented $\epsilon$ -Seg, a novel semantic segmentation approach that leverages the variational latent representation of hierarchical variational autoencoders (HVAEs) trained on a limited amount of pixel-labels in an inpainting setup. We used a GMM prior instead of the traditionally employed Gaussian prior and introduced a novel segmentation head that incorporates both a cross-entropy loss and an entropy loss to leverage available data for which no ground truth (GT) class-labels are available. The integration of contrastive loss, combined with the structural advantages of the GMM prior, provides a means to effectively distinguish biological structures directly from the latent space encoding.
306
+
307
+ Transformer-based architectures, as used in MAESTER [34], usually have a rather large number of trainable parameters (i.e. 328,452,352 trainable parameters in MAESTER). This makes such approaches less applicable for life scientists, since they require rather powerful compute setups. Our biggest network, in contrast, only employs 3,800,869 trainable parameters (see Tables S2 and S3), making it fast to train and easy to use. Our experiments also highlight an interesting fact, namely that smaller mask sizes with consistent labels emerged as the best strategy. This stands in contrast to Transformer-based approaches, where a relatively large fraction of the input image is masked during training [34].
308
+
309
+ By combining hierarchical representations with advanced regularization techniques such as contrastive learning, we have shown that we can achieve competitive segmentation performance on complex microscopy data, even with relatively small models and limited training data. The proposed approach tackles the challenge of label scarcity, enhances latent space representations tailored to structured biological data, and lays the groundwork for future exploration of semi-supervised learning techniques and adaptive latent priors.
310
+
311
+ Overall, this work bridges the gap between fully supervised and unsupervised methods by offering a scalable approach for large-scale biomedical semantic image data segmentation.
312
+
313
+ # References
314
+
315
+ [1] Malou Arvidsson, Salma Kazemi Rashed, and Sonja Aits. An annotated high-content fluorescence microscopy dataset with Hoechst 33342-stained nuclei and manually labelled outlines. Data Brief, 46: 108769, 2023.
316
+ [2] Matthias Arzt, Joran Deschamps, Christopher Schmied, Tobias Pietzsch, Deborah Schmidt, Pavel Tomancak, Robert Haase, and Florian Jug. Labkit: Labeling and segmentation toolkit for big image data. Frontiers in Computer Science, 4, 2022.
317
+ [3] Abhinav Aswath, Abdulrahman Alsahaf, Ben N. G. Giepmans, and George Azzopardi. Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey. Medical Image Analysis, 89:102920, 2023.
318
+
319
+ [4] Róger Bermúdez-Chacón, Okan Altingövde, Carlos Becker, Mathieu Salzmann, and Pascal Fua. Visual correspondences for unsupervised domain adaptation on electron microscopy images. IEEE Trans. Med. Imaging, 39(4):1256-1267, 2020.
320
+ [5] Ahcène Boubekki, Michael Kampffmeyer, Robert Jenssen, and Ulf Brefeld. Joint optimization of an autoencoder for clustering and embedding. Machine Learning, 110(6):1901-1937, 2021.
321
+ [6] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 1597-1607. PMLR, 2020.
322
+ [7] Rewon Child. Very deep VAEs generalize autoregressive models and can outperform them on images. In International Conference on Learning Representations (ICLR), 2021.
323
+ [8] Mark Collier and Hector Urdiales. Scalable deep unsupervised clustering with concrete GMVAEs. In 1st Workshop on Deep Continuous-Discrete Machine Learning, ECML, 2019.
324
+ [9] Ryan Conrad and Kedar Narayan. CEM500K, a large-scale heterogeneous unlabeled cellular electron microscopy image dataset for deep learning. eLife, 10:e65894, 2021.
325
+ [10] Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C. H. Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648, 2016. Under review at ICLR 2017.
326
+ [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021.
327
+ [12] Damjana Drobne. 3D imaging of cells and tissues by focused ion beam/scanning electron microscopy (FIB/SEM). Methods in Molecular Biology, 950:275-292, 2013.
328
+ [13] Jean-Louis Durrieu, Jean-Philippe Thiran, and Francis Kelly. Lower and upper bounds for approximation of the kullback-leibler divergence between gaussian mixture models. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4833–4836, 2012.
329
+ [14] Hongqing Han, Mariia Dmitrieva, Alexander Sauer, Ka Chun Tam, and Jens Rittscher. Self-supervised voxel-level representation rediscovers subcellular structures in volume electron microscopy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 2276-2285, 2022.
330
+ [15] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9726-9735, 2020.
331
+ [16] Lars Heinrich, Daniel Bennett, David Ackerman, William Park, John Bogovic, Nico Eckstein, Alexander Petruncio, Joe Clements, Sharmistha Pang, Chao-Shun Xu, Jan Funke, Walter Korff, Harald F. Hess, Jennifer Lippincott-Schwartz, Stephan Saalfeld, Andrew V. Weigel, and COSEM Project Team. Whole-cell organelle segmentation in volume electron microscopy. Nature, 599(7883):141-146, 2021.
332
+ [17] John R. Hershey and Peder A. Olsen. Approximating the kullback–leibler divergence between gaussian mixture models. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages IV-317-IV-320, 2007.
333
+ [18] Zhicheng Huang, Xiaojie Jin, Chengze Lu, Qibin Hou, Ming-Ming Cheng, Dongmei Fu, Xiaohui Shen, and Jiashi Feng. Contrastive masked autoencoders are stronger vision learners. IEEE Trans. Pattern Anal. Mach. Intell., 2023.
334
+ [19] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations (ICLR), 2017.
335
+ [20] Durk P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Conference on Neural Information Processing Systems (NeurIPS), 2014.
[21] Lars Maaløe, Marco Fraccaro, Valentin Liévin, and Ole Winther. BIVA: A very deep hierarchy of latent variables for generative modeling. In Advances in Neural Information Processing Systems (NeurIPS), pages 6548-6559. Curran Associates, Inc., 2019.

[22] Andreas Müller, Daniel Schmidt, C. Shan Xu, Song Pang, Justin V. D'Costa, Stefan Kretschmar, Christian Munster, Thorsten Kurth, Florian Jug, Martin Weigert, Harald F. Hess, and Michele Solimena. 3D FIB-SEM reconstruction of microtubule-organelle interaction in whole primary mouse $\beta$ cells. Journal of Cell Biology, 220(2):e202010039, 2021.

[23] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 3942-3951, 2018.

[24] Mangal Prakash, Mauricio Delbracio, Peyman Milanfar, and Florian Jug. Interpretable unsupervised diversity denoising and artefact removal. In International Conference on Learning Representations (ICLR), 2022.

[25] Salma Kazemi Rashed, Malou Arvidsson, Rafsan Ahmed, and Sonja Aits. An annotated high-content fluorescence microscopy dataset with EGFP-galectin-3-stained cells and manually labelled outlines. Data Brief, 58:111148, 2025.

[26] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 234-241. Springer, 2015.

[27] Marek Śmieja, Maciej Wolczyk, Jacek Tabor, and Bernhard C. Geiger. SeGMA: Semi-supervised Gaussian mixture autoencoder. IEEE Trans. Neural Netw. Learn. Syst., 32(9):3930-3941, 2021.

[28] Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems (NeurIPS), pages 3745-3753. Curran Associates, Inc., 2016.

[29] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 7262-7272, 2021.

[30] Eichi Takaya, Yusuke Takeichi, Mamiko Ozaki, and Satoshi Kurihara. Sequential semi-supervised segmentation for serial electron microscopy image with small number of labels. J. Neurosci. Methods, 351:109066, 2021.

[31] Kai Philipp Treder, Chenyang Huang, Jinseok S. Kim, and Angus I. Kirkland. Applications of deep learning in electron microscopy. Microscopy (Oxford), 71(Supplement_1):i100-i115, 2022.

[32] Arash Vahdat and Jan Kautz. NVAE: A deep hierarchical variational autoencoder. In Advances in Neural Information Processing Systems (NeurIPS), pages 19667-19679, 2020.

[33] J. Walton. Lead aspartate, an en bloc contrast stain particularly useful for ultrastructural enzymology. J. Histochem. Cytochem., 27(10):1337-1342, 1979.

[34] Ronald Xie, Kuan Pang, Gary D. Bader, and Bo Wang. MAESTER: Masked autoencoder guided segmentation at pixel resolution for accurate, self-supervised subcellular structure recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17521-17531, 2023.

[35] C. Shan Xu, Kenneth J. Hayworth, Zhiyuan Lu, Peter Grob, Ana M. Hassan, José G. García-Cerdán, Krishna K. Niyogi, Eva Nogales, Richard J. Weinberg, and Harald F. Hess. Enhanced FIB-SEM systems for large-volume 3D imaging. eLife, 6:e25916, 2017.

[36] C. Shan Xu, Song Pang, Gleb Shtengel, Andreas Müller, Anna T. Ritter, Heather K. Hoffman, Shin-Ya Takemura, Zhipeng Lu, Helene A. Pasolli, Nikhil Iyer, Jihoon Chung, Daniel Bennett, Andrew V. Weigel, Michael Freeman, Sean B. van Engelenburg, Tobias C. Walther, Robert V. Farese Jr, Jennifer Lippincott-Schwartz, Ira Mellman, Michele Solimena, and Harald F. Hess. An open-access volume electron microscopy atlas of whole cells and tissues. Nature, 599(7883):147-151, 2021. Erratum in: Nature, vol. 599, no. 7885, p. E5, 2021, doi:10.1038/s41586-021-04132-8.

# $\epsilon$ -Seg: Sparsely Supervised Semantic Segmentation of Microscopy Data

Supplementary Material

![](images/361d0272d55368c13a79d57d1e8b3b9182debfca89e0918b40dacc4bdb7ea04b.jpg)
Figure S1: The overall pipeline of the Vanilla HVAE from Table S1 (first row in Table 7), trained on an inpainting task (of the center-region masked inputs). $\phi$ and $\theta$ are the encoder and decoder of the network, respectively. Dotted arrows show sampling from a distribution. $h$ is an intermediate feature embedding of input $\pmb{x}$ produced by the encoder $\phi$; it parameterizes the posterior distribution and is split into two chunks, $\mu_{L}$ and $\sigma_{L}$. $z_{L}$ is a sample from $\mathcal{N}(\pmb{\mu}_{L}(\pmb{x}),\pmb{\sigma}_{L}^{2}(\pmb{x}))$. For the inpainting loss $\mathcal{L}_I$ and the KL loss $\mathcal{L}_{KL}$, refer to Equations 1 and 7, respectively.

![](images/f85d38b7d43b978180063b15c35551970ef333b9ad5f872522ccc8249017726c.jpg)
Figure S2: The overall pipeline of the Vanilla HVAE with only CL added (second row in Table 7), trained on an inpainting task (of the center-region masked inputs). Green and red arrows show a positive and a negative pair in a batch, respectively. $\phi$ and $\theta$ are the encoder and decoder of the network, respectively. Dotted lines show sampling from a distribution. $h$ is an intermediate feature embedding of input $\pmb{x}$ produced by the encoder $\phi$; it parameterizes the posterior distribution and is split into two chunks, $\mu_{L}$ and $\sigma_{L}$. $\pmb{z}_{L}$ is a sample from $\mathcal{N}(\pmb{\mu}_L(\pmb{x}),\pmb{\sigma}_L^2(\pmb{x}))$. For the inpainting loss $\mathcal{L}_I$, the contrastive loss $\mathcal{L}_{CL}$, and the KL loss $\mathcal{L}_{KL}$, refer to Equations 1, 16, and 7, respectively.

<table><tr><td>Model</td><td>Learning Paradigm</td><td>U</td><td>N</td><td>G</td><td>M</td><td>Avg DSC</td></tr><tr><td>Vanilla HVAE* [24]</td><td>Self-Supervised</td><td>0.44</td><td>0.55</td><td>0.34</td><td>0.13</td><td>0.37</td></tr><tr><td>Labkit [2]</td><td>Sparsely Supervised</td><td>0.85</td><td>0.44</td><td>0.68</td><td>0.61</td><td>0.65</td></tr><tr><td>U-net [26]</td><td>Fully Supervised</td><td>0.94</td><td>0.99</td><td>0.90</td><td>0.87</td><td>0.93</td></tr><tr><td>U-net</td><td>Sparsely Supervised</td><td>0.90</td><td>0.96</td><td>0.78</td><td>0.66</td><td>0.83</td></tr><tr><td>Vanilla ViT [11]</td><td>Fully Supervised</td><td>0.91</td><td>0.98</td><td>0.77</td><td>0.87</td><td>0.88</td></tr><tr><td>Segmenter [29]</td><td>Fully Supervised</td><td>0.91</td><td>0.99</td><td>0.86</td><td>0.90</td><td>0.92</td></tr><tr><td>MAESTER* [34]</td><td>Self-Supervised</td><td>0.84</td><td>0.95</td><td>0.56</td><td>0.79</td><td>0.79</td></tr><tr><td>Han et al* [14]</td><td>Self-Supervised</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.66</td></tr><tr><td>ε-Seg (+LH)</td><td>Sparsely Supervised</td><td>0.89</td><td>0.98</td><td>0.81</td><td>0.83</td><td>0.88</td></tr></table>

Table S1: Dice similarity coefficient per class and averaged across all classes, comparing our model with baselines on the "BetaSeg" dataset [22]. Methods marked with an asterisk use K-Means clustering on latent features to conduct semantic segmentation (see Section 3). U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.
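The DSC values reported in Table S1 follow the standard per-class definition, $2|P \cap T| / (|P| + |T|)$, where $P$ and $T$ are the predicted and ground-truth pixel sets of one class. Below is a minimal NumPy sketch of this metric; `per_class_dice` is an illustrative helper, not the paper's evaluation code.

```python
import numpy as np

def per_class_dice(pred, target, classes):
    """Dice similarity coefficient (DSC) per class for integer label maps.

    pred, target: arrays of identical shape containing class IDs.
    Returns {class_id: 2|P∩T| / (|P| + |T|)}; an absent class scores 1.0.
    """
    scores = {}
    for c in classes:
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        scores[c] = 2.0 * (p & t).sum() / denom if denom else 1.0
    return scores
```

Averaging the per-class values then yields the "Avg DSC" column.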

<table><tr><td rowspan="2"># res. blocks</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>U</td><td>N</td><td>G</td><td>M</td></tr><tr><td>5</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr><tr><td>4</td><td>0.85</td><td>0.97</td><td>0.80</td><td>0.74</td><td>0.84</td></tr><tr><td>3</td><td>0.88</td><td>0.96</td><td>0.81</td><td>0.80</td><td>0.86</td></tr><tr><td>2</td><td>0.87</td><td>0.97</td><td>0.81</td><td>0.77</td><td>0.86</td></tr><tr><td>1</td><td>0.85</td><td>0.97</td><td>0.80</td><td>0.72</td><td>0.84</td></tr></table>

Table S2: Residual blocks ablation (3 latent variables). U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.

<table><tr><td rowspan="2">#latent</td><td colspan="4">Per-Class Dice Coefficient</td><td rowspan="2">Avg DSC</td></tr><tr><td>U</td><td>N</td><td>G</td><td>M</td></tr><tr><td>2</td><td>0.87</td><td>0.98</td><td>0.81</td><td>0.76</td><td>0.86</td></tr><tr><td>3</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr></table>

Table S3: Latent variables ablation (5 res. blocks/layer). U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria.

Entropy-based Loss. When the sample $\pmb{y}'$ of the Gumbel-Softmax distribution is uniform, the network is maximally unsure about which class to predict for the current input patch. We noticed that this is commonly the case early during training, when the network has not yet seen many patches for which ground-truth labels are available.

To encourage the network not to predict a uniform $\pmb{y}^{\prime}$, we introduce an entropy loss over all patches $\pmb{x}^{(j)}\in \pmb{X}$ for which no ground-truth class label is available:

$$
\mathcal{L}_{H} = -\sum_{\boldsymbol{x}^{(j)} \in \boldsymbol{X}} \boldsymbol{y}^{\prime(j)} \log\left(\boldsymbol{y}^{\prime(j)}\right). \tag{18}
$$
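Equation 18 penalizes high-entropy (near-uniform) class predictions on unlabeled patches. Here is a minimal NumPy sketch of that term, assuming `y_soft` holds per-patch class probabilities (in the actual pipeline, $\pmb{y}'$ is a Gumbel-Softmax sample):

```python
import numpy as np

def entropy_loss(y_soft, eps=1e-12):
    """L_H = -sum_j y'^(j) . log y'^(j)  (Eq. 18), summed over all
    unlabeled patches j and over the class dimension of each y'^(j)."""
    y_soft = np.asarray(y_soft, dtype=float)
    # Clip only inside the log so zero-probability entries contribute 0.
    return float(-(y_soft * np.log(np.clip(y_soft, eps, 1.0))).sum())

# A uniform prediction over 4 classes has maximal entropy log(4) per patch,
# while a near-one-hot prediction contributes almost nothing to the loss.
uniform = np.full((1, 4), 0.25)
sharp = np.array([[0.97, 0.01, 0.01, 0.01]])
```

Minimizing this loss therefore pushes the predictions on unlabeled patches away from the uniform distribution.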

![](images/8e8eb18026e3e5d65a0ece7e0044f96a010da4cef17b5dfd697f67ea7e106b0d.jpg)
Image

![](images/93dc66719bf002ecb3b9336f1d96f22cf30002ab277eb944f5725817484a92e7.jpg)
MAESTER

![](images/6821904a8efd03421caecd00dc92869955abdd72204f171911ab44a63213eec1.jpg)
GT Labels

![](images/92098fed9d1af3c28bfd43a23c762bc2b1219e1b745a1338b07d7450f616dab9.jpg)
$\epsilon$ -Seg $(+\mathcal{L}_H)$

![](images/3b311e1cf32ac4706a3828cae8a52419a6aac0018095e4e5609c82556d0e6c0d.jpg)
Labkit

![](images/9e93f5b50dfee566aa7dd17a92a87ffea10f40ff63fda810b7c681aae2f0cd90.jpg)
U-Net

![](images/c5b73cb3b4a13c337649790d2615d030d0a7bb212d9734c6360442c94c8c62a0.jpg)
Vanilla HVAE

![](images/633115e2a7611fc836d3b610e05404e63fb51cc3d1dfb1d7ca866a4d2842fbf2.jpg)
Han et al.

Legend: nucleus, mitochondria, granules, unrecognized

Figure S3: Qualitative segmentation result on part of the test image stack (section 627 of high_c4 in the "BetaSeg" dataset).

![](images/f92ece2eaf545e6c94b39bd870779b8ebd0aac06dfa0c0240285f8db89fc4189.jpg)
(a) Image

![](images/6dc07f037a3cdb61a00032fa78a8bbb34d62f9bd4609825939ca19a322d6f577.jpg)

![](images/d9c7f097ca0f0a9b9a1f5619f3afb7f856b165c6fa80d345324f5245ad45a7b9.jpg)
GT Labels, U-Net (sparse)

![](images/ede399dd3ad416f1cb974a397ebb9785014d73a25fad892d61a527c506ba480f.jpg)
(b) Image

![](images/ffce00d4c4cc095898b9797b7dacd9428dae309d592081bae6cfc34856add15b.jpg)
U-Net (sparse), U-Net (full), $\epsilon$ -Seg $(+\mathcal{L}_H)$, U-Net (full)

Legend: mitochondria, peroxisomes, lipofuscin, open bile canaliculus, closed bile canaliculus, basolateral membrane, background

Figure S4: Qualitative segmentation results on two crops of the whole 3D volume. (a) and (b) show section 80 of crop00 and section 26 of crop10 in the "liver FIBSEM" dataset, respectively. U-Net (sparse) and U-Net (full) denote sparsely and fully supervised training, respectively.

<table><tr><td>RLF</td><td>Model</td><td>U</td><td>N</td><td>G</td><td>M</td><td>Avg DSC</td></tr><tr><td rowspan="2">20</td><td>U-net</td><td>0.63</td><td>0.75</td><td>0.51</td><td>0.12</td><td>0.50</td></tr><tr><td>ε-Seg</td><td>0.89</td><td>0.98</td><td>0.81</td><td>0.83</td><td>0.88</td></tr><tr><td rowspan="2">15</td><td>U-net</td><td>0.53</td><td>0.64</td><td>0.41</td><td>0.14</td><td>0.43</td></tr><tr><td>ε-Seg</td><td>0.88</td><td>0.98</td><td>0.81</td><td>0.78</td><td>0.86</td></tr><tr><td rowspan="2">10</td><td>U-net</td><td>0.30</td><td>0.20</td><td>0.42</td><td>0.34</td><td>0.31</td></tr><tr><td>ε-Seg</td><td>0.86</td><td>0.98</td><td>0.80</td><td>0.75</td><td>0.85</td></tr><tr><td rowspan="2">5</td><td>U-net</td><td>0.71</td><td>0.00</td><td>0.00</td><td>0.03</td><td>0.18</td></tr><tr><td>ε-Seg</td><td>0.85</td><td>0.96</td><td>0.77</td><td>0.76</td><td>0.84</td></tr><tr><td rowspan="2">1</td><td>U-net</td><td>0.17</td><td>0.00</td><td>0.37</td><td>0.02</td><td>0.14</td></tr><tr><td>ε-Seg</td><td>0.79</td><td>0.95</td><td>0.69</td><td>0.69</td><td>0.78</td></tr></table>

Table S4: Comparison between U-Net and $\epsilon$ -Seg on the "BetaSeg" dataset under varying label sparsity levels. "RLF" (Relative Labeling Factor) specifies the fraction of available labels, where RLF 20 corresponds to $0.05\%$ and RLF 1 to $0.0025\%$ of all labels. U: Unrecognized, N: Nucleus, G: Granules, M: Mitochondria. Although both models were trained with balanced supervision, using patches selected to include all classes, the U-Net still fails to segment the nucleus at very low labeling levels (RLF 1 and 5). This illustrates a key limitation of discriminative models such as U-Net: under extreme supervision sparsity, even balanced examples may not suffice to generalize to fine-grained or context-sensitive structures like the nucleus. In contrast, $\epsilon$ -Seg benefits from its class-aware latent modeling via the GMM prior, which enables it to extract meaningful representations for different structures and distinguish them semantically. We note that the sparse U-Net reported earlier was trained on slices 800, 600, and 500 of the "high_c1", "high_c2", and "high_c3" volumes of the "BetaSeg" dataset. To train the 2D U-Net on the same amount of data used for $\epsilon$ -Seg, as reported in the table above, we extracted 64×64 patches in which all classes except background are approximately balanced.
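To make the RLF fractions in the caption concrete, here is a hypothetical helper that uniformly subsamples a dense label map down to the stated fraction of labeled pixels. The uniform pixel sampling is an illustrative assumption; the experiments above instead select class-balanced 64×64 patches.

```python
import numpy as np

def subsample_labels(labels, rlf, unlabeled=-1, seed=0):
    """Keep only an RLF-dependent fraction of ground-truth pixels.

    RLF 1 corresponds to 0.0025% of all labels and RLF 20 to 0.05%
    (see Table S4); all other pixels are set to `unlabeled`.
    """
    rng = np.random.default_rng(seed)
    frac = rlf * 0.0025 / 100.0  # RLF 1 == 0.0025% of labels
    n_keep = max(1, int(round(frac * labels.size)))
    sparse = np.full_like(labels, unlabeled)
    idx = rng.choice(labels.size, size=n_keep, replace=False)
    sparse.flat[idx] = labels.flat[idx]
    return sparse
```

For a 1000×1000 slice, RLF 20 keeps only 500 labeled pixels, which is the supervision regime both models are compared under.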

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: The abstract and introduction clearly state the paper's contributions, including the design of $\epsilon$ -Seg, an HVAE-based segmentation framework with a GMM prior, center-region inpainting, contrastive learning, and a dedicated semantic segmentation head. These claims are appropriately scoped and supported by the methodology and experiments presented in the rest of the paper. The text also specifies that the method works with extremely limited supervision and addresses common practical challenges in EM segmentation, which are demonstrated through empirical results.

# Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: We included a dedicated Limitations mini-headline at the end of the Experiments and Results section (Section 4). There, we discuss that the current method is restricted to 2D data and would likely benefit from a 3D extension. We also note that the entropy-based loss could be further optimized, and that dataset-specific tuning is required for some hyperparameters, such as the contrastive loss margin.

# Guidelines:

- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [NA]

Justification: The paper does not include formal theoretical results or proofs (e.g., theorems or lemmas). However, it provides detailed derivations and explanations of the model components and loss functions (see Section 3), including the use of a GMM prior in the HVAE framework and the KL divergence formulation.

# Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

# 4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: The paper provides all necessary implementation details, including the model architecture (Figure 1), training settings, dataset descriptions, and evaluation metrics (Section 4). Loss terms and component configurations are also disclosed to allow faithful reproduction of the reported results.

# Guidelines:

- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
  (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
  (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
  (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
  (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

# 5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: Two of the datasets used in our experiments are publicly available and referenced in the paper. The third dataset is private and cannot be shared due to data access restrictions. We will publicly release the code on GitHub along with detailed instructions to reproduce all experiments based on the public datasets.

# Guidelines:

- The answer NA means that paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

# 6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: We provide all relevant training and evaluation details, including data splits, optimizer type, learning rate, batch size, and other key hyperparameters. Where appropriate, we explain how hyperparameters were chosen, either based on prior work or grid search.

# Guidelines:

- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [No]

Justification: While our main experiment (Table S1) includes 5-fold cross-validation to mitigate variability due to data splits, we did not report error bars or perform statistical significance tests. Given the limited size of our dataset and the exploratory nature of our work, our focus was on assessing the feasibility of the proposed method rather than establishing statistically significant performance differences.

# Guidelines:

- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

# 8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: We report the number of parameters of our largest model and discuss the efficiency of our approach. Our method improves upon previous techniques by eliminating the need for K-Means clustering, allowing the model to generate segmentation labels directly from the segmentation head. This significantly accelerates inference, resulting in faster segmentation without sacrificing accuracy.

# Guidelines:

- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

# 9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: Yes, the research conducted in this paper conforms with the NeurIPS Code of Ethics. We have adhered to all relevant ethical guidelines, ensuring transparency, fairness, and respect for privacy in our work.

# Guidelines:

- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

# 10. Broader impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [NA]

Justification: The focus of this paper is primarily on technical advancements in segmentation, and while it does not explicitly address societal impacts, the method may have positive implications in fields like medical imaging. However, any societal implications are only indirect, and we believe the answer 'NA' is most appropriate.

# Guidelines:

- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

# 11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: This paper does not involve models or data with a high risk for misuse, and thus does not describe any specific safeguards related to their release.

# Guidelines:

- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

# 12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: Yes, all creators and original owners of assets used in this paper, including datasets, code, and models, have been properly credited. Additionally, the licenses and terms of use associated with these assets have been explicitly mentioned and respected.

# Guidelines:

- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URI.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

664
+ # 13. New assets
665
+
666
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
667
+
668
+ Answer: [Yes]
669
+
670
+ Justification: Yes, the private dataset used in this paper is well documented, including details on its structure, size, and usage. However, due to privacy and confidentiality constraints, the dataset is not publicly available. Access to the dataset is restricted, but interested parties can contact the authors to be connected to the dataset owners.
671
+
672
+ # Guidelines:
673
+
674
+ - The answer NA means that the paper does not release new assets.
675
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
676
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
677
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
678
+
679
+ # 14. Crowdsourcing and research with human subjects
680
+
681
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
682
+
683
+ Answer: [NA]
684
+
685
+ Justification: This paper does not involve crowdsourcing or research with human subjects.
686
+
687
+ Guidelines:
688
+
689
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
690
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
691
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
692
+
693
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
694
+
695
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
696
+
697
+ Answer: [NA]
698
+
699
+ Justification: This paper does not involve research with human subjects, and therefore, no IRB or equivalent approvals were required.
700
+
701
+ Guidelines:
702
+
703
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
704
+ - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
705
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
706
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
707
+
708
+ # 16. Declaration of LLM usage
709
+
710
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
711
+
712
+ Answer: [NA]
713
+
714
+ Justification: No large language models (LLMs) were used as part of the core methods in this research.
715
+
716
+ Guidelines:
717
+
718
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
719
+ - Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f04c378ac4818b10635d5250a2398b06128595c09ec26e19b105633050b45a25
+ size 1160048
NeurIPS/2025/$_epsilon$-Seg_ Sparsely Supervised Semantic Segmentation of Microscopy Data/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cbe1d7ae49549c77687bff1cea6a48240c91cfd7805da2a5b3549e5231a27e1
+ size 911037
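Each three-line `+` hunk above is a Git LFS pointer: the repository stores only this pointer text, while the actual payload is addressed by its `sha256` oid and byte size. A downloaded artifact can be sanity-checked against its pointer by recomputing both fields. A minimal sketch (the `parse_lfs_pointer`/`verify_payload` helpers and the sample payload are illustrative, not part of this commit):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer ("key value" per line) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_payload(pointer: dict, payload: bytes) -> bool:
    """Check payload bytes against the pointer's declared size and sha256 oid."""
    algo, _, digest = pointer["oid"].partition(":")
    if algo != "sha256":  # Git LFS pointers currently use sha256 oids
        raise ValueError(f"unsupported oid algorithm: {algo}")
    return (len(payload) == int(pointer["size"])
            and hashlib.sha256(payload).hexdigest() == digest)

# Illustrative payload and pointer (not one of this repo's real artifacts).
payload = b"example bytes standing in for images.zip"
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(payload).hexdigest()}\n"
    f"size {len(payload)}\n"
)
pointer = parse_lfs_pointer(pointer_text)
print(verify_payload(pointer, payload))  # True
```

The same check applies to the real artifacts in this commit: fetch the file, then compare its recomputed sha256 and byte count against the oid and size recorded in its pointer.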
NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/bcd9cdba-c978-4228-b2cb-d5da71ce3406_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0957a8904e5eaac94fcaac2b93fd1d59f9c2b17a311aa188593f0cdc761acfe6
+ size 270220
NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/bcd9cdba-c978-4228-b2cb-d5da71ce3406_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2ddc2f614aa3ea9c6d86ca46449ed15354b419ae5f387cb2267e0b516fcf128
+ size 327773
NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/bcd9cdba-c978-4228-b2cb-d5da71ce3406_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abd9426f0db8c0f4b97f0e95d3b199b5aaafe3f25ede4e486468abe9a9a90959
+ size 14647175
NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/full.md ADDED
The diff for this file is too large to render. See raw diff
NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:049b07da2b98698e31381fba2a20e789343638d2204c26c7dcbd59605f7cc957
+ size 3316867
NeurIPS/2025/$_mathcal{X}^2$-DFD_ A framework for e$_mathcal{X}$plainable and e$_mathcal{X}$tendable Deepfake Detection/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:856713a41d8ae34ba531bcc27a71af0842aeef77986fd54f07c2ab00abee9145
+ size 1138415
NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/fbf329ca-83e7-4d17-ac37-18f99b180880_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2970bb69efc1380dd85622d56e2dbf3697b4f93503b0f340161ed773c56fa31f
+ size 182910
NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/fbf329ca-83e7-4d17-ac37-18f99b180880_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f69d606a2599630425595ab6585e07b38387158f72363f39fb7d924a79f2fa9
+ size 218599
NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/fbf329ca-83e7-4d17-ac37-18f99b180880_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccf0af576d32bf90246139bc3e094e8834373b60bd090d5dc2af0031b39c966f
+ size 19707802
NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/full.md ADDED
The diff for this file is too large to render. See raw diff
NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90b100db9f59dd7ac433526912851f13379353b7fe7b5f116348d9bafcddb193
+ size 2066241
NeurIPS/2025/$_mu$PC_ Scaling Predictive Coding to 100+ Layer Networks/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ddd369fca73e6f17412a788829e741d851ca5a23e32396691cb5f587ef33da2
+ size 1195800
NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/1c795ae1-a64d-41eb-8ed5-e0f5ebeac451_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3df2d622af1c74944e217ad8bed7ea49def29d66862428ef2521cf5514728363
+ size 221988
NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/1c795ae1-a64d-41eb-8ed5-e0f5ebeac451_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a31698edfce1beab2ab798d696ced4adc77418e881a10bd710612967ce559e54
+ size 285363
NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/1c795ae1-a64d-41eb-8ed5-e0f5ebeac451_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95b45bf29e50bbc6a3d6804566afd1835f24c1702454aace51e97b621dd173b3
+ size 17601020
NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/full.md ADDED
The diff for this file is too large to render. See raw diff
NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c7b575e990e300d3c68ef5c38193a3d63d0710e072c0ad9caca4b0258174917
+ size 1159697
NeurIPS/2025/$_textit{HiMaCon_}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c9ec43f0aafbb2f6730cf0fce645478d9a318829faa58c4cad1883c69624990
+ size 1159180
NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/c7074468-e0ed-4a67-900c-b1a066f37372_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ada6de2b991142dd37a96202487cfc15cd82e1226b7ef6d5f377dec3c5fb4ba
+ size 200658
NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/c7074468-e0ed-4a67-900c-b1a066f37372_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6a2be193f4619b65bfc4ab843a0e78be3733c71917da3728ca8d323e3ac6935
+ size 262910
NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/c7074468-e0ed-4a67-900c-b1a066f37372_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae99d3a048178225550f350fbabcf7646d0ae1ec873964c579dcc6a97bfd921c
+ size 7716312
NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/full.md ADDED
The diff for this file is too large to render. See raw diff
NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2c11e5e81109bc1ea052337cd8f44fa06c43046ea24a20cd467eedcc6b00904
+ size 1028638
NeurIPS/2025/$_textit{Hyper-GoalNet}$_ Goal-Conditioned Manipulation Policy Learning with HyperNetworks/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:394b4ec8282eac4f4f49eb90e46cc984f3b7647601fca1790f2ef4dc88dd545c
+ size 1074152
NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/ec15ba4b-643b-443b-a5cc-470c3ec49ae5_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19e5cf670ebb71ed7e2045f7603a4afd55ef410ec166cc2b0b728922134302d7
+ size 194273
NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/ec15ba4b-643b-443b-a5cc-470c3ec49ae5_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1aaa4221138b9cf6cbb7c19765aefc0d5bb84cf3f783aa2fc40399c54b53578
+ size 247685
NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/ec15ba4b-643b-443b-a5cc-470c3ec49ae5_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddd05c9557841010677d62a6548693fd576c4b69414745276879f8fedfc977e8
+ size 1111972
NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/full.md ADDED
The diff for this file is too large to render. See raw diff
NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64eb8926da976a2613cb630fec95006c1a648be662dac7547f1df6ea3eca4101
+ size 900998
NeurIPS/2025/$_texttt{BetaConform}$_ Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a63304d27c9d488922414957997c9c64d4bbf92983b4d76c412a6d0851bb3272
+ size 1016328
NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/34acc6fe-738e-4fc8-bf3b-dc455aa6102a_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71b5520c02ecbba94d1c847201132c08c227a3c2ec1bd4a000fb6361e916ad30
+ size 211922
NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/34acc6fe-738e-4fc8-bf3b-dc455aa6102a_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd937a4a27775c112b6155a41b61cadfb3f48e67c413f8fbf3ca30e3d17b040c
+ size 246110
NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/34acc6fe-738e-4fc8-bf3b-dc455aa6102a_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2ae310d38eb8ec698715faa7ab592ca8d629fd164ec85072776942e3410e6b7
+ size 574707
NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ad5b575c47112d4a0f6deb55c39d17430409fb9bb233609ed75679423086701
+ size 4322174
NeurIPS/2025/$_texttt{G1}$_ Teaching LLMs to Reason on Graphs with Reinforcement Learning/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33a9a5446c17d0441142e4210f71362e6753e75621835fe1538afcf82dc69358
+ size 790146
NeurIPS/2025/$_texttt{STRCMP}$_ Integrating Graph Structural Priors with Language Models for Combinatorial Optimization/3e772e3d-1de4-44c2-a5f8-ed39ed045157_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3643772ca371700daf8bec8eb0326530cac57a5bc8e2c36280ef5cae69c62f71
+ size 242986
NeurIPS/2025/$_texttt{STRCMP}$_ Integrating Graph Structural Priors with Language Models for Combinatorial Optimization/3e772e3d-1de4-44c2-a5f8-ed39ed045157_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fc58f36766b3878174be074d1488988c8dfd20aa04b4a8f60cd842262c784d1
+ size 312458