Description
Diagnosing mental disorders remains one of the most intricate challenges in neuroscience and clinical practice. Unlike many physical illnesses, mental health conditions often lack well-defined biological markers or clear-cut diagnostic thresholds. Instead, diagnoses are typically informed by subjective assessments, behavioral evaluations, and social indicators, making the process highly dependent on the clinician's experience and judgment. Compounding this complexity is the nature of the brain itself: a highly dynamic and interconnected system, where even subtle variations in neural activity can reflect significant cognitive or behavioral differences. Technologies like functional Magnetic Resonance Imaging (fMRI) allow researchers to capture snapshots of this activity, but the high-dimensional, noisy, and temporally complex data they produce require sophisticated analytical approaches to extract meaningful patterns.
In this study, we explore the use of unsupervised contrastive learning techniques to identify latent neural structures associated with mental health conditions, specifically autism spectrum disorder. The data consist of multiplex brain networks derived from fMRI scans [1], where each subject is represented as a 12-layer graph with 264 regions of interest (ROIs) as nodes. Unlike supervised approaches that rely on diagnostic labels, which are themselves subject to ambiguity and inconsistency, our method seeks to uncover intrinsic patterns in the data without label information, thereby avoiding potential bias and enabling a more objective analysis.
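The multiplex representation described above can be sketched in code. The layer and ROI counts follow the text; the random generator and edge density are purely illustrative assumptions, not properties of the actual dataset:

```python
import numpy as np

# Hypothetical sketch: one subject as a 12-layer multiplex brain network.
# Each layer is a 264 x 264 adjacency matrix over the same ROI node set.
N_LAYERS, N_ROIS = 12, 264
rng = np.random.default_rng(0)

def random_subject(density=0.05):
    """Generate a toy undirected multiplex network for one subject."""
    adj = (rng.random((N_LAYERS, N_ROIS, N_ROIS)) < density).astype(float)
    adj = np.maximum(adj, adj.transpose(0, 2, 1))  # symmetrize each layer
    idx = np.arange(N_ROIS)
    adj[:, idx, idx] = 0.0                         # drop self-loops
    return adj

subject = random_subject()
print(subject.shape)  # (12, 264, 264)
```

Storing all layers as a single tensor keeps the node identities aligned across layers, which is the defining property of a multiplex network.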
A central innovation of our approach is the generation of positive sample pairs for contrastive learning in the absence of explicit features or labels. To this end, we employ a link prediction mechanism that operates primarily on the graph topology. Given a subject's brain network (summarized by pooling its node embeddings in the latent space), we use a shared GAT to map nodes into the embedding space and identify links that are likely to exist within the same distribution, effectively synthesizing structurally similar yet independent graph instances. These positive pairs allow the model to learn representations that bring similar network topologies closer in the embedding space, while naturally pushing dissimilar ones apart.
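As a rough illustration of this pairing step, the sketch below uses a toy single-head GAT layer (plain NumPy) to embed nodes, scores candidate links by embedding inner products, and keeps the top-scoring pairs as a synthesized positive view of the graph. All sizes, the random feature initialization, the tanh nonlinearity, and the top-k thresholding rule are illustrative assumptions, not the exact model described here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out = 30, 8, 4           # toy sizes; the real graphs have 264 ROIs

def gat_layer(x, adj, W, a):
    """Single-head GAT layer (sketch): attention restricted to existing edges."""
    h = x @ W                                        # (n, d_out) projections
    d = h.shape[1]
    e = (h @ a[:d])[:, None] + (h @ a[d:])[None, :]  # logits a^T [h_i || h_j]
    e = np.where(e > 0, e, 0.2 * e)                  # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                   # mask non-neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)        # softmax over neighbors
    return np.tanh(alpha @ h)

def positive_view(z, k):
    """Link prediction from embeddings: keep the k highest-scoring node pairs."""
    scores = z @ z.T
    np.fill_diagonal(scores, -np.inf)
    thresh = np.sort(scores, axis=None)[-k]
    return (scores >= thresh).astype(float)

# Toy topology; random node features stand in for the feature-free setting.
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0.0)
x = rng.normal(size=(n, d_in))

W = 0.5 * rng.normal(size=(d_in, d_out))
a = 0.5 * rng.normal(size=(2 * d_out,))

z = gat_layer(x, adj, W, a)                   # embeddings of the original graph
adj_pos = positive_view(z, k=int(adj.sum()))  # synthesized positive instance
z_pos = gat_layer(x, adj_pos, W, a)           # same (shared) GAT on the new view

# Pooled graph embeddings; their cosine similarity is the quantity a
# contrastive objective (e.g. InfoNCE) would maximize for a positive pair.
g, g_pos = z.mean(axis=0), z_pos.mean(axis=0)
sim = float(g @ g_pos / (np.linalg.norm(g) * np.linalg.norm(g_pos) + 1e-9))
print(z.shape, adj_pos.shape)
```

Because both views pass through the same GAT weights, gradients from a contrastive loss would shape a single embedding function, which is what makes the synthesized graph a usable positive sample rather than an unrelated instance.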
This structure-aware training strategy enables the model to capture meaningful connectivity patterns that correlate with mental health conditions, without requiring hand-crafted features or domain-specific annotations. Additionally, the model's design ensures that the learning process focuses exclusively on the underlying graph structure, thus enhancing its generalizability to different data sources and mental health scenarios. Validation across multiple datasets, including real-world network collections and citation graphs like CORA, further demonstrates the method's robustness and versatility. In particular, embeddings generated from real-world networks show clear class separation, as illustrated in Figure 1, highlighting the ability of the model to learn discriminative representations even in an unsupervised setting.
Our findings suggest that unsupervised graph-based contrastive learning offers a promising direction for developing data-driven, interpretable tools to support mental disorder diagnostics, potentially contributing to more standardized and reproducible assessments in clinical neuroscience.
Figure 1: Embeddings of the real-world networks after model training.
References
[1] Manlio De Domenico. Multilayer modeling and analysis of human brain networks. GigaScience, 6(5):gix004, February 2017.