Digitala Vetenskapliga Arkivet

Representation Learning for Computational Pathology and Spatial Omics
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3. ORCID iD: 0000-0001-6852-6605
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Description
Abstract [en]

Advancements in artificial intelligence (AI) have enhanced analysis and interpretation in computational pathology. Through representation learning, deep learning models can automatically identify complex patterns and extract meaningful features from raw data, revealing subtle spatial relationships. Spatial omics, which captures spatially resolved molecular data, naturally aligns with these approaches, enabling a deeper examination of tissue architecture and cellular heterogeneity. However, early spatial omics methods often overlooked the morphological and spatial context inherent in tissues.

The integration of spatial omics with imaging AI and representation learning offers a comprehensive view of complex tissue environments, providing deeper insights into disease mechanisms and molecular landscapes. This thesis investigates how deep learning-derived representations from biological images can be utilized in the context of spatial omics and disease processes.

Key contributions of this work include: (i) investigating the correlation between representations learned from models trained on hematoxylin-eosin (H&E)-stained images and underlying gene expression profiles; (ii) applying self-supervised learning to identify genetically relevant patterns across H&E and DAPI staining; and (iii) developing a framework that leverages self-supervised representations to refine cell-type assignments obtained from spatial transcriptomics deconvolution methods. As a culmination of this part of the thesis, this research introduces (iv) a conceptual framework for understanding representations within spatial omics and provides a survey of the current literature through this lens.

The thesis further includes practical applications such as (v) developing a tool for annotation of whole-slide images (WSI) using self-supervised representations and (vi) exploring the use of weakly-supervised learning to identify early tumor-indicating morphological changes in benign prostate biopsies.
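As an illustration of contribution (iii), the idea of using morphological similarity to refine deconvolved cell-type mixtures can be sketched as follows. This is a minimal stand-in, assuming NumPy; the function and parameter names (`refine_proportions`, `k`, `alpha`) are illustrative and the thesis' actual framework is considerably more involved:

```python
import numpy as np

def refine_proportions(proportions, embeddings, k=5, alpha=0.5):
    """Smooth deconvolved cell-type proportions using morphological similarity.

    proportions : (n_spots, n_types) initial cell-type mixtures per spot
    embeddings  : (n_spots, dim) image representations of the same spots
    Each spot's mixture is blended with the average mixture of its k
    morphologically most similar spots.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                       # cosine similarity between spots
    np.fill_diagonal(sim, -np.inf)      # a spot is not its own neighbour
    refined = np.empty_like(proportions)
    for i in range(len(proportions)):
        nn = np.argsort(sim[i])[-k:]    # k nearest spots by similarity
        refined[i] = (1 - alpha) * proportions[i] + alpha * proportions[nn].mean(axis=0)
    # Renormalise each spot to a valid mixture.
    return refined / refined.sum(axis=1, keepdims=True)
```

The intuition is that spots that look alike under the microscope should have similar cellular composition, so a noisy deconvolution estimate for one spot can borrow strength from its morphological neighbours.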

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2024, p. 63
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2470
Keywords [en]
artificial intelligence, representation learning, computational pathology, spatial omics, spatial transcriptomics
National Category
Computer graphics and computer vision; Medical Imaging
Research subject
Computerized Image Processing
Identifiers
URN: urn:nbn:se:uu:diva-542989; ISBN: 978-91-513-2298-8 (print); OAI: oai:DiVA.org:uu-542989; DiVA id: diva2:1913908
Public defence
2025-01-24, Siegbahnsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, 09:15 (English)
Available from: 2024-12-18 Created: 2024-11-18 Last updated: 2025-02-09
List of papers
1. Morphological Features Extracted by AI Associated with Spatial Transcriptomics in Prostate Cancer
2021 (English). In: Cancers, ISSN 2072-6694, Vol. 13, no. 19, article id 4837. Article in journal (Refereed). Published
Abstract [en]

Prostate cancer is a common cancer type in men, yet some of its traits are still under-explored. One reason for this is its high molecular and morphological heterogeneity. The purpose of this study was to develop a method to gain new insights into the connection between morphological changes and underlying molecular patterns. We used artificial intelligence (AI) to analyze the morphology of seven hematoxylin and eosin (H&E)-stained prostatectomy slides from a patient with multi-focal prostate cancer. We also paired the slides with spatially resolved expression for thousands of genes obtained by a novel spatial transcriptomics (ST) technique. As both spaces are highly dimensional, we focused on dimensionality reduction before seeking associations between them. Consequently, we extracted morphological features from H&E images using an ensemble of pre-trained convolutional neural networks and proposed a workflow for dimensionality reduction. To summarize the ST data into genetic profiles, we used a previously proposed factor analysis. We found that regions automatically outlined by unsupervised clustering were associated with independent manual annotations and, in some cases, revealed further relevant subdivisions. The morphological patterns were also correlated with molecular profiles and could predict the spatial variation of individual genes. This novel approach enables flexible unsupervised studies relating morphological and genetic heterogeneity to be carried out using AI.
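The workflow described above, reducing high-dimensional CNN features and correlating a morphological component with a genetic factor, can be sketched in a few lines. A minimal sketch assuming NumPy; `pca_reduce` and `spot_correlation` are illustrative names, and the paper's actual pipeline (CNN ensemble, factor analysis of ST data) is far richer:

```python
import numpy as np

def pca_reduce(features, n_components=2):
    """Reduce high-dimensional patch features with PCA (via SVD)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Project onto the top principal directions.
    return centered @ vt[:n_components].T

def spot_correlation(morph_scores, gene_factor):
    """Pearson correlation between a morphological component and a genetic factor."""
    return np.corrcoef(morph_scores, gene_factor)[0, 1]
```

With real data, `features` would be CNN activations per ST spot and `gene_factor` a per-spot score from the factor analysis; a strong correlation indicates that morphology tracks that molecular profile.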

Place, publisher, year, edition, pages
MDPI AG, 2021
Keywords
Cancer Research, Oncology
National Category
Cancer and Oncology; Medical Imaging
Identifiers
urn:nbn:se:uu:diva-458304 (URN); 10.3390/cancers13194837 (DOI); 000707769300001 (); 34638322 (PubMedID)
Funder
EU, European Research Council; Swedish Foundation for Strategic Research; Swedish Cancer Society
Available from: 2021-11-08. Created: 2021-11-08. Last updated: 2025-02-09. Bibliographically approved
2. Self-Supervised Learning for Genetically Relevant Domain Identification in Morphological Images
2024 (English). In: IEEE International Symposium on Biomedical Imaging, ISBI 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, article id 10635503. Conference paper, Published paper (Refereed)
Abstract [en]

Spatial omics techniques profile the local gene expression of a sample, while imaging typically captures the phenotype, which is in turn determined by gene expression. However, the correlation between the two is not necessarily direct: tissue morphology can also reflect genes expressed during development, or gene expression may not yet have resulted in tissue restructuring. Thus, recent efforts have shown that integrating spatial omics with imaging provides additional information of biological relevance. In this work, we show that morphological feature extraction on H&E images using self-supervised learning is more versatile for capturing relevant tissue domains than previously proposed approaches. Furthermore, self-supervised learning allows the novel use of DAPI staining for domain identification, enabling morphological integration even for experiments where H&E staining is not available.
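The self-supervised learning referred to above is typically contrastive: two augmented views of the same patch are pulled together in embedding space while other patches are pushed apart. A minimal NumPy sketch of a SimCLR-style NT-Xent loss, written as an illustration of the general technique rather than the paper's specific training setup:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss on two augmented views of the same patches.

    z1, z2 : (n, dim) embeddings of view 1 and view 2; row i of z1 and z2
    come from the same patch (a positive pair), all other rows are negatives.
    """
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit norm -> cosine sim
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive of sample i is sample i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimising this loss over augmentations (crops, colour jitter, etc.) yields patch representations without any labels, which is what makes the approach applicable to DAPI as well as H&E.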

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
IEEE International Symposium on Biomedical Imaging, ISSN 1945-7928, E-ISSN 1945-8452
National Category
Computer graphics and computer vision; Medical Imaging
Identifiers
urn:nbn:se:uu:diva-542764 (URN); 10.1109/isbi56570.2024.10635503 (DOI); 001305705102016 (); 2-s2.0-85202187131 (Scopus ID); 979-8-3503-1333-8 (ISBN); 979-8-3503-1334-5 (ISBN)
Conference
21st IEEE International Symposium on Biomedical Imaging (ISBI), Athens, GREECE, MAY 27-30, 2024
Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2025-06-23. Bibliographically approved
3. Learned morphological features guide cell type assignment of deconvolved spatial transcriptomics
2024 (English). Conference paper, Published paper (Refereed)
National Category
Medical Imaging; Computer graphics and computer vision
Identifiers
urn:nbn:se:uu:diva-542767 (URN)
Conference
2024 Medical Imaging with Deep Learning (MIDL)
Available from: 2024-11-13 Created: 2024-11-13 Last updated: 2025-02-09
4. What makes for good morphology representations for spatial omics?
(English). Manuscript (preprint) (Other academic)
National Category
Computer graphics and computer vision; Medical Imaging
Identifiers
urn:nbn:se:uu:diva-542768 (URN); 10.48550/arXiv.2407.20660 (DOI)
Available from: 2024-11-13 Created: 2024-11-13 Last updated: 2025-02-09
5. DEPICTER: Deep representation clustering for histology annotation
2024 (English). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 170, article id 108026. Article in journal (Refereed). Published
Abstract [en]

Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which can be expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. However, these methods have mainly focused on technical advancements in algorithmic performance rather than on the development of practical tools that could be used by pathologists or researchers in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at WSI level. The interactive nature of DEPICTER leverages self- and semi-supervised learning approaches to allow the user to participate in the segmentation, producing reliable results while reducing the workload. DEPICTER consists of three steps: first, a pretrained model is used to compute embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature space gating. We report both real-time interaction results with three pathologists and evaluate the performance on three public cancer classification dataset benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
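The seeded label propagation step can be illustrated with a toy version of seeded clustering over patch embeddings. This is a minimal NumPy sketch under the assumption that a few patches carry user labels; it is not DEPICTER's actual implementation (see the linked repository for that), and the function name and loop structure are illustrative:

```python
import numpy as np

def seeded_iterative_clustering(embeddings, seed_idx, seed_labels, n_iter=10):
    """Propagate a few user-provided seed labels to all patches.

    embeddings : (n_patches, dim) deep representations of image patches
    seed_idx   : indices of the user-annotated patches
    seed_labels: their class labels (e.g. 0 = benign, 1 = cancerous)
    """
    seed_idx = np.asarray(seed_idx)
    seed_labels = np.asarray(seed_labels)
    classes = np.unique(seed_labels)
    # Initialise one centroid per class from the seed patches.
    centroids = np.stack([
        embeddings[seed_idx[seed_labels == c]].mean(axis=0) for c in classes
    ])
    labels = None
    for _ in range(n_iter):
        # Assign every patch to its nearest class centroid.
        dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = classes[dists.argmin(axis=1)]
        # The user's seed annotations stay fixed across iterations.
        labels[seed_idx] = seed_labels
        # Recompute centroids from the current assignment.
        centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return labels
```

Because the seeds are pinned every iteration, the clustering cannot drift away from the user's annotations, which is what makes the interaction loop reliable.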

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Interactive annotation, Histology, Self-supervised learning, Clustering
National Category
Computer graphics and computer vision; Computer Sciences; Medical Imaging
Identifiers
urn:nbn:se:uu:diva-528262 (URN); 10.1016/j.compbiomed.2024.108026 (DOI); 001179010100001 (); 38308865 (PubMedID)
Funder
EU, European Research Council, CoG 682810
Available from: 2024-05-20. Created: 2024-05-20. Last updated: 2025-02-09. Bibliographically approved
6. Discovery of tumour indicating morphological changes in benign prostate biopsies through AI
(English). Manuscript (preprint) (Other academic)
National Category
Medical Imaging; Computer graphics and computer vision
Identifiers
urn:nbn:se:uu:diva-542769 (URN); 10.1101/2024.06.18.24309064 (DOI)
Available from: 2024-11-13 Created: 2024-11-13 Last updated: 2025-02-09

Open Access in DiVA

UUThesis_E-Chelebian-2024 (2082 kB)
File information
File name: FULLTEXT01.pdf. File size: 2082 kB. Checksum: SHA-512
8d22c205b3f165d4cc211bb055373bf805fe7c3ebcc33fa73b072040b3ba2c598f40f4044ec13fcbc4949a301e82d3d29d33c42d528c6d870d542e5bdf46a2c3
Type: fulltext. Mimetype: application/pdf

Search in DiVA

By author/editor
Chelebian, Eduard
By organisation
Division Vi3
Computer graphics and computer vision; Medical Imaging

