A novel framework based on deep learning techniques enables the automated virtual staining, segmentation, and classification of histological images acquired through label-free photoacoustic microscopy.

Pathological anatomy studies the impact of diseases on human tissues, focusing on identifying the changes that guide diagnosis and inform the most effective treatment pathways. While this branch of medicine is involved in detecting various pathological states, such as autoimmune diseases and infections, its core role is in the identification and classification of numerous types of cancer. It’s essential not to confuse pathological anatomy with “clinical pathology” (or “laboratory medicine”), which focuses on analysing the patient’s biological material (blood and other fluids). Pathological anatomy is more concerned with histopathology (tissue analysis) and cytopathology, the examination of cells collected through tissue scraping or fluid aspiration, with the Pap smear being the most well-known example [source: “Anatomical Pathology” – ScienceDirect].

Histopathology specifically refers to histological examination, a procedure that relies on the microscopic analysis of cells within tissue fragments obtained via biopsy. Observing histological images under a microscope allows the identification of diagnostic information in the excised samples [source: National Library of Medicine]. Let’s explore the methodologies involved.


In pathological anatomy, histological images reveal diagnostic information from tissue samples taken from patients. However, these samples require complex and costly staining procedures to make the transparent cells visible during microscopic examination. Despite recent advances in label-free photoacoustic microscopy, technical limitations persist, particularly in the visualisation of the acquired imaging data.
A recent study led by a team from Pohang University of Science and Technology, South Korea, promises to automate the processing and analysis of histological images acquired through label-free photoacoustic microscopy using a series of artificial intelligence techniques.
In the future, the evolution of this AI-integrated label-free photoacoustic microscopy approach could result in faster, more accurate, and reliable diagnoses – especially for cancers – leading to more effective treatment planning.

Techniques for acquiring histological images

Cells are naturally colourless and transparent, so tissue fragments must be stained for visibility during histological examination under a microscope. Haematoxylin and eosin, first introduced over a century ago, remain the primary staining agents—haematoxylin being plant-derived, and eosin synthetic. While this method is technically simple and effective, it is time-consuming due to the preparation of tissue slides and subsequent staining. Furthermore, inaccuracies can arise when there is an imbalance between the number of slides produced and the volume of histological samples, leading to misdiagnoses [source: “Bancroft’s Theory and Practice of Histological Techniques” – Elsevier].

More recently, to address the shortcomings of slide-based microscopy – «which suffers from high variability between observers and limited prognostic value due to sampling limitations and the inability to view tissue structures and molecular targets in their native 3D context» – various optical microscopy techniques have entered histology labs. These methods, which use magnification lenses to examine samples, include “light-sheet microscopy,” which «rapidly captures large-scale images of samples with intrinsic optical sectioning». However, this method requires «additional chemical procedures such as optical clearing and fluorescent staining» [source: “Light-sheet microscopy for slide-free non-destructive pathology of large clinical specimens” – National Library of Medicine].

Other optical microscopy techniques, such as “bright-field microscopy,” “optical coherence tomography,” and “autofluorescence microscopy,” also provide histological images without the need for staining. Yet, «these methods are less effective than haematoxylin and eosin staining in identifying specific biomolecules and struggle to deliver sufficient clinical information» [source: “Optical coherence tomography” – Nature Reviews].

Photoacoustic microscopy

In the realm of label-free imaging techniques (which rely on the intrinsic properties of biological samples and use natural, non-destructive methods for visualisation), several systems have been developed to acquire selective histological images using specific wavelengths. Among these, Deep-Ultraviolet Microscopy (DUV) and Photoacoustic Microscopy (PAM) «use endogenous contrasts to visualise individual atomic clusters within molecules» [source: “An Ultraviolet-Transparent Ultrasound Transducer Enables High-Resolution Label-Free Photoacoustic Histopathology” – Laser & Photonics Reviews].

Photoacoustic microscopy is one of the most versatile histological imaging techniques because it combines «optical absorption contrast with the high spatial resolution of ultrasound, allowing for deeper tissue penetration» [source: “Switchable Acoustic and Optical Resolution Photoacoustic Microscopy for In Vivo Small-animal Blood Vasculature Imaging” – Bioengineering].

However, in clinical applications, as noted by the authors of the paper “Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens” (Light: Science & Applications, September 2024), label-free photoacoustic microscopy techniques «are still far from providing high-resolution, coloured histological images comparable to the familiar slides stained with haematoxylin and eosin».

To address this issue, the research team from Pohang University of Science and Technology, South Korea, suggests that label-free images need to be transformed into «interpretable images containing enough information to support clinical diagnoses». But how is this achieved?

Label-free histological image processing supported by deep learning

The Korean researchers succeeded by integrating label-free photoacoustic microscopy with a deep learning model capable of virtually staining, segmenting, and classifying images of human biopsy tissue. Let’s break this down.

The first step is “virtual staining,” where black-and-white images of cellular nuclei and cytoplasm within tissues – obtained from label-free photoacoustic microscopy – are transformed into images that mimic the morphological features revealed by various histochemical staining methods.

However, the researchers note that traditional deep learning models use supervised learning algorithms, which require paired images for training. In the case of histological images, this would mean acquiring pixel-aligned pairs of label-free and chemically stained images of the same tissue – a rather complex processing workflow.

The solution, therefore, was to adopt unsupervised methods, such as Cycle Generative Adversarial Networks (CycleGAN), to train a convolutional neural network to convert black-and-white histological images into coloured ones, using unpaired datasets.
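To make the idea concrete, here is a minimal numerical sketch of CycleGAN’s cycle-consistency constraint, which is what lets training proceed on unpaired datasets: an image translated into the other domain and then back should return to (approximately) itself. The toy functions `G` and `F` below are illustrative stand-ins for the two trained generator networks, not the paper’s models.

```python
import numpy as np

# Toy stand-ins for CycleGAN's two generators (illustrative only):
# G maps a greyscale image to a pseudo-"stained" RGB image,
# F maps it back. Real CycleGAN generators are deep CNNs.
def G(gray):
    """Greyscale (H, W) -> pseudo-colour (H, W, 3)."""
    return np.stack([gray * 0.8, gray * 0.3, gray * 0.9], axis=-1)

def F(rgb):
    """Pseudo-colour (H, W, 3) -> greyscale (H, W), inverting G's red channel."""
    return rgb[..., 0] / 0.8

def cycle_consistency_loss(gray):
    """L1 distance between an image and its round trip F(G(x)).
    CycleGAN adds this term so that unpaired training cannot drift
    arbitrarily far from the input's content."""
    return np.abs(F(G(gray)) - gray).mean()

x = np.random.rand(64, 64)          # a fake label-free patch
print(cycle_consistency_loss(x))    # ~0 here, because F exactly inverts G
```

In the real network the loss is never exactly zero; minimising it alongside the adversarial losses is what ties each virtually stained output back to its label-free input.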

The “segmentation” phase then follows, where label-free histological images and virtual staining data are used to identify and isolate key tissue characteristics such as «cellular areas, cell counts, and intercellular distances».
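As an illustration of the kind of quantities the segmentation stage extracts, the sketch below computes a cell count, per-cell areas, and intercellular (centroid-to-centroid) distances from a toy binary nuclei mask. The function names and the simple connected-component approach are assumptions for illustration, not the paper’s actual pipeline.

```python
import numpy as np
from collections import deque

def label_cells(mask):
    """4-connected component labelling of a binary nuclei mask (BFS flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        q = deque([(i, j)])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

def cell_features(mask):
    """Cell count, per-cell pixel areas, and pairwise centroid distances."""
    labels, n = label_cells(mask)
    areas = [int((labels == k).sum()) for k in range(1, n + 1)]
    centroids = [np.argwhere(labels == k).mean(axis=0) for k in range(1, n + 1)]
    dists = [np.linalg.norm(centroids[a] - centroids[b])
             for a in range(n) for b in range(a + 1, n)]
    return n, areas, dists

# Two separated 2x2 "nuclei" in a toy 6x6 mask
mask = np.zeros((6, 6), dtype=bool)
mask[0:2, 0:2] = True
mask[4:6, 4:6] = True
n, areas, dists = cell_features(mask)
print(n, areas, dists)  # 2 cells, areas [4, 4], one centroid distance
```

In practice such statistics would be computed from the deep network’s predicted masks over whole-slide images rather than from a hand-built mask.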

Finally, «in the “classification” phase, the developed framework uses the label-free images, virtual staining images, and segmentation data to classify tissues as either cancerous or non-cancerous».
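The final decision step can be illustrated, in heavily simplified form, with a toy classifier over hand-made numerical features (for example cell count, mean cell area, and mean intercellular distance, as the segmentation stage might supply). The paper’s framework uses deep networks over images and segmentation data; the synthetic features and logistic model here are purely illustrative.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=2000):
    """Plain-numpy logistic regression trained by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(1)
# toy data: "cancerous" samples drawn with shifted feature means
X_neg = rng.normal(0.0, 1.0, size=(50, 3))      # non-cancerous features
X_pos = rng.normal(2.0, 1.0, size=(50, 3))      # cancerous features
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 50 + [1] * 50)

w, b = train_logistic(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
print(accuracy)  # high on this well-separated toy data
```

The point of the sketch is only the shape of the task: features in, a cancerous/non-cancerous probability out.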

Focus on virtual staining techniques

Returning to the use of CycleGAN neural networks – employed to train the AI model to virtually stain histological images – the research team highlighted a significant issue. This arises when images from one domain (such as cell size) contain more detailed information than those from another histopathological domain (such as cell count). In such cases, «Cycle Generative Adversarial Networks could lead the model to reconstruct the entire set of acquired images inaccurately and imprecisely». On the other hand, they added, «using two separate generators and two distinct discriminators for the images would require an intensive consumption of system memory and time».

To mitigate this risk, the researchers introduced a second virtual staining method based on Contrastive Unpaired Translation (CUT) technology. This approach maximises the shared (mutual) information between corresponding patches of the black-and-white input histological images and the coloured output images.

CUT technology leverages “contrastive learning,” an AI technique that trains the system to extract features from unlabelled input images, regardless of their domain. It does this by pulling matching (positive) pairs of features closer together and pushing non-matching (negative) pairs apart, minimising a “contrastive loss” that measures how well matching pairs are distinguished from non-matching ones.
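The sketch below illustrates an InfoNCE-style contrastive loss of the kind CUT’s patch-wise contrastive learning minimises: feature vectors from matching input/output patch locations form positive pairs, and every other pairing acts as a negative. The feature vectors here are random toy data; this is not the paper’s implementation.

```python
import numpy as np

def contrastive_loss(queries, keys, temperature=0.07):
    """InfoNCE-style contrastive loss. Row i of `queries` (an output-image
    patch feature) should match row i of `keys` (the feature of the same
    patch location in the input image); all other rows act as negatives."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # cross-entropy with the diagonal (the matching pair) as the target
    return -np.log(np.diag(probs)).mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
# identical query/key pairs: near-minimal loss
aligned = contrastive_loss(feats, feats)
# reversing the keys breaks every pairing: the loss rises
misaligned = contrastive_loss(feats, feats[::-1])
print(aligned < misaligned)  # True
```

Minimising this loss forces the network to keep each output patch informative about the input patch it came from, which is how CUT preserves tissue content without paired training data.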

Now, let’s dive deeper into the potential future development of this AI solution and its broader implications across various fields.

Glimpses of Futures

The framework developed by the South Korean scientists, if implemented, tested, and validated, could represent a significant leap forward in histopathology, which plays a critical role in detecting the presence of cancerous cells within biopsy samples.

With the aim of anticipating possible future scenarios, we use the STEPS matrix to analyse the social, technological, economic, political, and sustainability impacts that could follow from the evolution of the methodology proposed by Pohang University of Science and Technology, South Korea, for processing histological images acquired through label-free photoacoustic microscopy.

S – SOCIAL: the initial, brief trial of this deep learning-based framework for automated virtual staining, segmentation, and classification of label-free histological images focused on liver tissues from patients diagnosed with liver cancer. The results, according to the research team, revealed a high level of accuracy (around 98%) in distinguishing between cancerous and non-cancerous liver cells. Notably, «the model demonstrated 100% sensitivity when reviewed by three pathologists, underscoring its potential for clinical application». However, it’s important to highlight that, in label-free photoacoustic microscopy of human samples, the acquired images may present not only histological elements familiar to pathologists but also less familiar, potentially misleading ones, complicating the diagnostic process. Looking ahead, the integration of AI with label-free photoacoustic microscopy could lead to faster, more accurate, and reliable diagnoses, ultimately resulting in more effective treatment plans for patients. By exploring this method’s future development, we can expect significant improvements in clinical outcomes, with reduced variability in diagnosis and more consistent decision-making across healthcare professionals. This could enhance patient trust in the diagnostic process and contribute to more personalised and timely cancer treatments.

T – TECHNOLOGICAL: from a technological standpoint, the future development of this framework will deepen our understanding of a persistent challenge in diagnostic imaging, particularly in histopathology using label-free photoacoustic microscopy: the “black box” nature of machine learning algorithms. Deep learning, a subset of machine learning, often lacks transparency, as the process by which it generates its outputs from input data is not easily understood. This challenge is particularly pertinent when considering the use of Cycle Generative Adversarial Networks (CycleGANs) and Contrastive Unpaired Translation (CUT) for the virtual staining of black-and-white images (representing cellular nuclei and cytoplasm in tissue samples). The Korean research team’s work contributes significantly to making these machine learning techniques more understandable and accessible to users, supporting the broader goal of Explainable Artificial Intelligence (XAI). By advancing transparency, this framework could pave the way for AI systems in medical diagnostics that are not only powerful but also explainable, allowing clinicians to trust and interpret their outputs more confidently.

E – ECONOMIC: in oncology, the clinical value of a highly sensitive, precise, and rapid histopathological examination – enabled by AI that automates the analysis of histological images obtained from label-free photoacoustic microscopy – lies in its potential to reduce false positives and allow quicker transitions to treatment planning. This acceleration improves efficiency and leads to cost savings, as faster and more reliable diagnoses lower overall healthcare expenses. This is particularly significant in Western countries, where healthcare systems bear substantial costs related to cancer diagnosis and treatment. In Italy, for instance, the National Council for Economics and Labour (CNEL) reported that €20 billion was spent in 2022 to cover the costs of diagnostic tests (including histology), hospitalisations, and medications for cancer patients. By enhancing diagnostic speed and precision, AI-driven tools like this framework could help alleviate some of the financial burden on healthcare systems.

P – POLITICAL: in histopathology, particularly in cancer cases, having access to faster and more accurate diagnoses means earlier intervention, increasing the likelihood of long-term positive outcomes. This aligns with the EU Council Recommendation on strengthening cancer prevention through early detection and with Europe’s Beating Cancer Plan. The latter allocated €4 billion in funding to reverse the concerning trend seen in 2020, when 2.7 million cancer diagnoses and 1.3 million cancer-related deaths occurred across the EU. «Without decisive action now, cancer cases are projected to increase by 24% by 2035, becoming the leading cause of death in the European Union». The framework proposed by Pohang University of Science and Technology, if validated and widely adopted, could support these efforts by improving the speed and accuracy of cancer diagnoses, playing a crucial role in achieving the goals set out in these EU policies.

S – SUSTAINABILITY: on the social sustainability front, more timely and accurate cancer diagnoses – through histological imaging from label-free photoacoustic microscopy – should become accessible to all communities worldwide, especially in economically disadvantaged regions. This is critical in respecting the World Health Organization’s (WHO) assertion that everyone has the right to health and necessary medical care. However, health prevention and medical treatment often remain areas fraught with inequality, as access is linked to the economic interests of wealthier nations. The future of histopathology, particularly with AI-driven innovations in the analysis of imaging data from label-free techniques (which avoid manual chemical staining), will need to be supported by more inclusive policies that ensure equitable access to clinical diagnostics for all, irrespective of economic status.

Written by: