The Deep Learning Nuclei Detection engine replaces traditional threshold-based segmentation with a convolutional neural network (CNN) that has been trained to recognize nuclear boundaries across diverse tissue morphologies. The process operates in two phases:
- Inference — The pre-trained model processes each image tile through multiple convolutional layers. Early layers detect simple features (edges, intensity gradients). Deeper layers combine these into complex patterns (nuclear boundaries, touching-cell junctions, cytoplasmic-nuclear transitions). The output is a probability map of nuclear regions and a boundary prediction map.
- Instance Segmentation — Probability maps are thresholded at the user-specified confidence level, and touching nuclei are separated using the predicted boundaries. Each resulting connected component receives a unique integer label, producing the same coded image output as classical detection.
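To make the "early layers detect simple features" idea concrete, the sketch below applies a single fixed filter by hand. The Sobel kernel, step-edge tile, and explicit loop are illustrative stand-ins; the real network learns its filters, and frameworks compute this cross-correlation (the "convolution" used in CNNs) far more efficiently.

```python
import numpy as np

# A 3x3 Sobel kernel responds to vertical intensity gradients, e.g. the
# dark-to-bright transition at a nuclear edge. Learned early-layer filters
# play an analogous role. (Illustrative stand-in, not the trained model.)
sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

tile = np.zeros((6, 6))
tile[3:, :] = 1.0  # a crude horizontal "edge": dark above, bright below

# 'valid' 2D cross-correlation (no padding), written out explicitly.
h, w = tile.shape
kh, kw = sobel_y.shape
response = np.zeros((h - kh + 1, w - kw + 1))
for i in range(response.shape[0]):
    for j in range(response.shape[1]):
        response[i, j] = np.sum(tile[i:i + kh, j:j + kw] * sobel_y)

# The response peaks along the edge rows and is zero in flat regions.
```

Deeper layers stack many such responses and nonlinearities, which is how the network builds up the boundary and probability maps from these simple gradient detectors.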
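The instance-segmentation phase can be sketched as follows. The arrays `prob_map` and `boundary_map` are hand-built stand-ins for the model's two output maps, the `confidence` value stands in for the user-specified threshold, and `scipy.ndimage.label` is one common way to assign the unique integer labels; the actual engine may differ in these details.

```python
import numpy as np
from scipy import ndimage

# Stand-ins for the CNN outputs on one tile, both in [0, 1]:
prob_map = np.zeros((8, 8))
prob_map[1:7, 1:7] = 0.9          # two nuclei that touch across rows 3-4
boundary_map = np.zeros((8, 8))
boundary_map[3:5, 1:7] = 0.8      # predicted junction between them

confidence = 0.5                  # user-specified confidence level

# Threshold the probability map, then carve out the predicted boundaries so
# the touching nuclei fall apart into separate connected components.
foreground = prob_map >= confidence
separated = foreground & (boundary_map < confidence)

# Each connected component receives a unique integer label, producing the
# coded image (0 = background, 1..n = individual nuclei).
coded_image, n_nuclei = ndimage.label(separated)
```

A production pipeline typically grows the labels back over the carved-out boundary pixels (for example with a marker-based watershed) so that no foreground area is lost; that step is omitted here for brevity.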
The network has learned features that are difficult to express as rules: the subtle intensity gradient at a nuclear boundary, the texture difference between chromatin and cytoplasm, the characteristic shape of a lymphocyte versus a tumor cell. This learned representation makes the detector more robust than threshold-based methods to staining variability, mixed cell populations, and complex tissue architecture.