ScientiaLux
StrataQuest Glossary: Manual Correction
Post-Processing

Manual Correction

Interactively correcting detection results

Definition
No automated detection is perfect. Manual Correction lets you fix the mistakes — split nuclei that were incorrectly merged, merge fragments that were incorrectly split, add cells the algorithm missed, and remove false detections. It bridges the gap between automated efficiency (processing thousands of cells in seconds) and human judgment (recognizing the errors that algorithms can't see), ensuring that the final detection is as accurate as possible before downstream analysis proceeds.
Human-in-the-Loop
Expert review of automated results
Split & Merge
Fix segmentation errors
Add & Delete
Fix detection errors
Cascade Updates
Changes propagate downstream

How It Works

Manual Correction provides an interactive editing interface for coded images:

  1. Visual overlay — The coded image is displayed with color-coded labels overlaid on the original fluorescence image, making it easy to identify where detection boundaries don't match visible nuclear boundaries.
  2. Correction tools — Interactive tools for: Split (draw a line to divide one object into two), Merge (click two adjacent objects to combine them), Add (draw a new object boundary), Delete (remove a selected object).
  3. Label management — New objects receive new unique labels. Merged objects inherit one label. Deleted objects' labels are retired. The coded image maintains unique, non-overlapping labels throughout.
  4. Propagation — After correction, all dependent engines (measurements, classification, spatial analysis) can be recalculated on the corrected coded image.
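The label-management rules above can be sketched on a "coded image" treated as a 2D integer array, where 0 is background and each positive value is one object's label. This is a minimal illustration, not the actual StrataQuest API; the function names (`merge_objects`, `delete_object`, `add_object`) are invented for the sketch.

```python
import numpy as np

def merge_objects(coded, label_a, label_b):
    """Merge: the combined object inherits label_a; label_b is retired."""
    out = coded.copy()
    out[out == label_b] = label_a
    return out

def delete_object(coded, label):
    """Delete: the object's pixels become background; its label is retired."""
    out = coded.copy()
    out[out == label] = 0
    return out

def add_object(coded, mask):
    """Add: a new object receives a fresh, unique label."""
    out = coded.copy()
    new_label = out.max() + 1            # labels stay unique
    out[mask & (out == 0)] = new_label   # never overwrite existing objects
    return out

coded = np.array([[1, 1, 0],
                  [2, 2, 0],
                  [0, 0, 3]])
merged = merge_objects(coded, 1, 2)   # objects 1 and 2 become one object "1"
print(sorted(np.unique(merged).tolist()))  # [0, 1, 3]
```

Because every operation preserves unique, non-overlapping labels, the corrected coded image remains a valid input for the downstream engines that recalculate after correction.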
Simplified

Manual Correction shows the detection overlay on the original image so you can see where the algorithm got it wrong. You can split merged cells, merge split fragments, add missed cells, and delete false detections. All downstream measurements update automatically.

Science Behind It

Why automated analysis needs human review: Pawley's Confocal Handbook chapter on automated 3D analysis emphasizes that validation requires multiple independent assessments: "A single observer is insufficient" for establishing ground truth. Automated detection, like a single observer, has systematic biases — it consistently mishandles the same types of edge cases (touching cells, dim nuclei, unusual morphologies). Manual correction doesn't just fix individual errors; it compensates for the systematic blind spots of the algorithm.

The efficiency argument: A typical tissue section might contain 50,000 cells. Manual segmentation of each would take days. Automated detection processes them in seconds but may make 500 errors (1% error rate). Manual correction of those 500 errors takes minutes. The combination achieves near-perfect accuracy in a fraction of the time — 99%+ of the work is automated, and human expertise handles only the exceptions.
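The arithmetic behind this argument can be made explicit. The per-cell times below are illustrative assumptions, not measured values:

```python
# Back-of-envelope for the efficiency argument (assumed timings).
total_cells = 50_000
error_rate = 0.01
manual_sec_per_cell = 30      # assumed: hand-segmenting one cell
correction_sec_per_error = 2  # assumed: fixing one detection error

errors = int(total_cells * error_rate)                        # 500
full_manual_hours = total_cells * manual_sec_per_cell / 3600  # ~416.7 h
correction_minutes = errors * correction_sec_per_error / 60   # ~16.7 min

print(errors, round(full_manual_hours, 1), round(correction_minutes, 1))
```

Even with generous assumptions, correcting the 1% of errors costs minutes against the days that fully manual segmentation would require.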

Error types in segmentation: Roysam et al. categorize segmentation errors as: false positives (detecting non-cells), false negatives (missing real cells), separation errors (merging or splitting), and boundary errors (incorrect object shape). Each type has different downstream consequences. False positives introduce noise into population statistics. False negatives bias measurements by excluding certain cell types. Separation errors distort per-cell measurements. Manual correction can address all four types, but separation errors are the most common and have the largest impact on measurement accuracy.
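The four error categories can be scored by comparing a predicted coded image against a ground-truth coded image. The sketch below uses any pixel overlap as its matching rule, a deliberate simplification of the overlap thresholds real validation tools apply:

```python
import numpy as np

def categorize_errors(truth, pred):
    """Classify predicted objects against ground truth by pixel overlap."""
    p_labels = set(np.unique(pred)) - {0}
    t_labels = set(np.unique(truth)) - {0}
    # which ground-truth objects does each predicted object touch?
    overlaps = {p: set(np.unique(truth[pred == p])) - {0} for p in p_labels}
    hit = {t for s in overlaps.values() for t in s}
    false_pos = sorted(p for p, s in overlaps.items() if not s)    # non-cell detected
    false_neg = sorted(t for t in t_labels if t not in hit)        # real cell missed
    merges = sorted(p for p, s in overlaps.items() if len(s) > 1)  # one pred, 2+ cells
    covered_by = {}                                                # truth -> preds
    for p, s in overlaps.items():
        for t in s:
            covered_by.setdefault(t, set()).add(p)
    splits = sorted(t for t, ps in covered_by.items() if len(ps) > 1)
    return false_pos, false_neg, merges, splits

truth = np.array([[1, 1, 0, 2],
                  [1, 1, 0, 2],
                  [3, 3, 0, 0]])
pred  = np.array([[4, 4, 0, 0],    # object 4 matches truth object 1
                  [4, 4, 0, 0],    # truth object 2 is never detected
                  [5, 6, 0, 7]])   # 5 and 6 split truth 3; 7 touches nothing

fp, fn, mg, sp = categorize_errors(truth, pred)
print(fp, fn, mg, sp)  # [7] [2] [] [3]
```

Boundary errors (correct object identity, wrong shape) need a pixel-level overlap measure such as IoU rather than this object-level bookkeeping.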

When is correction necessary? Not always. For large-scale studies where population-level statistics (mean, median, distribution) are more important than individual cell accuracy, a 1-2% detection error rate may be acceptable — the errors average out across thousands of cells. Manual correction becomes critical when: (1) individual cell identity matters (tracking specific cells across serial sections), (2) rare populations are being quantified (a few false positives could double the apparent count), or (3) the results will inform clinical decisions.
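The rare-population case (2) is worth quantifying. With illustrative numbers, a false-positive rate equal to a rare population's prevalence doubles its apparent count:

```python
# Why rare populations are sensitive to small error rates (numbers illustrative).
total = 50_000
rare_true = int(total * 0.001)  # 50 genuinely rare cells (0.1% prevalence)
false_pos = int(total * 0.001)  # 50 false positives that resemble the phenotype

apparent = rare_true + false_pos
print(apparent / rare_true)  # 2.0 — the apparent count is double the true count
```

The same 0.1% error rate would be invisible in a population that makes up half the section, which is why correction effort should be targeted where it matters.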

Simplified

No algorithm is perfect — automated detection typically makes errors on 1-2% of cells. Manual correction lets experts fix these errors, combining the speed of automation (processing 50,000 cells in seconds) with the accuracy of human judgment (correcting the 500 the algorithm got wrong). This is especially important when rare cell populations are being quantified, where even a few errors could significantly alter the results.

Practical Example

Preparing a tissue section for a clinical trial biomarker analysis:

  1. Automated Nuclei Detection finds 32,000 cells
  2. Review reveals ~300 merged doublets (two cells counted as one) in dense tumor regions
  3. Manual split corrections separate each doublet → 32,300 individual cells
  4. ~50 false positives (debris at tissue edges) are deleted → 32,250 validated cells
  5. ~20 missed lymphocytes in dim regions are manually added → 32,270 final cells

Total correction time: approximately 15 minutes. The corrected detection ensures accurate phenotyping of the tumor microenvironment, where undercounting immune cells by 1% could affect the spatial interaction analysis that guides therapy selection.
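The running count in the example above reduces to simple bookkeeping. Each split adds one object, each deletion removes one, each addition adds one:

```python
# Tallying the worked example's corrections (counts taken from the example).
detected = 32_000
split_doublets = 300   # each split turns 1 object into 2: +300
false_positives = 50   # deletions of edge debris: -50
missed_cells = 20      # manually added lymphocytes: +20

after_splits = detected + split_doublets        # 32,300
after_deletes = after_splits - false_positives  # 32,250
final = after_deletes + missed_cells            # 32,270
print(final)  # 32270
```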

Simplified

From 32,000 automated detections, a 15-minute review corrects ~370 errors: splitting merged cells, removing debris, and adding missed lymphocytes. The final 32,270 validated cells provide the accuracy needed for clinical biomarker analysis, where every missed immune cell matters.
