phenotypic.abc_#
Abstract interfaces for fungal colony image operations.
Defines the base contracts that power the processing pipeline: enhancers, detectors, refiners, grid operations, and measurement classes. Implement these to add new steps tailored to agar plate imaging, building on MeasurementInfo, MeasureFeatures, ImageOperation, GridOperation, and the prefab pipeline foundation.
Classes
- Extract quantitative measurements from detected colony objects in images.
- Core abstract base class for all single-image transformation operations in PhenoTypic.
- Abstract base class for preprocessing operations that improve colony detection through enhanced grayscale.
- Abstract base class for whole-image transformation operations affecting all components.
- Abstract base class for colony detection operations on agar plate images.
- Abstract base class for post-detection refinement operations that modify object masks and maps.
- Marker ABC for threshold-based colony detection strategies.
- Abstract base class for operations on grid-aligned plate images.
- Abstract base class for detecting grid structure and assigning objects to wells.
- Apply whole-image transformations (rotation, alignment, perspective) to GridImage objects.
- Abstract base class for post-detection refinement operations on grid-aligned plate images.
- Extract feature measurements from detected colonies in GridImage objects.
- Root abstract base class for all operations in PhenoTypic.
- An enumeration.
- Detect and label colonies in GridImage objects using grid structure.
- Marker class for pre-built, validated image processing pipelines from the PhenoTypic team.
- class phenotypic.abc_.BaseOperation[source]#
Bases: ABC
Root abstract base class for all operations in PhenoTypic.
BaseOperation is the foundation of PhenoTypic’s operation system. It provides automatic memory tracking, logging integration, and utilities for parallel execution. All operations in PhenoTypic inherit from BaseOperation (either directly or through intermediate ABCs like ImageOperation and MeasureFeatures).
This class is a blueprint for extending the framework: when you create a new operation, BaseOperation automatically handles memory profiling and logging so you can focus on the algorithm implementation.
What it provides automatically:
Memory Tracking: BaseOperation automatically initiates tracemalloc when the logger is enabled for INFO level or higher. This enables per-operation memory usage monitoring without explicit instrumentation. Three levels of memory tracking are available:
Object memory (via pympler if available): Detailed breakdown of memory used by Python objects in your operation.
Process memory (via psutil if available): System-level memory usage (RSS - resident set size).
Tracemalloc snapshots: Python’s built-in memory tracking showing current and peak allocations.
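For reference, the tracemalloc layer alone can be exercised with the standard library, independent of PhenoTypic (a minimal sketch):

```python
import tracemalloc

# Start tracing Python memory allocations (what BaseOperation does
# automatically when the logger is enabled for INFO or higher).
tracemalloc.start()

# Allocate something measurable (~1 MB of bytearrays).
data = [bytearray(1024) for _ in range(1000)]

# Current and peak traced allocations, in bytes.
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.2f} MB, peak: {peak / 1e6:.2f} MB")

tracemalloc.stop()
```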
Logging Integration: A logger is created automatically for each operation class with the name format: module.ClassName. Subclasses can log messages and memory usage without additional setup.
Parallel Execution Support: The _get_matched_operation_args() method enables serialization of operation state for parallel execution by extracting operation attributes that match the _operate() method’s parameters.
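The implementation of _get_matched_operation_args() is not shown here, but the underlying idea — collect instance attributes whose names match parameters of _operate() — can be sketched with the standard inspect module. FakeOperation and its method body are illustrative assumptions, not the real PhenoTypic code:

```python
import inspect

class FakeOperation:
    """Toy stand-in (not the real BaseOperation) to illustrate matching."""
    def __init__(self, threshold_value=100, unrelated_state="x"):
        self.threshold_value = threshold_value
        self.unrelated_state = unrelated_state

    @staticmethod
    def _operate(image, threshold_value: int = 128):
        return image

    def _get_matched_operation_args(self):
        # Collect instance attributes whose names match _operate() parameters,
        # skipping the image itself (it is supplied at execution time).
        params = inspect.signature(self._operate).parameters
        return {name: getattr(self, name)
                for name in params
                if name != "image" and hasattr(self, name)}

op = FakeOperation(threshold_value=100)
print(op._get_matched_operation_args())  # -> {'threshold_value': 100}
```

Note that `unrelated_state` is never extracted: only attributes that _operate() actually declares as parameters are serialized for the worker process.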
Inheritance hierarchy:
BaseOperation (this class)
├── ImageOperation
│   ├── ImageEnhancer (preprocessing filters, noise reduction)
│   ├── ImageCorrector (rotation, alignment, quality fixes)
│   └── ObjectDetector (colony detection algorithms)
├── MeasureFeatures (feature extraction from detected objects)
└── GridOperation (grid detection and refinement)
How to subclass BaseOperation:
When extending BaseOperation, you typically implement one of its subclasses (ImageOperation, MeasureFeatures, etc.) which provides the specific interface for your operation type. All the memory tracking and logging happens automatically in the parent class.
Example: Creating a custom operation (without image details):
from phenotypic.abc_ import BaseOperation
import logging

class MyCustomOperation(BaseOperation):
    def __init__(self, param1, param2=5):
        # Always call parent __init__ first
        super().__init__()
        # Store your parameters as attributes
        self.param1 = param1
        self.param2 = param2

    def _operate(self, data):
        # Your algorithm here; logger available as self._logger
        self._logger.info(f"Processing with param1={self.param1}")
        result = data  # replace with your actual processing
        # Log memory usage after expensive operations
        self._log_memory_usage("after processing")
        return result
- _logger#
Logger instance created automatically with the name format module.ClassName. Use _logger.info() and _logger.debug() to log messages during operation execution.
- Type:
logging.Logger
- _tracemalloc_started#
Internal flag indicating whether tracemalloc was started. Set to True automatically if the logger is enabled for INFO level or higher.
- Type:
bool
Notes
Memory tracking is only enabled if the logger is configured to handle INFO level messages or higher. If you want to disable memory tracking, set the logger level to WARNING or higher.
Tracemalloc is automatically stopped when the operation object is deleted (in __del__), even if an exception occurs.
The _get_matched_operation_args() method is used internally by the pipeline system for parallel execution. It extracts operation attributes that match the _operate() method signature, enabling operations to be serialized and executed in worker processes.
On Windows, pympler may not be available, so object memory tracking will fall back gracefully. psutil is available on all platforms.
Examples
Enabling memory tracking for an operation
import logging
from phenotypic.detect import OtsuDetector

# Set up logging to see memory usage
logging.basicConfig(level=logging.INFO)

# Create detector instance
detector = OtsuDetector()

# Apply operation - memory usage is logged automatically
result = detector.apply(image)

# Console output shows:
# INFO: Memory usage after <step>: XX.XX MB (objects), YY.YY MB (process)
Accessing memory information programmatically
import logging
from phenotypic.enhance import GaussianBlur

# Create custom logger to capture memory messages
logger = logging.getLogger('phenotypic.enhance.GaussianBlur')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
logger.addHandler(handler)

# Use operation
blur = GaussianBlur(sigma=2)
enhanced = blur.apply(image)

# Memory tracking happens automatically during operation
Custom operation with parameter matching for parallel execution
from phenotypic.abc_ import ImageOperation
from phenotypic import Image

class CustomThreshold(ImageOperation):
    def __init__(self, threshold_value: int):
        super().__init__()
        self.threshold_value = threshold_value

    @staticmethod
    def _operate(image: Image, threshold_value: int = 128) -> Image:
        # Apply threshold algorithm
        image.enh_gray[:] = image.enh_gray[:] > threshold_value
        return image

# When operation is applied via pipeline:
operation = CustomThreshold(threshold_value=100)
# _get_matched_operation_args() automatically extracts:
# {'threshold_value': 100}
# This enables parallel execution in pipelines
- class phenotypic.abc_.GridCorrector[source]#
Bases: ImageCorrector, GridOperation, ABC
Apply whole-image transformations (rotation, alignment, perspective) to GridImage objects.
GridCorrector is a type-safe wrapper around ImageCorrector that enforces GridImage input and output types. It is specialized for grid-aware image corrections on arrayed plate images.
Purpose
Use GridCorrector when implementing transformations that modify entire GridImage objects while respecting their grid structure. Like ImageCorrector, it updates all image components (rgb, gray, enh_gray, objmask, objmap) together to maintain synchronization. The difference is that it requires GridImage input and output, making explicit that your transformation works in the context of grid-structured plate images.
What GridCorrector modifies
GridCorrector operations modify ALL image components simultaneously:
Color data: rgb, gray (pixel coordinates change due to rotation/perspective)
Preprocessed data: enh_gray (enhanced grayscale also rotates/transforms)
Detection results: objmask, objmap (colony masks and labels transform identically)
Grid structure: Grid rotation angle and alignment state (optional, depends on operation)
This ensures that a rotated colony mask aligns perfectly with the rotated rgb and gray data.
GridImage vs Image
Image: Generic image with optional, unvalidated grid information.
GridImage: Specialized Image subclass with validated grid structure (row/column layout, well positions, grid alignment angle). Typically used after GridFinder detects the grid structure.
When to use GridCorrector vs ImageCorrector
ImageCorrector: Transformation works on any Image. Examples: rotation, perspective correction for individual (non-gridded) images. Use when grid structure is irrelevant.
GridCorrector: Transformation assumes or modifies grid structure. Examples: aligning colonies to grid rows/columns, rotating to match grid axes, per-well perspective correction. Use when the transformation is grid-aware or affects well-level alignment.
Typical Use Cases
Grid alignment: Rotate the entire image so detected colonies align with grid rows and columns. Improves downstream grid-based analysis. Example: GridAligner rotates to make colony rows parallel to image axes.
Perspective correction: Correct camera tilt or lens distortion that skews the grid.
Plate reorientation: Rotate plate image to canonical orientation for consistent analysis.
Color calibration per well: Apply per-well color correction that respects grid boundaries.
Implementation Pattern
Inherit from GridCorrector and implement _operate() as normal:

from phenotypic.abc_ import GridCorrector
from phenotypic import GridImage

class GridAligner(GridCorrector):
    '''Rotate GridImage to align colonies with grid rows/columns.'''
    def __init__(self, axis: int = 0):
        super().__init__()
        self.axis = axis

    @staticmethod
    def _operate(image: GridImage, axis: int = 0) -> GridImage:
        # image is guaranteed to be GridImage
        # Rotate all components together
        rotation_angle = calculate_grid_rotation(image, axis)  # placeholder helper
        image.rotate(angle_of_rotation=rotation_angle, mode='edge')
        return image
Critical Implementation Detail
Ensure ALL image components are transformed identically:
@staticmethod
def _operate(image: GridImage, **kwargs) -> GridImage:
    # Apply transformation to rgb/gray
    angle = kwargs.get('angle', 0)
    image.rotate(angle_of_rotation=angle, mode='edge')
    # The image.rotate() method automatically handles:
    # - Rotating enh_gray identically
    # - Rotating objmask and objmap with the same angle
    # - Updating grid rotation state if applicable
    return image
Interpolation Considerations
When rotating or warping:
Color data (rgb, gray): Use smooth interpolation (order=1+) to preserve colony edges
Detection data (objmask, objmap): Use nearest-neighbor interpolation (order=0) to preserve discrete object labels (must remain integers)
Enhanced grayscale: Use same interpolation as color data for consistency
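The order distinction can be demonstrated with scipy.ndimage.rotate directly (a standalone numpy/scipy sketch, not the PhenoTypic rotation API):

```python
import numpy as np
from scipy.ndimage import rotate

# A small labeled object map: 0 = background, 1 and 2 = object labels.
objmap = np.zeros((20, 20), dtype=np.int32)
objmap[4:8, 4:8] = 1
objmap[12:16, 12:16] = 2

# order=0 (nearest neighbor): labels stay discrete integers.
rotated_labels = rotate(objmap, angle=15, order=0, reshape=False)
assert set(np.unique(rotated_labels)) <= {0, 1, 2}

# order=1 (bilinear) on the same data blends values at object edges,
# producing fractional "labels" -- fine for intensity images,
# wrong for object maps.
rotated_smooth = rotate(objmap.astype(float), angle=15, order=1, reshape=False)
print(np.unique(rotated_labels))  # only the original labels survive
```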
Notes
GridCorrector has no integrity checks (@validate_operation_integrity), by design. All components are intentionally modified together; there is nothing to validate.
Grid rotation angle and alignment state may be updated after the transformation. Downstream grid-aware operations will work with the updated grid structure.
GridImage must have valid grid structure before correction. Use GridFinder or specify grid manually before applying GridCorrector.
Output is always GridImage (type-safe). Applying it to a plain Image raises an error.
Examples
GridAligner: rotate to align colonies with grid axes
from phenotypic import GridImage, Image
from phenotypic.detect import RoundPeaksDetector
from phenotypic.correction import GridAligner

# Load and detect colonies
image = Image.from_image_path('plate.jpg')
image = RoundPeaksDetector().operate(image)

# Create GridImage with grid structure
grid_image = GridImage(image)
grid_image.detect_grid()

# Align entire image to grid rows/columns
aligner = GridAligner(axis=0)  # Align rows horizontally
aligned = aligner.apply(grid_image)

# All components (rgb, gray, masks, map) rotated together
# Grid structure updated to reflect rotation
print(f"Rotation angle: {aligned.grid.rotation_angle}")
Custom perspective correction (conceptual)
from phenotypic.abc_ import GridCorrector
from phenotypic import GridImage

class GridPerspectiveCorrector(GridCorrector):
    """Correct camera tilt or lens distortion on grid plate."""
    def __init__(self, tilt_angle: float):
        super().__init__()
        self.tilt_angle = tilt_angle

    @staticmethod
    def _operate(image: GridImage, tilt_angle: float) -> GridImage:
        # Apply perspective transform to all components
        # (Implementation depends on specific correction needed)
        # image.apply_perspective(...) or similar
        return image

# Usage: correct skewed plate image
corrector = GridPerspectiveCorrector(tilt_angle=10.0)
corrected = corrector.apply(grid_image)
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
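The underlying problem is easy to reproduce: thread locks, typical of the state held by live widgets, cannot be pickled, which is why widgets must be disposed before serialization:

```python
import pickle
import threading

# A lock stands in for the unpickleable state that live UI widgets hold.
lock = threading.Lock()
try:
    pickle.dumps(lock)
except TypeError as exc:
    # CPython refuses to pickle '_thread.lock' objects
    print(f"unpickleable: {exc}")
```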
- apply(image: GridImage, inplace=False) GridImage[source]#
Calculates the optimal rotation angle and applies it to a grid image for alignment along the specified axis.
The method aligns a GridImage object along either rows or columns, depending on the specified axis. It calculates the linear regression slope and intercept for that axis, determines geometric properties of the grid vertices, and computes candidate rotation angles for aligning the image. The optimal angle is found by minimizing the error across all computed angles, and the image is rotated accordingly.
- Raises:
ValueError – If the axis is not 0 (row-wise) or 1 (column-wise).
- Parameters:
image (GridImage) – The input grid image object to be aligned.
- Returns:
The rotated grid image object after alignment.
- Return type:
GridImage
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
image (Image | None) – Optional image for the widget to operate on.
show (bool) – If True, display the widget immediately.
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.GridFinder(nrows: int, ncols: int)[source]#
Bases: GridMeasureFeatures, ABC
Abstract base class for detecting grid structure and assigning objects to wells.
GridFinder is the foundation for grid detection algorithms in arrayed plate imaging. It detects the row and column spacing of colonies on agar plates and assigns each detected object to its corresponding grid cell (well). This is essential for high-throughput phenotyping experiments where samples are arranged in regular grids (e.g., 96-well, 384-well formats).
What it does:
GridFinder implementations analyze the spatial distribution of detected objects in an image and determine the underlying grid structure. They compute pixel coordinates where grid rows and columns are located (row_edges and col_edges), then use these edges to assign each object to a row number, column number, and section number (unique well identifier).
Why it’s important for colony phenotyping:
In arrayed plate experiments, colonies are grown at fixed positions corresponding to wells in a microplate. By mapping detected colonies to grid positions, downstream analysis can:
Correlate colony measurements with sample metadata (what was inoculated in each well)
Track growth across replicate wells
Identify spatial patterns or contamination
Export results organized by well coordinates for database import
Without grid assignment, measurements are just unorganized lists of objects with no link to experimental design.
Grid concepts:
Row edges: Array of pixel row coordinates where rows begin/end. For an 8-row grid, this is an array of 9 values: [0, y1, y2, …, y8, image_height].
Column edges: Array of pixel column coordinates where columns begin/end. For a 12-column grid, this is an array of 13 values: [0, x1, x2, …, x12, image_width].
Grid cell assignment: Each object’s center is tested against row/column edges using pd.cut(), assigning it to row i (0 to nrows-1) and column j (0 to ncols-1).
Section number: A unique well ID computed as row*ncols + col, ordered from top-left (0) to bottom-right (nrows*ncols - 1).
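The assignment described above can be sketched directly with pandas; the edge positions and centroids here are made-up illustrative values, not PhenoTypic API calls:

```python
import numpy as np
import pandas as pd

nrows, ncols = 8, 12  # 96-well layout

# Hypothetical evenly spaced edges and two object centroids (pixels).
row_edges = np.linspace(0, 800, nrows + 1)   # 9 values
col_edges = np.linspace(0, 1200, ncols + 1)  # 13 values
centroids = pd.DataFrame({"row": [50.0, 450.0], "col": [30.0, 700.0]})

# pd.cut with labels=False yields the 0-based bin index for each centroid.
row_num = pd.cut(centroids["row"], bins=row_edges,
                 labels=False, include_lowest=True)
col_num = pd.cut(centroids["col"], bins=col_edges,
                 labels=False, include_lowest=True)

# Unique well ID, ordered top-left (0) to bottom-right (nrows*ncols - 1).
section_num = row_num * ncols + col_num
print(section_num.tolist())  # [0, 54]
```

A centroid at (50, 30) lands in row 0, column 0 (section 0); one at (450, 700) lands in row 4, column 6 (section 4*12 + 6 = 54).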
Typical plate formats:
96-well plate: 8 rows × 12 columns (A1-H12)
384-well plate: 16 rows × 24 columns (A1-P24)
- Attributes:
nrows (int): Number of rows in the grid. For 96-well plates, this is 8.
ncols (int): Number of columns in the grid. For 96-well plates, this is 12.
Abstract Methods:
You must implement these two methods in subclasses:
_operate(image: Image) -> pd.DataFrame: Main entry point. Should compute row and column edges, then call _get_grid_info() to assemble and return the complete grid DataFrame.
get_row_edges(image: Image) -> np.ndarray: Return array of row edge pixel coordinates. Length must be nrows + 1.
get_col_edges(image: Image) -> np.ndarray: Return array of column edge pixel coordinates. Length must be ncols + 1.
Helper Methods for Implementation:
These protected methods reduce code duplication when implementing _operate():
_get_grid_info(image, row_edges, col_edges) -> pd.DataFrame: Assembles complete grid information from pre-computed edge coordinates. This method automatically calls _add_row_number_info(), _add_col_number_info(), and _add_section_number_info() to populate all required columns. Use this in your _operate() implementation after computing edges.
Output Format:
The _operate() method returns a pandas DataFrame with detected objects and their grid assignments:
ROW_NUM: Grid row index (0 to nrows-1)
COL_NUM: Grid column index (0 to ncols-1)
SECTION_NUM: Well identifier (0 to nrows*ncols-1), ordered left-to-right, top-to-bottom
Additional columns: Object metadata (centroid, bounding box, etc.) from image.objects.info()
Objects that fall outside all grid cells (due to edge clipping or misalignment) will have NaN values in grid columns.
Concrete Implementations:
PhenoTypic provides two built-in implementations:
AutoGridFinder: Automatically optimizes row and column edge positions using scipy.optimize.minimize_scalar to minimize the MSE between object centroids and grid bin midpoints. Useful when grid position is unknown.
ManualGridFinder: User specifies exact row and column edge coordinates (e.g., from manual measurement or calibration). Use when you know the exact grid position.
Notes:
GridFinder subclasses can work with regular Image objects, not just GridImage.
Edge coordinates should always be sorted in ascending order (handled by _clip_row_edges and _clip_col_edges).
Ensure row_edges and col_edges are clipped to image bounds to prevent indexing errors.
Grid assignment uses pandas.cut() with include_lowest=True and right=True, meaning objects are assigned based on which interval they fall into.
Examples:
Create a ManualGridFinder for a 96-well plate with known geometry
For example, if a microscope image of a 96-well plate is 2048×3072 pixels and wells are evenly spaced, you might manually define:
import numpy as np
from phenotypic import Image
from phenotypic.grid import ManualGridFinder
from phenotypic.detect import OtsuDetector

# Load image of 96-well plate
image = Image.from_image_path("plate_scan.jpg")

# Detect colonies
detector = OtsuDetector()
image_with_objects = detector.operate(image)

# Define grid for 8 rows × 12 columns
# Rows: 8 wells vertically, spaced from pixel 100 to 2100
row_edges = np.array([100, 350, 600, 850, 1100, 1350, 1600, 1850, 2100])
# Columns: 12 wells horizontally, spaced from pixel 50 to 3050
col_edges = np.linspace(50, 3050, 13, dtype=int)

# Create grid finder and assign colonies to wells
grid_finder = ManualGridFinder(row_edges=row_edges, col_edges=col_edges)
grid_df = grid_finder.measure(image_with_objects)

# Result has columns: ROW_NUM, COL_NUM, SECTION_NUM, plus object info
print(grid_df[['ROW_NUM', 'COL_NUM', 'SECTION_NUM']])
Use AutoGridFinder when grid position is unknown
When the image is rotated, shifted, or otherwise misaligned, let AutoGridFinder automatically compute optimal edge positions:
from phenotypic.grid import AutoGridFinder
from phenotypic import Image
from phenotypic.detect import OtsuDetector

# Load and detect colonies
image = Image.from_image_path("rotated_plate.jpg")
detector = OtsuDetector()
image_with_objects = detector.operate(image)

# AutoGridFinder optimizes edge positions to align with detected colonies
grid_finder = AutoGridFinder(nrows=8, ncols=12, tol=0.01)
grid_df = grid_finder.measure(image_with_objects)

# Grid assignment is robust to rotation and minor misalignment
print(f"Found {len(grid_df)} colonies assigned to grid")
Understanding SECTION_NUM for well mapping
SECTION_NUM provides a single integer ID for each well, useful for organizing results or looking up sample metadata:
# Example: 8×12 grid (96-well plate)
# SECTION_NUM runs 0-95, numbered left-to-right, top-to-bottom
# Section 0  = Row 0, Col 0  (top-left, A1)
# Section 11 = Row 0, Col 11 (top-right, A12)
# Section 12 = Row 1, Col 0  (second row left, B1)
# Section 95 = Row 7, Col 11 (bottom-right, H12)
grid_df = grid_finder.measure(image_with_objects)

# Filter colonies in a specific well
section_5_objects = grid_df[grid_df['SECTION_NUM'] == 5]

# Map a section number back to well coordinates
section_num = 5
well_row = section_num // 12
well_col = section_num % 12
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- abstract get_col_edges(image: Image) np.ndarray[source]#
Returns the column edges of the grid as a numpy array.
- Parameters:
image (Image) – The input image.
- Returns:
Column edges of the grid.
- Return type:
np.ndarray
- abstract get_row_edges(image: Image) np.ndarray[source]#
Returns the row edges of the grid as a numpy array.
- Parameters:
image (Image) – The image object.
- Returns:
Row edges of the grid.
- Return type:
np.ndarray
- measure(image)#
Processes an input image to calculate and organize grid-based boundaries and centroids using object coordinates. This method implements a two-pass approach to refine row and column boundaries, ensuring accurate grid labeling and indexing. It dynamically computes boundary intervals and segments the image space into grid cells based on the specified number of rows and columns.
- Parameters:
image (Image) – The input image to be analyzed and processed.
- Returns:
A DataFrame containing the grid results, including boundary intervals, grid indices, and section numbers for the segmented image.
- Return type:
pd.DataFrame
- class phenotypic.abc_.GridMeasureFeatures[source]#
Bases: MeasureFeatures, ABC
Extract feature measurements from detected colonies in GridImage objects.
GridMeasureFeatures is a type-safe wrapper around MeasureFeatures that enforces GridImage input type. It is to MeasureFeatures what GridOperation is to ImageOperation: a specialization for grid-aware (arrayed plate) analysis.
Purpose
Use GridMeasureFeatures when implementing measurement operations that extract quantitative metrics from colonies in grid-structured agar plate images. Like MeasureFeatures, it returns pandas DataFrames with one row per detected colony. The only difference is that it requires GridImage input, making explicit that your measurement may leverage grid structure (well positions, row/column layout) if desired.
GridImage vs Image
Image: Generic image with optional, unvalidated grid information.
GridImage: Specialized Image subclass with validated grid structure (row/column layout, well positions, grid alignment). Suitable for 96-well, 384-well, or other arrayed plate formats.
When to use GridMeasureFeatures vs MeasureFeatures
MeasureFeatures: Measurement works equally well on any Image (with or without grid). Examples: colony size, color composition, morphology metrics that are computed globally. Use when grid structure is irrelevant.
GridMeasureFeatures: Measurement leverages grid structure or assumes well-level organization. Examples: per-well growth metrics, grid-aligned morphology, measurements that depend on row/column position. Use when grid structure is essential or enhances the measurement.
Implementation Pattern
Inherit from GridMeasureFeatures and implement _operate() as normal:

from phenotypic.abc_ import GridMeasureFeatures
from phenotypic import GridImage
import pandas as pd

class GridMeasureWellOccupancy(GridMeasureFeatures):
    '''Measure fraction of well area occupied by colonies.'''
    def _operate(self, image: GridImage) -> pd.DataFrame:
        # image is guaranteed to be GridImage with grid structure
        # Implement your grid-aware measurement here
        results = pd.DataFrame(...)
        return results
Typical Use Cases
Per-well phenotypic analysis where well position matters
Grid-based filtering (e.g., “measure only colonies in the center wells”)
Well-normalized metrics (e.g., colony area relative to well size)
Multi-well experiments where you need to track which well each measurement came from
Notes
The measure() method is inherited from MeasureFeatures; the only difference is input type validation.
Returns a pandas.DataFrame with one row per detected object; the first column is OBJECT.LABEL (matching image.objmap labels).
GridImage must have valid grid structure set before measuring. Typically set by GridFinder or GridCorrector operations in the pipeline.
All helper methods from MeasureFeatures (mean, median, sum, etc.) are available.
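As an illustration of what a per-label sum helper computes, the same aggregation can be done with scipy directly (this sketch uses scipy, not the PhenoTypic helper API):

```python
import numpy as np
from scipy.ndimage import sum_labels

# Binary mask and matching label map for two detected objects.
objmask = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 1],
                    [0, 0, 0, 1]], dtype=bool)
objmap = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 2],
                   [0, 0, 0, 2]], dtype=np.int32)

# Pixel area per labeled object: sum of mask pixels within each label.
areas = sum_labels(objmask, labels=objmap, index=[1, 2])
print(areas)  # [4. 2.]
```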
Examples
Grid-aware measurement of colony size per well
from phenotypic import GridImage, Image
from phenotypic.abc_ import GridMeasureFeatures
from phenotypic.detect import OtsuDetector
from phenotypic.tools.constants_ import OBJECT
import pandas as pd

class MeasureWellOccupancy(GridMeasureFeatures):
    """Measure total area occupied in each well."""
    def _operate(self, image: GridImage) -> pd.DataFrame:
        # Use grid accessor to calculate per-well metrics
        area = self._calculate_sum(image.objmask[:], image.objmap[:])
        well_info = image.grid.info()  # Get well assignments

        # Combine area with well location
        results = pd.DataFrame({
            'WellArea': area,
        })
        results.insert(0, OBJECT.LABEL, image.objects.labels2series())
        return results

# Usage
image = Image.from_image_path('plate.jpg')
image = OtsuDetector().operate(image)
grid_image = GridImage(image)
grid_image.detect_grid()  # Establish grid structure

measurer = MeasureWellOccupancy()
df = measurer.measure(grid_image)  # Returns grid-aware measurements
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- measure(image)[source]#
Processes an input image to calculate and organize grid-based boundaries and centroids using object coordinates. This method implements a two-pass approach to refine row and column boundaries, ensuring accurate grid labeling and indexing. It dynamically computes boundary intervals and segments the image space into grid cells based on the specified number of rows and columns.
- Parameters:
image (Image) – The input image to be analyzed and processed.
- Returns:
A DataFrame containing the grid results, including boundary intervals, grid indices, and section numbers for the segmented image.
- Return type:
pd.DataFrame
- class phenotypic.abc_.GridObjectDetector[source]#
Bases: ObjectDetector, GridOperation, ABC
Detect and label colonies in GridImage objects using grid structure.
GridObjectDetector is a type-safe wrapper around ObjectDetector that enforces GridImage input type. It is specialized for colony detection on arrayed plate images with grid structure.
Purpose
Use GridObjectDetector when implementing detection algorithms that find and label colonies in grid-structured agar plate images. Like ObjectDetector, it sets image.objmask and image.objmap. The difference is that it requires GridImage input, making explicit that your detection may leverage or assumes grid structure (well boundaries, grid alignment).
What GridObjectDetector produces
GridObjectDetector sets two outputs:
image.objmask: Binary mask (True=colony pixel, False=background)
image.objmap: Labeled integer map (0=background, 1..N=colony labels)
Both are set synchronously to ensure consistency. The labels in objmap match the row/column structure of the grid (useful for tracking which colonies are in which wells).
GridImage vs Image
Image: Generic image with optional, unvalidated grid information.
GridImage: Specialized Image subclass with validated grid structure (row/column layout, well positions, grid alignment). Suitable for 96-well, 384-well, or other arrayed plate formats. Created by GridFinder or manually specified.
When to use GridObjectDetector vs ObjectDetector
ObjectDetector: Detection works equally well on any Image (with or without grid). Examples: Otsu thresholding, Canny edges, round peak detection on single images. Use when detection is global and grid-independent.
GridObjectDetector: Detection assumes or leverages grid structure. Examples: per-well detection (find colonies only within well boundaries), grid-aware peak detection (use well centers as hints), adaptive detection per well (tuning per grid region). Use when grid structure is essential to the detection algorithm.
Typical Use Cases
Per-well detection: Find colonies only within well boundaries; one mask/label per well.
Grid-hinted detection: Use well center positions or grid-aligned regions as hints to improve detection accuracy.
Adaptive detection: Adjust detection parameters (threshold, sensitivity) per well to handle uneven plate illumination.
Well isolation: Ensure detected colonies don’t bleed across well boundaries.
Implementation Pattern
Inherit from GridObjectDetector and implement _operate() as normal:

from phenotypic.abc_ import GridObjectDetector
from phenotypic import GridImage
from scipy.ndimage import label
from skimage.filters import threshold_local

class GridAdaptiveDetector(GridObjectDetector):
    '''Detect colonies using per-well adaptive thresholding.'''
    def __init__(self, neighborhood_size: int = 15):
        super().__init__()
        self.neighborhood_size = neighborhood_size

    @staticmethod
    def _operate(image: GridImage, neighborhood_size: int = 15) -> GridImage:
        # image is guaranteed to be GridImage with grid structure
        # Use well positions to apply per-well detection
        enh = image.enh_gray[:]
        grid = image.grid  # Access grid structure

        # Apply adaptive threshold per well
        mask = threshold_local(enh, neighborhood_size) > enh

        # Label connected components
        labeled, _ = label(mask)
        image.objmask[:] = mask
        image.objmap[:] = labeled
        return image
Critical Implementation Detail
GridObjectDetector includes input validation (GridImage required) but NO output integrity checks. Like ObjectDetector, it is READ-ONLY for rgb, gray, enh_gray. You may only write to objmask and objmap.
@staticmethod
def _operate(image: GridImage, **kwargs) -> GridImage:
    # Read (protected by @validate_operation_integrity):
    enh = image.enh_gray[:]
    gray = image.gray[:]
    rgb = image.rgb[:]

    # Write (allowed):
    image.objmask[:] = binary_mask
    image.objmap[:] = labeled_map

    # GridImage structure (optional modification):
    # image.grid can be read, but typically not written
    return image
Grid-Aware Detection Patterns
Per-well detection: Create a mask/label per well independently
Well-boundary enforcement: Mask pixels outside well boundaries after detection
Well-center hinting: Use well positions as priors for peak detection
Adaptive parameters: Vary detection thresholds based on well position or intensity
Notes
GridObjectDetector enforces the GridImage input type at runtime. Passing a plain Image raises an error.
Input validation uses @validate_operation_integrity('image.rgb', 'image.gray', 'image.enh_gray') to ensure image color data is not modified.
GridImage must have valid grid structure before detection. Typically set by GridFinder or manually specified grid before applying GridObjectDetector.
All ObjectDetector helper methods and patterns apply identically.
Output is always GridImage (input type is preserved).
Examples
Global Otsu detection with grid structure
from phenotypic import GridImage, Image
from phenotypic.abc_ import GridObjectDetector
from scipy.ndimage import label
from skimage.filters import threshold_otsu
import numpy as np

class GridOtsuDetector(GridObjectDetector):
    """Detect colonies using a global Otsu threshold on a grid plate."""

    @staticmethod
    def _operate(image: GridImage) -> GridImage:
        enh = image.enh_gray[:]

        # Apply global Otsu threshold
        threshold = threshold_otsu(enh)
        binary_mask = enh > threshold

        # Label connected components
        labeled_map, _ = label(binary_mask)

        # Set detection results
        image.objmask[:] = binary_mask
        image.objmap[:] = labeled_map
        return image

# Usage
image = Image.from_image_path('plate.jpg')
grid_image = GridImage(image)
grid_image.detect_grid()

detector = GridOtsuDetector()
detected = detector.apply(grid_image)

# Grid structure preserved; colonies can be accessed per well
for well_row in range(detected.nrows):
    for well_col in range(detected.ncols):
        # Colonies in this well available via the grid accessor
        pass
Per-well adaptive detection using well centers as hints
from phenotypic.abc_ import GridObjectDetector
from phenotypic import GridImage
from scipy.ndimage import label
from skimage.filters import threshold_local

class GridAdaptiveDetector(GridObjectDetector):
    """Adaptive per-well detection using well center positions."""

    def __init__(self, neighborhood_size: int = 31):
        super().__init__()
        self.neighborhood_size = neighborhood_size

    @staticmethod
    def _operate(image: GridImage, neighborhood_size: int = 31) -> GridImage:
        enh = image.enh_gray[:]
        grid = image.grid

        # Apply local adaptive threshold (per-well region)
        binary_mask = threshold_local(enh, neighborhood_size) > enh

        # Label and store
        labeled_map, _ = label(binary_mask)
        image.objmask[:] = binary_mask
        image.objmap[:] = labeled_map
        return image

# Usage: handle uneven illumination on large plates
detector = GridAdaptiveDetector(neighborhood_size=31)
detected = detector.apply(grid_image)
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image, inplace=False)[source]#
Binarizes the given image's grayscale using the Yen threshold method.
This function modifies the input image by applying a binary mask to its enhanced grayscale (enh_gray). The binarization threshold is automatically determined using Yen's method. The resulting binary mask is stored in the image's objmask attribute.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.GridObjectRefiner[source]#
Bases:
ObjectRefiner, GridOperation, ABC

Abstract base class for post-detection refinement operations on grid-aligned plate images.
GridObjectRefiner is the grid-aware variant of ObjectRefiner, combining object mask refinement with grid structure awareness. It refines detected objects (colony masks and labeled maps) while respecting well boundaries and grid-aligned regions in arrayed plate images (96-well, 384-well, etc.). Like ObjectRefiner, it protects original image data (RGB, grayscale, enhanced grayscale) and modifies only detection results.
What is GridObjectRefiner?
GridObjectRefiner is the specialized version of ObjectRefiner for GridImage objects:
GridImage requirement: Accepts only GridImage input (with detected grid structure), enforced at runtime via GridImageInputError.
Grid-aware refinement: Can access well positions, grid cell boundaries, and row/column structure via image.grid to make refinement decisions (e.g., remove colonies that exceed well boundaries, filter by grid position).
Detection-only modification: Like ObjectRefiner, modifies only image.objmask[:] and image.objmap[:]. Original image components are protected via @validate_operation_integrity.
When to use GridObjectRefiner vs ObjectRefiner
ObjectRefiner: Use when refining detections on a plain Image without grid structure. Examples: general-purpose size filtering, morphological cleanup, shape filtering (applies globally regardless of position).
GridObjectRefiner: Use when refining detections on a GridImage where well structure matters. Examples: removing objects larger than their grid cell (GridOversizedObjectRemover), per-well filtering, grid-aligned edge removal. The grid structure enables position-aware refinement that improves array phenotyping accuracy.
Typical Use Cases
GridObjectRefiner is useful for addressing grid-specific artifacts:
Oversized colonies: Objects spanning nearly an entire well (merged colonies, agar edges, segmentation spillover). Filtering improves per-well consistency.
Inter-well artifacts: Detections touching or bridging grid cell boundaries from uneven lighting or thresholding errors.
Boundary contamination: Colonies near plate edges that are incomplete or distorted. Grid structure allows identifying and filtering boundary-adjacent objects.
Grid registration errors: When grid detection is imperfect, some objects may be mis-assigned to wells; grid-aware refinement can filter or relocate based on position.
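As a minimal sketch of the "inter-well artifacts" case above, the hypothetical helper below drops any label whose pixels touch an interior column boundary line. It operates on a bare labeled array rather than a GridImage, with `col_edges` standing in for the values a grid accessor would provide.

```python
import numpy as np

def drop_edge_crossing_labels(objmap: np.ndarray, col_edges: np.ndarray) -> np.ndarray:
    """Remove labels whose pixels touch any interior column boundary line."""
    crossing = set()
    for edge in col_edges[1:-1]:             # interior edges only
        crossing.update(np.unique(objmap[:, int(edge)]))
    crossing.discard(0)                      # 0 is background
    keep = ~np.isin(objmap, list(crossing))
    return np.where(keep, objmap, 0)

# Toy map: label 1 sits inside a cell, label 2 straddles the boundary at column 4
objmap = np.zeros((6, 8), dtype=int)
objmap[1:3, 1:3] = 1
objmap[3:5, 3:6] = 2
refined = drop_edge_crossing_labels(objmap, col_edges=np.array([0, 4, 8]))
```

Inside a real GridObjectRefiner, the same logic would read and write `image.objmap[:]` within `_operate()`.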
Implementing a Custom GridObjectRefiner
Subclass GridObjectRefiner and implement _operate():

from phenotypic.abc_ import GridObjectRefiner
from phenotypic import GridImage
import numpy as np

class MyGridRefiner(GridObjectRefiner):
    def __init__(self, max_width_fraction: float = 0.9):
        super().__init__()
        self.max_width_fraction = max_width_fraction

    @staticmethod
    def _operate(image: GridImage, max_width_fraction: float = 0.9) -> GridImage:
        # Get grid info
        col_edges = image.grid.get_col_edges()
        max_cell_width = (col_edges[1:] - col_edges[:-1]).max()

        # Measure object widths
        objmap = image.objmap[:]
        from skimage.measure import regionprops_table
        props = regionprops_table(objmap, properties=['label', 'bbox'])
        # ... compute widths and filter ...
        return image
Key Rules
_operate() must be static (for parallel execution).
All parameters except image must exist as instance attributes.
Only modify image.objmask[:] and image.objmap[:].
Access grid via image.grid (row/column edges, well positions, metadata).
Return the modified GridImage.
Grid Access Patterns
Within _operate(), access grid information via the GridImage accessor:

# Grid structure
nrows, ncols = image.grid.nrows, image.grid.ncols
row_edges = image.grid.get_row_edges()  # Row boundary positions (y-coordinates)
col_edges = image.grid.get_col_edges()  # Col boundary positions (x-coordinates)

# Per-object grid metadata (label, row, col, boundary flags)
grid_data = image.grid.info()  # pd.DataFrame with per-object grid info
Notes
GridImage input required: apply() enforces the GridImage type at runtime. Passing a plain Image raises GridImageInputError.
Protected components: The @validate_operation_integrity decorator ensures image.rgb, image.gray, and image.enh_gray cannot be modified. Only image.objmask and image.objmap can be refined.
Immutability by default: apply(image) returns a modified copy. Set inplace=True for memory-efficient in-place modification.
Grid structure assumption: Your algorithm should assume a valid, registered grid. If grid metadata is unreliable, refinement may fail or produce wrong results.
Static _operate() requirement: Must be static for parallel execution in pipelines.
Parameter matching: All _operate() parameters except image must exist as instance attributes for automatic parameter matching.
Examples
Remove objects larger than their grid cell width
from phenotypic.abc_ import GridObjectRefiner
from phenotypic import GridImage
import numpy as np

class OversizedObjectRemover(GridObjectRefiner):
    '''Remove objects exceeding cell dimensions.'''

    def __init__(self):
        super().__init__()

    @staticmethod
    def _operate(image: GridImage) -> GridImage:
        # Get grid boundaries
        col_edges = image.grid.get_col_edges()
        row_edges = image.grid.get_row_edges()
        max_width = (col_edges[1:] - col_edges[:-1]).max()
        max_height = (row_edges[1:] - row_edges[:-1]).max()

        # Measure objects
        objmap = image.objmap[:]
        from skimage.measure import regionprops_table
        props = regionprops_table(objmap, properties=['label', 'bbox'])

        # Filter oversized objects
        # Note: bbox columns are (min_row, min_col, max_row, max_col)
        import pandas as pd
        df = pd.DataFrame(props)
        df['height'] = df['bbox-2'] - df['bbox-0']
        df['width'] = df['bbox-3'] - df['bbox-1']
        keep = df[(df['width'] < max_width) & (df['height'] < max_height)]['label'].values

        # Refine map
        refined = np.where(np.isin(objmap, keep), objmap, 0)
        image.objmap[:] = refined
        return image

# Usage on a gridded plate image
from phenotypic.detect import OtsuDetector

image = GridImage.from_image_path('plate.jpg', nrows=8, ncols=12)
detected = OtsuDetector().apply(image)
cleaned = OversizedObjectRemover().apply(detected)
Chaining grid and non-grid refinements
from phenotypic import GridImage, ImagePipeline
from phenotypic.detect import OtsuDetector
from phenotypic.refine import SmallObjectRemover, GridOversizedObjectRemover

# Create detection pipeline with mixed refinements
pipeline = ImagePipeline()
pipeline.add(OtsuDetector())                    # Detect colonies
pipeline.add(SmallObjectRemover(min_size=100))  # Global size filter
pipeline.add(GridOversizedObjectRemover())      # Grid-aware filter

# Apply to gridded plate
image = GridImage.from_image_path('plate.jpg', nrows=8, ncols=12)
results = pipeline.operate([image])
refined_image = results[0]
print(f"Refined: {refined_image.objmap[:].max()} colonies")
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image, inplace=False)[source]#
Applies the operation to an image, either in-place or on a copy.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.GridOperation[source]#
Bases:
ImageOperation, ABC

Abstract base class for operations on grid-aligned plate images.
GridOperation is a marker abstract base class that enforces type safety for operations designed to work exclusively with GridImage objects. It’s a lightweight subclass of ImageOperation that overrides the apply() method to require a GridImage input instead of a generic Image.
What is GridOperation?
GridOperation exists to distinguish between two categories of image operations:
ImageOperation: Works on single, unaligned Image objects. The image may or may not have grid information. Used for general-purpose preprocessing, detection, and measurement. Examples: GaussianBlur, OtsuDetector, MeasureColorComposition.
GridOperation: Works only on GridImage objects that have grid structure information (row/column layout of wells on an agar plate). The operation assumes grid information is present and available. Used for grid-aware operations where well-level analysis or grid alignment is required. Examples: GridObjectDetector, GridCorrector, GridRefiner.
Why GridOperation exists
GridOperation provides three benefits:
Type Safety: The apply() method signature requires a GridImage argument, catching misuse at runtime if someone tries to apply a grid operation to a plain Image.
Intent Clarity: Developers can immediately see which operations require grid information, making the design space clear: “Use ImageOperation for general image ops, GridOperation for plate-specific grid-aware ops.”
Documentation: Allows documentation and tutorials to clearly distinguish operations by their input type requirements.
What is GridImage?
GridImage is a specialized Image subclass that adds grid structure information:
Inherits from Image: All standard image capabilities (RGB, grayscale, color spaces, object detection results, etc.) are available.
Adds grid field: Contains a grid attribute (a GridInfo object) storing the detected or specified grid layout (row/column positions, cell dimensions, rotation angle).
Arrayed plate context: Represents images of agar plates with samples arranged in regular grids (96-well, 384-well, 1536-well formats). Typical nrows=8, ncols=12 for 96-well plates.
Grid accessors: Via image.grid, provides row/column counts, well positions, and grid-related metadata.
GridOperation subclasses
Most concrete grid operations inherit from BOTH a specific operation ABC (like ObjectDetector) AND GridOperation to create specialized grid-aware variants:
GridOperation (marker ABC)
├── GridObjectDetector (inherits ObjectDetector + GridOperation)
│   ├── GridInstanceDetector, GridThresholdDetector, GridCannyDetector, ...
│   └── Use for: well-level colony detection on gridded plates
│
├── GridCorrector (inherits ImageCorrector + GridOperation)
│   ├── GridAligner, ...
│   └── Use for: grid alignment, rotation, color correction per-well
│
└── GridObjectRefiner (inherits ObjectRefiner + GridOperation)
    ├── GridSizeRefiner, ...
    └── Use for: per-well mask refinement, filtering by well location

When to use GridOperation vs ImageOperation
ImageOperation: Input is a plain Image with unknown grid state. Typical use: preprocessing (blur, contrast), general-purpose detection, color measurements that don’t depend on grid layout.
GridOperation: Input is a GridImage with detected/specified grid structure. Typical use: well-level analysis, grid-based refinement, operations that reference well positions or grid-aligned regions.
Overlap: Some operations work on both. E.g., a ColorComposition measurement can apply to an Image, but a GridColorComposition can specialize to per-well measurements on a GridImage.
When to subclass GridOperation
Subclass GridOperation when your operation:
Requires grid information: Needs to access image.grid to get well positions, row/column structure, or grid-aligned regions.
Operates on well-level data: Processes colonies at the well level rather than globally on the image (e.g., per-well filtering, well-based alignment).
Makes assumptions about grid structure: Your algorithm assumes a regular grid layout and would fail or produce nonsensical results on an image without grid info.
Otherwise, subclass ImageOperation instead. GridOperation operations are more specialized and less broadly applicable.
Multiple inheritance pattern
Most GridOperation subclasses use multiple inheritance:
class GridObjectDetector(ObjectDetector, GridOperation, ABC):
    '''Detects objects using grid structure.'''

    def apply(self, image: GridImage, inplace=False) -> GridImage:
        if not isinstance(image, GridImage):
            raise GridImageInputError
        return super().apply(image=image, inplace=inplace)
This combines:
ObjectDetector behavior: Sets image.objmask and image.objmap, with integrity checks.
GridOperation type safety: Requires GridImage input, enforced at runtime.
ABC pattern: Subclasses implement _operate() with grid-aware logic.
The key insight: GridOperation is just a type annotation layer over ImageOperation that makes the grid requirement explicit in the method signature.
Notes
GridOperation is a marker class with no implementation. It only overrides apply() to specify the GridImage type and enforce input validation.
GridImage inherits all Image functionality. Grid information is accessed via the grid accessor: image.grid.nrows, image.grid.ncols, etc.
If you're unsure whether your operation needs GridOperation, ask: "Does this algorithm fundamentally depend on grid structure?" If yes, use GridOperation. If it works equally well on plain Images, use ImageOperation.
GridImage is typically created with ImageGridHandler or GridFinder operations that detect grid structure. GridFinder is an ImageOperation, but the result is a GridImage suitable for downstream GridOperation subclasses.
Examples
Using a GridOperation subclass
from phenotypic import GridImage
from phenotypic.detect import GridObjectDetector

# Load plate image (96-well)
grid_image = GridImage('plate_scan.jpg', nrows=8, ncols=12)

# Apply a grid-aware detector (subclass of GridObjectDetector)
# This operation requires GridImage and uses well structure
detector = GridObjectDetector()        # Concrete subclass
detected = detector.apply(grid_image)  # Type-safe: GridImage -> GridImage

# Access detected colonies per well
for well_row in range(grid_image.nrows):
    for well_col in range(grid_image.ncols):
        # Per-well analysis available because operation is grid-aware
        pass
Understanding the type safety benefit
from phenotypic import Image, GridImage
from phenotypic.enhance import GaussianBlur
from phenotypic.detect import GridObjectDetector

image = Image.from_image_path('generic.jpg')  # Plain Image
grid_image = GridImage('plate.jpg')           # GridImage

# ImageOperation (GaussianBlur) accepts both
enhancer = GaussianBlur(sigma=2)
result1 = enhancer.apply(image)       # OK: Image -> Image
result2 = enhancer.apply(grid_image)  # OK: GridImage -> GridImage

# GridOperation requires GridImage
detector = GridObjectDetector()       # Subclass of GridOperation
result3 = detector.apply(grid_image)  # OK: GridImage -> GridImage
# result4 = detector.apply(image)     # ERROR: raises GridImageInputError
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image: GridImage, inplace: bool = False) GridImage[source]#
Applies the operation to an image, either in-place or on a copy.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.ImageCorrector[source]#
Bases:
ImageOperation, ABC

Abstract base class for whole-image transformation operations affecting all components.
ImageCorrector is a specialized subclass of ImageOperation for global image transformations that modify every image component together (rgb, gray, enh_gray, objmask, objmap). Unlike ImageEnhancer (modifies only enh_gray) or ObjectDetector/ObjectRefiner (modify only detection results), an ImageCorrector transforms the entire image geometry or structure, ensuring all components remain synchronized.
What is ImageCorrector?
ImageCorrector handles operations where it is impossible or meaningless to modify only a single component. When you rotate, warp, or apply perspective transforms to an image, the rgb and gray representations must change together, and any existing detection masks and maps must be rotated identically. ImageCorrector guarantees this synchronization without requiring manual alignment of separate components.
Key Design Principle: No Integrity Checks
Unlike ImageEnhancer and ObjectDetector, ImageCorrector uses no @validate_operation_integrity decorator. This is by design: since all components must change together in a coordinated way, there is nothing to “protect” or “validate”. The entire image is intentionally modified as a unit. The absence of integrity checks reflects this design, not a security weakness.
When to use ImageCorrector vs other operation types
Use the operation type that matches your modification scope:
ImageEnhancer: You only modify image.enh_gray (preprocessing for better detection). Use when: blur, contrast, edge detection, background subtraction. Example: GaussianBlur, CLAHE, RankMedianEnhancer.
ObjectDetector: You analyze image data and produce image.objmask and image.objmap. Use when: discovering and labeling colonies, particles, or features. Example: OtsuDetector, CannyDetector, RoundPeaksDetector.
ObjectRefiner: You edit the mask and map (filter by size, merge, remove objects). Use when: post-processing detection results to clean up false positives. Example: SizeFilter, ComponentMerger, MorphologyRefiner.
ImageCorrector (this class): You transform the entire image structure (geometry, orientation, coordinate system). All components change together. Use when: rotation, alignment, perspective correction, image resampling. Example: GridAligner (rotates image to align detected colonies with grid rows/columns).
Typical Use Cases
ImageCorrector is designed for operations that physically transform the image:
Rotation: Align a plate image with detected grid structure. Rotate to make colony rows parallel to image axes, improving grid-based analysis.
Perspective transformation: Correct camera angle or lens distortion.
Image resampling: Change resolution or interpolation method.
Global color correction: Apply white balance or color space mapping to entire image.
Alignment: Register image to a reference coordinate system.
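The "global color correction" use case can be illustrated with gray-world white balance, a standard technique sketched here in plain NumPy. The function name is illustrative; inside a real ImageCorrector it would run in `_operate()` and write back through `image.rgb[:]`.

```python
import numpy as np

def gray_world_balance(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the overall mean intensity."""
    rgb = rgb.astype(float)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means   # per-channel correction factor
    balanced = np.clip(rgb * gain, 0, 255)
    return balanced.astype(np.uint8)

# A flat image with a strong red cast
cast = np.full((4, 4, 3), (200, 100, 100), dtype=np.uint8)
balanced = gray_world_balance(cast)  # channels pulled toward a common mean
```

Because the same gain applies to every pixel, no existing detection mask needs resampling, which is what makes this a whole-image correction rather than an enhancement.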
Why ImageCorrector is Rare in Practice
Most image processing operations are targeted to specific aspects of the image:
Colony detection focuses on finding objects in image data.
Post-detection cleanup focuses on refining the mask/map.
Preprocessing focuses on making detection more robust.
Operations that transform the entire image structure are comparatively rare because:
Plate images are typically already well-oriented from the scanner/camera.
Most analysis works directly with image data as acquired (no rotation needed).
Grid-based alignment is a specialized step, not routine preprocessing.
However, when needed, ImageCorrector provides the correct abstraction.
How to Implement a Custom ImageCorrector
Inherit from ImageCorrector and implement the _operate() static method:

from phenotypic.abc_ import ImageCorrector
from phenotypic import Image

class MyRotator(ImageCorrector):
    def __init__(self, angle: float):
        super().__init__()
        self.angle = angle  # Instance attribute, matched to _operate()

    @staticmethod
    def _operate(image: Image, angle: float) -> Image:
        # Rotate ALL image components together
        image.rotate(angle_of_rotation=angle, mode='edge')
        return image

# Usage
rotator = MyRotator(angle=5.0)
rotated_image = rotator.apply(image)
Critical Implementation Detail: Updating All Components
Your _operate() method must ensure all image components are updated together:

@staticmethod
def _operate(image: Image, **kwargs) -> Image:
    # Rotate rgb and gray (color representation)
    image.rotate(angle_of_rotation=angle, mode='edge')

    # Rotate enh_gray (enhanced version for detection)
    # This is automatically handled by image.rotate()

    # Rotate objmask and objmap (detection results)
    # This is also automatically handled by image.rotate()
    return image
Access image data through accessors (never direct attributes):
Reading: image.rgb[:], image.gray[:], image.enh_gray[:], image.objmask[:]
Modifying: image.rgb[:] = new_data, image.objmap[:] = new_map
The Image class provides helper methods for common transformations:
image.rotate(angle_of_rotation, mode='constant') - Rotates all components identically
For custom transformations, apply the same operation to all components
Performance and Interpolation Considerations
When rotating or resampling:
Color data (rgb, gray): Use smooth interpolation (order=1 or higher) to preserve color gradients and colony boundaries.
Detection data (objmask, objmap): Use nearest-neighbor interpolation (order=0) to preserve discrete object identities. Object labels must remain integer-valued.
Enhanced grayscale (enh_gray): Use the same interpolation as color data for consistency.
Example with explicit interpolation control:
from scipy.ndimage import rotate as ndimage_rotate
from skimage.transform import rotate as skimage_rotate

# For rgb/gray: use higher-order interpolation
rotated_rgb = skimage_rotate(image.rgb[:], angle=5.0, order=1, preserve_range=True)

# For objmap: use nearest-neighbor (order=0)
rotated_objmap = ndimage_rotate(image.objmap[:], angle=5.0, order=0, reshape=False)
Edge Handling During Transformation
Transformations may introduce edge artifacts:
Rotation with 'edge' mode: Replicates image border pixels (minimal edge artifacts).
Rotation with ‘constant’ mode: Fills with a constant value (usually 0 for dark edge).
Rotation with ‘reflect’ mode: Reflects image at boundary (avoids abrupt discontinuities).
Choose the mode based on downstream analysis. For colony detection, ‘edge’ is often safest.
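The three fill behaviors can be previewed with `np.pad`, whose mode names ('edge', 'constant', 'reflect') match the modes described above. This is a conceptual illustration of what each mode writes into new border pixels, not the rotation code itself.

```python
import numpy as np

row = np.array([5, 6, 7])

edge    = np.pad(row, 2, mode="edge")                         # replicate border pixels
const   = np.pad(row, 2, mode="constant", constant_values=0)  # fill with a constant (0)
reflect = np.pad(row, 2, mode="reflect")                      # mirror at the boundary
```

For a dark-background plate, 'constant' with 0 can mimic agar-free regions, while 'edge' keeps border statistics close to the original image, which is why it is often the safer default for colony detection.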
Attributes
ImageCorrector itself has no public attributes. Subclasses define operation-specific parameters as instance attributes. All subclass attributes must be matched to the _operate() method signature for parallelization support.

Methods
Inherits all methods from ImageOperation. Key methods:
apply(image, inplace=False) - Execute the correction (default: returns a new image).
_operate(image, **kwargs) - Abstract method you implement with transformation logic.
Notes
Static method requirement: _operate() must be static to enable parallel execution in ImagePipeline. This allows the operation to be serialized and sent to worker processes.
Parameter matching: All _operate() parameters (except image) must exist as instance attributes with matching names. When apply() is called, these are automatically extracted and passed to _operate().
No copy by default: Operations return modified copies by default (inplace=False). The original image is unchanged unless inplace=True is explicitly passed.
Coordinate system changes: After an ImageCorrector, downstream operations may need to re-detect objects or re-measure features, since the spatial coordinate system has changed.
Grid alignment workflow: GridAligner is the canonical example—it rotates the entire GridImage to align detected colonies with the expected grid structure, then downstream operations proceed with the aligned image.
Examples
GridAligner: rotating a GridImage to align colonies with rows/columns
from phenotypic import GridImage, Image
from phenotypic.detect import RoundPeaksDetector
from phenotypic.correction import GridAligner

# Load plate image
image = Image.from_image_path('colony_plate.jpg')

# Detect colonies in original orientation
detected = RoundPeaksDetector().apply(image)

# Create GridImage for grid-aware operations
grid_image = GridImage(detected)
grid_image.detect_grid()  # Estimate grid structure

# Rotate entire image to align colonies with grid axes
aligner = GridAligner(axis=0)  # Align rows horizontally
aligned = aligner.apply(grid_image)

# Now all components (rgb, gray, enh_gray, masks, map) are rotated together
# Downstream grid-based operations work with aligned coordinates
print(f"Original shape: {grid_image.shape}")
print(f"Aligned shape: {aligned.shape}")
Custom perspective correction (conceptual example)
from phenotypic.abc_ import ImageCorrector
from phenotypic import Image
from skimage.transform import warp, ProjectiveTransform
import numpy as np

class PerspectiveCorrector(ImageCorrector):
    """Correct camera angle by applying a perspective transform."""

    def __init__(self, tilt_angle: float, direction: str = 'x'):
        super().__init__()
        self.tilt_angle = tilt_angle
        self.direction = direction

    @staticmethod
    def _operate(image: Image, tilt_angle: float, direction: str) -> Image:
        # Define perspective transform
        h, w = image.shape
        # (Implementation details omitted for brevity)
        # Apply warp to all components with appropriate interpolation
        return image

# Usage
corrector = PerspectiveCorrector(tilt_angle=10.0, direction='x')
corrected = corrector.apply(image)
# All image components are perspective-corrected together
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image: Image, inplace=False) Image#
Applies the operation to an image, either in-place or on a copy.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.ImageEnhancer[source]#
Bases:
ImageOperation, ABC

Abstract base class for preprocessing operations that improve colony detection through enhanced grayscale.
ImageEnhancer is the foundation for all preprocessing algorithms that modify only the enhanced grayscale channel (image.enh_gray) to improve colony visibility and detection quality. Unlike ImageCorrector (which transforms the entire Image), ImageEnhancer leaves the original RGB and grayscale data untouched, protecting image integrity while enabling targeted preprocessing.
What is ImageEnhancer?
ImageEnhancer operates on the principle of non-destructive preprocessing: all modifications are applied to image.enh_gray (a working copy of grayscale), while original image components (image.rgb, image.gray, image.objmask, image.objmap) remain protected and unchanged. This allows you to experiment with multiple enhancement chains without affecting raw data or detection results.
Role in the Detection Pipeline
ImageEnhancer sits at the beginning of the processing chain:
Raw Image (image.rgb, image.gray)
    ↓
ImageEnhancer(s) → Improve visibility, reduce noise
    ↓
ObjectDetector → Detect colonies/objects
    ↓
ObjectRefiner → Clean up detections (optional)

When you call enhancer.apply(image), you get back an Image with improved enh_gray but identical RGB/gray data, ready for detection algorithms to operate on enhanced contrast.
Why Enhancement Matters for Colony Phenotyping
Real agar plate imaging introduces several challenges:
Uneven illumination: Vignetting, shadows, and scanner lighting gradients make colonies appear faint in dark regions and over-exposed elsewhere.
Noise and texture: Scanner noise, agar granularity, condensation droplets, and dust create artifacts that confuse thresholding or edge detection.
Faint colonies: Small or translucent colonies blend into background, reducing detectability.
Poor contrast: Low-contrast colonies on dense plates require local contrast enhancement.
Enhancement operations target these issues in a domain-specific way: they preserve colony morphology while suppressing artifacts, enabling robust detection in downstream algorithms.
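As a minimal sketch of the illumination-compensation idea described above, the hypothetical helper below estimates a smooth background by block-averaging and subtracts it, so a ramp-lit plate becomes flat while a bright "colony" still stands out. Plain NumPy is used here; a real ImageEnhancer would do this inside `_operate()` against `image.enh_gray`.

```python
import numpy as np

def subtract_background(gray: np.ndarray, block: int = 4) -> np.ndarray:
    """Estimate a smooth background by block-averaging, then subtract it.

    Assumes image dimensions are divisible by `block`.
    """
    h, w = gray.shape
    coarse = gray.astype(float).reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    background = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    return np.clip(gray - background, 0, None)

# Synthetic plate: a left-to-right illumination ramp plus one bright "colony"
gray = np.tile(np.linspace(10, 50, 8), (8, 1))
gray[3, 3] += 100.0
corrected = subtract_background(gray, block=4)  # colony dominates the flattened image
```

In practice a Gaussian or rolling-ball background estimate is smoother than block means, but the subtract-the-low-frequency-component principle is the same.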
When to Use ImageEnhancer vs Other Operations
ImageEnhancer: You only modify enh_gray for preprocessing. Use for: noise reduction (Gaussian blur, median filtering), contrast enhancement (CLAHE), illumination compensation (background subtraction), edge enhancement (Sobel, Laplacian). Typical use: before detection.
ImageCorrector: You transform the entire Image (rotation, cropping, color correction). Typical use: geometric corrections or global color transformations.
ObjectDetector: You analyze image data and produce only objmask and objmap. Input image data is protected. Typical use: colony detection and labeling.
ObjectRefiner: You edit mask and map (filtering, merging, removing objects). Typical use: post-detection cleanup and validation.
Integrity Validation: Protection of Core Data
ImageEnhancer uses the @validate_operation_integrity decorator on the apply() method to guarantee that RGB and grayscale data are never modified:

@validate_operation_integrity('image.rgb', 'image.gray')
def apply(self, image: Image, inplace: bool = False) -> Image:
    return super().apply(image=image, inplace=inplace)
This decorator:
Calculates cryptographic signatures of image.rgb and image.gray before processing
Calls the parent apply() method to execute your _operate() implementation
Recalculates signatures after operation completes
Raises OperationIntegrityError if any protected component was modified
Note: Integrity validation only runs if the VALIDATE_OPS=True environment variable is set (development-time safety; disabled in production for performance).

Implementing a Custom ImageEnhancer
Subclass ImageEnhancer and implement a single method:
from phenotypic.abc_ import ImageEnhancer
from phenotypic import Image
from scipy.ndimage import gaussian_filter

class MyCustomEnhancer(ImageEnhancer):
    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma  # Instance attribute matched to _operate()

    @staticmethod
    def _operate(image: Image, sigma: float = 1.0) -> Image:
        # Modify ONLY enh_gray; read, process, write back
        enh = image.enh_gray[:]
        filtered = gaussian_filter(enh.astype(float), sigma=sigma)
        image.enh_gray[:] = filtered.astype(enh.dtype)
        return image
Key Rules for Implementation:
_operate() must be static (required for parallel execution in pipelines).
All parameters except image must exist as instance attributes with matching names (enables automatic parameter matching via _get_matched_operation_args()).
Only modify image.enh_gray[:]; all other components are protected.
Always use the accessor pattern: image.enh_gray[:] = new_data (never direct attribute assignment like image._enh_gray = ...).
Return the modified Image object.
Accessing and Modifying enh_gray
Within your _operate() method, use the accessor interface:
# Reading enhanced grayscale data
enh_data = image.enh_gray[:]                # Full array
region = image.enh_gray[10:50, 20:80]       # Slicing with NumPy syntax

# Modifying enhanced grayscale
image.enh_gray[:] = processed_array         # Full replacement
image.enh_gray[10:50, 20:80] = new_region   # Partial update
The accessor handles all consistency checks and automatic cache invalidation.
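The write-invalidates-caches behavior can be sketched with a tiny stand-in accessor. This is illustrative only: the class name, cache layout, and invalidation strategy below are hypothetical simplifications, not PhenoTypic's actual internals.

```python
import numpy as np

class ArrayAccessor:
    """Illustrative stand-in for a PhenoTypic accessor: NumPy-style slicing
    plus cache invalidation on every write (hypothetical, simplified)."""

    def __init__(self, array: np.ndarray, caches: dict):
        self._array = array
        self._caches = caches  # e.g. derived color spaces, object maps

    def __getitem__(self, key):
        return self._array[key]

    def __setitem__(self, key, value):
        self._array[key] = value
        self._caches.clear()  # any write invalidates derived results

caches = {"lab": "stale-conversion"}
enh_gray = ArrayAccessor(np.zeros((4, 4)), caches)
enh_gray[1:3, 1:3] = 7.0   # partial update through the accessor
assert enh_gray[1, 1] == 7.0
assert not caches          # derived caches were invalidated by the write
```

The key point is that reads are cheap pass-throughs while any write, full or partial, forces downstream results to be recomputed rather than served stale.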
The _make_footprint() Static Utility
ImageEnhancer provides a static helper for generating morphological structuring elements (footprints) used in morphological operations like erosion, dilation, and median filtering:
@staticmethod
def _make_footprint(shape: Literal["square", "diamond", "disk"], radius: int) -> np.ndarray:
    '''Creates a binary morphological footprint for image processing.'''
Footprint Shapes and When to Use Each
“disk”: Circular/isotropic footprint. Best for preserving rounded colony shapes and applying uniform processing in all directions. Use for: general-purpose smoothing, median filtering, dilations that expand colonies symmetrically.
“square”: Square footprint with 8-connectivity. Emphasizes horizontal/vertical edges and aligns with pixel grid. Use for: grid-aligned artifacts (imaging hardware stripe patterns), when processing speed matters (slightly faster than disk).
“diamond”: Diamond-shaped (rotated square) footprint with 4-connectivity. Creates a cross-like neighborhood pattern. Use for: specialized cases where diagonal connections should be de-emphasized; less common in practice.
The radius parameter controls the neighborhood size (in pixels). Larger radii affect more neighbors and produce broader effects (more noise suppression, but potential colony merging). Choose radius smaller than the minimum colony diameter to avoid destroying fine details.
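To make the shape and radius semantics concrete, here is a plausible NumPy reconstruction of the three footprints. This is a hypothetical re-implementation for illustration; the real _make_footprint() may differ in edge handling.

```python
import numpy as np

def make_footprint(shape: str, radius: int) -> np.ndarray:
    """Hypothetical reconstruction of _make_footprint for illustration."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    if shape == "square":
        return np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    if shape == "diamond":
        return (np.abs(yy) + np.abs(xx)) <= radius   # L1 ball, 4-connectivity
    if shape == "disk":
        return (yy ** 2 + xx ** 2) <= radius ** 2    # Euclidean ball
    raise ValueError(f"unknown footprint shape: {shape}")

# radius=1: square is the full 3x3 block; diamond and disk reduce to a cross
assert make_footprint("square", 1).sum() == 9
assert make_footprint("diamond", 1).sum() == 5
assert make_footprint("disk", 2).sum() == 13
```

Note how all three footprints span the same (2*radius + 1) window; they differ only in which neighbors inside that window participate in the operation.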
Common Morphological Patterns
Use _make_footprint() with morphological operations from scipy.ndimage or skimage.morphology:
from scipy.ndimage import binary_dilation, binary_erosion
from phenotypic.abc_ import ImageEnhancer

disk_fp = ImageEnhancer._make_footprint('disk', radius=5)

# Erosion: shrink bright regions (removes small colonies/noise)
eroded = binary_erosion(binary_image, structure=disk_fp)

# Dilation: expand bright regions (closes holes, merges nearby colonies)
dilated = binary_dilation(binary_image, structure=disk_fp)
When and Why to Chain Multiple Enhancements
Enhancement operations are typically chained together to address multiple issues in sequence:
# Example pipeline: handle uneven illumination + noise
# Step 1: Remove background gradients
result = RollingBallRemoveBG(radius=50).apply(image)
# Step 2: Boost local contrast for faint colonies
result = CLAHE(kernel_size=50, clip_limit=0.02).apply(result)
# Step 3: Smooth remaining noise
result = GaussianBlur(sigma=2).apply(result)
# Step 4: Detect colonies in enhanced grayscale
result = OtsuDetector().apply(result)
Rationale for chaining:
Order matters: Background correction before contrast enhancement yields better results than vice versa.
Divide and conquer: One enhancer per problem (illumination, noise, contrast) is more maintainable and tunable than one monolithic algorithm.
No data loss: Each enhancer preserves the original RGB/gray, so intermediate results can be inspected and validated.
Reproducibility: Chained operations can be serialized to YAML for documentation and reuse across experiments.
Use ImagePipeline for convenient chaining:
from phenotypic import Image, ImagePipeline
from phenotypic.enhance import RollingBallRemoveBG, CLAHE, GaussianBlur
from phenotypic.detect import OtsuDetector

pipeline = ImagePipeline()
pipeline.add(RollingBallRemoveBG(radius=50))
pipeline.add(CLAHE(kernel_size=50, clip_limit=0.02))
pipeline.add(GaussianBlur(sigma=2))
pipeline.add(OtsuDetector())

# Process a batch of images with automatic parallelization
images = [Image.from_image_path(f) for f in plate_scans]
results = pipeline.operate(images)
Methods and Attributes
- Attributes: None at the ImageEnhancer level; subclasses define enhancement parameters as instance attributes (e.g., sigma, kernel_size, clip_limit).
- apply(image, inplace=False)[source]#
Applies the enhancement to an image. Returns a modified Image with enhanced enh_gray but unchanged RGB/gray/objects. Handles copy/inplace logic and validates data integrity.
- _operate(image, **kwargs)#
Abstract static method implemented by subclasses. Performs the actual enhancement algorithm. Parameters are automatically matched to instance attributes.
- _make_footprint(shape, radius)[source]#
Static utility that creates a binary morphological footprint (disk, square, or diamond) for use in morphological operations.
Notes
Protected components: The @validate_operation_integrity decorator ensures that image.rgb and image.gray cannot be modified. Only image.enh_gray can be changed.
Immutability by default: apply(image) returns a modified copy by default. Set inplace=True for memory-efficient in-place modification.
Static _operate() requirement: The _operate() method must be static to support parallel execution in pipelines.
Parameter matching for parallelization: All _operate() parameters except image must exist as instance attributes. When apply() is called, these values are extracted and passed to _operate().
Accessor pattern: Always use image.enh_gray[:] = new_data to modify enhanced grayscale. Never use direct attribute assignment.
Automatic cache invalidation: When you modify image.enh_gray[:], the Image’s internal caches (e.g., color space conversions, object maps) are automatically invalidated to prevent stale results.
Examples
Implementing a custom noise-reduction enhancer with Gaussian blur
from phenotypic.abc_ import ImageEnhancer
from phenotypic import Image
from scipy.ndimage import gaussian_filter
import numpy as np

class CustomGaussianEnhancer(ImageEnhancer):
    '''Enhance by applying Gaussian blur to reduce noise.'''

    def __init__(self, sigma: float = 1.5):
        super().__init__()
        self.sigma = sigma

    @staticmethod
    def _operate(image: Image, sigma: float = 1.5) -> Image:
        enh = image.enh_gray[:]
        # Convert to float for processing
        filtered = gaussian_filter(enh.astype(float), sigma=sigma)
        # Restore original dtype
        image.enh_gray[:] = filtered.astype(enh.dtype)
        return image

# Usage
from phenotypic.detect import OtsuDetector

image = Image.from_image_path('agar_plate.jpg')
enhancer = CustomGaussianEnhancer(sigma=2.0)
enhanced = enhancer.apply(image)            # Original unchanged
detected = OtsuDetector().apply(enhanced)   # Detect in enhanced data
colonies = detected.objects
print(f"Detected {len(colonies)} colonies")
Morphological operations using _make_footprint for colony refinement
from phenotypic.abc_ import ImageEnhancer
from phenotypic import Image
from scipy.ndimage import binary_closing, binary_opening
import numpy as np

class MorphologicalEnhancer(ImageEnhancer):
    '''Enhance by applying morphological closing/opening to fill holes and remove noise.'''

    def __init__(self, operation: str = 'closing', radius: int = 3):
        super().__init__()
        self.operation = operation  # 'closing' or 'opening'
        self.radius = radius

    @staticmethod
    def _operate(image: Image, operation: str = 'closing', radius: int = 3) -> Image:
        enh = image.enh_gray[:]
        # Create a disk footprint for isotropic processing
        footprint = ImageEnhancer._make_footprint('disk', radius)
        # Apply morphological operation to a binary image
        binary = enh > enh.mean()
        if operation == 'closing':
            # Close small holes within colonies
            refined = binary_closing(binary, structure=footprint)
        elif operation == 'opening':
            # Remove small noise regions
            refined = binary_opening(binary, structure=footprint)
        else:
            return image
        # Convert back to grayscale (refined mask as 0/255)
        image.enh_gray[:] = (refined * 255).astype(enh.dtype)
        return image

# Usage
enhancer = MorphologicalEnhancer(operation='closing', radius=5)
result = enhancer.apply(image)
Chaining multiple enhancements to handle complex agar plate imaging conditions
from phenotypic import Image, ImagePipeline
from phenotypic.enhance import RollingBallRemoveBG, CLAHE, GaussianBlur
from phenotypic.detect import OtsuDetector

# Scenario: Agar plate image with vignetting, dust, and low contrast

# Build a processing pipeline
pipeline = ImagePipeline()
# Step 1: Remove illumination gradient (vignetting)
pipeline.add(RollingBallRemoveBG(radius=80))
# Step 2: Boost local contrast for faint colonies
pipeline.add(CLAHE(kernel_size=50, clip_limit=0.02))
# Step 3: Smooth dust and scanner noise
pipeline.add(GaussianBlur(sigma=1.5))
# Step 4: Detect colonies
pipeline.add(OtsuDetector())

# Process a batch of plate images
image_paths = ['plate1.tif', 'plate2.tif', 'plate3.tif']
images = [Image.from_image_path(p) for p in image_paths]
results = pipeline.operate(images)

# Each result has cleaned detection results
for i, result in enumerate(results):
    colonies = result.objects
    print(f"Plate {i}: {len(colonies)} colonies detected")
Using different footprint shapes for specialized morphological filtering
from phenotypic.abc_ import ImageEnhancer
from phenotypic import Image
from skimage.filters.rank import median
from skimage.util import img_as_ubyte, img_as_float
import numpy as np

class SelectiveMedianEnhancer(ImageEnhancer):
    '''Enhance by applying median filtering with configurable footprint shape.'''

    def __init__(self, shape: str = 'disk', radius: int = 3):
        super().__init__()
        self.shape = shape  # 'disk', 'square', or 'diamond'
        self.radius = radius

    @staticmethod
    def _operate(image: Image, shape: str = 'disk', radius: int = 3) -> Image:
        enh = image.enh_gray[:]
        # Create footprint with specified shape
        footprint = ImageEnhancer._make_footprint(shape, radius)
        # Apply median (rank) filter; convert to uint8 for rank filter compatibility
        as_uint8 = img_as_ubyte(enh)
        filtered = median(as_uint8, footprint=footprint)
        # Restore original dtype
        image.enh_gray[:] = img_as_float(filtered) if enh.dtype == np.float64 else filtered
        return image

# Usage with different shapes
image = Image.from_image_path('plate.jpg')

# Isotropic smoothing (preserves round colony shapes)
result1 = SelectiveMedianEnhancer(shape='disk', radius=3).apply(image)

# Grid-aligned smoothing (for hardware artifacts)
result2 = SelectiveMedianEnhancer(shape='square', radius=3).apply(image)

# Both preserve original image.rgb and image.gray
assert np.array_equal(image.gray[:], result1.gray[:])
assert np.array_equal(image.rgb[:], result1.rgb[:])
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image, inplace=False)[source]#
Applies the operation to an image, either in-place or on a copy.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.ImageOperation[source]#
Bases: BaseOperation, LazyWidgetMixin, ABC

Core abstract base class for all single-image transformation operations in PhenoTypic.
ImageOperation is the foundation of PhenoTypic’s algorithm system. It defines the interface for algorithms that transform an Image object by modifying specific components. Unlike GridOperation (which handles grid-aligned operations on plate images), ImageOperation acts on a single image independently.
What is ImageOperation?
ImageOperation manages the distinction between:
apply() method: The user-facing interface that handles memory management (copy vs. in-place) and integrity validation
_operate() method: The abstract algorithm-specific method that subclasses implement with the actual processing logic
This separation ensures consistent behavior, automatic memory tracking, and validation across all image operations.
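The apply()/_operate() split is a classic template-method design; it can be sketched in a few lines. The classes and the dict-based "image" below are hypothetical stand-ins for illustration — the real classes add memory tracking, integrity validation, and accessor-based data access.

```python
import copy
import inspect

class OperationSketch:
    """Hypothetical template-method skeleton: apply() owns copy/in-place and
    parameter plumbing; subclasses supply only a static _operate()."""

    def apply(self, image, inplace=False):
        target = image if inplace else copy.deepcopy(image)
        matched = self._get_matched_operation_args()
        return self._operate(target, **matched)

    def _get_matched_operation_args(self):
        # Match _operate() parameter names to same-named instance attributes
        params = inspect.signature(self._operate).parameters
        return {name: getattr(self, name) for name in params if name != "image"}

class AddOffset(OperationSketch):
    def __init__(self, offset):
        self.offset = offset  # must match the _operate() parameter name

    @staticmethod
    def _operate(image, offset):
        image["enh_gray"] = [v + offset for v in image["enh_gray"]]
        return image

img = {"enh_gray": [1, 2, 3]}
out = AddOffset(offset=10).apply(img)     # default: operates on a copy
assert out["enh_gray"] == [11, 12, 13]
assert img["enh_gray"] == [1, 2, 3]       # original untouched
```

Because _operate() is static and all of its non-image parameters live on the instance, the operation can be shipped to worker processes and re-invoked with the same arguments.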
The Operation Hierarchy
ImageOperation has four main subclass categories, each modifying different image components with different integrity guarantees:
ImageOperation (this class)
├── ImageEnhancer
│   └── Modifies ONLY image.enh_gray
│       ├── GaussianBlur, CLAHE, RankMedianEnhancer, ...
│       └── Use for: noise reduction, contrast, edge sharpening
│
├── ObjectDetector
│   └── Modifies ONLY image.objmask and image.objmap
│       ├── OtsuDetector, CannyDetector, RoundPeaksDetector, ...
│       └── Use for: discovering and labeling colonies/particles
│
├── ObjectRefiner
│   └── Modifies ONLY image.objmask and image.objmap
│       ├── Size filtering, merging, removing objects
│       └── Use for: cleaning up detection results
│
└── ImageCorrector
    └── Modifies ALL image components
        ├── GridAligner, rotation, color correction
        └── Use for: general-purpose transformations

When to inherit from each subclass:
ImageEnhancer: You only modify image.enh_gray (enhanced grayscale). Original image.rgb and image.gray are protected by integrity checks. Typical use: preprocessing before detection.
ObjectDetector: You analyze image data and produce only image.objmask (binary mask) and image.objmap (labeled object map). Input image data is protected. Typical use: colony detection, particle finding.
ObjectRefiner: You edit the mask and map (filtering, merging, removing). Input image data is protected. Typical use: post-detection cleanup.
ImageCorrector: You transform the entire Image (every component may change). No integrity checks are performed. Typical use: rotation, alignment, global color correction.
Never inherit directly from ImageOperation. Always choose one of the four subclasses above, as each provides appropriate integrity validation and shared utilities (e.g., _make_footprint() for morphology operations).

How apply() and _operate() work together
The user-facing method apply(image, inplace=False) is the entry point:

1. Calls _apply_to_single_image() with the operation logic
2. Handles copy/inplace semantics:
   - If inplace=False (default): the Image is copied before modification, original unchanged
   - If inplace=True: the Image is modified in-place for memory efficiency
3. Extracts matched parameters via _get_matched_operation_args():
   - Matches operation instance attributes to _operate() method parameters
   - Enables parallel execution in pipelines
4. Calls your _operate() static method with the image and matched parameters
5. Validates integrity (subclass-specific via @validate_operation_integrity):
   - Detects unexpected modifications to protected image components
   - Only enabled if VALIDATE_OPS=True in environment

Your subclass only needs to implement _operate(image, **kwargs) -> Image.

The _operate() method contract
_operate() is a static method (required for parallel execution):

Signature:

@staticmethod
def _operate(image: Image, param1, param2=default) -> Image

Parameters: All parameters except image are automatically matched to instance attributes via the _get_matched_operation_args() system
Behavior: Modify only the allowed image components (determined by subclass)
Returns: The modified Image object
Must be static: This enables serialization and parallel execution
Example parameter matching:
class MyEnhancer(ImageEnhancer):
    def __init__(self, sigma: float):
        super().__init__()
        self.sigma = sigma  # Instance attribute

    @staticmethod
    def _operate(image: Image, sigma: float = 1.0) -> Image:
        # When apply() is called, 'sigma' is automatically passed from self.sigma
        image.enh_gray[:] = gaussian_filter(image.enh_gray[:], sigma=sigma)
        return image
The _apply_to_single_image() static method retrieves sigma from the instance (via _get_matched_operation_args()) and passes it to _operate().

Data access through accessors
Within _operate(), always access image data through accessors (never direct attribute modification). This ensures lazy evaluation, caching, and consistency:

Reading data:

image.enh_gray[:] - Enhanced grayscale (for enhancers)
image.rgb[:] - Original RGB data
image.gray[:] - Luminance grayscale
image.objmask[:] - Binary object mask
image.objmap[:] - Labeled object map
image.color.Lab[:], image.color.HSV[:] - Color spaces

Modifying data:

image.enh_gray[:] = new_array - Set enhanced grayscale
image.objmask[:] = binary_array - Set object mask
image.objmap[:] = labeled_array - Set object map
Never do this:
# ✗ WRONG - direct attribute modification
image.rgb = new_data
image._enh_gray = new_data
image.objects_handler.enh_gray = new_data
Do this instead:
# ✓ CORRECT - use accessors
image.enh_gray[:] = new_data
image.objmask[:] = new_mask
Integrity validation with @validate_operation_integrity
Intermediate subclasses use the @validate_operation_integrity decorator to enforce that modifications are limited to specific components. For example:

class ImageEnhancer(ImageOperation, ABC):
    @validate_operation_integrity('image.rgb', 'image.gray')
    def apply(self, image: Image, inplace=False) -> Image:
        return super().apply(image=image, inplace=inplace)
This decorator:

Calculates MurmurHash3 signatures of protected arrays before apply()
Calls the parent apply() method
Recalculates signatures after operation completes
Raises OperationIntegrityError if any protected component changed

Only enabled if VALIDATE_OPS=True in environment (for performance).

Operation chaining and pipelines
Operations are designed for method chaining:
result = (GaussianBlur(sigma=2).apply(image)
          .apply_operation(OtsuDetector()))
Or use ImagePipeline for multi-step workflows with automatic benchmarking:

pipeline = ImagePipeline()
pipeline.add(GaussianBlur(sigma=2))
pipeline.add(OtsuDetector())
pipeline.add(GridFinder())
results = pipeline.operate([image1, image2, image3])
Parallel execution support
ImageOperation’s static method design enables parallel execution. When ImagePipeline runs with multiple images, it:

1. Extracts operation parameters via _get_matched_operation_args()
2. Serializes the operation instance (attributes only)
3. Sends to worker processes
4. Workers call _apply_to_single_image() in parallel

This is why _operate() must be static and all parameters must be instance attributes matching the method signature.

- Attributes: None; all operation state is stored in subclass instances as attributes.
- apply(image, inplace=False)[source]#
User-facing method that applies the operation. Handles copy/inplace logic and parameter matching.
- _operate(image, **kwargs)[source]#
Abstract static method implemented by subclasses with algorithm logic. Parameters are automatically extracted from instance attributes via _get_matched_operation_args().
- _apply_to_single_image(cls_name, image, operation, inplace, matched_args)[source]#
Static method that performs the actual apply operation. Handles copy/inplace logic and error handling. Called internally by apply(). Also called directly by ImagePipeline for parallel execution.
Notes
No direct Image attribute modification: Never write to image.rgb, image.gray, or other attributes directly. Use the accessor pattern (image.component[:] = new_data).
Immutability by default: Operations return modified copies by default. Original image is unchanged unless inplace=True is explicitly passed.
Static _operate() is required: The method must be static (decorated with @staticmethod) to support parallel execution in pipelines. This enables ImagePipeline to serialize operations and execute them in worker processes.
Parameter matching for parallelization: All _operate() parameters (except image) must exist as instance attributes with the same name. When apply() is called, _get_matched_operation_args() extracts these values and passes them to _operate(). This is why subclasses store operation parameters as self.param_name in __init__.
Automatic memory/performance tracking: BaseOperation (parent class) automatically tracks memory usage and execution time when the logger is configured for INFO level or higher. Disable by setting the logger to WARNING.
Cross-platform compatibility: Some dependencies (rawpy, pympler) are platform-specific. Code must gracefully handle missing optional dependencies.
Integrity validation is optional: The @validate_operation_integrity decorator only runs if VALIDATE_OPS=True is set in the environment. This provides development-time safety without production overhead.
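Conceptually, the integrity check hashes protected arrays before and after the operation and raises if any signature changed. The sketch below is illustrative only: hashlib.sha256 stands in for the library's MurmurHash3 signatures, validation always runs here instead of being gated on VALIDATE_OPS, and FakeImage is a hypothetical stand-in for Image.

```python
import functools
import hashlib
import numpy as np

class OperationIntegrityError(RuntimeError):
    """Stand-in for the library's integrity error."""

def validate_integrity(*protected_attrs):
    # Hash protected arrays before/after apply(); raise if any changed
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, image, **kwargs):
            before = {a: hashlib.sha256(getattr(image, a).tobytes()).hexdigest()
                      for a in protected_attrs}
            result = func(self, image, **kwargs)
            for attr, sig in before.items():
                if hashlib.sha256(getattr(result, attr).tobytes()).hexdigest() != sig:
                    raise OperationIntegrityError(f"protected '{attr}' was modified")
            return result
        return wrapper
    return decorator

class FakeImage:  # hypothetical stand-in for phenotypic.Image
    def __init__(self):
        self.rgb = np.zeros((2, 2, 3), dtype=np.uint8)
        self.enh_gray = np.zeros((2, 2), dtype=np.uint8)

class GoodOp:
    @validate_integrity("rgb")
    def apply(self, image):
        image.enh_gray[:] = 5   # allowed: only enh_gray changes
        return image

class BadOp:
    @validate_integrity("rgb")
    def apply(self, image):
        image.rgb[:] = 1        # violates the contract
        return image

GoodOp().apply(FakeImage())     # passes validation
try:
    BadOp().apply(FakeImage())
    raised = False
except OperationIntegrityError:
    raised = True
assert raised
```

Hashing rather than copying the protected arrays keeps the check cheap in memory, which matters for large plate scans.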
Examples
Implementing a custom ImageEnhancer with parameter matching
from phenotypic.abc_ import ImageEnhancer
from phenotypic import Image
from scipy.ndimage import gaussian_filter

class GaussianEnhancer(ImageEnhancer):
    '''Custom enhancer applying Gaussian blur to enh_gray.'''

    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma  # Instance attribute matched to _operate()

    @staticmethod
    def _operate(image: Image, sigma: float = 1.0) -> Image:
        '''Apply Gaussian blur to enh_gray.'''
        # Read enhanced grayscale
        enh = image.enh_gray[:]
        # Apply Gaussian filter
        blurred = gaussian_filter(enh.astype(float), sigma=sigma)
        # Modify enh_gray through accessor
        image.enh_gray[:] = blurred.astype(enh.dtype)
        return image

# Usage
enhancer = GaussianEnhancer(sigma=2.5)
enhanced = enhancer.apply(image)                 # Original unchanged
enhanced_inplace = enhancer.apply(image, inplace=True)  # Original modified
Implementing a custom ObjectDetector
from phenotypic.abc_ import ObjectDetector
from phenotypic import Image
from skimage.feature import peak_local_max
from skimage.measure import label as measure_label
import numpy as np

class PeakDetector(ObjectDetector):
    '''Detector using local peak finding to locate colonies.'''

    def __init__(self, min_distance: int = 10, threshold_abs: int = 100):
        super().__init__()
        self.min_distance = min_distance
        self.threshold_abs = threshold_abs

    @staticmethod
    def _operate(image: Image, min_distance: int = 10, threshold_abs: int = 100) -> Image:
        '''Find peaks in enh_gray and create object mask/map.'''
        # Find local maxima (colony peaks)
        coords = peak_local_max(
            image.enh_gray[:],
            min_distance=min_distance,
            threshold_abs=threshold_abs
        )
        # Create binary mask from peaks
        mask = np.zeros(image.enh_gray[:].shape, dtype=bool)
        for y, x in coords:
            mask[y, x] = True
        # Create labeled map from mask
        labeled_map = measure_label(mask)
        # Set detection results
        image.objmask[:] = mask
        image.objmap[:] = labeled_map
        return image

# Usage - automatic integrity validation in ObjectDetector
detector = PeakDetector(min_distance=15, threshold_abs=120)
detected = detector.apply(image)
colonies = detected.objects
print(f"Detected {len(colonies)} colonies")
Understanding inplace parameter and memory efficiency
from phenotypic.enhance import GaussianBlur
from phenotypic import Image

image = Image.from_image_path('colony_plate.jpg')
enhancer = GaussianBlur(sigma=2.0)

# Default: inplace=False (safe, creates copy)
enhanced = enhancer.apply(image)
print(f"Same object? {id(image) == id(enhanced)}")  # False

# For memory efficiency with large images
result = enhancer.apply(image, inplace=True)
print(f"Same object? {id(image) == id(result)}")  # True

# inplace=True is useful in pipelines with many large images
# to minimize memory overhead, but modifies the original
Using operations in a processing pipeline
from phenotypic import Image, ImagePipeline
from phenotypic.enhance import GaussianBlur
from phenotypic.detect import OtsuDetector
from phenotypic.grid import GridFinder

# Load image
image = Image.from_image_path('colony_plate.jpg')

# Sequential chaining
enhanced = GaussianBlur(sigma=2).apply(image)
detected = OtsuDetector().apply(enhanced)
grid = GridFinder().apply(detected)

# Or use ImagePipeline for batch processing
pipeline = ImagePipeline()
pipeline.add(GaussianBlur(sigma=2))
pipeline.add(OtsuDetector())
pipeline.add(GridFinder())

# Process multiple images with automatic parallelization
images = [Image.from_image_path(f) for f in image_files]
results = pipeline.operate(images)  # Results are fully processed images
How parameter matching enables parallel execution
from phenotypic.abc_ import ImageOperation
from phenotypic import Image

class CustomThreshold(ImageOperation):
    def __init__(self, threshold: int, min_size: int = 5):
        super().__init__()
        self.threshold = threshold  # Matched to _operate
        self.min_size = min_size    # Matched to _operate

    @staticmethod
    def _operate(image: Image, threshold: int, min_size: int = 5) -> Image:
        # 'threshold' and 'min_size' automatically passed
        binary = image.enh_gray[:] > threshold
        image.objmask[:] = binary
        return image

# When apply() is called:
op = CustomThreshold(threshold=100, min_size=10)
# apply() internally:
# 1. Calls _get_matched_operation_args()
# 2. Extracts: {'threshold': 100, 'min_size': 10}
# 3. Calls _apply_to_single_image(..., matched_args=...)
# 4. _apply_to_single_image passes kwargs to _operate()
result = op.apply(image)
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image: Image, inplace=False) Image[source]#
Applies the operation to an image, either in-place or on a copy.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.MeasureFeatures[source]#
Bases: BaseOperation, ABC

Extract quantitative measurements from detected colony objects in images.
MeasureFeatures is the abstract base class for all feature extraction operations in PhenoTypic. Unlike ImageOperation classes that return modified images, MeasureFeatures subclasses extract numerical measurements from detected objects and return pandas DataFrames.
Design Principles:
This class follows a strict pattern where subclasses implement minimal code:
__init__: Define parameters and configuration for your measurement
_operate(image: Image) -> pd.DataFrame: Implement your measurement logic
Everything else (type validation, metadata handling, exception handling) is handled by the measure() method
This ensures consistent behavior, robust error handling, and automatic memory profiling across all measurement operations.
How It Works:
Users call the public API method measure(image, include_meta=False), which:
Validates input (Image object with detected objects)
Extracts operation parameters using introspection
Calls _operate() with matched parameters
Optionally merges image metadata into results
Returns a pandas DataFrame with object labels in the first column
Subclasses only override _operate() and __init__(). The measure() method provides automatic validation, exception handling, and metadata merging.
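That division of labor can be sketched as follows. These are hypothetical minimal stand-ins (a dict plays the role of Image); the real measure() also performs introspection-based parameter matching and exception handling.

```python
import pandas as pd

LABEL_COL = "OBJECT.LABEL"

class MeasureSketch:
    """Hypothetical skeleton of the measure()/_operate() split."""

    def measure(self, image, include_meta=False):
        df = self._operate(image)               # subclass-specific logic
        assert df.columns[0] == LABEL_COL       # enforce output contract
        if include_meta:                        # optionally merge metadata
            for key, value in image.get("metadata", {}).items():
                df[key] = value
        return df

class MeasureArea(MeasureSketch):
    def _operate(self, image):
        # 'image' stands in for an Image; objmap is a flat label list here
        labels = sorted(set(image["objmap"]) - {0})   # 0 = background
        areas = [image["objmap"].count(lbl) for lbl in labels]
        return pd.DataFrame({LABEL_COL: labels, "Area": areas})

img = {"objmap": [0, 1, 1, 2, 2, 2], "metadata": {"plate": "P1"}}
df = MeasureArea().measure(img, include_meta=True)
assert list(df["Area"]) == [2, 3]
assert (df["plate"] == "P1").all()
```

Keeping validation and metadata merging in the base class means each new measurer is only a few lines of domain-specific logic.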
Accessing Image Data in _operate():
Within your _operate() implementation, access image data through accessors (lazy-evaluated, cached):
image.gray[:] - Grayscale intensity values (weighted luminance)
image.enh_gray[:] - Enhanced grayscale (preprocessed for analysis)
image.objmask[:] - Binary mask of detected objects (1 = object, 0 = background)
image.objmap[:] - Labeled integer array (label ID per object, 0 = background)
image.color.Lab[:] - CIE Lab color space (perceptually uniform)
image.color.XYZ[:] - CIE XYZ color space
image.color.HSV[:] - HSV color space (hue, saturation, value)
image.objects - High-level object interface (iterate with for loop)
image.num_objects - Count of detected objects
DataFrame Output Format:
Your _operate() method must return a pandas DataFrame with:
First column: OBJECT.LABEL (integer object IDs matching image.objmap[:])
Subsequent columns: Measurement results (numeric values)
One row per detected object
Example structure:
OBJECT.LABEL | Area | MeanIntensity | StdDev
-------------|------|---------------|-------
1            | 1024 | 128.5         | 12.3
2            |  956 | 135.2         | 14.1
3            | 1101 | 120.8         | 11.9
Static Helper Methods:
This class provides 20+ static helper methods for common measurements on labeled objects:
Statistical: _calculate_mean(), _calculate_median(), _calculate_stddev(), _calculate_variance(), _calculate_sum(), _calculate_center_of_mass()
Extrema: _calculate_minimum(), _calculate_maximum(), _calculate_extrema()
Quantiles: _calculate_q1(), _calculate_q3(), _calculate_iqr()
Advanced: _calculate_coeff_variation(), _calculate_min_extrema(), _calculate_max_extrema()
Custom: _funcmap2objects() (apply arbitrary functions to labeled regions)
Utility: _ensure_array() (normalize scalar/array inputs)
All helpers accept an objmap parameter (labeled integer array). If None, the entire non-zero region is treated as a single object.
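For intuition, equivalent per-label statistics can be computed directly with scipy.ndimage, which mirrors how helpers like _calculate_mean() take an array plus an objmap. This is a sketch of the concept, not the helpers' actual internals.

```python
import numpy as np
from scipy import ndimage

gray = np.array([[10, 10, 50],
                 [10,  0, 50],
                 [ 0,  0, 90]], dtype=float)
objmap = np.array([[1, 1, 2],
                   [1, 0, 2],
                   [0, 0, 3]])   # 0 = background, object labels 1..3

labels = np.unique(objmap)
labels = labels[labels != 0]     # exclude background

# Per-object mean and sum over the labeled regions
means = ndimage.mean(gray, labels=objmap, index=labels)
sums = ndimage.sum(gray, labels=objmap, index=labels)

assert list(means) == [10.0, 50.0, 90.0]
assert list(sums) == [30.0, 100.0, 90.0]
```

Passing the full label array plus an index of labels computes all objects in one vectorized call, which is far faster than looping over boolean masks per object.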
Example: Creating a Custom Measurer for Bacterial Colonies
Implementing a custom colony measurement class
from phenotypic.abc_ import MeasureFeatures
from phenotypic.tools.constants_ import OBJECT
import pandas as pd
import numpy as np

class MeasureCustom(MeasureFeatures):
    """Measure custom morphology metrics for microbial colonies."""

    def __init__(self, intensity_threshold=100):
        """Initialize with intensity threshold for bright pixels."""
        self.intensity_threshold = intensity_threshold

    def _operate(self, image):
        """Extract bright region area and mean intensity."""
        gray = image.enh_gray[:]
        objmap = image.objmap[:]

        # Identify bright pixels within each object
        bright_mask = gray > self.intensity_threshold

        # Count bright pixels per object
        bright_area = self._calculate_sum(
            array=bright_mask.astype(int),
            objmap=objmap
        )

        # Mean intensity of bright pixels
        bright_intensity = self._funcmap2objects(
            func=lambda arr: np.mean(arr[arr > self.intensity_threshold]),
            out_dtype=float,
            array=gray,
            objmap=objmap,
            default=np.nan
        )

        # Create results DataFrame
        results = pd.DataFrame({
            'BrightArea': bright_area,
            'BrightMeanIntensity': bright_intensity,
        })
        results.insert(0, OBJECT.LABEL, image.objects.labels2series())
        return results

# Usage:
from phenotypic import Image

image = Image.from_image_path('colony_plate.jpg')
# (After detection...)
measurer = MeasureCustom(intensity_threshold=150)
measurements = measurer.measure(image)  # Returns DataFrame
When to Use MeasureFeatures vs ImageOperation:
Use MeasureFeatures when you want to extract numerical metrics from objects (returns DataFrame). Use ImageOperation (ImageEnhancer, ImageCorrector, ObjectDetector) when you want to modify the image (returns Image).
Microbe Phenotyping Context:
In arrayed microbial growth assays, measurements extract colony phenotypes: morphology (size, shape, compactness), color (pigmentation, growth medium binding), texture (biofilm formation, colony surface roughness), and intensity distribution (density variation, heterogeneity). These measurements feed into genetic and environmental association studies.
- No public attributes. Configuration is passed through __init__() parameters.
Examples
Basic usage: measure colony area and intensity
from phenotypic import Image
from phenotypic.measure import MeasureSize
from phenotypic.detect import OtsuDetector

# Load and detect colonies
image = Image.from_image_path('plate_image.jpg')
detector = OtsuDetector()
image = detector.operate(image)

# Extract size measurements
measurer = MeasureSize()
df = measurer.measure(image)
print(df)
# Output:
#    OBJECT.LABEL  Area  IntegratedIntensity
# 0             1  1024               128512
# 1             2   956               121232
# 2             3  1101               134232
Advanced: extract multiple measurement types with metadata
from phenotypic.measure import (
    MeasureSize, MeasureShape, MeasureColor
)
from phenotypic.detect import OtsuDetector
from phenotypic.core import ImagePipeline
from phenotypic.tools.constants_ import OBJECT

# Create pipeline combining detectors and measurers
pipeline = ImagePipeline(
    detector=OtsuDetector(),
    measurers=[
        MeasureSize(),
        MeasureShape(),
        MeasureColor(include_XYZ=False)
    ]
)

# Measure a single image with metadata
results = pipeline.operate(image)

# Combine measurements: merge multiple DataFrames by OBJECT.LABEL
combined = results[0]
for df in results[1:]:
    combined = combined.merge(df, on=OBJECT.LABEL)
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- measure(image, include_meta=False)[source]#
Execute the measurement operation on a detected-object image.
This is the main public API method for extracting measurements. It handles: input validation, parameter extraction via introspection, calling the subclass-specific _operate() method, optional metadata merging, and exception handling.
How it works (for users):
Pass your processed Image (with detected objects) to measure()
The method calls your subclass’s _operate() implementation
Results are validated and returned as a pandas DataFrame
If include_meta=True, image metadata (filename, grid info) is merged in
How it works (for developers):
When you subclass MeasureFeatures, you only implement _operate(). This measure() method automatically:
Extracts __init__ parameters from your instance (introspection)
Passes matched parameters to _operate() as keyword arguments
Validates the Image has detected objects (objmap)
Wraps exceptions in OperationFailedError with context
Merges grid/object metadata if requested
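The parameter-matching step can be sketched with a toy class. FakeMeasurer below is hypothetical and only illustrates the documented mechanism (inspecting _operate()'s signature and forwarding matching instance attributes); PhenoTypic's actual implementation may differ:

```python
import inspect

class FakeMeasurer:
    """Toy stand-in: measure() matches __init__ parameters stored on the
    instance against _operate()'s keyword arguments via introspection."""

    def __init__(self, intensity_threshold=100, scale=1.0):
        self.intensity_threshold = intensity_threshold
        self.scale = scale

    @staticmethod
    def _operate(image, intensity_threshold=100):
        return f"{image}:{intensity_threshold}"

    def measure(self, image):
        # Forward only the instance attributes that _operate() accepts
        wanted = inspect.signature(self._operate).parameters
        kwargs = {name: getattr(self, name)
                  for name in wanted
                  if name != "image" and hasattr(self, name)}
        return self._operate(image, **kwargs)

m = FakeMeasurer(intensity_threshold=150)
print(m.measure("plate"))  # -> "plate:150"
```

Note that `scale` is silently ignored because _operate() does not declare it; only matching parameters are forwarded.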
- Parameters:
- Returns:
Measurement results with structure:
First column: OBJECT.LABEL (integer IDs from image.objmap[:])
Remaining columns: Measurement values (float, int, or string)
One row per detected object
If include_meta=True, additional metadata columns are prepended before OBJECT.LABEL (e.g., Filename, GridRow, GridCol).
- Return type:
pd.DataFrame
- Raises:
OperationFailedError – If _operate() raises any exception, it is caught and re-raised as OperationFailedError with details including the original exception type, message, image name, and operation class. This provides consistent error handling across all measurers.
Notes
This method is the main entry point; do not override in subclasses
Subclasses implement _operate() only, not this method
Automatic memory profiling is available via logging configuration
Image must have detected objects (image.objmap should be non-empty)
Examples
Basic measurement extraction
from phenotypic import Image
from phenotypic.measure import MeasureSize
from phenotypic.detect import OtsuDetector

# Load and detect
image = Image.from_image_path('plate.jpg')
image = OtsuDetector().operate(image)

# Extract measurements
measurer = MeasureSize()
df = measurer.measure(image)
print(df.head())
Include metadata in measurements
# With image metadata (filename, grid info)
df_with_meta = measurer.measure(image, include_meta=True)
print(df_with_meta.columns)
# Output: ['Filename', 'GridRow', 'GridCol', 'OBJECT.LABEL',
#          'Area', 'IntegratedIntensity', ...]
- class phenotypic.abc_.MeasurementInfo(value)[source]#
An enumeration.
- classmethod append_rst_to_doc(module) str[source]#
Returns a string with the RST table appended to the module docstring.
- Return type:
str
- classmethod rst_table(*, title: str | None = None, header: tuple[str, str] = ('Name', 'Description')) str[source]#
- class phenotypic.abc_.ObjectDetector[source]#
Bases:
ImageOperation, ABC

Abstract base class for colony detection operations on agar plate images.
ObjectDetector defines the interface for algorithms that identify and label microbial colonies (or other objects) in image data. Detection is a critical step in the PhenoTypic image processing pipeline: it bridges image preprocessing (enhancement) and downstream analysis (measurement, refinement, and statistical analysis).
What does ObjectDetector do?
ObjectDetector subclasses analyze image data and produce two outputs that describe detected colonies:
image.objmask (binary mask): A 2D boolean array where True indicates colony pixels and False indicates background. Each True pixel belongs to some colony, but the mask does not distinguish which colony a pixel belongs to; that is the role of objmap.
image.objmap (labeled map): A 2D integer array where each pixel value identifies the colony it belongs to. Background is 0, and each unique positive integer (1, 2, 3, …, N) represents a distinct labeled colony. This enables accessing individual colonies via image.objects after detection.
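The relationship between the two outputs can be seen on a toy mask. This sketch uses scipy.ndimage.label for illustration; real detectors derive the mask from image content:

```python
import numpy as np
from scipy import ndimage

# Toy binary mask with two separate colonies (illustrative data)
objmask = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 1]], dtype=bool)

# objmap assigns a distinct positive integer to each connected component
objmap, n = ndimage.label(objmask)
print(n)  # 2 distinct colonies

# The binary mask is always recoverable from the labeled map
print(np.array_equal(objmap > 0, objmask))  # True
```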
Key principle: ObjectDetector is READ-ONLY for image data
ObjectDetector operations:
Read image.enh_gray[:] (enhanced grayscale), image.rgb[:], and optionally other image data to inform detection.
Write only image.objmask[:] and image.objmap[:].
Protect image.rgb, image.gray, and image.enh_gray via automatic integrity validation (@validate_operation_integrity decorator).
Any attempt to modify protected image components raises OperationIntegrityError when VALIDATE_OPS=True in the environment (enabled during development/testing).
Why is detection central to the pipeline?
Detection enables:
Object identification: Distinguishes individual colonies from background and from each other.
Downstream analysis: Once colonies are labeled, image.objects provides access to properties (area, intensity, centroid, morphology) for each colony.
Refinement: ObjectRefiner operations clean up detection masks/maps post-detection (e.g., removing spurious objects, merging fragments, filtering by size).
Phenotyping: Measurement operations (MeasureFeatures) extract colony features (color, morphology, growth) for statistical analysis.
Differences: objmask vs objmap
objmask (binary): Answers “is this pixel part of any colony?” Simple, useful for visualization or as input to further processing (e.g., morphological operations). Generated by most detectors via thresholding or edge detection.
objmap (labeled): Answers “which colony does this pixel belong to?” Enables per-object analysis. Each colony has a unique integer label, and connected-component labeling (usually scipy.ndimage.label) assigns these labels.
Both are typically set together in _operate() via:

image.objmask[:] = binary_mask
image.objmap[:] = labeled_map
When to use ObjectDetector vs ThresholdDetector vs ObjectRefiner
ObjectDetector (this class): Implement when you have a novel algorithm that produces both objmask and objmap from image data. Examples: Otsu thresholding, Canny edges, peak detection (RoundPeaks), watershed.
ThresholdDetector (ObjectDetector subclass): Inherit from this if your detection relies on a threshold value. Provides common patterns and may offer utility methods. Examples: OtsuDetector, TriangleDetector, LocalThresholdDetector.
ObjectRefiner (different ABC): Use when modifying existing masks/maps without analyzing image data. Examples: size filtering, morphological cleanup, erosion/dilation, merging nearby objects, removing objects near borders.
How to implement a custom ObjectDetector
Create the class:
from phenotypic.abc_ import ObjectDetector
from phenotypic import Image

class MyDetector(ObjectDetector):
    def __init__(self, param1: float, param2: int = 10):
        super().__init__()
        self.param1 = param1
        self.param2 = param2

    @staticmethod
    def _operate(image: Image, param1: float, param2: int = 10) -> Image:
        # Detection logic here
        return image
Within _operate(), read image data carefully:
Access via accessors: image.enh_gray[:], image.gray[:], image.rgb[:]
Never modify these; integrity validation will catch it
Consider the data type and range (uint8, uint16, float, etc.)
Perform detection: Use your algorithm to create a binary mask and labeled map. Typical approaches:
Thresholding-based: Global or local threshold → binary mask → label
Edge-based: Edge detector (Canny) → invert edges → label regions
Peak-based: Detect intensity peaks → grow regions → label
Region-based: Watershed or morphological operations
Create and set the binary mask and labeled map:
from scipy import ndimage
from skimage import filters, morphology

# Example: simple Otsu thresholding
enh = image.enh_gray[:]
threshold = filters.threshold_otsu(enh)
binary_mask = enh > threshold

# Remove small noise
binary_mask = morphology.remove_small_objects(binary_mask, min_size=20)

# Label connected components
labeled_map, num_objects = ndimage.label(binary_mask)

# Set both outputs
image.objmask[:] = binary_mask
image.objmap[:] = labeled_map
return image
Post-processing (optional): Some detectors include additional cleanup:
Clear borders: Use skimage.segmentation.clear_border() to remove objects touching image edges.
Remove small/large objects: Use skimage.morphology.remove_small_objects() to filter by area (note: scikit-image has no built-in remove_large_objects(); filter oversized regions manually by region area).
Relabel: Call image.objmap.relabel(connectivity=...) to ensure consecutive labels.
Helper functions from scipy and scikit-image
Common utilities for ObjectDetector implementations:
scipy.ndimage.label(): Assigns unique integers to connected components in a binary mask. Returns (labeled_array, num_features). Specify structure for connectivity (the default is 4-connectivity, equivalent to generate_binary_structure(2, 1); pass generate_binary_structure(2, 2) for 8-connectivity).
skimage.morphology.remove_small_objects(): Removes binary regions smaller than min_size pixels. Helpful for filtering noise or spurious detections.
Removing large objects: scikit-image provides no remove_large_objects() helper; remove regions above an area threshold manually (e.g., by zeroing labels whose pixel counts exceed the cutoff). Useful for excluding large artefacts or plate boundaries.
skimage.segmentation.clear_border(): Sets pixels on the image border to False, eliminating objects that touch the edge (common in arrayed imaging where wells at plate boundaries may be partially cut off).
skimage.morphology.binary_opening(): Erosion followed by dilation; removes small noise while preserving larger objects. Use with a suitable footprint (disk, square, or diamond).
scipy.ndimage.binary_dilation() / binary_erosion(): Expand or shrink objects morphologically. Useful for bridging fragmented colonies or removing small protrusions.
skimage.feature.canny(): Multi-stage edge detection (Gaussian → gradient → non-max suppression → hysteresis). Robust but requires threshold tuning.
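Several of the helpers listed above can be chained on a small synthetic mask. This is a toy demonstration with made-up sizes, not production cleanup logic:

```python
import numpy as np
from scipy import ndimage
from skimage import morphology, segmentation

# Toy binary mask: a border-touching blob, an interior colony, and 1-px noise
mask = np.zeros((12, 12), dtype=bool)
mask[0:3, 0:3] = True   # touches the top-left border
mask[5:9, 5:9] = True   # interior 4x4 colony
mask[10, 10] = True     # single-pixel noise

cleaned = segmentation.clear_border(mask)                       # drop border blob
cleaned = morphology.remove_small_objects(cleaned, min_size=4)  # drop noise
labeled, n = ndimage.label(cleaned)
print(n)  # only the interior colony remains
```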
Examples of different detection strategies
Otsu thresholding (simple, fast, global intensity)
from phenotypic.abc_ import ObjectDetector
from phenotypic import Image
from skimage import filters
from scipy import ndimage

class SimpleOtsuDetector(ObjectDetector):
    """Detects colonies using global Otsu thresholding."""

    def __init__(self):
        super().__init__()

    @staticmethod
    def _operate(image: Image) -> Image:
        enh = image.enh_gray[:]

        # Compute threshold
        thresh = filters.threshold_otsu(enh)

        # Create binary mask
        mask = enh > thresh

        # Label connected components
        labeled, num = ndimage.label(mask)

        # Set results
        image.objmask[:] = mask
        image.objmap[:] = labeled
        return image

# Usage:
# detector = SimpleOtsuDetector()
# plate = Image.from_image_path("plate.jpg")
# result = detector.apply(plate)
# colonies = result.objects  # Access detected colonies
Edge-based detection with Canny and post-processing
from phenotypic.abc_ import ObjectDetector
from phenotypic import Image
from skimage import feature, morphology
from scipy import ndimage

class EdgeDetector(ObjectDetector):
    """Detects colonies by finding edges and labeling enclosed regions."""

    def __init__(self, sigma: float = 1.5, min_size: int = 50):
        super().__init__()
        self.sigma = sigma
        self.min_size = min_size

    @staticmethod
    def _operate(image: Image, sigma: float = 1.5, min_size: int = 50) -> Image:
        enh = image.enh_gray[:]

        # Detect edges using Canny
        edges = feature.canny(enh, sigma=sigma)

        # Invert to get regions (not edges)
        regions = ~edges

        # Label connected regions
        labeled, num = ndimage.label(regions)

        # Remove small objects
        labeled = morphology.remove_small_objects(labeled, min_size=min_size)

        # Create binary mask from labeled map
        mask = labeled > 0

        # Set results
        image.objmask[:] = mask
        image.objmap[:] = labeled
        return image

# Usage:
# detector = EdgeDetector(sigma=1.0, min_size=100)
# result = detector.apply(plate)
Peak detection with cleanup (RoundPeaks-inspired approach)
from phenotypic.abc_ import ObjectDetector
from phenotypic import Image
from scipy import ndimage
from scipy.signal import find_peaks
import numpy as np

class PeakColumnDetector(ObjectDetector):
    """Detects colonies via 1D peak finding on intensity profiles."""

    def __init__(self, min_distance: int = 15, threshold_abs: int = 100):
        super().__init__()
        self.min_distance = min_distance
        self.threshold_abs = threshold_abs

    @staticmethod
    def _operate(image: Image, min_distance: int = 15,
                 threshold_abs: int = 100) -> Image:
        enh = image.enh_gray[:]

        # Sum intensity along rows to get column profile
        col_sums = np.sum(enh, axis=0)

        # Find peaks in profile
        peaks, _ = find_peaks(col_sums, distance=min_distance,
                              height=threshold_abs)

        # Simple approach: threshold and label
        # (Production code would do more sophisticated grid inference)
        mask = enh > np.median(enh)  # Placeholder threshold
        labeled, num = ndimage.label(mask)

        image.objmask[:] = mask
        image.objmap[:] = labeled
        return image
When and how to refine detections (post-processing)
Raw detections often need cleanup:
Remove small noise: Spurious single-pixel detections or tiny salt-and-pepper artifacts. Use ObjectRefiner + remove_small_objects.
Clean borders: Colonies at plate edges may be incomplete. Use ObjectRefiner or clear_border() in detector.
Merge fragments: Noise or uneven lighting can fragment a single colony into multiple labels. Use ObjectRefiner with morphological dilation or connected-component merging.
Remove large objects: Plate edges, dust on the scanner, or agar artifacts appear as large regions. Use ObjectRefiner + remove_large_objects.
Grid-aware filtering: In arrayed formats (96-well, 384-well), one object per grid cell is expected. Use GridObjectRefiner to enforce this constraint or GridRefiner to assign dominant objects to grid positions.
Example pipeline with detection + refinement:
from phenotypic import Image, ImagePipeline
from phenotypic.detect import OtsuDetector
from phenotypic.refine import RemoveSmallObjectsRefiner, ClearBorderRefiner

pipeline = ImagePipeline()
pipeline.add(OtsuDetector())                           # Initial detection
pipeline.add(ClearBorderRefiner())                     # Remove edge-touching objects
pipeline.add(RemoveSmallObjectsRefiner(min_size=100))  # Filter noise

image = Image.from_image_path("plate.jpg")
result = pipeline.operate([image])[0]
# result now has clean, labeled colonies ready for measurement
- No public attributes. All operation parameters are stored in subclass instances.
- apply(image, inplace=False)[source]#
User-facing method to apply detection to an image. Handles copy/inplace logic and parameter matching.
- _operate(image, **kwargs)#
Abstract static method implemented by subclasses with detection logic. Must set image.objmask and image.objmap.
Notes
Integrity protection: The @validate_operation_integrity decorator on apply() ensures image.rgb, image.gray, and image.enh_gray are not modified. Violations raise OperationIntegrityError during development (VALIDATE_OPS=True).
Binary mask is often intermediate: Many implementations create objmask first, then derive objmap via connected-component labeling. Both must be set for downstream code to work correctly.
Label consistency: Use image.objmap.relabel() after manipulating the labeled map to ensure labels are consecutive (1, 2, 3, …, N) and to update objmask.
Memory efficiency: Large images and detailed segmentations consume memory. Consider inplace=True in pipelines processing many images, or use sparse representations (objmap uses scipy.sparse internally).
Static _operate() method: Required for parallel execution in ImagePipeline. All parameters (except image) must be instance attributes.
Examples
Detect colonies in a plate image and access results
from phenotypic import Image
from phenotypic.detect import OtsuDetector

# Load a plate image
plate = Image.from_image_path("agar_plate.jpg")

# Apply detection
detector = OtsuDetector()
detected = detector.apply(plate)

# Access binary mask
mask = detected.objmask[:]  # numpy array
print(f"Mask shape: {mask.shape}, True pixels: {mask.sum()}")

# Access labeled map
objmap = detected.objmap[:]
print(f"Detected {objmap.max()} colonies")

# Iterate over colonies and measure properties
for colony in detected.objects:
    print(f"Colony area: {colony.area} px, "
          f"centroid: {colony.centroid}")
Custom detector with parameter tuning
from phenotypic.abc_ import ObjectDetector
from phenotypic import Image
from skimage import filters
from scipy import ndimage

class LocalThresholdDetector(ObjectDetector):
    """Detects colonies using adaptive local thresholding."""

    def __init__(self, block_size: int = 31):
        super().__init__()
        self.block_size = block_size

    @staticmethod
    def _operate(image: Image, block_size: int = 31) -> Image:
        enh = image.enh_gray[:]

        # Ensure block_size is odd
        if block_size % 2 == 0:
            block_size += 1

        # Apply local threshold
        threshold = filters.threshold_local(enh, block_size=block_size)
        mask = enh > threshold

        # Label
        labeled, num = ndimage.label(mask)

        # Set results
        image.objmask[:] = mask
        image.objmap[:] = labeled
        return image

# Usage
detector = LocalThresholdDetector(block_size=51)
result = detector.apply(plate)
print(f"Found {result.objmap[:].max()} colonies")
Detection in a full pipeline with enhancement and refinement
from phenotypic import Image, ImagePipeline
from phenotypic.enhance import GaussianBlur
from phenotypic.detect import CannyDetector
from phenotypic.refine import RemoveSmallObjectsRefiner
from phenotypic.measure import MeasureColor

# Create a processing pipeline
pipeline = ImagePipeline()
pipeline.add(GaussianBlur(sigma=2.0))                 # Preprocessing
pipeline.add(CannyDetector(sigma=1.5))                # Detection
pipeline.add(RemoveSmallObjectsRefiner(min_size=50))  # Cleanup
pipeline.add(MeasureColor())                          # Downstream analysis

# Load image and process
image = Image.from_image_path("plate.jpg")
result = pipeline.operate([image])[0]

# Results include enhanced image, detected/refined colonies, and measurements
print(f"Colonies: {result.objmap[:].max()}")
print(f"Measurements: {result.measurements.shape}")
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image, inplace=False)[source]#
Binarizes the given image's grayscale using the Yen threshold method.
This method modifies the input image by applying a binary mask to its enhanced grayscale (enh_gray). The binarization threshold is automatically determined using Yen's method. The resulting binary mask is stored in the image's objmask attribute.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.ObjectRefiner[source]#
Bases:
ImageOperation, ABC

Abstract base class for post-detection refinement operations that modify object masks and maps.
ObjectRefiner is the foundation for all post-detection cleanup algorithms that refine colony detections through morphological operations, filtering, and merging. Unlike ObjectDetector (which analyzes image data to create initial detections), ObjectRefiner only modifies the object mask and labeled map, leaving preprocessing data untouched.
What is ObjectRefiner?
ObjectRefiner operates on the principle of non-destructive post-processing: all modifications are applied only to image.objmask (binary mask) and image.objmap (labeled map), while original image components (image.rgb, image.gray, image.enh_gray) remain protected and unchanged. This allows you to experiment with multiple refinement chains without affecting raw or enhanced image data, ensuring reproducibility and enabling comparison of different cleanup strategies.
Key Principle: ObjectRefiner Modifies Only Detection Results
ObjectRefiner operations:
Read image.objmask[:] (binary mask) and image.objmap[:] (labeled map) from prior detection.
Write only image.objmask[:] and image.objmap[:] with refined results.
Protect image.rgb, image.gray, and image.enh_gray via automatic integrity validation (@validate_operation_integrity decorator).
Any attempt to modify protected image components raises OperationIntegrityError when VALIDATE_OPS=True in the environment (enabled during development/testing).
Role in the Detection-to-Measurement Pipeline
ObjectRefiner sits after detection but before measurement:
Raw Image (rgb, gray, enh_gray)
    ↓
ImageEnhancer(s) → Improve visibility, reduce noise
    ↓
ObjectDetector → Detect colonies/objects (initial, often noisy)
    ↓
ObjectRefiner(s) → Clean up detections (optional but recommended)
    ↓
MeasureFeatures → Extract colony properties
    ↓
Analysis → Statistical phenotyping, clustering, growth curves

When you call refiner.apply(image), you get back an Image with refined objmask and objmap but identical preprocessing and image data, ready for downstream measurement and analysis.
Why Refinement Matters for Colony Phenotyping
Raw detections from ObjectDetector often contain artifacts:
Spurious small objects: Dust, sensor noise, agar texture, or salt-and-pepper thresholding artifacts create false-positive detections that bias colony counts and statistics.
Fragmented colonies: Uneven lighting, pigment heterogeneity, or aggressive thresholding fragments a single colony into multiple disconnected regions, inflating counts and distorting area measurements.
Merged colonies: In dense plates or when colonies touch, thresholding may merge adjacent colonies into a single detection, losing individuality and requiring post-hoc separation.
Holes in masks: Internal voids within colony masks (from glare or non-uniform pigmentation) create discontinuous shapes that confuse morphological measurements or downstream analysis.
Border artifacts: Colonies touching plate or well boundaries may be incomplete, biasing per-well phenotyping in high-throughput formats.
Refinement operations target these issues with domain-specific strategies: morphological operations (erosion, dilation, opening, closing), shape filtering (circularity, solidity), size thresholding, and boundary enforcement to produce clean, valid detection results.
Differences: ObjectDetector vs ObjectRefiner
ObjectDetector: Analyzes image data (grayscale, RGB, color spaces) and produces initial objmask and objmap. Input: enhanced image. Output: detection results. Typical use: thresholding, edge detection, peak finding, watershed segmentation.
ObjectRefiner: Modifies existing objmask and objmap without analyzing image data. Input: detection results. Output: refined detection results. Typical use: size filtering, morphological cleanup, shape filtering, merging/splitting objects, border removal.
When to Use ObjectRefiner vs Building Better ObjectDetector
Should you refine or improve the detector? Consider:
Use ObjectRefiner if: - The detector produces mostly correct detections but with manageable noise/artifacts - You can characterize the artifacts (small, fragmented, low-circularity, etc.) - Chaining simple refinement operations is clearer than tuning detector parameters - You want to compare cleanup strategies or enable parameter sweeps
Improve ObjectDetector if: - The detector fundamentally fails (misses most colonies, detects at wrong threshold) - Raw detections are too noisy to salvage through simple refinement - The problem is best solved through domain-specific detection logic, not post-hoc cleanup - You have labeled ground truth for detector optimization
Typical Refinement Strategies
Common ObjectRefiner implementations address specific issues:
Size filtering: Remove objects below/above size thresholds (e.g., SmallObjectRemover). Targets: spurious noise, dust, agar artifacts, or unrealistically large regions.
Shape filtering: Remove objects with poor morphology (low circularity, low solidity). Targets: elongated artifacts, merged colonies, debris. Example: LowCircularityRemover.
Hole filling: Fill holes within colony masks for solid shape representation (e.g., MaskFill). Targets: voids from uneven illumination, pigment patterns. Improves area measurements.
Morphological operations: Erosion, dilation, opening, closing to refine mask edges. Targets: fragmented edges, small protrusions, internal gaps. Uses _make_footprint().
Border removal: Remove or exclude objects touching image/well boundaries. Targets: incomplete colonies in arrayed formats. Example: clear_border operations.
Merging/splitting: Combine nearby objects (dilation + relabeling) or separate touching regions (watershed, distance transform). Targets: fragmented colonies, merged colonies.
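As a library-agnostic illustration of the size-filtering strategy, the sketch below drops undersized labels from a labeled map using plain numpy. The function name and toy map are hypothetical, not part of PhenoTypic.

```python
import numpy as np

def filter_small_labels(objmap: np.ndarray, min_size: int) -> np.ndarray:
    """Zero out labels whose pixel count falls below min_size (a sketch)."""
    counts = np.bincount(objmap.ravel())       # pixels per label (index 0 = background)
    small = np.flatnonzero(counts < min_size)  # labels to drop
    small = small[small != 0]                  # never touch the background label
    return np.where(np.isin(objmap, small), 0, objmap)

# Toy labeled map: label 1 covers 6 pixels, label 2 covers a single pixel
objmap = np.array([[1, 1, 0, 2],
                   [1, 1, 0, 0],
                   [1, 1, 0, 0]])
cleaned = filter_small_labels(objmap, min_size=2)  # label 2 becomes background
```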
Integrity Validation: Protection of Core Data
ObjectRefiner uses the @validate_operation_integrity decorator on the apply() method to guarantee that preprocessing data are never modified:

@validate_operation_integrity('image.rgb', 'image.gray', 'image.enh_gray')
def apply(self, image: Image, inplace: bool = False) -> Image:
    return super().apply(image=image, inplace=inplace)
This decorator:
Calculates cryptographic signatures of image.rgb, image.gray, and image.enh_gray before processing
Calls the parent apply() method to execute your _operate() implementation
Recalculates signatures after operation completes
Raises OperationIntegrityError if any protected component was modified
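A minimal sketch of the before/after hashing idea, assuming the decorator works roughly by fingerprinting protected arrays. All names below are illustrative; the real implementation's details may differ.

```python
import hashlib
import numpy as np

class OperationIntegrityError(RuntimeError):
    """Raised when a protected array changes during an operation (sketch)."""

def _signature(arr: np.ndarray) -> str:
    # SHA-256 over the raw bytes acts as a cheap tamper-evident signature
    return hashlib.sha256(np.ascontiguousarray(arr).tobytes()).hexdigest()

def validate_arrays(*protected_names):
    """Toy decorator protecting named array attributes of the first argument."""
    def decorator(func):
        def wrapper(obj, *args, **kwargs):
            before = {n: _signature(getattr(obj, n)) for n in protected_names}
            result = func(obj, *args, **kwargs)
            for name, sig in before.items():
                if _signature(getattr(obj, name)) != sig:
                    raise OperationIntegrityError(f"{name} was modified")
            return result
        return wrapper
    return decorator

class FakeImage:
    def __init__(self):
        self.gray = np.zeros((4, 4))
        self.objmask = np.zeros((4, 4), dtype=bool)

@validate_arrays("gray")
def bad_refiner(image):
    image.gray += 1  # touching a protected component trips the check
    return image
```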
Note: Integrity validation only runs if the VALIDATE_OPS=True environment variable is set (development-time safety; disabled in production for performance).

Implementing a Custom ObjectRefiner
Subclass ObjectRefiner and implement a single method:
from phenotypic.abc_ import ObjectRefiner
from phenotypic import Image
from skimage.morphology import remove_small_objects

class MyCustomRefiner(ObjectRefiner):
    def __init__(self, min_size: int = 50):
        super().__init__()
        self.min_size = min_size  # Instance attribute matched to _operate()

    @staticmethod
    def _operate(image: Image, min_size: int = 50) -> Image:
        # Modify ONLY objmap; read, process, write back
        # objmask will be auto-updated from objmap via relabel()
        refined_map = remove_small_objects(image.objmap[:], min_size=min_size)
        image.objmap[:] = refined_map
        return image
Key Rules for Implementation:
_operate() must be static (required for parallel execution in pipelines).

All parameters except image must exist as instance attributes with matching names (enables automatic parameter matching via _get_matched_operation_args()).

Only modify image.objmask[:] and image.objmap[:]; all other components are protected. Reading image data is allowed, but modifications will trigger integrity errors.

Always use the accessor pattern: image.objmap[:] = new_data (never direct attribute assignment).

Return the modified Image object.
Modifying objmask and objmap
Within your _operate() method, use the accessor interface to read and write detection results:
import numpy as np

# Reading detection data
mask = image.objmask[:]    # Binary mask (True = object)
objmap = image.objmap[:]   # Labeled map (0 = background, 1+ = object label)
objects = image.objects    # High-level ObjectCollection interface

# Modifying detection data
image.objmask[:] = refined_mask  # Full replacement of binary mask
image.objmap[:] = refined_map    # Full replacement of labeled map

# Partial updates (boolean indexing)
# Mark certain labels as background (set to 0)
keep_labels = [1, 3, 5]  # Labels to retain
filtered_map = np.where(np.isin(objmap, keep_labels), objmap, 0)
image.objmap[:] = filtered_map
Relationship Between objmask and objmap
objmap (labeled map): Each pixel contains the object label (0 = background, 1+ = object ID). Authoritative source of truth; defines which pixels belong to which colony.
objmask (binary mask): Simple binary version of objmap; True where objmap > 0, False elsewhere. Derived from objmap via image.objmap.relabel().
When you modify objmap, objmask is automatically updated. When you modify objmask directly, call image.objmap.relabel() to ensure consistency (or reconstruct objmap from objmask via connected-component labeling).
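The consistency rule above can be illustrated with generic scipy tooling: connected-component labeling reconstructs a labeled map from a binary mask, and thresholding recovers the mask from the map. This is a sketch of the equivalence, not the internals of the library's relabel().

```python
import numpy as np
from scipy import ndimage

# Binary mask with two disconnected blobs
objmask = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 1, 1]], dtype=bool)

# Connected-component labeling rebuilds a labeled map from the mask
objmap, n_objects = ndimage.label(objmask)

# Going the other way, the binary mask is simply objmap > 0
recovered_mask = objmap > 0
```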
The _make_footprint() Static Utility
ObjectRefiner provides a static helper for generating morphological structuring elements (footprints) used in erosion, dilation, and other morphological operations:
@staticmethod
def _make_footprint(shape: Literal["square", "diamond", "disk"], radius: int) -> np.ndarray:
    '''Creates a binary morphological footprint for image processing.'''
Footprint Shapes and When to Use Each
“disk”: Circular/isotropic footprint. Best for preserving rounded colony shapes and applying uniform processing in all directions. Use for: general-purpose morphology (dilation to merge fragments, erosion to remove noise), operations that respect colony roundness.
“square”: Square footprint with 8-connectivity. Emphasizes horizontal/vertical edges and aligns with pixel grid. Use for: grid-aligned artifacts, operations aligned with imaging hardware, when processing speed matters (slightly faster than disk).
“diamond”: Diamond-shaped (rotated square) footprint with 4-connectivity. Creates a cross-like neighborhood pattern. Use for: specialized cases where diagonal connections should be de-emphasized; less common in practice.
The radius parameter controls the neighborhood size (in pixels). Larger radii affect more neighbors and produce broader morphological effects (merge more fragments, remove larger noise, but risk bridging adjacent colonies). Choose radius smaller than minimum inter-colony spacing to avoid creating false merges.
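As a rough illustration of how the three shapes differ, the plain-numpy sketch below builds each footprint from its defining inequality. This is an assumed reconstruction of the behavior described above; the library's own _make_footprint() output may differ at the edges.

```python
import numpy as np

def make_footprint(shape: str, radius: int) -> np.ndarray:
    """Plain-numpy sketch of the three footprint shapes (assumed behavior)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    if shape == "disk":
        return x**2 + y**2 <= radius**2          # circular, isotropic
    if shape == "diamond":
        return np.abs(x) + np.abs(y) <= radius   # 4-connected cross pattern
    if shape == "square":
        return np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)  # 8-connected
    raise ValueError(f"unknown shape: {shape}")

# At radius 2: square covers the full 5x5 neighborhood (25 pixels),
# while disk and diamond each activate 13 pixels
sizes = {s: int(make_footprint(s, 2).sum()) for s in ("disk", "square", "diamond")}
```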
Common Morphological Refinement Patterns
Use _make_footprint() with morphological operations from scipy.ndimage or skimage.morphology:
from scipy.ndimage import binary_dilation, binary_erosion, binary_closing, binary_opening
from phenotypic.abc_ import ObjectRefiner

disk_fp = ObjectRefiner._make_footprint('disk', radius=3)

# Dilation: expand object regions (merge fragmented colonies)
dilated_mask = binary_dilation(binary_mask, structure=disk_fp)

# Erosion: shrink object regions (remove thin protrusions, small noise)
eroded_mask = binary_erosion(binary_mask, structure=disk_fp)

# Closing: dilation then erosion (fill small holes)
closed_mask = binary_closing(binary_mask, structure=disk_fp)

# Opening: erosion then dilation (remove small noise)
opened_mask = binary_opening(binary_mask, structure=disk_fp)
Chaining Multiple Refinements
Refinement operations are typically chained to address multiple issues in sequence:
from phenotypic import Image, ImagePipeline
from phenotypic.refine import SmallObjectRemover, MaskFill, LowCircularityRemover
from phenotypic.detect import OtsuDetector

# Build a refinement pipeline
pipeline = ImagePipeline()
pipeline.add(SmallObjectRemover(min_size=100))    # Remove dust/noise
pipeline.add(MaskFill())                          # Fill holes in colonies
pipeline.add(LowCircularityRemover(cutoff=0.75))  # Remove elongated artifacts

# Apply to detected image
image = Image.from_image_path('plate.jpg')
detected = OtsuDetector().apply(image)

# Refine
refined = pipeline.operate([detected])[0]
colonies = refined.objects
print(f"After refinement: {len(colonies)} colonies")
Rationale for chaining:
Order matters: Remove small noise before filling holes (no point filling tiny artifacts). Remove low-circularity objects before morphological operations (cleaner starting point).
Divide and conquer: One refiner per issue (size, shape, holes, borders) is clearer than monolithic operations.
No data loss: Original detection and image data are preserved, so intermediate steps can be inspected and validated.
Reproducibility: Chained operations can be serialized to YAML for documentation and reuse.
Methods and Attributes
- No attributes at the ObjectRefiner level; subclasses define refinement parameters as instance attributes (e.g., min_size, cutoff, radius).
- apply(image, inplace=False)[source]#
Applies the refinement to an image. Returns a modified Image with refined objmask and objmap but unchanged RGB/gray/enh_gray. Handles copy/inplace logic and validates data integrity.
- _operate(image, **kwargs)#
Abstract static method implemented by subclasses. Performs the actual refinement algorithm. Parameters are automatically matched to instance attributes.
- _make_footprint(shape, radius)[source]#
Static utility that creates a binary morphological footprint (disk, square, or diamond) for use in morphological operations.
Notes
Protected components: The @validate_operation_integrity decorator ensures that image.rgb, image.gray, and image.enh_gray cannot be modified. Only image.objmask and image.objmap can be changed.

Immutability by default: apply(image) returns a modified copy by default. Set inplace=True for memory-efficient in-place modification.

Static _operate() requirement: The _operate() method must be static to support parallel execution in pipelines.

Parameter matching for parallelization: All _operate() parameters except image must exist as instance attributes. When apply() is called, these values are extracted and passed to _operate().

Accessor pattern: Always use image.objmap[:] = new_data to modify object maps. Never use direct attribute assignment.

objmap/objmask consistency: When modifying objmap, call image.objmap.relabel() to ensure objmask is updated. When modifying objmask directly, reconstruct objmap via connected-component labeling.

Boolean indexing for filtering: Use numpy boolean arrays to filter labels: mask = np.isin(objmap, keep_labels); filtered_map = objmap * mask
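A self-contained numpy example of the boolean-indexing idiom above, using a toy labeled map:

```python
import numpy as np

# Toy labeled map with three objects (labels 1, 2, 3)
objmap = np.array([[1, 0, 2],
                   [1, 0, 2],
                   [0, 3, 3]])

keep_labels = [1, 3]

# Boolean mask of pixels whose label survives, then zero out the rest
mask = np.isin(objmap, keep_labels)
filtered_map = objmap * mask  # label 2 is set to background (0)
```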
Examples
Removing small spurious objects below minimum size
from phenotypic.abc_ import ObjectRefiner
from phenotypic import Image
from skimage.morphology import remove_small_objects

class SimpleSmallObjectRemover(ObjectRefiner):
    '''Remove objects smaller than a minimum size threshold.'''

    def __init__(self, min_size: int = 50):
        super().__init__()
        self.min_size = min_size

    @staticmethod
    def _operate(image: Image, min_size: int = 50) -> Image:
        '''Remove small objects from the labeled map.'''
        # Get current labeled map
        objmap = image.objmap[:]
        # Remove small objects
        refined = remove_small_objects(objmap, min_size=min_size)
        # Set refined result
        image.objmap[:] = refined
        return image

# Usage
from phenotypic.detect import OtsuDetector

image = Image.from_image_path('plate.jpg')
detected = OtsuDetector().apply(image)

# Remove noise below 100 pixels
refiner = SimpleSmallObjectRemover(min_size=100)
cleaned = refiner.apply(detected)
print(f"Before: {detected.objmap[:].max()} objects")
print(f"After: {cleaned.objmap[:].max()} objects")
Removing low-circularity objects (merged colonies, artifacts)
from phenotypic.abc_ import ObjectRefiner
from phenotypic import Image
from skimage.measure import regionprops_table
import pandas as pd
import numpy as np
import math

class CircularityFilter(ObjectRefiner):
    '''Remove objects with low circularity (merged colonies, artifacts).'''

    def __init__(self, min_circularity: float = 0.7):
        super().__init__()
        self.min_circularity = min_circularity

    @staticmethod
    def _operate(image: Image, min_circularity: float = 0.7) -> Image:
        '''Filter objects by circularity using the Polsby-Popper metric.'''
        objmap = image.objmap[:]

        # Measure shape properties
        props = regionprops_table(
            label_image=objmap,
            properties=['label', 'area', 'perimeter']
        )
        df = pd.DataFrame(props)

        # Calculate circularity (Polsby-Popper: 4*pi*area / perimeter^2)
        df['circularity'] = (4 * math.pi * df['area']) / (df['perimeter'] ** 2)

        # Keep only circular objects
        keep_labels = df[df['circularity'] >= min_circularity]['label'].values

        # Filter map: keep only selected labels
        refined_map = np.where(np.isin(objmap, keep_labels), objmap, 0)
        image.objmap[:] = refined_map
        return image

# Usage
from phenotypic.detect import OtsuDetector

image = Image.from_image_path('plate.jpg')
detected = OtsuDetector().apply(image)

# Keep only well-formed circular colonies
refiner = CircularityFilter(min_circularity=0.75)
refined = refiner.apply(detected)
print(f"Removed elongated artifacts: {detected.objmap[:].max()} -> {refined.objmap[:].max()}")
Filling holes in colony masks for solid shape representation
from phenotypic.abc_ import ObjectRefiner
from phenotypic import Image
from scipy.ndimage import binary_fill_holes, label

class HoleFiller(ObjectRefiner):
    '''Fill holes within colony masks for solid shape representation.'''

    def __init__(self):
        super().__init__()

    @staticmethod
    def _operate(image: Image) -> Image:
        '''Fill holes in the binary mask.'''
        mask = image.objmask[:]

        # Fill holes (interior voids within objects)
        filled = binary_fill_holes(mask)

        # Update mask
        image.objmask[:] = filled

        # Reconstruct labeled map from the filled mask
        labeled, _ = label(filled)
        image.objmap[:] = labeled
        return image

# Usage
from phenotypic.detect import OtsuDetector

image = Image.from_image_path('plate.jpg')
detected = OtsuDetector().apply(image)

# Fill holes from uneven illumination or pigmentation
refiner = HoleFiller()
refined = refiner.apply(detected)
# Result: solid, contiguous colony shapes better suited for area measurements
print("Holes filled; colonies now solid")
Morphological refinement with dilation to merge fragmented colonies
from phenotypic.abc_ import ObjectRefiner
from phenotypic import Image
from scipy.ndimage import binary_dilation, label as ndi_label

class FragmentMerger(ObjectRefiner):
    '''Merge fragmented colonies via morphological dilation and relabeling.'''

    def __init__(self, dilation_radius: int = 2):
        super().__init__()
        self.dilation_radius = dilation_radius

    @staticmethod
    def _operate(image: Image, dilation_radius: int = 2) -> Image:
        '''Dilate mask and relabel to merge nearby fragments.'''
        mask = image.objmask[:]

        # Create disk footprint for isotropic dilation
        fp = ObjectRefiner._make_footprint('disk', dilation_radius)

        # Dilate to bridge fragmented regions
        dilated = binary_dilation(mask, structure=fp)

        # Relabel connected components
        relabeled, _ = ndi_label(dilated)

        # Set refined results
        image.objmask[:] = dilated
        image.objmap[:] = relabeled
        return image

# Usage
from phenotypic.detect import OtsuDetector

image = Image.from_image_path('plate.jpg')
detected = OtsuDetector().apply(image)

# Merge fragments from uneven lighting
refiner = FragmentMerger(dilation_radius=3)
merged = refiner.apply(detected)
print(f"Merged fragments: {detected.objmap[:].max()} -> {merged.objmap[:].max()} objects")
Chaining multiple refinements in a pipeline
from phenotypic import Image, ImagePipeline
from phenotypic.enhance import GaussianBlur
from phenotypic.detect import OtsuDetector
from phenotypic.refine import (
    SmallObjectRemover, MaskFill, LowCircularityRemover
)
from phenotypic.measure import MeasureColor

# Build a complete processing pipeline with enhancement, detection, and refinement
pipeline = ImagePipeline()

# Preprocessing
pipeline.add(GaussianBlur(sigma=1.5))

# Detection
pipeline.add(OtsuDetector())

# Refinement (chain multiple cleanup operations)
pipeline.add(SmallObjectRemover(min_size=100))    # Remove dust
pipeline.add(MaskFill())                          # Fill internal holes
pipeline.add(LowCircularityRemover(cutoff=0.75))  # Remove merged/irregular

# Measurement
pipeline.add(MeasureColor())

# Load images and process
image = Image.from_image_path('plate.jpg')
results = pipeline.operate([image])
final = results[0]

# Access final clean detection results
colonies = final.objects
measurements = final.measurements
print(f"Detected and cleaned: {len(colonies)} colonies")
print(f"Color measurements: {measurements.shape}")
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image, inplace=False)[source]#
Applies the operation to an image, either in-place or on a copy.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.PrefabPipeline(ops: List[ImageOperation] | Dict[str, ImageOperation] | None = None, meas: List[MeasureFeatures] | Dict[str, MeasureFeatures] | None = None, benchmark: bool = False, verbose: bool = False)[source]#
Bases:
ImagePipeline

Marker class for pre-built, validated image processing pipelines from the PhenoTypic team.
PrefabPipeline is a specialized subclass of ImagePipeline that distinguishes “official” pre-built pipelines maintained by the PhenoTypic development team from user-created custom pipelines. It serves as a marker class (no additional functionality) that signals “this pipeline is validated, documented, and recommended for specific use cases in microbe colony phenotyping.”
What is PrefabPipeline?
PrefabPipeline is NOT an operation ABC and does NOT inherit from BaseOperation. Instead, it’s a subclass of ImagePipeline that:
Is a marker class: Inherits all ImagePipeline functionality unchanged; no new methods.
Indicates official status: Subclasses of PrefabPipeline are pre-built, validated pipelines with documented performance, parameter settings, and recommended use cases.
Enables classification: Code can distinguish official pipelines (isinstance(obj, PrefabPipeline)) from user-defined pipelines for documentation, discovery, or defaulting.

Provides templates: Each PrefabPipeline subclass is a complete processing workflow (enhancement, detection, refinement, measurement) ready to use out of the box.
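The marker-class pattern itself is plain Python. The toy sketch below (with illustrative class names, not the real PhenoTypic ones) shows how isinstance-based discovery separates "official" pipelines from user-defined ones:

```python
# Illustrative stand-ins for ImagePipeline / PrefabPipeline, not the real classes
class Pipeline:
    """Base pipeline with actual behavior."""
    def __init__(self):
        self.ops = []

class Prefab(Pipeline):
    """Marker subclass: adds no methods, only signals 'official, validated'."""

class UserPipeline(Pipeline):
    """A user-defined pipeline; behaviorally identical to any Pipeline."""

class OfficialOtsu(Prefab):
    """An 'official' pipeline, discoverable via isinstance checks."""

pipelines = [UserPipeline(), OfficialOtsu()]

# Discovery: separate official pipelines from user-defined ones
official = [p for p in pipelines if isinstance(p, Prefab)]
```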
Available PrefabPipeline Subclasses
The PhenoTypic team maintains several pre-built pipelines optimized for different imaging scenarios:
HeavyOtsuPipeline: Multi-layer Otsu detection with aggressive refinement and measurement. - Use case: Robust colony detection on challenging images (uneven lighting, varied sizes). - Cost: Computationally expensive; best for offline batch processing. - Includes: Gaussian blur, CLAHE, Sobel filter, Otsu detection, morphological refinement, grid alignment, multiple measurements.
HeavyWatershedPipeline: Watershed segmentation with extensive cleanup. - Use case: Closely-spaced, touching, or merged colonies. - Cost: Very expensive; suitable for small batches or deep analysis. - Includes: Enhancement, watershed detection, refinement, grid alignment, measurements.
RoundPeaksPipeline: Peak detection for well-separated, circular colonies. - Use case: Early-time-point growth, sparse or isolated colonies. - Cost: Fast; good for high-throughput screening. - Includes: Gaussian blur, round peak detection, size filtering, measurements.
GridSectionPipeline: Per-well section extraction and analysis. - Use case: Fine-grained per-well quality control and segmentation. - Cost: Moderate; depends on grid resolution. - Includes: Grid-aware section extraction, per-well measurements.
When to use PrefabPipeline vs Custom ImagePipeline
Use PrefabPipeline if: - You’re analyzing colony growth on agar plates (the intended use case). - You want an immediately usable, tested workflow without configuration. - You want to reproduce results matching published benchmarks or team documentation. - You need a baseline for custom extensions (subclass or copy and modify).
Create a custom ImagePipeline if: - Your imaging scenario is novel (unusual plate format, different organisms, special preparation). - You want to experiment with different detector/refiner/measurement combinations. - You have labeled ground truth and want to optimize parameters for your specific images. - You need pipeline extensions (custom operations not in standard library).
Using a PrefabPipeline
PrefabPipeline subclasses are used exactly like ImagePipeline:
from phenotypic import Image, GridImage
from phenotypic.prefab import HeavyOtsuPipeline

# Load image(s)
image = GridImage.from_image_path('plate.jpg', nrows=8, ncols=12)

# Instantiate and apply pipeline
pipeline = HeavyOtsuPipeline()
result = pipeline.apply(image)  # or .operate([image])

# Access results
colonies = result.objects
measurements = result.measurements
print(f"Detected: {len(colonies)} colonies")
print(f"Measurements shape: {measurements.shape}")
Customizing a PrefabPipeline
PrefabPipelines accept tunable parameters in __init__() to adapt to your images without rebuilding the pipeline structure:

from phenotypic.prefab import HeavyOtsuPipeline

# Use defaults (recommended for most cases)
pipeline1 = HeavyOtsuPipeline()

# Tune for noisier images
pipeline2 = HeavyOtsuPipeline(
    gaussian_sigma=7,           # Stronger blur
    small_object_min_size=150,  # More aggressive noise removal
    border_remover_size=2       # Remove more edge objects
)

# Parameters are typically named after the algorithm or parameter they control.
# See the pipeline docstring for available parameters and typical values.
When Parameters Fail: Creating a Custom Pipeline
If PrefabPipeline parameter tuning doesn’t solve your problem:
Analyze failures: Which step fails (detection, refinement, measurement)?
Use benchmark=True, verbose=True when instantiating the pipeline to trace execution.

Visually inspect intermediate results (detection masks, refined masks).
Create a custom pipeline:
from phenotypic import ImagePipeline
from phenotypic.enhance import GaussianBlur, CLAHE
from phenotypic.detect import CannyDetector  # Different detector
from phenotypic.refine import SmallObjectRemover, MaskFill
from phenotypic.measure import MeasureShape, MeasureColor

# Custom pipeline for your specific use case
custom = ImagePipeline()
custom.add(GaussianBlur(sigma=3))
custom.add(CLAHE())
custom.add(CannyDetector(sigma=1.5, low_threshold=0.1, high_threshold=0.4))
custom.add(SmallObjectRemover(min_size=100))
custom.add(MaskFill())
custom.add(MeasureShape())
custom.add(MeasureColor())

# Test and iterate
result = custom.operate([image])
Share successful custom pipelines: If you develop a successful custom pipeline for a new imaging scenario, consider contributing it as a PrefabPipeline subclass to the project.
Extending PrefabPipeline
To create a new official PrefabPipeline subclass:
from phenotypic.abc_ import PrefabPipeline
from phenotypic.enhance import GaussianBlur, CLAHE
from phenotypic.detect import OtsuDetector
from phenotypic.refine import SmallObjectRemover
from phenotypic.measure import MeasureShape

class MyCustomPrefabPipeline(PrefabPipeline):
    '''Brief description of when to use this pipeline.'''

    def __init__(self, param1: int = 100, param2: float = 1.5,
                 benchmark: bool = False, verbose: bool = False):
        '''Initialize with tunable parameters.'''
        ops = [
            GaussianBlur(sigma=param2),
            CLAHE(),
            OtsuDetector(),
            SmallObjectRemover(min_size=param1),
        ]
        meas = [MeasureShape()]
        super().__init__(ops=ops, meas=meas, benchmark=benchmark, verbose=verbose)
Notes
Is a marker, not an operation: PrefabPipeline does not inherit from BaseOperation. It’s a convenient subclass of ImagePipeline for classification and discovery.
Inheritance of ImagePipeline features: PrefabPipeline inherits all ImagePipeline functionality: sequential operation chaining, benchmarking, verbose logging, batch processing via .operate(), and serialization via .to_yaml() / .from_yaml().

Parameter tuning via __init__(): Most PrefabPipeline subclasses expose key algorithm parameters in __init__() (e.g., detection threshold, smoothing sigma, refinement footprint). Adjust these for your specific images before scaling to large batches.

Benchmarking for profiling: Set benchmark=True when instantiating to track execution time and memory usage per operation. Useful for identifying bottlenecks in large batch runs.

Documentation and examples: Each PrefabPipeline subclass is documented with use cases, typical parameters, performance characteristics, and example code. Check the subclass docstring for guidance.
Not for operations: Use PrefabPipeline only for complete pipelines. For individual operations (detection, enhancement, measurement), use operation ABCs directly.
Examples
Quick start: Detect colonies with HeavyOtsuPipeline
from phenotypic import GridImage
from phenotypic.prefab import HeavyOtsuPipeline

# Load a 96-well plate image
image = GridImage.from_image_path('agar_plate.jpg', nrows=8, ncols=12)

# Use the pre-built, validated pipeline
pipeline = HeavyOtsuPipeline()
result = pipeline.apply(image)

# Access results
print(f"Detected {len(result.objects)} colonies")
print(f"Measurements: {result.measurements.columns.tolist()}")
Batch processing multiple plates with a PrefabPipeline
from phenotypic import GridImage
from phenotypic.prefab import HeavyOtsuPipeline
import glob

# Load multiple plate images
image_paths = glob.glob('batch_*.jpg')
images = [GridImage.from_image_path(p, nrows=8, ncols=12) for p in image_paths]

# Create pipeline (reusable for all images)
pipeline = HeavyOtsuPipeline(benchmark=True)

# Batch process
results = pipeline.operate(images)

# Collect results
for i, result in enumerate(results):
    print(f"Image {i}: {len(result.objects)} colonies")
    print(f"Measurements shape: {result.measurements.shape}")
Customizing pipeline parameters for difficult images
from phenotypic import GridImage
from phenotypic.prefab import HeavyOtsuPipeline

image = GridImage.from_image_path('noisy_plate.jpg', nrows=8, ncols=12)

# Increase smoothing and noise removal for difficult images
pipeline = HeavyOtsuPipeline(
    gaussian_sigma=8,           # Stronger blur
    small_object_min_size=200,  # Aggressive noise removal
    border_remover_size=2       # More border filtering
)

result = pipeline.apply(image)
print(f"Robust detection: {len(result.objects)} colonies")
Comparing PrefabPipeline vs custom pipeline
from phenotypic import GridImage, ImagePipeline
from phenotypic.prefab import HeavyOtsuPipeline
from phenotypic.enhance import GaussianBlur
from phenotypic.detect import CannyDetector
from phenotypic.refine import SmallObjectRemover

image = GridImage.from_image_path('plate.jpg', nrows=8, ncols=12)

# Option 1: Use the pre-built validated pipeline
prefab = HeavyOtsuPipeline()
result1 = prefab.apply(image)

# Option 2: Create a custom pipeline for comparison
custom = ImagePipeline()
custom.add(GaussianBlur(sigma=2))
custom.add(CannyDetector(sigma=1.5, low_threshold=0.1, high_threshold=0.4))
custom.add(SmallObjectRemover(min_size=100))
result2 = custom.apply(image)

# Compare results
print(f"Prefab: {len(result1.objects)}, Custom: {len(result2.objects)}")
- Parameters:
ops (List[ImageOperation] | Dict[str, ImageOperation] | None)
meas (List[MeasureFeatures] | Dict[str, MeasureFeatures] | None)
benchmark (bool)
verbose (bool)
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- __init__(ops: List[ImageOperation] | Dict[str, ImageOperation] | None = None, meas: List[MeasureFeatures] | Dict[str, MeasureFeatures] | None = None, benchmark: bool = False, verbose: bool = False)#
This class represents a processing and measurement pipeline for Image operations and feature extraction. It initializes operational and measurement queues based on the provided lists or dictionaries.
- Parameters:
ops (List[ImageOperation] | Dict[str, ImageOperation] | None) – A dictionary where the keys are operation names (strings) and the values are ImageOperation objects responsible for performing specific Image processing tasks.
meas (List[MeasureFeatures] | Dict[str, MeasureFeatures] | None) – An optional dictionary where the keys are feature names (strings) and the values are MeasureFeatures objects responsible for extracting specific features.
benchmark (bool) – A flag indicating whether to track execution times for operations and measurements. Defaults to False.
verbose (bool) – A flag indicating whether to print progress information when benchmark mode is on. Defaults to False.
- apply(image: Image, inplace: bool = False, reset: bool = True) GridImage | Image#
Processes the given Image by applying the pipeline's queued operations. The operations are maintained in a queue and executed sequentially when applied to the Image.
- Parameters:
image (Image) – The input Image to be processed. The type Image refers to an instance of the Image object to which transformations are applied.
inplace (bool, optional) – A flag indicating whether to apply the transformations directly on the provided Image (True) or create a copy of the Image before performing transformations (False). Defaults to False.
reset (bool) – Whether to reset the image before applying the pipeline
- Return type:
GridImage | Image
- apply_and_measure(image: Image, inplace: bool = False, reset: bool = True, include_metadata: bool = True) pd.DataFrame#
Applies processing to the given image and measures the results.
This function first applies a processing method to the supplied image, adjusting it based on the given parameters. After processing, the resulting image is measured, and a DataFrame containing the measurement data is returned.
- Parameters:
image (Image) – The image to process and measure.
inplace (bool) – Whether to modify the original image directly or work on a copy. Default is False.
reset (bool) – Whether to reset any previous processing on the image before applying the current method. Default is True.
include_metadata (bool) – Whether to include metadata in the measurement results. Default is True.
- Returns:
A DataFrame containing measurement data for the processed image.
- Return type:
pd.DataFrame
- benchmark_results() pandas.DataFrame#
Returns a table of execution times for operations and measurements.
This method should be called after applying the pipeline on an image to get the execution times of the different processes.
- Returns:
A DataFrame containing execution times for each operation and measurement.
- Return type:
pd.DataFrame
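The per-step timing that benchmark mode implies can be sketched with the standard library alone. This is a hedged illustration of the pattern (the step names and functions below are invented for the example), not phenotypic's internal implementation:

```python
import time

def run_with_timings(steps, data):
    """Apply named steps in order, recording each step's wall-clock duration."""
    timings = {}
    for name, fn in steps.items():
        start = time.perf_counter()
        data = fn(data)
        timings[name] = time.perf_counter() - start
    return data, timings

# Toy two-step "pipeline": scale values, then threshold them.
steps = {
    "blur": lambda xs: [v * 0.5 for v in xs],
    "threshold": lambda xs: [v > 0.2 for v in xs],
}
result, timings = run_with_timings(steps, [0.1, 0.6, 0.9])
```

A real benchmark table would simply tabulate the `timings` dict, one row per operation, which matches the DataFrame this method returns.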
- classmethod from_json(json_data: str | Path) SerializablePipeline#
Deserialize a pipeline from JSON format.
This method reconstructs a pipeline from a JSON string or file, restoring all operations, measurements, and configuration flags. Classes are imported from the phenotypic namespace and instantiated with their saved parameters.
- Parameters:
json_data (str | Path) – Either a JSON string or a path to a JSON file.
- Returns:
A new pipeline instance with the loaded configuration.
- Return type:
SerializablePipeline
- Raises:
ValueError – If the JSON is invalid or cannot be parsed.
ImportError – If a required operation or measurement class cannot be imported.
AttributeError – If a class cannot be found in the phenotypic namespace.
Example
Deserialize a pipeline from JSON format
>>> from phenotypic import ImagePipeline
>>>
>>> # Load from file
>>> pipe = ImagePipeline.from_json('my_pipeline.json')
>>>
>>> # Load from string
>>> json_str = '{"ops": {...}, "meas": {...}}'
>>> pipe = ImagePipeline.from_json(json_str)
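The name-to-class resolution step described above (classes imported from the phenotypic namespace and instantiated with their saved parameters) follows a standard Python pattern. A hedged, self-contained sketch, using the stdlib `json` module as a stand-in namespace rather than phenotypic itself:

```python
import importlib

def resolve_and_build(namespace, class_name, params):
    """Look up class_name in the namespace module, instantiate with params."""
    module = importlib.import_module(namespace)
    cls = getattr(module, class_name)  # raises AttributeError if not found
    return cls(**params)

# Stand-in example: resolve "JSONDecoder" inside the "json" module.
decoder = resolve_and_build("json", "JSONDecoder", {"strict": True})
```

A missing class surfaces as AttributeError and a missing module as ImportError, which mirrors the exceptions this method documents.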
- measure(image: Image, include_metadata=True) pd.DataFrame#
Measures properties of a given image and optionally includes metadata. The method performs measurements using a set of predefined measurement operations. If benchmarking is enabled, the execution time of each measurement is recorded. When verbose mode is active, detailed logging of the measurement process is displayed. A progress bar is used to track progress if the tqdm library is available.
- Parameters:
image (Image) – The image to measure.
include_metadata (bool) – Whether to include metadata in the measurement results. Defaults to True.
- Returns:
A DataFrame containing the results of all performed measurements combined on the same index.
- Return type:
pd.DataFrame
- Raises:
Exception – An exception is raised if a measurement operation fails while being applied to the image.
- set_meas(measurements: List[MeasureFeatures] | Dict[str, MeasureFeatures])#
Sets the measurements to be used for further computation. The input can be either a list of MeasureFeatures objects or a dictionary with string keys and MeasureFeatures objects as values.
The method processes the given input to construct a dictionary mapping measurement names to MeasureFeatures instances. If a list is passed, unique class names of the MeasureFeatures instances in the list are used as keys.
- Parameters:
measurements (List[MeasureFeatures] | Dict[str, MeasureFeatures]) – A collection of measurement features either as a list of MeasureFeatures objects, where class names are used as keys for dictionary creation, or as a dictionary where keys are predefined strings and values are MeasureFeatures objects.
- Raises:
TypeError – If the measurements argument is neither a list nor a dictionary.
- set_ops(ops: List[ImageOperation] | Dict[str, ImageOperation])#
Sets the operations to be performed. The operations can be passed as either a list of ImageOperation instances or a dictionary mapping operation names to ImageOperation instances. This method ensures that each operation in the list has a unique name. Raises a TypeError if the input is neither a list nor a dictionary.
- Parameters:
ops (List[ImageOperation] | Dict[str, ImageOperation]) – A list of ImageOperation objects or a dictionary where keys are operation names and values are ImageOperation objects.
- Raises:
TypeError – If the input is not a list or a dictionary.
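The list-to-dict normalization that set_ops and set_meas share (class names become keys, each name kept unique) can be sketched in plain Python. This is an illustration of the documented convention, not phenotypic's actual internals; in particular, the numeric-suffix disambiguation is an assumption:

```python
def normalize_ops(ops):
    """Normalize a list of operation objects into a name -> object dict."""
    if isinstance(ops, dict):
        return dict(ops)
    if isinstance(ops, list):
        named = {}
        for op in ops:
            name = type(op).__name__
            # Keep keys unique when the same class appears more than once.
            suffix, candidate = 1, name
            while candidate in named:
                suffix += 1
                candidate = f"{name}_{suffix}"
            named[candidate] = op
        return named
    raise TypeError("ops must be a list or a dict")

# Dummy operation classes standing in for ImageOperation subclasses.
class Blur: pass
class Detect: pass

ops = normalize_ops([Blur(), Detect(), Blur()])
```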
- to_json(filepath: str | Path | None = None) str#
Serialize the pipeline configuration to JSON format.
This method captures the pipeline’s operations, measurements, and configuration flags. It excludes internal state (attributes starting with ‘_’) and pandas DataFrames to keep the serialization clean and focused on reproducible configuration.
- Parameters:
filepath (str | Path | None) – Optional path to save the JSON. If None, returns JSON string. Can be a string or Path object.
- Returns:
JSON string representation of the pipeline configuration.
- Return type:
str
Example
Serialize a pipeline to JSON format
>>> from phenotypic import ImagePipeline
>>> from phenotypic.detect import OtsuDetector
>>> from phenotypic.measure import MeasureShape
>>>
>>> pipe = ImagePipeline(ops=[OtsuDetector()], meas=[MeasureShape()])
>>> json_str = pipe.to_json()
>>> pipe.to_json('my_pipeline.json')  # Save to file
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
image (Image | None)
show (bool)
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.
- class phenotypic.abc_.ThresholdDetector[source]#
Bases:
ObjectDetector, ABC
Marker ABC for threshold-based colony detection strategies.
ThresholdDetector specializes ObjectDetector for algorithms that detect colonies by converting grayscale intensity to a binary mask via thresholding. Unlike edge-based (Canny) or peak-based (RoundPeaks) approaches, thresholding works by partitioning intensity space: pixels above a threshold value become foreground (colonies), pixels below become background.
Why threshold-based detection?
Thresholding is ideal when:
Clear intensity separation: Colonies have distinctly different intensity than background (common on high-contrast agar plates or with good lighting).
Simplicity and speed: Single-pass algorithms (no iterative edge tracking or distance computation).
Robustness to morphology: Works equally well on round and irregular colonies (unlike peak-based approaches that assume circular shapes).
Well-defined boundary: Sharp transitions between foreground and background (less effective on blurry or faded colonies).
Thresholding strategies implemented in PhenoTypic
Otsu’s method: Finds threshold that minimizes within-class variance. Automatic, global, works for most balanced foreground/background histograms.
Li’s method: Minimizes Kullback-Leibler divergence. Good for dark foreground on bright background.
Yen’s method: Maximizes Yen’s object variance criterion. Good for sharply defined objects.
Triangle method: Connects histogram extrema. Works well for non-overlapping bimodal distributions.
Isodata/Iterative selection: Iteratively refines threshold based on class means. Robust but slower.
Mean/Minimum methods: Simple heuristic thresholds (average or minimum intensity). Fast, useful for baseline or preprocessing.
Local/Adaptive thresholding: Applies threshold per neighborhood instead of globally. Handles uneven illumination on agar.
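As an illustration of the first strategy above, Otsu's threshold can be computed directly from an image histogram with NumPy. This is a self-contained sketch of the algorithm itself, independent of phenotypic's detector classes:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the cutoff that maximizes between-class variance
    (equivalently, minimizes within-class variance)."""
    hist, bin_edges = np.histogram(gray, bins=256)
    hist = hist.astype(float)
    bin_mids = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(hist)                    # cumulative background weight
    w1 = hist.sum() - w0                    # remaining foreground weight
    cum = np.cumsum(hist * bin_mids)
    mu0 = cum / np.where(w0 == 0, 1, w0)    # background mean intensity
    mu1 = (cum[-1] - cum) / np.where(w1 == 0, 1, w1)  # foreground mean
    between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
    return bin_mids[np.argmax(between)]

# Synthetic bimodal data: dark background pixels and bright colony pixels.
data = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
t = otsu_threshold(data)
```

`gray > t` then cleanly separates the two modes, which is exactly the partitioning step the detectors in this module perform.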
When to subclass ThresholdDetector vs ObjectDetector directly
Subclass ThresholdDetector if:
Your algorithm produces objmask and objmap via thresholding (any strategy).
You want to signal intent: “this detector groups with other thresholding methods.”
You may add shared utility methods later (e.g., post-processing filters).
You value categorization for discovery and code organization.
Subclass ObjectDetector directly if:
Your algorithm uses edge detection (Canny), peak finding, watershed, or morphological operations (not thresholding).
Your approach doesn’t fit the threshold → binary mask → label pattern.
Typical workflow: enhance → threshold → label → refine
Most ThresholdDetector implementations follow this pipeline:
Read enhanced grayscale: enh = image.enh_gray[:] (preprocessed for contrast and noise suppression).
Compute threshold: Use the chosen strategy (Otsu, Li, Yen, etc.) to find the optimal threshold value from the histogram.
Create binary mask: mask = enh > threshold or mask = enh >= threshold (test both if edge pixels are ambiguous).
Post-process (optional): Remove small noise, clear borders, and apply morphological cleanup to improve mask quality.
Label connected components: Use scipy.ndimage.label() to assign unique integer IDs to each colony (objmap).
Set both outputs: image.objmask = mask, image.objmap = labeled_map.
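The threshold → mask → label core of this workflow can be sketched with plain NumPy/SciPy standing in for phenotypic's Image accessors (the "enhanced grayscale" array here is synthetic, and the mean is used as a stand-in for a real threshold strategy):

```python
import numpy as np
from scipy import ndimage

# Synthetic enhanced-grayscale plate: two bright square "colonies".
enh = np.zeros((10, 10))
enh[1:4, 1:4] = 0.9
enh[6:9, 6:9] = 0.8

threshold = enh.mean()               # stand-in for Otsu/Li/Yen/etc.
mask = enh > threshold               # binary mask (objmask analogue)
labeled, num = ndimage.label(mask)   # integer object map (objmap analogue)
```

A ThresholdDetector subclass would end by assigning `mask` and `labeled` to `image.objmask` and `image.objmap` respectively.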
Parameter tuning guidance
Threshold-based detectors typically expose parameters that affect detection quality:
Threshold value: For manual methods (Mean, Minimum), directly controls the intensity cutoff. Higher values → fewer, larger colonies; lower → more, noisier.
Block size (local methods): Size of neighborhood for adaptive threshold. Larger blocks → smoother mask but may miss small colonies; smaller blocks → more detail but noise-prone.
Post-processing parameters: ignore_zeros (skip pure black pixels in threshold computation), ignore_borders (remove edge-touching objects), min_size (filter objects below a pixel count).
Comparison with other detection strategies
Edge-based (CannyDetector): Finds intensity gradients (colony boundaries). Better for faint or merged colonies; requires gradient-based preprocessing.
Peak-based (RoundPeaksDetector): Assumes round peaks; grows from maxima. Excellent for well-separated round colonies; fails on irregular shapes.
Threshold-based (this class): Direct intensity partitioning. Robust, fast, works for any shape; requires good intensity separation.
Common pitfalls and remedies
Over-segmentation (too many small objects): Use ignore_zeros=True to skip dark pixels, apply morphological opening, or use an ObjectRefiner with remove_small_objects(min_size=...).
Under-segmentation (merged colonies): Use local thresholding, morphological closing, or watershed post-processing.
False positives at edges: Use ignore_borders=True or clear_border() in post-processing.
Uneven illumination: Apply enhancement (contrast stretching, illumination correction) before detection, or use local thresholding.
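The remove-small-objects remedy for over-segmentation can be sketched with NumPy/SciPy. A hedged illustration: `remove_small` below is a hypothetical helper written for this example, not phenotypic's ObjectRefiner:

```python
import numpy as np
from scipy import ndimage

def remove_small(labeled, min_size):
    """Drop every labeled component with fewer than min_size pixels."""
    sizes = np.bincount(labeled.ravel())   # pixel count per label
    keep = sizes >= min_size
    keep[0] = False                        # background stays background
    cleaned = np.where(keep[labeled], labeled, 0)
    # Relabel so the surviving IDs are consecutive again.
    relabeled, _ = ndimage.label(cleaned > 0)
    return relabeled

mask = np.zeros((8, 8), dtype=int)
mask[0:3, 0:3] = 1    # 9-pixel object: kept at min_size=4
mask[6, 6] = 1        # 1-pixel speck: removed
labeled, _ = ndimage.label(mask)
cleaned = remove_small(labeled, min_size=4)
```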
Example implementations
See concrete subclasses for reference patterns:
OtsuDetector: Global automatic thresholding via Otsu’s variance minimization.
LiDetector, YenDetector, TriangleDetector: Alternative global strategies from scikit-image.filters.
MeanDetector, MinimumDetector: Simple heuristic thresholds.
Interface specification
Subclasses of ThresholdDetector must:
Inherit from ThresholdDetector (which provides ObjectDetector’s interface).
Implement _operate(image: Image) -> Image as a static method.
Within _operate():
Read image.enh_gray[:] (and optionally image.rgb[:], image.gray[:]).
Compute the threshold (automatically or from a parameter).
Generate the binary mask via comparison: mask = enh > threshold.
Label connected components: labeled, _ = ndimage.label(mask).
Set both outputs: image.objmask = mask, image.objmap = labeled.
Return the modified image.
Add the class to phenotypic.detect.__init__.py exports for public discovery.
Notes
This is a marker ABC with no additional methods. It exists to categorize threshold-based detectors in the class hierarchy and enable flexible discovery and code organization.
Examples
Detect colonies using Otsu’s automatic threshold
from phenotypic import Image
from phenotypic.detect import OtsuDetector

# Load a plate image
plate = Image.from_image_path("agar_plate.jpg")

# Apply Otsu threshold detection
detector = OtsuDetector(ignore_zeros=True, ignore_borders=True)
detected = detector.apply(plate)

# Access results
mask = detected.objmask[:]    # Binary mask
objmap = detected.objmap[:]   # Labeled map
num_colonies = objmap.max()
print(f"Detected {num_colonies} colonies")

# Iterate over colonies
for colony in detected.objects:
    print(f"Colony {colony.label}: area={colony.area} px")
Compare different threshold strategies
from phenotypic import Image
from phenotypic.detect import (
    OtsuDetector, LiDetector, YenDetector, TriangleDetector
)

plate = Image.from_image_path("agar_plate.jpg")

# Test multiple threshold strategies
detectors = {
    "Otsu": OtsuDetector(),
    "Li": LiDetector(),
    "Yen": YenDetector(),
    "Triangle": TriangleDetector(),
}

for name, detector in detectors.items():
    result = detector.apply(plate)
    num = result.objmap[:].max()
    print(f"{name}: detected {num} colonies")
Build a pipeline with thresholding and refinement
from phenotypic import Image, ImagePipeline
from phenotypic.enhance import ContrastEnhancer
from phenotypic.detect import OtsuDetector
from phenotypic.refine import RemoveSmallObjectsRefiner

# Create pipeline
pipeline = ImagePipeline()
pipeline.add(ContrastEnhancer(factor=1.5))            # Boost contrast
pipeline.add(OtsuDetector(ignore_zeros=True))         # Threshold
pipeline.add(RemoveSmallObjectsRefiner(min_size=50))  # Cleanup

# Process image
plate = Image.from_image_path("agar_plate.jpg")
result = pipeline.operate([plate])[0]
print(f"Final colonies: {result.objmap[:].max()}")
- __del__()#
Automatically stop tracemalloc when the object is deleted.
- __getstate__()#
Prepare the object for pickling by disposing of any widgets.
This ensures that UI components (which may contain unpickleable objects like input functions or thread locks) are cleaned up before serialization.
Note
This method modifies the object state by calling dispose_widgets(). Any active widgets will be detached from the object.
- apply(image, inplace=False)#
Binarizes the given image’s grayscale using the Yen threshold method.
This method modifies the input image by applying a binary mask to its enhanced grayscale (enh_gray). The binarization threshold is automatically determined using Yen’s method. The resulting binary mask is stored in the image’s objmask attribute.
- widget(image: Image | None = None, show: bool = False) Widget#
Return (and optionally display) the root widget.
- Parameters:
image (Image | None)
show (bool)
- Returns:
The root widget.
- Return type:
ipywidgets.Widget
- Raises:
ImportError – If ipywidgets or IPython are not installed.