phenotypic.GridImage#
- class phenotypic.GridImage(arr: ndarray | Image | PathLike | Path | str | None = None, name: str | None = None, grid_finder: GridFinder | None = None, nrows: int = 8, ncols: int = 12, bit_depth: Literal[8, 16] | None = None, illuminant: str | None = 'D65', gamma_encoding: Literal['sRGB'] | None = 'sRGB')[source]#
Bases: ImageGridHandler
A specialized Image object that supports grid-based processing and overlay visualization.
This class extends ImageGridHandler to provide an intuitive interface for analyzing arrayed samples on agar plates or other gridded microbe cultures. It combines complete image processing capabilities with grid-aware operations, enabling well-level detection, measurement, and visualization with grid overlays. This is useful for high-throughput phenotyping workflows where colonies are arranged in regular arrays (e.g., 96-well or 384-well plates).
The class automatically manages grid detection and alignment, supports grid-based slicing to extract individual well images, and provides overlay visualizations with gridlines, well labels, and measurements aligned to the detected grid structure.
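The grid-based slicing described above can be sketched with plain NumPy (a conceptual illustration only, not library code; the plate dimensions are invented):

```python
import numpy as np

# Conceptual sketch: after grid detection, grid-based slicing divides a
# plate image into nrows x ncols individual well sub-images.
plate = np.random.rand(800, 1200)  # stand-in for a plate scan
nrows, ncols = 8, 12               # 96-well layout

# Split into 8 horizontal bands, then each band into 12 well images
wells = [np.hsplit(band, ncols) for band in np.vsplit(plate, nrows)]
print(len(wells), len(wells[0]), wells[0][0].shape)  # 8 12 (100, 100)
```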
- Parameters:
- grid_finder#
Object responsible for detecting and optimizing the grid layout. If None, an AutoGridFinder is created with specified nrows/ncols.
- Type:
Optional[GridFinder]
- __init__(arr: ndarray | Image | PathLike | Path | str | None = None, name: str | None = None, grid_finder: GridFinder | None = None, nrows: int = 8, ncols: int = 12, bit_depth: Literal[8, 16] | None = None, illuminant: str | None = 'D65', gamma_encoding: Literal['sRGB'] | None = 'sRGB')[source]#
Initialize a GridImage with grid-based processing capabilities.
Creates a new GridImage instance with support for grid detection, well-level analysis, and grid-aligned visualization. Inherits all image processing capabilities from the parent hierarchy while adding grid-specific features.
- Parameters:
arr (np.ndarray | Image | PathLike | Path | str | None) – Initial image data. Can be a NumPy array (2-D grayscale or 3-D RGB), an Image instance, a file path string, or None for an empty image. Defaults to None.
name (str | None) – Human-readable name for the image. If None, uses the image UUID. Defaults to None.
grid_finder (Optional[GridFinder]) – Custom grid detection algorithm. If None, an AutoGridFinder is instantiated with the specified nrows and ncols. Defaults to None.
nrows (int) – Number of rows in the grid structure. Used only if grid_finder is None. Typical values: 8 (96-well), 16 (384-well), 32 (1536-well). Defaults to 8.
ncols (int) – Number of columns in the grid structure. Used only if grid_finder is None. Typical values: 12 (96-well), 24 (384-well), 48 (1536-well). Defaults to 12.
bit_depth (Literal[8, 16] | None) – Bit depth of the image (8 or 16 bits). If None, automatically inferred from arr dtype. Defaults to None.
illuminant (str | None) – Reference illuminant for color calculations. ‘D65’ (standard daylight) or ‘D50’ (imaging illuminant). Defaults to ‘D65’.
gamma_encoding (Literal["sRGB"] | None) – Gamma encoding for color correction. ‘sRGB’ for gamma-corrected images, None for linear RGB. Defaults to ‘sRGB’.
- Raises:
ValueError – If illuminant is not ‘D65’ or ‘D50’.
ValueError – If gamma_encoding is not ‘sRGB’ or None.
TypeError – If arr is provided but is not a valid image type.
Examples
Create from a plate image file
from phenotypic import GridImage
# Load plate image with 96-well grid (8 rows x 12 cols)
grid_img = GridImage('plate_scan.jpg', nrows=8, ncols=12)
grid_img.show_overlay(show_gridlines=True)
Create with custom grid finder
from phenotypic import GridImage
from phenotypic.grid import AutoGridFinder
finder = AutoGridFinder(nrows=16, ncols=24)  # 384-well plate
grid_img = GridImage('plate_384.jpg', grid_finder=finder)
print(grid_img.nrows, grid_img.ncols)  # Output: 16 24
- __eq__(other: Image) bool#
Compares the current object with another object for equality.
This method checks if the current object’s attributes are equal to another object’s attributes. Equality is determined by verifying that the numerical arrays (rgb, gray, enh_gray, objmap) are element-wise identical.
Note
Only checks core image data, not other attributes such as metadata.
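The equality rule above can be illustrated with a small NumPy sketch (a hypothetical helper, not the actual implementation):

```python
import numpy as np

# Hypothetical sketch of the equality semantics: two images compare equal
# when their core arrays (rgb, gray, enh_gray, objmap) are element-wise
# identical; metadata is deliberately ignored.
CORE = ("rgb", "gray", "enh_gray", "objmap")

def images_equal(a: dict, b: dict) -> bool:
    return all(np.array_equal(a[k], b[k]) for k in CORE)

img_a = {k: np.zeros((4, 4)) for k in CORE}
img_b = {k: np.zeros((4, 4)) for k in CORE}
print(images_equal(img_a, img_b))  # True
img_b["gray"][0, 0] = 1.0
print(images_equal(img_a, img_b))  # False
```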
- __getitem__(key) Image#
Returns a copy of the image at the specified slices as a regular Image object.
- Returns:
A copy of the image at the slices indicated
- Return type:
Image
- __setitem__(key, other_image)#
Sets an item in the object with a given key and Image object. Ensures that the Image being set matches the expected shape and type, and updates internal properties accordingly.
- Parameters:
key (Any) – The array slices for accessing the elements of the image.
other_image (ImageHandler) – The other image to be set, which must match the shape of the existing elements accessed by the key and conform to the expected schema.
- Raises:
ValueError – If the shape of the value does not match the shape of the existing elements being accessed.
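The shape check that __setitem__ performs can be sketched as follows (illustrative only; the real method also validates the image schema):

```python
import numpy as np

# Illustrative shape validation, mirroring the ValueError described above:
# the replacement region must match the shape of the slice it overwrites.
def set_region(target: np.ndarray, key, patch: np.ndarray) -> None:
    if target[key].shape != patch.shape:
        raise ValueError("replacement shape does not match the sliced region")
    target[key] = patch

canvas = np.zeros((8, 8))
set_region(canvas, (slice(0, 2), slice(0, 2)), np.ones((2, 2)))
print(canvas[:2, :2].sum())  # 4.0
```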
- property bit_depth: int#
Get the bit depth of the image.
The bit depth determines the number of bits used to represent each pixel value. Common values are 8 (0-255 range) and 16 (0-65535 range).
- Returns:
- The bit depth value (8 or 16) stored in protected metadata,
or None if not yet set.
- Return type:
int | None
- clear() None#
Reset all image data to empty state.
Note
bit_depth is retained. To change the bit depth, make a new Image object.
- Return type:
None
- property color: ColorAccessor#
Access all color space representations through a unified interface.
This property provides access to the ColorAccessor object, which groups all color space transformations and representations including:
XYZ: CIE XYZ color space
XYZ_D65: CIE XYZ under D65 illuminant
Lab: CIE L*a*b* perceptually uniform color space
xy: CIE xy chromaticity coordinates
hsv: HSV (Hue, Saturation, Value) color space
- Returns:
Unified accessor for all color space representations.
- Return type:
ColorAccessor
Examples
Access color spaces
>>> img = Image.imread('sample.jpg')
>>> xyz_data = img.color.XYZ[:]
>>> lab_data = img.color.Lab[:]
>>> hue = img.color.hsv[..., 0]  # hue is the first channel in the array
- copy()#
Creates a copy of the current Image instance, excluding the UUID.
Note
The new instance is only informationally a copy; the UUID of the new instance is different.
- Returns:
A copy of the current Image instance.
- Return type:
Image
- property enh_gray: EnhancedGrayscale#
Returns the image's enhanced grayscale accessor. Preprocessing steps can be applied to this component to improve detection performance.
The enhanced gray is a copy of the image's gray form that can be modified to improve detection performance. The original gray data should be left intact in order to preserve image information integrity for measurements.
- Returns:
A mutable container that stores a copy of the image’s gray form
- Return type:
EnhancedGrayscale
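The copy-on-enhance contract described above can be illustrated with NumPy (a sketch, not the accessor's implementation):

```python
import numpy as np

# Sketch of the contract: enh_gray starts as a copy of gray, so applying
# enhancement (here, a simple contrast stretch) leaves gray untouched for
# later measurements.
gray = np.linspace(0.0, 1.0, 16).reshape(4, 4)
enh_gray = gray.copy()
enh_gray = np.clip(enh_gray * 1.5, 0.0, 1.0)  # illustrative enhancement
print(gray.max(), enh_gray[0, 0])  # 1.0 0.0
```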
- property gray: Grayscale#
The image's grayscale representation. The array form is converted into a gray form since some algorithms only handle 2-D data.
Note
gray elements are not directly mutable in order to preserve image information integrity
Change gray elements by changing the image being represented with Image.set_image()
- Returns:
An immutable container for the image gray that can be accessed like a numpy array, but has extra methods to streamline development.
- Return type:
Grayscale
See Also:
ImageMatrix
- property grid: GridAccessor#
Returns the GridAccessor object for grid-related operations.
- Returns:
Provides access to Grid-related operations.
- Return type:
GridAccessor
See Also
GridAccessor
- property grid_finder: GridFinder#
Get the GridFinder object responsible for detecting and aligning the grid.
The GridFinder determines the positions of grid lines (rows and columns) in the image, enabling well-level detection and measurement. The finder is initialized during construction and can be customized via the grid_finder setter.
- Returns:
The grid finding/alignment algorithm currently in use for this image.
- Return type:
GridFinder
Examples
Access and inspect grid finder
from phenotypic import GridImage
from phenotypic.grid import AutoGridFinder
grid_img = GridImage('plate.jpg')
finder = grid_img.grid_finder
print(type(finder))  # GridFinder instance
See also
GridFinder: Base class for grid finding algorithms.
- classmethod imread(filepath: PathLike, rawpy_params: dict | None = None, **kwargs) Image#
imread is a class method responsible for reading an image file from the specified path and performing necessary preprocessing based on the file format and additional parameters. The method supports a variety of image file types including common formats (e.g., JPEG, PNG) as well as raw sensor data. It uses the scikit-image library for loading standard images and rawpy for processing raw image files. This method also handles additional configurations for raw image preprocessing via rawpy parameters, such as white balance, gamma correction, and demosaic algorithm.
- Parameters:
filepath (PathLike) – Path to the image file to be read. It can be any valid file path-like object (e.g., str, pathlib.Path).
rawpy_params (dict | None) – Optional dictionary of parameters for processing raw image files when using rawpy. Supports options like white balance settings, demosaic algorithm, gamma correction, and others. Defaults to None.
**kwargs – Arbitrary keyword arguments to be passed for additional configurations specific to the Image instantiation.
- Returns:
- An instance of the Image class containing the processed image array and any
additional metadata.
- Return type:
Image
- Raises:
UnsupportedFileTypeError – If the file type of the provided filepath is not supported by the method, either due to its extension not being recognized or due to the absence of required libraries like rawpy.
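The dispatch imread performs can be sketched as follows; the extension sets are illustrative, and ValueError stands in for the library's UnsupportedFileTypeError:

```python
from pathlib import Path

# Hedged sketch of imread's dispatch: standard formats go to scikit-image,
# raw sensor formats to rawpy. Extension lists here are illustrative only.
STANDARD = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}
RAW = {".nef", ".cr2", ".arw", ".dng"}

def choose_loader(filepath) -> str:
    ext = Path(filepath).suffix.lower()
    if ext in STANDARD:
        return "skimage"
    if ext in RAW:
        return "rawpy"
    raise ValueError(f"unsupported file type: {ext}")

print(choose_loader("plate_scan.jpg"))  # skimage
```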
- classmethod load_hdf5(filename, image_name) Image#
Load an ImageHandler instance from an HDF5 file at the default hdf5 location
- Return type:
Image
- classmethod load_pickle(filename: str) Image#
Load an image from a pickle file.
Deserializes image data and metadata that were previously saved with save2pickle(). Restores all image components including RGB, grayscale, enhanced grayscale, object map, and metadata.
- Parameters:
filename (str | PathLike) – Path to the pickle file to read.
- Returns:
A new Image instance with all data and metadata restored from the pickle file.
- Return type:
Image
- Raises:
FileNotFoundError – If the specified pickle file does not exist.
pickle.UnpicklingError – If the file is not a valid pickle file or is corrupted.
Notes
Pickle files must be created with save2pickle() to ensure compatibility.
Enhanced gray and object map are reset and reconstructed from saved data.
Metadata (protected and public) is fully restored.
Examples
Load from pickle
>>> loaded = Image.load_pickle('image.pkl')
>>> print(loaded.shape)
- property metadata: MetadataAccessor#
- property name: str#
Returns the name of the image. If no name is set, the name will be the uuid of the image.
- property ncols: int#
Gets the number of columns in the grid.
This property retrieves the total number of columns in the grid by accessing the corresponding attribute of the underlying grid instance. It provides a read-only interface to the ncols value.
- Returns:
The number of columns in the grid.
- Return type:
int
- property nrows: int#
Gets the number of rows in the grid.
This property retrieves the total number of rows in the grid by accessing the corresponding attribute of the underlying grid instance. It provides a read-only interface to the nrows value.
- Returns:
The number of rows in the grid.
- Return type:
int
- property objects: ObjectsAccessor#
Accessor for performing operations on detected objects in the image.
Provides access to individual or grouped objects detected in the image, enabling measurement calculations, filtering, and object-specific analyses. Objects are identified through the object map (objmap) component which stores integer labels for each detected object.
- Returns:
- An accessor instance that manages object-specific operations
and measurements.
- Return type:
ObjectsAccessor
- Raises:
NoObjectsError – If no objects are present in the image. This occurs when num_objects == 0, indicating that either no object detection has been performed yet, or the detection found no objects. Apply an ObjectDetector first to identify and label objects.
Examples
Measure object properties
>>> img = Image.imread('sample.jpg')
>>> detector = ObjectDetector()
>>> detector.detect(img)
>>> obj_accessor = img.objects
>>> measurements = img.objects.measure.area()
- property objmap: ObjectMap#
Returns the ObjectMap accessor; the object map is a mutable integer matrix that identifies the different objects in an image to be analyzed. Changes to elements of the object_map sync to the object_mask.
The object_map is stored as a compressed sparse column matrix in the backend. This saves memory at the cost of added computational overhead when converting between sparse and dense matrices.
Note
Has accessor methods to get sparse representations of the object map that can streamline measurement calculations.
- Returns:
A mutable integer matrix that identifies the different objects in an image to be analyzed.
- Return type:
ObjectMap
See Also:
ObjectMap
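The memory trade-off described above can be demonstrated with SciPy (assuming scipy is available; this mirrors the described storage scheme, not the library's internals):

```python
import numpy as np
from scipy import sparse

# A labeled object map is mostly zeros, so compressed sparse column (CSC)
# storage keeps only the nonzero entries, trading memory for the cost of
# converting between sparse and dense forms.
objmap = np.zeros((100, 100), dtype=int)
objmap[10:20, 10:20] = 1  # object 1
objmap[50:60, 70:80] = 2  # object 2

sparse_map = sparse.csc_matrix(objmap)
dense_again = sparse_map.toarray()
print(sparse_map.nnz)  # 200 stored entries instead of 10000
```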
- property objmask: ObjectMask#
Returns the ObjectMask Accessor; The object mask is a mutable binary representation of the objects in an image to be analyzed. Changing elements of the mask will reset object_map labeling.
Note
- If the image has not been processed by a detector, the target for analysis is the entire image itself. Accessing the object_mask in this case returns a 2-D array filled entirely with the value 1 that is the same shape as the gray.
- Changing elements of the mask will trigger relabeling of objects in the object_map.
- Returns:
A mutable binary representation of the objects in an image to be analyzed.
- Return type:
ObjectMask
See Also:
ObjectMask
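The default behavior noted above, where an unprocessed image is treated as one whole-image target, can be sketched as:

```python
import numpy as np

# Sketch of the documented default: with no detector applied (zero objects),
# the mask covers the entire image -- all ones, same shape as the gray form.
gray = np.random.rand(6, 8)
num_objects = 0  # no detection has been performed yet

if num_objects == 0:
    objmask = np.ones(gray.shape, dtype=np.uint8)
print(objmask.shape, int(objmask.min()))  # (6, 8) 1
```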
- property props: list[RegionProperties]#
Fetches the properties of the whole image.
Calculates region properties for the entire image using the gray representation. The labeled image is generated as a full array with values of 1, and the intensity image corresponds to the _data.gray attribute of the object. Cache is disabled in this configuration.
- Returns:
A list of properties for the entire provided image.
- Return type:
list[skimage.measure._regionprops.RegionProperties]
Property Details
(Excerpt from the skimage.measure.regionprops documentation on available properties.)
Read more at skimage.measure.regionprops or the scikit-image documentation.
- area: float
Area of the region i.e. number of pixels of the region scaled by pixel-area.
- area_bbox: float
Area of the bounding box i.e. number of pixels of bounding box scaled by pixel-area.
- area_convex: float
Area of the convex hull image, which is the smallest convex polygon that encloses the region.
- area_filled: float
Area of the region with all the holes filled in.
- axis_major_length: float
The length of the major axis of the ellipse that has the same normalized second central moments as the region.
- axis_minor_length: float
The length of the minor axis of the ellipse that has the same normalized second central moments as the region.
- bbox: tuple
Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).
- centroid: array
Centroid coordinate tuple (row, col).
- centroid_local: array
Centroid coordinate tuple (row, col), relative to region bounding box.
- centroid_weighted: array
Centroid coordinate tuple (row, col) weighted with intensity image.
- centroid_weighted_local: array
Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.
- coords_scaled(K, 2): ndarray
Coordinate list (row, col) of the region scaled by spacing.
- coords(K, 2): ndarray
Coordinate list (row, col) of the region.
- eccentricity: float
Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.
- equivalent_diameter_area: float
The diameter of a circle with the same area as the region.
- euler_number: int
Euler characteristic of the set of non-zero pixels. Computed as number of connected components subtracted by number of holes (arr.ndim connectivity). In 3D, number of connected components plus number of holes subtracted by number of tunnels.
- extent: float
Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (rows * cols)
- feret_diameter_max: float
Maximum Feret’s diameter computed as the longest distance between points around a region’s convex hull contour as determined by find_contours. [5]
- image(H, J): ndarray
Sliced binary region image which has the same size as bounding box.
- image_convex(H, J): ndarray
Binary convex hull image which has the same size as bounding box.
- image_filled(H, J): ndarray
Binary region image with filled holes which has the same size as bounding box.
- image_intensity: ndarray
Image inside region bounding box.
- inertia_tensor: ndarray
Inertia tensor of the region for the rotation around its mass.
- inertia_tensor_eigvals: tuple
The eigenvalues of the inertia tensor in decreasing order.
- intensity_max: float
Value with the greatest intensity in the region.
- intensity_mean: float
Value with the mean intensity in the region.
- intensity_min: float
Value with the least intensity in the region.
- intensity_std: float
Standard deviation of the intensity in the region.
- label: int
The label in the labeled arr image.
- moments(3, 3): ndarray
Spatial moments up to 3rd order:
m_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
- moments_central(3, 3): ndarray
Central moments (translation invariant) up to 3rd order:
mu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s centroid.
- moments_hu: tuple
Hu moments (translation, scale, and rotation invariant).
- moments_normalized(3, 3): ndarray
Normalized moments (translation and scale invariant) up to 3rd order:
nu_ij = mu_ij / m_00^[(i+j)/2 + 1]
where m_00 is the zeroth spatial moment.
- moments_weighted(3, 3): ndarray
Spatial moments of intensity image up to 3rd order:
wm_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
- moments_weighted_central(3, 3): ndarray
Central moments (translation invariant) of intensity image up to 3rd order:
wmu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s weighted centroid.
- moments_weighted_hu: tuple
Hu moments (translation, scale and rotation invariant) of intensity image.
- moments_weighted_normalized(3, 3): ndarray
Normalized moments (translation and scale invariant) of intensity image up to 3rd order:
wnu_ij = wmu_ij / wm_00^[(i+j)/2 + 1]
where wm_00 is the zeroth spatial moment (intensity-weighted area).
- num_pixels: int
Number of foreground pixels.
- orientation: float
Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise.
- perimeter: float
Perimeter of object which approximates the contour as a line through the centers of border pixels using a 4-connectivity.
- perimeter_crofton: float
Perimeter of object approximated by the Crofton formula in 4 directions.
- slice: tuple of slices
A slice to extract the object from the source image.
- solidity: float
Ratio of pixels in the region to pixels of the convex hull image.
- reset() Type[Image]#
Resets the internal state of the object and returns an updated instance.
This method resets the state of enhanced gray and object map components maintained by the object. It ensures that the object is reset to its original state while maintaining its type integrity. Upon execution, the instance of the calling object itself is returned.
- Returns:
The instance of the object after resetting its internal state.
- Return type:
Type[Image]
- property rgb: ImageRGB#
Returns the ImageRGB accessor; the rgb component represents the image's multichannel information
Note
rgb/gray element data is synced
change image shape by changing the image being represented with Image.set_image()
Raises an error if the arr image has no rgb form
- Returns:
A class that can be accessed like a numpy array, but has extra methods to streamline development, or None if not set
- Return type:
ImageRGB
- Raises:
NoArrayError – If no multichannel image data is set as arr.
Example
Image.rgb
from phenotypic import Image
from phenotypic.data import load_colony
image = Image(load_colony())
# get the rgb data
arr = image.rgb[:]
print(type(arr))
# set the rgb data
# the shape of the new rgb must be the same shape as the original rgb
image.rgb[:] = arr
# without the bracket indexing the accessor is returned instead
print(image.rgb)
See Also:
ImageArray
- rotate(angle_of_rotation: int, mode: str = 'constant', cval=0, order=0, preserve_range=True) None#
Rotates various data attributes of the object by a specified angle.
The method applies rotation transformations to the image data. Data that falls outside the borders is clipped.
- Parameters:
angle_of_rotation (int) – The angle, in degrees, by which to rotate the data attributes. Positive values indicate counterclockwise rotation.
mode (str) – Mode parameter determining how borders are handled during the rotation. Default is ‘constant’.
cval – Constant value to fill edges in ‘constant’ mode. Default is 0.
order (int) – The order of the spline interpolation for rotating images. Must be an integer in the range [0, 5]. Default is 0 for nearest-neighbor interpolation.
preserve_range (bool) – Whether to keep the original input range of values after performing the rotation. Default is True.
- Returns:
None
- Return type:
None
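The parameter semantics can be illustrated with skimage.transform.rotate, whose options mirror those listed above (that it is the actual backend is an assumption):

```python
import numpy as np
from skimage.transform import rotate

# Illustrating the listed parameters with scikit-image's rotate; a positive
# angle rotates counterclockwise, and 'constant' mode fills borders with cval.
img = np.zeros((10, 10))
img[0, 0] = 1.0

out = rotate(img, angle=90, mode='constant', cval=0, order=0,
             preserve_range=True)  # nearest-neighbor, range preserved
print(out.shape)  # (10, 10)
```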
- save2hdf5(filename, compression='gzip', compression_opts=4, overwrite=False)#
Save the image to an HDF5 file with all data and metadata.
Stores the complete image data (RGB, gray, enhanced gray, object map) and metadata (protected and public) to an HDF5 file. Images are organized under /phenotypic/images/{image_name}/ structure. If the file does not exist, it is created. If it exists, the image is appended or overwritten based on the overwrite flag.
- Parameters:
filename (str | PathLike) – Path to the HDF5 file (.h5 extension recommended). Will be created if it doesn’t exist.
compression (str, optional) – Compression filter to apply to datasets. Options: ‘gzip’ (recommended), ‘szip’, or None for no compression. Defaults to ‘gzip’.
compression_opts (int, optional) – Compression level for ‘gzip’ (1-9, where 1=fastest, 9=best compression). For ‘szip’ and None, this parameter is ignored. Defaults to 4 (balanced compression/speed).
overwrite (bool, optional) – If True, overwrites existing image with the same name in the file. If False, raises an error if image already exists. Defaults to False.
- Raises:
UserWarning – If the PhenoTypic version in the file does not match the current package version, indicating potential compatibility issues.
ValueError – If file is in SWMR (single-write multiple-read) mode and a new group needs to be created (cannot create in SWMR mode).
Notes
Large image arrays are stored as chunked datasets for memory efficiency.
Protected and public metadata are stored in separate HDF5 groups.
Version information is recorded to track HDF5 file compatibility.
All numeric data types are preserved when storing.
Examples
Save to HDF5
>>> img = Image.imread('photo.jpg')
>>> img.save2hdf5('output.h5')
>>> img.save2hdf5('output.h5', compression='szip')
- save2pickle(filename: str) None#
Save the image to a pickle file for fast serialization and deserialization.
Stores all image data components and metadata in Python’s pickle format, which preserves data types and structure exactly. This is the fastest serialization method but produces larger files than HDF5 and is not suitable for inter-language data exchange.
- Parameters:
filename (str | PathLike) – Path to the pickle file to write (.pkl or .pickle extension recommended).
- Return type:
None
Notes
Pickle format is Python-specific and cannot be read by other languages.
File size is typically larger than HDF5 compressed files.
Load/save is faster than HDF5 for small to medium images.
Pickle files may not be compatible across Python versions.
Examples
Save to pickle
>>> img = Image.imread('photo.jpg')
>>> img.save2pickle('image.pkl')
>>> loaded = Image.load_pickle('image.pkl')
- set_image(input_image: Image | np.ndarray) None#
Sets the image for the object by processing the provided input, which can be either a NumPy array or an instance of the Image class. If the input type is unsupported, an exception is raised to notify the user.
- Parameters:
input_image (Image | np.ndarray) – A NumPy array or an instance of the Image class representing the image to be set.
- Raises:
ValueError – If the input is not a NumPy array or an Image instance.
- Return type:
None
- property shape#
Returns the shape of the image array or gray depending on the arr format, or None if no image is set.
- show(ax: plt.Axes = None, figsize: Tuple[int, int] | None = None, **kwargs)#
Displays the image data using matplotlib.
This method renders either the array or gray property of the instance depending on the image format. It either shows the content on the provided matplotlib axes (ax) or creates a new figure and axes for the visualization. Additional display-related customization can be passed using keyword arguments.
- Parameters:
ax (plt.Axes, optional) – The matplotlib Axes object where the image will be displayed. If None, a new Axes object is created.
figsize (Tuple[int, int] | None, optional) – The size of the resulting figure if no ax is provided. Defaults to None.
**kwargs – Additional keyword arguments to customize the rendering behavior when showing the image.
- Returns:
- A tuple consisting of the matplotlib
Figure and Axes that contain the rendered content.
- Return type:
Tuple[plt.Figure, plt.Axes]
- show_overlay(object_label: int | None = None, show_gridlines: bool = True, show_linreg: bool = False, figsize: Tuple[int, int] = (9, 10), show_labels: bool = False, label_settings: dict | None = None, ax: matplotlib.axes.Axes | None = None) → Tuple[Figure, Axes]#
Displays an overlay of data with optional annotations, linear regression lines, and gridlines on a grid-based figure. The figure can be customized with various parameters to suit visualization needs.
- Parameters:
object_label (int | None) – Specific label of the object to highlight or focus on in the overlay. Defaults to None.
show_gridlines (bool) – Whether to include gridlines on the overlay. Defaults to True.
show_linreg (bool) – Indicate whether to display linear regression lines on the overlay. Defaults to False.
figsize (Tuple[int, int]) – Size of the figure, specified as a tuple of width and height values (in inches). Defaults to (9, 10).
show_labels (bool) – Determines whether points or objects should be annotated. Defaults to False.
label_settings (dict | None) – Additional parameters for customizing the object annotations. Defaults: size=12, color='white', facecolor='red'. Other kwargs are passed to the matplotlib.axes.Axes.text() method.
ax (plt.Axes, optional) – Axis on which to draw the overlay; can be provided externally. Defaults to None.
- Returns:
Modified figure and axis containing the rendered overlay.
- Return type:
Tuple[plt.Figure, plt.Axes]
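A minimal sketch of the kind of overlay this produces, using plain matplotlib with evenly spaced stand-in gridlines (not the library's detected grid):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Stand-in for the overlay: show the plate image and draw gridlines at
# row/column boundaries (evenly spaced here; the real grid is detected).
img = np.random.rand(80, 120)
fig, ax = plt.subplots(figsize=(9, 10))
ax.imshow(img, cmap="gray")
for r in np.linspace(0, 80, 9):    # 8 rows -> 9 horizontal boundaries
    ax.axhline(r, color="red", linewidth=0.5)
for c in np.linspace(0, 120, 13):  # 12 cols -> 13 vertical boundaries
    ax.axvline(c, color="red", linewidth=0.5)
print(len(ax.lines))  # 22
```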
- property uuid#
Returns the UUID of the image