phenotypic.Image#
- class phenotypic.Image(arr: np.ndarray | Image | None = None, name: str | None = None, bit_depth: Literal[8, 16] | None = None, gamma_encoding: str | None = 'sRGB', illuminant: str | None = 'D65')[source]#
Bases:
ImageIOHandler

Comprehensive image processing class with integrated data, color, and I/O management.

The Image class is the primary interface for image processing, analysis, and manipulation within the PhenoTypic framework. It combines:

- Data management (image arrays, enhanced versions, object maps)
- Color space handling (RGB, grayscale, HSV, XYZ, Lab with color corrections)
- Object detection and analysis (object masks, labels, measurements)
- File I/O and metadata management (loading, saving, metadata extraction)
- Image manipulation (rotation, slicing, copying, visualization)

Image data can be provided as:

- NumPy arrays (2-D grayscale or 3-D RGB/RGBA)
- Another Image instance (copies all data)
- Loaded from file via imread()
The class automatically manages format conversions and maintains internal consistency across multiple data representations. RGB and grayscale forms are kept synchronized, and additional representations (enhanced grayscale, object maps) support analysis workflows.
Notes
2-D input arrays are treated as grayscale; rgb form remains empty.
3-D input arrays are treated as RGB; grayscale is computed automatically.
Color space properties (gamma_encoding, illuminant, _observer) are inherited.
Object detection and measurements require an ObjectDetector first.
HSV color space support added in v0.5.0.
Examples
Create from array
import numpy as np
from phenotypic import Image

arr = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
img = Image(arr, name='sample')
img.show()
Load from file
img = Image.imread('photo.jpg')
print(img.shape)  # Image dimensions
img.save2pickle('saved.pkl')
- Parameters:
- __eq__(other: Image) bool#
Compares the current object with another object for equality.
This method checks if the current object’s attributes are equal to another object’s attributes. Equality is determined by verifying that the numerical arrays (rgb, gray, enh_gray, objmap) are element-wise identical.
Note
Only checks core image data, and not any other attributes such as metadata.
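The equality check described above reduces to element-wise comparison of the core arrays. A minimal numpy sketch of the idea (the helper function and tuple layout here are hypothetical, not the actual implementation):

```python
import numpy as np

def images_equal(a_arrays, b_arrays):
    """Element-wise equality across corresponding core arrays.

    a_arrays/b_arrays stand in for the (rgb, gray, enh_gray, objmap)
    components of two Image instances.
    """
    return all(np.array_equal(x, y) for x, y in zip(a_arrays, b_arrays))

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
gray = np.zeros((4, 4), dtype=np.uint8)

same = images_equal((rgb, gray), (rgb.copy(), gray.copy()))  # equal data
diff = images_equal((rgb, gray), (rgb + 1, gray))            # differing rgb
```

Metadata never enters the comparison, matching the note above.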
- __getitem__(key) Image#
Returns a new subimage from the current object based on the provided key. The subimage is initialized as a new instance of the same class, maintaining schema and format consistency with the original image object. This method supports 2-dimensional slicing and indexing.
Note
The subimage arrays are copied from the original image object. This means that any changes made to the subimage will not affect the original image.
Support for subimages that share memory with (and propagate changes to) the original may be added in future updates if there is demand for it.
- Parameters:
key – A slicing key or index used to extract a subset or part of the image object.
- Returns:
An instance of the Image representing the subimage corresponding to the provided key.
- Return type:
- Raises:
KeyError – If the provided key does not match the expected slicing format or dimensions.
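The copy semantics noted above can be illustrated with plain numpy: a subimage built from a copied slice does not write back to the original (how the class performs the copy internally is an assumption):

```python
import numpy as np

original = np.arange(16, dtype=np.uint8).reshape(4, 4)

# A subimage corresponds to a copied 2-D slice, like img[0:2, 0:2]
sub = original[0:2, 0:2].copy()

sub[0, 0] = 255  # modify the subimage only
# original[0, 0] is still 0: changes to the subimage do not propagate
```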
- __init__(arr: np.ndarray | Image | None = None, name: str | None = None, bit_depth: Literal[8, 16] | None = None, gamma_encoding: str | None = 'sRGB', illuminant: str | None = 'D65')[source]#
Initialize an Image instance with optional image data and color properties.
Creates a new Image with complete initialization of all data management, color space, I/O, and object handling capabilities. The image can be initialized empty or with data from a NumPy array or another Image instance.
- Parameters:
arr (np.ndarray | Image | None) –
Optional image data. Can be:

- A NumPy array of shape (height, width) for grayscale or (height, width, channels) for RGB/RGBA
- An existing Image instance to copy from
- None to create an empty image

Defaults to None.
name (str | None) – Optional human-readable name for the image. If not provided, the image UUID will be used as the name. Defaults to None.
bit_depth (Literal[8, 16] | None) – The bit depth of the image data (8 or 16 bits). If not specified and arr is provided, bit depth is automatically inferred from the array dtype. Defaults to None.
gamma_encoding (str | None) – The gamma encoding used for color correction.

- ‘sRGB’: applies sRGB gamma correction (standard display gamma)
- None: assumes linear RGB data

Only ‘sRGB’ and None are supported. Defaults to ‘sRGB’.
illuminant (str | None) – The reference illuminant for color calculations.

- ‘D65’: standard daylight illuminant (recommended)
- ‘D50’: standard illuminant for imaging

Defaults to ‘D65’.
- Raises:
ValueError – If gamma_encoding is not ‘sRGB’ or None.
ValueError – If illuminant is not ‘D65’ or ‘D50’.
TypeError – If arr is provided but is not a NumPy array or Image instance.
Examples
Create empty image
img = Image(name='empty_image')
Create from grayscale array
gray_arr = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
img = Image(gray_arr, name='grayscale_photo')
Create from RGB array
rgb_arr = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
img = Image(rgb_arr, name='color_photo', gamma_encoding='sRGB')
Copy another image
img1 = Image.imread('original.jpg')
img2 = Image(img1, name='copy_of_original')
- __setitem__(key, other_image)#
Sets an item in the object with a given key and Image object. Ensures that the Image being set matches the expected shape and type, and updates internal properties accordingly.
- Parameters:
key (Any) – The array slices for accessing the elements of the image.
other_image (ImageHandler) – The other image to be set, which must match the shape of the existing elements accessed by the key and conform to the expected schema.
- Raises:
ValueError – If the shape of the value does not match the shape of the existing elements being accessed.
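The shape validation behaves like guarded numpy slice assignment. A sketch with a hypothetical helper (not the actual implementation):

```python
import numpy as np

def set_region(target, key, value):
    """Assign `value` into `target[key]` only if the shapes match."""
    if target[key].shape != np.asarray(value).shape:
        raise ValueError("shape of value does not match the region being set")
    target[key] = value

canvas = np.zeros((4, 4), dtype=np.uint8)
patch = np.full((2, 2), 9, dtype=np.uint8)

# matching shapes: assignment succeeds
set_region(canvas, (slice(0, 2), slice(0, 2)), patch)

# a mismatched patch, e.g. np.zeros((3, 3)), would raise ValueError
```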
- property bit_depth: int#
Get the bit depth of the image.
The bit depth determines the number of bits used to represent each pixel value. Common values are 8 (0-255 range) and 16 (0-65535 range).
- Returns:
- The bit depth value (8 or 16) stored in protected metadata,
or None if not yet set.
- Return type:
int | None
- clear() None#
Reset all image data to empty state.
Note
bit_depth is retained. To change the bit depth, make a new Image object.
- Return type:
None
- property color: ColorAccessor#
Access all color space representations through a unified interface.
This property provides access to the ColorAccessor object, which groups all color space transformations and representations including:
XYZ: CIE XYZ color space
XYZ_D65: CIE XYZ under D65 illuminant
Lab: CIE L*a*b* perceptually uniform color space
xy: CIE xy chromaticity coordinates
hsv: HSV (Hue, Saturation, Value) color space
- Returns:
Unified accessor for all color space representations.
- Return type:
ColorAccessor
Examples
Access color spaces
>>> img = Image.imread('sample.jpg')
>>> xyz_data = img.color.XYZ[:]
>>> lab_data = img.color.Lab[:]
>>> hue = img.color.hsv[..., 0]  # hue is the first channel
- copy()#
Creates a copy of the current Image instance, excluding the UUID.

Note

The new instance is a copy of the data only; it is assigned a new, different UUID.
- Returns:
A copy of the current Image instance.
- Return type:
- property enh_gray: EnhancedGrayscale#
Returns the image’s enhanced grayscale accessor. Preprocessing steps can be applied to this component to improve detection performance.
The enhanced gray is a copy of the image’s gray form that can be modified and used to improve detection performance. The original gray data should be left intact in order to preserve image information integrity for measurements.
- Returns:
A mutable container that stores a copy of the image’s gray form
- Return type:
EnhancedGrayscale
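The split between gray and enh_gray follows a working-copy pattern: enhancements mutate the copy while measurements read the untouched original. A numpy sketch of that pattern (the brightness step is an arbitrary example, not a PhenoTypic operation):

```python
import numpy as np

gray = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
reference = gray.copy()  # snapshot of the original data

# enh_gray starts as a copy of gray; enhancement steps touch only the copy
enh_gray = gray.copy()
enh_gray = np.clip(enh_gray.astype(np.int16) + 50, 0, 255).astype(np.uint8)

# the original gray is preserved for measurements
assert np.array_equal(gray, reference)
```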
- property gray: Grayscale#
The image’s grayscale representation. The array form is converted into a gray form because some algorithms only handle 2-D arrays.
Note
gray elements are not directly mutable in order to preserve image information integrity
Change gray elements by changing the image being represented with Image.set_image()
- Returns:
An immutable container for the image gray that can be accessed like a numpy array, but has extra methods to streamline development.
- Return type:
Grayscale
See Also:
ImageMatrix
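When a 3-D array is supplied, grayscale is computed from RGB automatically. The exact conversion PhenoTypic uses is not documented here; the sketch below uses the luma weights from scikit-image’s rgb2gray as an assumption:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted grayscale conversion.

    The weights below match skimage.color.rgb2gray; whether PhenoTypic
    uses the same formula is an assumption.
    """
    weights = np.array([0.2125, 0.7154, 0.0721])
    return rgb[..., :3] @ weights

rgb = np.random.rand(8, 8, 3)   # float RGB in [0, 1)
gray = to_gray(rgb)             # 2-D result, same height and width
```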
- classmethod imread(filepath: PathLike, rawpy_params: dict | None = None, **kwargs) Image#
Class method that reads an image file from the specified path and performs the necessary preprocessing based on the file format and additional parameters. It supports a variety of image file types, including common formats (e.g., JPEG, PNG) as well as raw sensor data. Standard images are loaded with the scikit-image library, and raw image files are processed with rawpy. Raw image preprocessing can be configured via rawpy parameters such as white balance, gamma correction, and the demosaic algorithm.
- Parameters:
filepath (PathLike) – Path to the image file to be read. It can be any valid file path-like object (e.g., str, pathlib.Path).
rawpy_params (dict | None) – Optional dictionary of parameters for processing raw image files when using rawpy. Supports options like white balance settings, demosaic algorithm, gamma correction, and others. Defaults to None.
**kwargs – Arbitrary keyword arguments to be passed for additional configurations specific to the Image instantiation.
- Returns:
- An instance of the Image class containing the processed image array and any
additional metadata.
- Return type:
- Raises:
UnsupportedFileTypeError – If the file type of the provided filepath is not supported by the method, either due to its extension not being recognized or due to the absence of required libraries like rawpy.
- classmethod load_hdf5(filename, image_name) Image#
Load an ImageHandler instance from an HDF5 file at the default hdf5 location
- Return type:
- classmethod load_pickle(filename: str) Image#
Load an image from a pickle file.
Deserializes image data and metadata that were previously saved with save2pickle(). Restores all image components including RGB, grayscale, enhanced grayscale, object map, and metadata.
- Parameters:
filename (str | PathLike) – Path to the pickle file to read.
- Returns:
A new Image instance with all data and metadata restored from the pickle file.
- Return type:
- Raises:
FileNotFoundError – If the specified pickle file does not exist.
pickle.UnpicklingError – If the file is not a valid pickle file or is corrupted.
Notes
Pickle files must be created with save2pickle() to ensure compatibility.
Enhanced gray and object map are reset and reconstructed from saved data.
Metadata (protected and public) is fully restored.
Examples
Load from pickle
>>> loaded = Image.load_pickle('image.pkl')
>>> print(loaded.shape)
- property metadata: MetadataAccessor#
- property name: str#
Returns the name of the image. If no name is set, the name will be the uuid of the image.
- property objects: ObjectsAccessor#
Accessor for performing operations on detected objects in the image.
Provides access to individual or grouped objects detected in the image, enabling measurement calculations, filtering, and object-specific analyses. Objects are identified through the object map (objmap) component which stores integer labels for each detected object.
- Returns:
- An accessor instance that manages object-specific operations
and measurements.
- Return type:
ObjectsAccessor
- Raises:
NoObjectsError – If no objects are present in the image. This occurs when num_objects == 0, indicating that either no object detection has been performed yet, or the detection found no objects. Apply an ObjectDetector first to identify and label objects.
Examples
Measure object properties
>>> img = Image.imread('sample.jpg')
>>> detector = ObjectDetector()
>>> detector.detect(img)
>>> obj_accessor = img.objects
>>> measurements = img.objects.measure.area()
- property objmap: ObjectMap#
Returns the ObjectMap accessor. The object map is a mutable integer matrix that identifies the different objects in an image to be analyzed. Changes to elements of the object map sync to the object mask.

The object map is stored as a compressed sparse column matrix in the backend. This saves memory at the cost of added computational overhead when converting between sparse and dense matrices.
Note
Has accessor methods to get sparse representations of the object map that can streamline measurement calculations.
- Returns:
A mutable integer matrix that identifies the different objects in an image to be analyzed.
- Return type:
ObjectMap
See Also:
ObjectMap
- property objmask: ObjectMask#
Returns the ObjectMask accessor. The object mask is a mutable binary representation of the objects in an image to be analyzed. Changing elements of the mask will reset object map labeling.
Note
- If the image has not been processed by a detector, the target for analysis is the entire image itself. Accessing the object mask in this case will return a 2-D array filled entirely with the value 1 that is the same shape as the gray.
- Changing elements of the mask will trigger relabeling of objects in the object map.
- Returns:
A mutable binary representation of the objects in an image to be analyzed.
- Return type:
ObjectMask
See Also:
ObjectMask
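The map/mask relationship described above reduces to binarization: the mask is True wherever the map holds a nonzero label, and an empty map yields an all-ones mask covering the whole frame. A numpy sketch of that relationship (the sync logic itself is internal to the class):

```python
import numpy as np

# an object map with three labeled objects (0 = background)
objmap = np.array([[0, 1, 1],
                   [0, 0, 2],
                   [3, 0, 2]])

# the mask is the binarized map: True wherever any object label is present
objmask = objmap > 0

# before detection the map is empty, so the whole frame is the target
empty_map = np.zeros((3, 3), dtype=int)
default_mask = np.ones_like(empty_map) if empty_map.max() == 0 else (empty_map > 0)
```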
- property props: list[RegionProperties]#
Fetches the properties of the whole image.
Calculates region properties for the entire image using the gray representation. The labeled image is generated as a full array with values of 1, and the intensity image corresponds to the _data.gray attribute of the object. Cache is disabled in this configuration.
- Returns:
A list of properties for the entire provided image.
- Return type:
list[skimage.measure._regionprops.RegionProperties]
Property Details

(Excerpt from the skimage.measure.regionprops documentation on available properties.)

Read more at skimage.measure.regionprops or the scikit-image documentation.

- area: float
Area of the region i.e. number of pixels of the region scaled by pixel-area.
- area_bbox: float
Area of the bounding box i.e. number of pixels of bounding box scaled by pixel-area.
- area_convex: float
Area of the convex hull image, which is the smallest convex polygon that encloses the region.
- area_filled: float
Area of the region with all the holes filled in.
- axis_major_length: float
The length of the major axis of the ellipse that has the same normalized second central moments as the region.
- axis_minor_length: float
The length of the minor axis of the ellipse that has the same normalized second central moments as the region.
- bbox: tuple
Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).
- centroid: array
Centroid coordinate tuple (row, col).
- centroid_local: array
Centroid coordinate tuple (row, col), relative to region bounding box.
- centroid_weighted: array
Centroid coordinate tuple (row, col) weighted with intensity image.
- centroid_weighted_local: array
Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.
- coords_scaled(K, 2): ndarray
Coordinate list (row, col) of the region scaled by spacing.
- coords(K, 2): ndarray
Coordinate list (row, col) of the region.
- eccentricity: float
Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.
- equivalent_diameter_area: float
The diameter of a circle with the same area as the region.
- euler_number: int
Euler characteristic of the set of non-zero pixels. Computed as number of connected components subtracted by number of holes (input.ndim connectivity). In 3D, number of connected components plus number of holes subtracted by number of tunnels.
- extent: float
Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (nrows * ncols)
- feret_diameter_max: float
Maximum Feret’s diameter computed as the longest distance between points around a region’s convex hull contour as determined by find_contours. [5]
- image(H, J): ndarray
Sliced binary region image which has the same size as bounding box.
- image_convex(H, J): ndarray
Binary convex hull image which has the same size as bounding box.
- image_filled(H, J): ndarray
Binary region image with filled holes which has the same size as bounding box.
- image_intensity: ndarray
Image inside region bounding box.
- inertia_tensor: ndarray
Inertia tensor of the region for the rotation around its mass.
- inertia_tensor_eigvals: tuple
The eigenvalues of the inertia tensor in decreasing order.
- intensity_max: float
Value with the greatest intensity in the region.
- intensity_mean: float
Value with the mean intensity in the region.
- intensity_min: float
Value with the least intensity in the region.
- intensity_std: float
Standard deviation of the intensity in the region.
- label: int
The label in the labeled input image.
- moments(3, 3): ndarray
Spatial moments up to 3rd order:
m_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
- moments_central(3, 3): ndarray
Central moments (translation invariant) up to 3rd order:
mu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s centroid.
- moments_hu: tuple
Hu moments (translation, scale, and rotation invariant).
- moments_normalized(3, 3): ndarray
Normalized moments (translation and scale invariant) up to 3rd order:
nu_ij = mu_ij / m_00^[(i+j)/2 + 1]
where m_00 is the zeroth spatial moment.
- moments_weighted(3, 3): ndarray
Spatial moments of intensity image up to 3rd order:
wm_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
- moments_weighted_central(3, 3): ndarray
Central moments (translation invariant) of intensity image up to 3rd order:
wmu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s weighted centroid.
- moments_weighted_hu: tuple
Hu moments (translation, scale and rotation invariant) of intensity image.
- moments_weighted_normalized(3, 3): ndarray
Normalized moments (translation and scale invariant) of intensity image up to 3rd order:
wnu_ij = wmu_ij / wm_00^[(i+j)/2 + 1]
where wm_00 is the zeroth spatial moment (intensity-weighted area).
- num_pixels: int
Number of foreground pixels.
- orientation: float
Angle between the 0th axis (nrows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise.
- perimeter: float
Perimeter of object which approximates the contour as a line through the centers of border pixels using a 4-connectivity.
- perimeter_crofton: float
Perimeter of object approximated by the Crofton formula in 4 directions.
- slice: tuple of slices
A slice to extract the object from the source image.
- solidity: float
Ratio of pixels in the region to pixels of the convex hull image.
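The whole-image properties follow from treating the entire frame as one labeled region, as described above. Two of the listed properties (area and centroid) computed that way in plain numpy, since scikit-image is not assumed to be available here:

```python
import numpy as np

gray = np.random.rand(6, 4)

# a label image of all ones makes the entire frame a single region
labels = np.ones_like(gray, dtype=int)

area = int((labels == 1).sum())        # pixel count of the whole frame
rows, cols = np.nonzero(labels == 1)
centroid = (rows.mean(), cols.mean())  # (row, col), the image center
```

For a 6x4 image this yields an area of 24 pixels and a centroid at (2.5, 1.5).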
- reset() Type[Image]#
Resets the internal state of the object and returns an updated instance.
This method resets the state of enhanced gray and object map components maintained by the object. It ensures that the object is reset to its original state while maintaining its type integrity. Upon execution, the instance of the calling object itself is returned.
- Returns:
The instance of the object after resetting its internal state.
- Return type:
Type[Image]
- property rgb: ImageRGB#
Returns the ImageRGB accessor; the rgb form represents the image’s multichannel information.
Note
rgb/gray element data is synced
change image shape by changing the image being represented with Image.set_image()
Raises an error if the image has no rgb form
- Returns:
An accessor that can be used like a numpy array, but has extra methods to streamline development, or None if not set
- Return type:
ImageRGB
- Raises:
NoArrayError – If no multichannel image data is set as arr.
Example
Image.rgb
from phenotypic import Image
from phenotypic.data import load_colony

image = Image(load_colony())

# get the rgb data
arr = image.rgb[:]
print(type(arr))

# set the rgb data
# the shape of the new rgb must be the same shape as the original rgb
image.rgb[:] = arr

# without the bracket indexing the accessor is returned instead
print(image.rgb)
See Also:
ImageArray
- rotate(angle_of_rotation: int, mode: str = 'constant', cval=0, order=0, preserve_range=True) None#
Rotates various data attributes of the object by a specified angle.
The method applies a rotation transformation to the image data. Data that falls outside the borders is clipped.
- Parameters:
angle_of_rotation (int) – The angle, in degrees, by which to rotate the data attributes. Positive values indicate counterclockwise rotation.
mode (str) – Mode parameter determining how borders are handled during the rotation. Default is ‘constant’.
cval – Constant value to fill edges in ‘constant’ mode. Default is 0.
order (int) – The order of the spline interpolation for rotating images. Must be an integer in the range [0, 5]. Default is 0 for nearest-neighbor interpolation.
preserve_range (bool) – Whether to keep the original input range of values after performing the rotation. Default is True.
- Returns:
None
- Return type:
None
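The positive-angle counterclockwise convention can be checked for the 90-degree case with numpy alone (whether rotate() matches skimage.transform.rotate exactly for arbitrary angles is an assumption):

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])

# np.rot90 rotates counterclockwise, matching the positive-angle convention
rotated = np.rot90(img)  # [[2, 4], [1, 3]]
```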
- save2hdf5(filename, compression='gzip', compression_opts=4, overwrite=False)#
Save the image to an HDF5 file with all data and metadata.
Stores the complete image data (RGB, gray, enhanced gray, object map) and metadata (protected and public) to an HDF5 file. Images are organized under /phenotypic/images/{image_name}/ structure. If the file does not exist, it is created. If it exists, the image is appended or overwritten based on the overwrite flag.
- Parameters:
filename (str | PathLike) – Path to the HDF5 file (.h5 extension recommended). Will be created if it doesn’t exist.
compression (str, optional) – Compression filter to apply to datasets. Options: ‘gzip’ (recommended), ‘szip’, or None for no compression. Defaults to ‘gzip’.
compression_opts (int, optional) – Compression level for ‘gzip’ (1-9, where 1=fastest, 9=best compression). For ‘szip’ and None, this parameter is ignored. Defaults to 4 (balanced compression/speed).
overwrite (bool, optional) – If True, overwrites existing image with the same name in the file. If False, raises an error if image already exists. Defaults to False.
- Raises:
UserWarning – If the PhenoTypic version in the file does not match the current package version, indicating potential compatibility issues.
ValueError – If file is in SWMR (single-write multiple-read) mode and a new group needs to be created (cannot create in SWMR mode).
Notes
Large image arrays are stored as chunked datasets for memory efficiency.
Protected and public metadata are stored in separate HDF5 groups.
Version information is recorded to track HDF5 file compatibility.
All numeric data types are preserved when storing.
Examples
Save to HDF5
>>> img = Image.imread('photo.jpg')
>>> img.save2hdf5('output.h5')
>>> img.save2hdf5('output.h5', compression='szip')
- save2pickle(filename: str) None#
Save the image to a pickle file for fast serialization and deserialization.
Stores all image data components and metadata in Python’s pickle format, which preserves data types and structure exactly. This is the fastest serialization method but produces larger files than HDF5 and is not suitable for inter-language data exchange.
- Parameters:
filename (str | PathLike) – Path to the pickle file to write (.pkl or .pickle extension recommended).
- Return type:
None
Notes
Pickle format is Python-specific and cannot be read by other languages.
File size is typically larger than HDF5 compressed files.
Load/save is faster than HDF5 for small to medium images.
Pickle files may not be compatible across Python versions.
Examples
Save to pickle
>>> img = Image.imread('photo.jpg')
>>> img.save2pickle('image.pkl')
>>> loaded = Image.load_pickle('image.pkl')
- set_image(input_image: Image | np.ndarray) None#
Sets the image for the object by processing the provided input, which can be either a NumPy array or an instance of the Image class. If the input type is unsupported, an exception is raised to notify the user.
- Parameters:
input_image (Image | np.ndarray) – A NumPy array or an instance of the Image class representing the image to be set.
- Raises:
ValueError – If the input is not a NumPy array or an Image instance.
- Return type:
None
- property shape#
Returns the shape of the image array or gray (depending on the arr format), or None if no image is set.
- show(ax: plt.Axes = None, figsize: Tuple[int, int] | None = None, **kwargs)#
Displays the image data using matplotlib.
This method renders either the array or gray property of the instance depending on the image format. It either shows the content on the provided matplotlib axes (ax) or creates a new figure and axes for the visualization. Additional display-related customization can be passed using keyword arguments.
- Parameters:
ax (plt.Axes, optional) – The matplotlib Axes object where the image will be displayed. If None, a new Axes object is created.
figsize (Tuple[int, int] | None, optional) – The size of the resulting figure if no ax is provided. Defaults to None.
**kwargs – Additional keyword arguments to customize the rendering behavior when showing the image.
- Returns:
- A tuple consisting of the matplotlib
Figure and Axes that contain the rendered content.
- Return type:
Tuple[plt.Figure, plt.Axes]
- show_overlay(object_label: int | None = None, figsize: Tuple[int, int] = (10, 5), title: str | None = None, show_labels: bool = False, ax: plt.Axes = None, *, label_settings: None | dict = None, overlay_settings: None | dict = None, imshow_settings: None | dict = None)#
Displays an overlay of the provided object label and image using the specified settings.
This method combines an image and its segmentation or annotation mask overlay for visualization. The specific behavior is adjusted based on the instance’s underlying image format (e.g., whether it operates on arrays or matrices).
- Parameters:
object_label (Optional[int]) – The label of the object to overlay. If None, overlays all available objects.
figsize (Tuple[int, int]) – A tuple specifying the figure size in inches.
title (str | None) – The title of the overlay figure. If None, no title will be displayed.
show_labels (bool) – Whether to display object labels on the overlay. Defaults to False.
ax (plt.Axes) – An optional Matplotlib axes object. If provided, the overlay will be plotted on this axes. If None, a new axes object will be created.
label_settings (None | dict) – A dictionary specifying configurations for displaying object labels. If None, default settings will be used.
overlay_settings (None | dict) – A dictionary specifying configurations for the overlay appearance. If None, default settings will be used.
imshow_settings (None | dict) – A dictionary specifying configurations for the image display (e.g., color map or interpolation). If None, default settings will be used.
- Returns:
- A tuple containing the Matplotlib figure and
axes used for the overlay. This allows further customization or saving of the visualization outside this method.
- Return type:
Tuple[plt.Figure, plt.Axes]
- property uuid#
Returns the UUID of the image