notebook: interactive analysis#
One need for bioimage analysts is to interactively perform analysis on images. This interaction could be manual parameter tuning, such as adjusting thresholds, or human-in-the-loop analysis performed by clicking on specific regions of an image.
napari makes such interactive analyses straightforward because it integrates tightly with Python and the Scientific Python ecosystem, including tools like numpy and scikit-image.
Setup#
# this cell is required to run these notebooks on Binder. Make sure that you also have a desktop tab open.
import os
if 'BINDER_SERVICE_HOST' in os.environ:
    os.environ['DISPLAY'] = ':1.0'
We start by importing `napari` and our `nbscreenshot` utility, and instantiating an empty viewer.
import napari
from napari.utils import nbscreenshot
# Create an empty viewer
viewer = napari.Viewer()
There are multiple ways of reading image data into napari: drag-and-drop, File > Open, or programmatic reading.
For example, we could explicitly load a 3D image using the `tifffile` library and then use the `add_image()` method of our existing `Viewer` object named `viewer`. This approach allows you to load data into napari even if a Reader plugin doesn’t exist for that file format!
import napari
from tifffile import imread
# load the image data using tifffile
nuclei = imread('data/nuclei.tif')
viewer.add_image(nuclei)
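Alternatively, when a suitable reader plugin is installed, the same file could be opened programmatically through the plugin machinery with `viewer.open()`. A minimal sketch, reusing the hypothetical 'data/nuclei.tif' path from the example above:
# open the file through napari's reader plugins (the same route used by drag-and-drop);
# a list of the newly added layers is returned
layers = viewer.open('data/nuclei.tif')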
However, here, for simplicity, we will use the `cells3d` dataset provided by scikit-image.
from skimage.data import cells3d
image_data = cells3d() # shape (60, 2, 256, 256)
membranes = image_data[:, 0, :, :]
nuclei = image_data[:, 1, :, :]
Next, we will compute a maximum intensity projection of the nuclei data, reducing it from 3D to 2D, before adding the image to the viewer.
print('Shape of nuclei array: ', nuclei.shape)
# calculate max projection using numpy method
nuclei_mip = nuclei.max(axis=0)
print('Shape of max projection of nuclei: ', nuclei_mip.shape)
Shape of nuclei array: (60, 256, 256)
Shape of max projection of nuclei: (256, 256)
viewer.add_image(nuclei_mip)
<Image layer 'nuclei_mip' at 0x7fef22b5ff90>
nbscreenshot(viewer)
Visualizing image filtering results#
One common task in image processing is image filtering, which can be used to denoise an image or to detect edges and other features.
We can use napari to visualize the results of some of the image filters that come with the scikit-image library.
# Import scikit-image's filtering module
from skimage import filters
viewer.add_image(filters.sobel_h(nuclei_mip), name='Horizontal Sobel')
viewer.add_image(filters.sobel_v(nuclei_mip), name='Vertical Sobel')
viewer.add_image(filters.roberts(nuclei_mip), name='Roberts')
viewer.add_image(filters.prewitt(nuclei_mip), name='Prewitt')
viewer.add_image(filters.scharr(nuclei_mip), name='Scharr')
<Image layer 'Scharr' at 0x7fef18f07e90>
nbscreenshot(viewer)
# Remove all filter layers
for layer in viewer.layers[1:]:
    viewer.layers.remove(layer)
Interactive segmentation#
Let’s now perform an interactive segmentation of the nuclei using processing utilities from `scikit-image`.
from skimage import morphology
from skimage import feature
from skimage import measure
from skimage import segmentation
from skimage import util
from scipy import ndimage
import numpy as np
First, let’s try to separate background from foreground using a threshold. Here we’ll use an automatically calculated threshold (Li’s method).
foreground = nuclei_mip >= filters.threshold_li(nuclei_mip)
viewer.add_labels(foreground, name='foreground')
<Labels layer 'foreground' at 0x7fef1808a950>
nbscreenshot(viewer)
Notice the debris located outside the nuclei and some of the holes located inside the nuclei. We will remove the debris by filtering out small objects, and fill the holes using a hole-filling algorithm. We can update the data of the layer in-place.
foreground_processed = morphology.remove_small_holes(foreground, 60)
foreground_processed = morphology.remove_small_objects(foreground_processed, min_size=50)
viewer.layers['foreground'].data = foreground_processed
nbscreenshot(viewer)
We will now convert this binary mask into an instance segmentation where each nucleus is assigned a unique label.
We will do this using a marker-controlled watershed approach. The first step in this procedure is to calculate a distance transform on the binary mask as follows.
distance = ndimage.distance_transform_edt(foreground_processed)
viewer.add_image(distance)
<Image layer 'distance' at 0x7fef12fce250>
nbscreenshot(viewer)
We’ll actually want to smooth the distance transform to avoid over-segmentation artifacts. We can do this on the layer data in-place.
smoothed_distance = filters.gaussian(distance, 10)
viewer.layers['distance'].data = smoothed_distance
nbscreenshot(viewer)
Now we can try to identify the centers of each of the nuclei by finding peaks of the distance transform.
peak_local_max = feature.peak_local_max(
smoothed_distance,
min_distance=7,
exclude_border=False
)
viewer.add_points(peak_local_max, name='peaks', size=5, face_color='red');
nbscreenshot(viewer)
We can now remove any of the points that don’t correspond to nuclei centers, or add any new ones using the GUI.
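The same kind of curation can also be done programmatically by assigning a new array to the points layer's data. A minimal sketch, assuming (purely for illustration) that the first detected peak is spurious and should be dropped:
peaks_layer = viewer.layers['peaks']
# removing a row from the (N, 2) coordinate array updates the display immediately
peaks_layer.data = np.delete(peaks_layer.data, 0, axis=0)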
Based on those peaks, we can now seed the watershed algorithm, which will find the nuclei boundaries.
markers = util.label_points(
    viewer.layers['peaks'].data,
    output_shape=viewer.layers['nuclei_mip'].data.shape
)
nuclei_segmentation = segmentation.watershed(
-smoothed_distance,
markers,
mask=foreground_processed
)
viewer.add_labels(nuclei_segmentation)
<Labels layer 'nuclei_segmentation' at 0x7fef209c4090>
nbscreenshot(viewer)
We can now save our segmentation using the layer’s builtin save method.
viewer.layers['nuclei_segmentation'].save('nuclei-automated-segmentation.tif')
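If no writer plugin is available in your environment, a simple fallback is to write the underlying array directly with tifffile. This is a sketch of that alternative, casting to an unsigned 16-bit type that most TIFF readers handle (safe here because we have far fewer than 65535 labels):
from tifffile import imwrite
# the layer data is a plain numpy label array, so it can be written like any other array
imwrite('nuclei-automated-segmentation.tif', nuclei_segmentation.astype(np.uint16))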
Interactive thresholding with a custom GUI element#
Interactivity can be greatly enhanced by custom GUI elements like sliders and push buttons, custom mouse functions, or custom keybindings (keyboard shortcuts). napari can easily be extended with these features, and `magicgui`, a companion library maintained by the napari team, allows users to extend the GUI without writing any GUI code.
We’ll now explore adding such interactivity to napari.
# Remove all processed layers
for layer in viewer.layers[1:]:
    viewer.layers.remove(layer)
# Import magicgui
from magicgui import magicgui
from napari.types import ImageData, LabelsData
@magicgui(auto_call=True,
          percentile={"widget_type": "IntSlider", "min": 0, "max": 100})
def threshold(image: ImageData, percentile: int = 50) -> LabelsData:
    data_min = np.min(image)
    data_max = np.max(image)
    return image > data_min + percentile / 100 * (data_max - data_min)
viewer.window.add_dock_widget(threshold, area="right")
nbscreenshot(viewer)
Adding a custom keybinding to the viewer for processing foreground data#
@viewer.bind_key('Shift-P')
def process_foreground(viewer):
    data = viewer.layers['threshold result'].data
    # cast the labels data to a boolean mask before the morphological clean-up
    data_processed = morphology.remove_small_holes(data.astype(bool), 60)
    data_processed = morphology.remove_small_objects(data_processed, min_size=50)
    viewer.layers['threshold result'].data = data_processed
nbscreenshot(viewer)
# Add an empty labels layer with the same shape as our maximum intensity projection
viewer.add_labels(np.zeros(nuclei_mip.shape, dtype=int), name='nuclei segmentation')
# Bind another keybinding to complete segmentation
@viewer.bind_key('Shift-S')
def complete_segmentation(viewer):
    foreground = viewer.layers['threshold result'].data
    distance = ndimage.distance_transform_edt(foreground)
    smoothed_distance = filters.gaussian(distance, 10)
    peak_local_max = feature.peak_local_max(
        smoothed_distance,
        min_distance=7,
        exclude_border=False
    )
    markers = util.label_points(peak_local_max, output_shape=foreground.shape)
    nuclei_segmentation = segmentation.watershed(
        -smoothed_distance,
        markers,
        mask=foreground
    )
    viewer.layers['nuclei segmentation'].data = nuclei_segmentation
nbscreenshot(viewer)
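Custom mouse functions, mentioned at the start of this section, follow a similar pattern to keybindings. Below is a minimal sketch of a hypothetical callback (not part of the workflow above) that prints the label value under the cursor whenever the segmentation layer is clicked:
labels_layer = viewer.layers['nuclei segmentation']

@labels_layer.mouse_drag_callbacks.append
def report_clicked_label(layer, event):
    # look up the layer value at the cursor position (given in world coordinates)
    value = layer.get_value(event.position, world=True)
    print(f'Clicked label: {value}')
Attaching the callback to the layer, rather than to `viewer.mouse_drag_callbacks`, means it only fires while that layer is active.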
Conclusions#
We’ve now seen how to interactively perform analyses by adding data to the napari viewer, and editing it as we moved through an analysis workflow. We’ve also seen how to extend the viewer with custom GUI functionality and keybindings, making analyses even more interactive!