Volume Electron Microscopy Denoiser (vEMden) is a Python library to automatically denoise stacks of images acquired with a focused ion beam scanning electron microscope (FIB-SEM).
This library does not require any pre-trained models or ground-truth images: it trains automatically and specifically on the data provided.
Two deep-learning-based denoising architectures are available:
- Noise Reconstruction & Removal Network (NRRN)
- Deep Convolutional Denoising of Low-Light Images (DenoiseNet)





1 - Installation

vEMden is pip installable, so open a terminal, preferably create a dedicated environment, and run:

$ pip install vEMden

If vEMden is already installed and you wish to upgrade it, then run:

$ pip install --upgrade vEMden
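
To check which version is currently installed, you can run:

$ pip show vEMden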





2 - Quick start and examples

vEMden can be run from the command line or from a Python script.
Here is a quick example to check that everything is working fine.

This example contains:
  • A tiny dataset of 100 images of dimensions 1024x1024, "Registered Crop 1024x1024".
  • The denoising results obtained using NRRN, "Registered Crop 1024x1024 - Denoised NRRN".
  • The denoising results obtained using DenoiseNet, "Registered Crop 1024x1024 - Denoised DenoiseNet".
  • A Python script, "Denoising.py", to reproduce the results.
  • Two command lines, equivalent to the Python script, to reproduce the results.

To reproduce the results, run:

$ python Denoising.py

Or run the following command lines:

$ cd /The/Path/To/The/Data/
$ vEMden --dataDir="./Registered Crop 1024x1024/" --batchSize=24 --nThreads=12 --net=DenoiseNet --nBlocks=23 --nFeaturesMaps=32 --nIterations=1001 --cuda &
$ vEMden --dataDir="./Registered Crop 1024x1024/" --batchSize=12 --nThreads=12 --net=NRRN --nBlocks=4 --nFeaturesMaps=32 --nIterations=501 --cuda &

Two directories named "Registered Crop 1024x1024 - Denoised NRRN" and "Registered Crop 1024x1024 - Denoised DenoiseNet" will be created automatically and will contain the denoised results.
Moreover, each experiment saves the trained deep learning model so that, if necessary, it can be reused for denoising. The saved model follows this naming convention: Denoising_Architecture_nBlocks=??_nMaps=??_Date=Year.Month.Day.Hhm.pt
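
For reference, the minimal sketch below shows what a reproduction script along the lines of Denoising.py could look like. It simply drives the vEMden command-line interface from Python via subprocess, using only the flags shown above; the actual Denoising.py shipped with the example may call vEMden's Python API directly instead.

# Minimal sketch: reproduce the two denoising runs above by invoking the
# vEMden command-line interface from Python. Only the flags documented in
# this README are used.
import subprocess

DATA_DIR = "./Registered Crop 1024x1024/"  # adjust to your local path

runs = [
    # DenoiseNet run (same flags as the first command line above)
    ["vEMden", f"--dataDir={DATA_DIR}", "--batchSize=24", "--nThreads=12",
     "--net=DenoiseNet", "--nBlocks=23", "--nFeaturesMaps=32",
     "--nIterations=1001", "--cuda"],
    # NRRN run (same flags as the second command line above)
    ["vEMden", f"--dataDir={DATA_DIR}", "--batchSize=12", "--nThreads=12",
     "--net=NRRN", "--nBlocks=4", "--nFeaturesMaps=32",
     "--nIterations=501", "--cuda"],
]

for cmd in runs:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises if vEMden exits with an error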





3 - Keywords / parameters

  • 'dataDir' (string, required): The path to the raw (not yet denoised) data.

  • 'batchSize' (int): The batch size used for training and inference (the larger the better, within the available GPU/CPU memory).

  • 'nIterations' (int): The number of weight updates to perform to train the model. At least 500 recommended for NRRN and 1000 recommended for DenoiseNet.

  • 'lr' (float): The learning rate to use to train the model.

  • 'nThreads' (int): The number of processes to run in parallel. It should not exceed the number of available cores.

  • 'cuda' (boolean): If True, the pipeline will try to use a GPU for the training and inference.

  • 'net' (string): The model/architecture to use. To train from scratch, 'NRRN' or 'DenoiseNet' is expected; otherwise, give the name of a previously trained model.

  • 'nBlocks' (int): The number of blocks to use. For NRRN it's the number of building blocks (4 recommended), for DenoiseNet it's the number of layers (20 recommended).

  • 'nFeaturesMaps' (int): The number of feature maps per 2D convolution (32 to 64 recommended).

  • 'trainingSize' (int): The maximum size of the training set. If a negative value is given, the entire dataset will be used; otherwise, 'trainingSize' pairs (DenoiseNet) / triplets (NRRN) will be randomly picked from the entire dataset.

  • 'cropSize' (int): A crop/patch of dimensions/size [cropSize,cropSize] pixels will be used for training and inference.
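
For example, a run that sets the optional training parameters explicitly could look like the command below. The values are purely illustrative, and the flag spellings for 'lr', 'trainingSize' and 'cropSize' are assumed to mirror the keyword names above, as they do for the parameters used in section 2:

$ vEMden --dataDir="./Registered Crop 1024x1024/" --batchSize=12 --nThreads=12 --net=NRRN --nBlocks=4 --nFeaturesMaps=32 --nIterations=501 --lr=0.0001 --trainingSize=50 --cropSize=512 --cuda &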





Acknowledgement: the images used in the example above as well as the ones used in the NRRN publication were acquired by Jessica Riesterer at OHSU.
The images in the example above were provided by the Oregon Pancreas Tissue Repository and the Brenden-Colson Center for Pancreatic Care.