Inference Model
For user convenience, the MeshNet segmentation models are trained and converted to TensorFlow.js (tfjs).
While the MeshNet model has fewer parameters than a classical segmentation model such as U-Net, it can still achieve a competitive Dice score.
If you need to import your own 3D segmentation model, please make sure your model layers are compatible with tfjs layers.
If you are using a layer not supported by tfjs, try to find a workaround. For example, a Keras batchnorm5d layer will raise an issue with a tfjs model because tfjs has no batchnorm5d layer. One possible workaround is a fusion technique with Keras layers: merge the batch-normalization layer into the convolution layer, as shown in this link.
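The fusion workaround rests on a small piece of algebra: at inference time, batch normalization is a per-channel affine transform, so it can be folded into the weights and bias of the preceding layer. The sketch below demonstrates this with a plain matrix multiply standing in for a convolution (the same per-output-channel algebra applies to conv layers); all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

in_ch, out_ch = 4, 3
W = rng.standard_normal((out_ch, in_ch))  # stand-in for conv weights
b = rng.standard_normal(out_ch)

# Learned batch-norm parameters (per output channel)
gamma = rng.standard_normal(out_ch)
beta = rng.standard_normal(out_ch)
mean = rng.standard_normal(out_ch)
var = rng.random(out_ch) + 0.5
eps = 1e-5

# Fold BN into the preceding layer's weights and bias
scale = gamma / np.sqrt(var + eps)
W_fused = scale[:, None] * W          # rescale each output channel's weights
b_fused = scale * (b - mean) + beta   # fold mean/beta into the bias

x = rng.standard_normal(in_ch)
y_separate = scale * (W @ x + b - mean) + beta  # layer followed by BN
y_fused = W_fused @ x + b_fused                 # single fused layer
```

The two outputs match, so the fused model needs no batch-norm layer at all in tfjs.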
In addition to the full-volume inference option, Brainchop is also designed to accept a batch input shape such as [null, batch_D, batch_H, batch_W, 1] (e.g. [null, 38, 38, 38, 1]); the smaller the batch dimensions, the easier the inference is on browser resources.
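To feed a model with such a batch input shape, the full volume has to be tiled into sub-cubes first. A minimal sketch of that tiling, assuming non-overlapping 38³ sub-cubes and zero-padding up to a multiple of the batch size (the sizes and padding policy here are illustrative, not Brainchop's exact scheme):

```python
import numpy as np

def to_batches(volume, batch=38):
    """Zero-pad the volume to a multiple of `batch`, then split it into
    non-overlapping cubes stacked along the batch axis."""
    d, h, w = volume.shape
    pad = [(0, (-s) % batch) for s in (d, h, w)]
    v = np.pad(volume, pad)
    nd, nh, nw = (s // batch for s in v.shape)
    v = v.reshape(nd, batch, nh, batch, nw, batch)
    # Gather the cube indices first, then the within-cube axes,
    # and add a trailing channel dimension.
    return v.transpose(0, 2, 4, 1, 3, 5).reshape(-1, batch, batch, batch, 1)

volume = np.random.rand(64, 64, 64).astype(np.float32)
batches = to_batches(volume)
# 64 is padded to 76, giving 2 cubes per axis: 8 sub-cubes of (38, 38, 38, 1)
```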
After training your model on a 3D segmentation task, multiple converters to tfjs can be used from the command line, or from Python code such as:

```python
# Python sample code
from tensorflow import keras
import tensorflowjs as tfjs

# Load the saved Keras model
keras_model = keras.models.load_model('path/to/model/location')

# Convert and save the Keras model to the target directory
tfjs_target_dir = 'path/to/tfjs_target_dir'  # placeholder output path
tfjs.converters.save_keras_model(keras_model, tfjs_target_dir)
```
For more information about importing a model (e.g. Keras) into TensorFlow.js, please refer to this tutorial.
A successful conversion to tfjs results in two main files, the model JSON file and the weights file, as shown here:
- The model.json file contains the model topology and the weights manifest.
- The binary weights file (i.e. *.bin) contains the concatenated weight values.
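The relationship between the two files can be seen by inspecting model.json itself: its weights manifest lists which binary shard files hold the weight values. The sketch below constructs a minimal stand-in for a converted model.json inline (a real file holds the full layer topology and may reference several shards; the layer names and shapes here are invented for illustration):

```python
import json

# Minimal stand-in for a tfjs layers-model model.json
model_json = {
    "modelTopology": {"class_name": "Sequential", "config": {"layers": []}},
    "weightsManifest": [
        {"paths": ["group1-shard1of1.bin"],
         "weights": [{"name": "conv3d/kernel", "shape": [3, 3, 3, 1, 8],
                      "dtype": "float32"}]}
    ],
}

# List the binary shard files the weights manifest points to
shards = [p for group in model_json["weightsManifest"] for p in group["paths"]]
```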
Importing the above files can easily be done using the model browse option from the model list.

Brainchop browsing window to load custom models
In addition to the model and weights files mentioned above, the model browsing form also has the following settings:
- Labels: the labels.json file, with a schema such as:
  {"0": "background", "1": "Grey Matter", "2": "White Matter"}
- Colors: the colorLUT.json file, with a schema such as:
  {"0": "rgb(0,0,0)", "1": "rgb(0,255,0)", "2": "rgb(0,0,255)"}
- Transpose Input: transpose the 3D MRI input data axes to the best orientation for inference.
- Crop Input: to speed up inference with limited browser memory, crop the brain from the background before feeding the result to the inference model; this lowers memory use.
- Crop Padding: add padding to the cropped brain 3D image for better inference results.
- Pre-Model Masking: select a masking model for cropping the brain.
- Filter Output By Mask: a voxel-wise multiplication of the resulting output with the mask produced by the pre-model inference. This option can clean up wrongly segmented regions (e.g. skull areas), but it can also remove some properly segmented regions.
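The cropping and mask-filtering settings above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the idea, not Brainchop's actual implementation: crop the brain's bounding box (plus padding) out of the full volume before inference, then zero out any output voxels that fall outside the brain mask.

```python
import numpy as np

def crop_with_padding(volume, mask, pad=2):
    """Return the sub-volume around the mask's bounding box, grown by `pad`
    voxels per side (clamped to the volume bounds), and the slices used."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - pad, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + pad, volume.shape)
    region = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[region], region

def filter_by_mask(segmentation, mask):
    """Voxel-wise multiplication: zero out labels outside the brain mask."""
    return segmentation * (mask > 0)

volume = np.zeros((16, 16, 16), dtype=np.float32)
mask = np.zeros_like(volume, dtype=bool)
mask[4:12, 4:12, 4:12] = True        # pretend pre-model brain mask
volume[mask] = 1.0

cropped, region = crop_with_padding(volume, mask, pad=2)  # 12x12x12 sub-volume
seg = np.ones_like(volume, dtype=np.int32)  # pretend full-volume model output
clean = filter_by_mask(seg, mask)           # labels survive only inside mask
```

As the text notes, the mask filter is a blunt instrument: any correctly segmented voxel lying outside the pre-model mask is discarded along with the skull artifacts.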