Scissors tool is awesome

The new Segment Editor in 3D Slicer has features that the older Editor module lacks. One of these features is the awesome Scissors tool, which lets you remove or retain any portion of the data by drawing an outline either in the slice views or in 3D.

Our goal in this tutorial is to quickly (in about 2 minutes) remove the foam and other support structures that usually show up in CT scans. We will use a dataset from the Terry Collection (TC 815), courtesy of Dr. Lynn Copes. For this tutorial I am using Slicer release r25914 (4.7.0-2017-04-10).

  • Download the nifti file and drag it into Slicer.
  • Go to the Segment Editor and create a new segment.
  • Use the Threshold tool with a 410 – 4096 range to segment bone. Note that this picks up a fair amount of the scanning platform, as well as the foam.
  • If you want to visualize the result in 3D, you can click the create surface button (takes a minute).

  • Switch to the Scissors tool and, in the axial plane (red slice view), trace an outline around the scanning platform. Your scissors setting should be 'erase inside'. This will remove the scanning platform from segment #1 and assign it to the background.
  • Remaining foam particles can easily be dealt with using the Islands tool. Just choose the option 'Keep only the largest island'.
  • You should have the skull fully segmented from the scanning platform and free of foam particles.

The Segment Editor uses a different data structure than the regular Editor. If you want to save a new dataset without the platform and the foam, you need to:

  1. Convert the segments into a label map. Right-click on Segment 1 and choose 'convert visible segments to binary label map' (see the sketch after this list). Note that there is an option to directly export a surface model as well, if that's what you prefer to work with.
  2. To remove the platform, use the "Mask Scalar Volume" module: specify TC815 as your input volume and the label map from the previous step as your mask volume, and create a new output volume.
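
Both of these steps can also be scripted in the Python Interactor. Here is a minimal sketch of step 1, assuming the default node names from this tutorial; the export call lives in the Segmentations module logic of recent Slicer releases, so double check the names against your version:

```python
import slicer

segmentationNode = slicer.util.getNode('Segmentation')   # default segmentation node name
referenceVolume = slicer.util.getNode('TC815')

# Step 1: convert the visible segments into a binary label map
labelmapNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLLabelMapVolumeNode')
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode, referenceVolume)
```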

You can use the Scissors tool directly in the 3D view as well. Remember that it carves into the screen plane (removing either the traced region or everything else, depending on your setting), so orienting your specimen correctly is very important.

MorphoSource data and dealing with DICOM series in Slicer

DICOM is a standard image data format designed to facilitate data exchange between medical imaging facilities. While it is a standard, manufacturers of imaging equipment have implemented the format in different ways, which can be a major headache. Slicer has a robust DICOM import/export module. It should read almost all well-structured DICOM series. For the ones that fail, the DICOM Patcher module can be used to strip the metadata portion of the DICOM files and just load the images.

For this tutorial, I will use a DICOM series from the excellent MorphoSource repository (http://morphosource.org/Detail/MediaDetail/Show/media_id/4963). You may have to register for the site to download the data. Registration is well worth the hassle, as it provides thousands of scanned vertebrate specimens in a very nicely curated manner. I also chose MorphoSource because a user recently had issues getting the data into Slicer. The solution here is based on the response from Andras Lasso (Thanks Andras!).

Download the zip file (1.9GB) and unzip it to a folder. Open the DICOM module, click Import, navigate a few folders down to the Saimiri_20187 folder, and hit OK. You can either choose to incorporate the DICOM sequence into the DICOM database or simply link the files. If you choose the latter and later move or remove the Saimiri_20187 folder, you won't be able to reload your data. This is really a moot point, as in the end we will save the dataset as NRRD or NII, which are far more suitable for our purposes than image sequences.

Slicer will begin loading the DICOM series, and eventually will display a message saying 0 Patients/Studies/Series were imported. We can dig into the root cause of the problem by hitting Ctrl+0 to bring up the Log Messages dialog box. If you scroll down, you should see error messages labeled critical. They complain about the dataset missing the patient and study names, as well as an unknown DICOM tag. So nothing is loaded.

To fix this problem, open the DICOM Patcher module, specify the Saimiri_20187 folder as the DICOM input, and choose an output folder. Make sure Generate missing patient/study/series IDs is checked and hit Patch. This will recreate the DICOM series in the specified folder.

Once patching is completed, you can use the DICOM browser to import the patched dataset. Notice how much slower the import is now (as it should be; this is a massive 3GB dataset). Once the DICOM browser completes the import, you should have a record called Saimiri_201870000.dcm. Click on it and hit Load. Once it is loaded, you can cross-reference the metadata (namely image dimensions and spacing) from MorphoSource against the values reported by the Volumes module. They should be identical.

Remember to save your resultant dataset as Nifti or NRRD.


A worked example: Getting and Visualizing Data from DigiMorph

Learning goal: We will use Slicer to look at, segment, and measure a high-resolution microCT scan of a rat from the public specimen repository Digimorph (http://www.digimorph.org/specimens/Rattus_norvegicus/). Our goals are:

  1. Visualize Data in 3D
  2. Take Length measurements and record 3D coordinates of landmarks
  3. Save the skull and mandible as separate datasets.
  4. Export the skull and mandible as separate 3D models (to be edited).

We will learn and use the following Slicer modules: Load/Save Data, Volumes, Crop Volume, Annotations, Markups, Editor, Volume Rendering, Mask Scalar Volume, Model Maker, Models. The tutorial should take about 20 minutes to complete.

Getting data: Find the 'Download Data' link on the Digimorph website and click the CT scan. The zip file is 0.6GB in size. Unzip the files into a folder and find the metadata that describes the scan settings in the PDF file. This is a crucial piece of information if you want to get quantitative data out of your specimen.

The first question you should ask yourself is whether you have the computational resources available to look at the full-resolution dataset. Based on the description, this is a 1024×1024, 8-bit dataset with 1571 slices.

Since 8 bits equal 1 byte, when loaded into memory this dataset will take

1024 × 1024 × 1571 × 1 = 1,647,312,896 bytes, or about 1.5GB of RAM.

Now, that's just the memory needed to store the dataset itself. As a rule of thumb, you need 4-6 times more memory than the dataset you are working on. So, if you have a computer with 8GB or more RAM, you should be fine. With anything less you will struggle and will definitely need a downsampling step. We will come to the graphics card issue later.
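
If you want to repeat this arithmetic for your own scans, a few lines of Python will do it; the working-memory multiplier below is just the 4-6x rule of thumb from above:

```python
# Estimate the RAM footprint of a volumetric dataset
def dataset_memory_gb(x, y, z, bytes_per_voxel, working_factor=5):
    raw = x * y * z * bytes_per_voxel              # bytes to hold the voxels alone
    return raw / 1024.0**3, raw * working_factor / 1024.0**3

raw_gb, needed_gb = dataset_memory_gb(1024, 1024, 1571, 1)
print("{:.1f} GB raw, ~{:.0f} GB recommended to work comfortably".format(raw_gb, needed_gb))
# prints: 1.5 GB raw, ~8 GB recommended to work comfortably
```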

For this tutorial I am using v4.7.0-2017-02-28 r25734 of 3D Slicer on Windows. Your menus may look slightly different if you are using a different OS. I am assuming you have completed the general introductory tutorials for Slicer, are comfortable with the UI, and know how to navigate or find modules (HINT: use the search tool).

1. Getting Image Stacks into Slicer

Go to the folder where you unzipped the data, and drag ANY one of the TIF files into Slicer.

Click 'show options' and uncheck 'single file'. You can also enter a new description (in this case I called it Rat_scan). Loading the dataset takes about 30 seconds on my laptop with an SSD.

As you can see, the aspect ratios look weird in two planes, and the scale bar is off for those views. That's because this dataset has an anisotropic voxel size, i.e. the XY (in-plane) resolution is different from the spacing between slices, and TIFF files do not carry that information. We can fix that by telling Slicer the correct slice spacing, which is in the PDF file that came with the dataset. Go to the Volumes module and enter the correct spacing, which is 0.02734×0.02734×0.02961mm. Make sure to adjust the field of view (the little square to the left of the sliders in each slice view control bar) to bring the views back into the center.
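
If you prefer to script this fix, it is one call on the volume node; a minimal sketch, assuming the volume was named Rat_scan at load time:

```python
import slicer

volumeNode = slicer.util.getNode('Rat_scan')
# Values from the PDF metadata: 0.02734 mm in-plane, 0.02961 mm between slices
volumeNode.SetSpacing(0.02734, 0.02734, 0.02961)
```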

Slicer guesses the window/level appropriate for your data. For a variety of reasons, this typically fails for CT data in my experience, so go to the Display tab of the Volumes module, set the Window/Level editor to Manual Min/Max, and enter 0 for Min and 255 for Max, which are the extreme values in this dataset. This will fix the contrast issue in the dataset. If you prefer to change the color scheme from the traditional grayscale lookup table, you can do that here as well.
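
The same adjustment can be made from the Python Interactor; a minimal sketch using the standard scalar-volume display node methods:

```python
import slicer

displayNode = slicer.util.getNode('Rat_scan').GetDisplayNode()
displayNode.SetAutoWindowLevel(False)        # stop Slicer from guessing
displayNode.SetWindowLevelMinMax(0, 255)     # full 8-bit range of this dataset
```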

All should look good now.

Note: Slicer treats PNG/JPG/BMP stacks differently from TIFF. For those, it assumes the data is a multichannel (RGB) image. You can see the difference in the Volume Information tab of the Volumes module. The number of scalars field for the current rat dataset is 1; if it were a PNG/JPG stack, this would be 3. If you are working with such stacks, you need to use the Vector to Scalar Volume module to convert the data to a single-channel volume before you can do the following steps.
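
If you do end up with an RGB stack, the conversion can also be run as a CLI module from the Python Interactor. A sketch; the node name is hypothetical and the parameter names are my assumption from the module's interface, so double check them against your version:

```python
import slicer

rgbVolume = slicer.util.getNode('MyStack')   # hypothetical multichannel volume
scalarVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode')

# Parameter names assumed from the CLI module definition
params = {'InputVolume': rgbVolume.GetID(), 'OutputVolume': scalarVolume.GetID()}
slicer.cli.run(slicer.modules.vectortoscalarvolume, None, params, wait_for_completion=True)
```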

2. Taking a linear measurement in 2D.

Let's say we want to measure the skull length. I used the slice at offset -13.80mm (sagittal view).

Click on the Mouse Interactions icon and switch to Ruler (highlighted above). Then click two points on the sagittal view to obtain the measurement. You can move the points to fine tune their placement.

3. Collecting landmarks in 2D.

You can use the same tool to collect 3D coordinates of landmarks in Slicer. Just switch back to Fiducial in the Mouse Interactions tool. Then simply click on any point in the cross-sections to record its coordinates. One thing to keep in mind: Slicer records the coordinates in physical world coordinates (RAS coordinate system) instead of image coordinates (ijk coordinate system). There are ways to convert them back and forth, and I will cover them in another tutorial; a quick sketch follows below.
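
As a preview, the conversion boils down to multiplying the RAS position by the volume's RAS-to-IJK matrix. A sketch assuming a volume named Rat_scan and a hypothetical fiducial list named F; note it ignores any parent transforms:

```python
import vtk
import slicer

volumeNode = slicer.util.getNode('Rat_scan')
fiducialNode = slicer.util.getNode('F')              # hypothetical fiducial list name

# RAS position of the first landmark in the list
ras = [0.0, 0.0, 0.0]
fiducialNode.GetNthFiducialPosition(0, ras)

# Multiply by the volume's RAS -> IJK matrix (homogeneous coordinates)
rasToIjk = vtk.vtkMatrix4x4()
volumeNode.GetRASToIJKMatrix(rasToIjk)
ijk = rasToIjk.MultiplyPoint(ras + [1.0])[:3]
print([int(round(c)) for c in ijk])                  # voxel indices
```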

The resultant fiducial (landmark) will be saved under a list in the Markups module. Linear measurements will be recorded in the Annotations module. You can customize your list names, measurement labels, sizes, etc. using either the Markups or Annotations module, depending on whether it is a ruler measurement or a landmark. Starting with Slicer v4.5, fiducial lists are saved in fcsv format, which is a comma-separated (csv) file with some custom headers. These text files can be easily opened in Excel, and the Morpho package in R supports reading coordinates from a Slicer fcsv file into an array (Morpho::fcsv.read). I also suggest locking each landmark after digitizing it to avoid accidentally moving it after it has been placed.
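
Since fcsv is plain csv with '#' header lines, you can also read it with nothing but the Python standard library. A minimal sketch with a hypothetical file name:

```python
import csv

coords = []
with open('landmarks.fcsv') as f:                    # hypothetical file name
    for row in csv.reader(f):
        if row and not row[0].startswith('#'):       # skip the '# ...' header lines
            coords.append([float(v) for v in row[1:4]])  # columns 2-4 are x, y, z
print(coords)
```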

It is also possible to take these measurements directly on the 3D renderings. You can use either the Volume Rendering module or a surface model to achieve a 3D rendering of the specimen. We will first do a 3D rendering using the Volume Rendering module, as this gives the highest detail possible with the volumetric data.

4. Volume Rendering

Slicer supports two different methods of Volume Rendering (VR): CPU based and GPU based. CPU based will always work, whether you are on a computer without a dedicated graphics card or on a remote connection (which typically doesn't support hardware-accelerated graphics), but it is slow, excruciatingly slow. GPU is much faster, but requires a dedicated graphics card with 1GB or more video RAM. On my laptop, CPU rendering of this dataset takes 30 seconds to bring up a view, and another 30 seconds whenever I adjust the colors or opacity or move the specimen. Basically, every operation has a 30-second delay. GPU rendering, on the other hand, is real-time. I have an Nvidia Quadro K4100M with 4GB of VRAM, but any recent GeForce card from Nvidia would work equally well. I have no experience with AMD GPUs, but there is no reason for them not to work, barring driver issues.

If you have a dedicated graphics card, enable GPU rendering and set the amount of available VRAM from the Edit->Preferences menu. From now on, I will only be using the GPU option. If you can't use GPU rendering because your computer doesn't have a dedicated GPU, skip to the Downsampling your data section, do those tasks, and come back and try these steps with your downsampled data. Otherwise, these steps will take far too long with CPU rendering to keep your sanity.

Go to the Volume Rendering module, adjust the highlighted settings, and then turn on rendering for the volume by clicking the eye next to the volume name. The most important setting here is the Scalar Opacity Mapping, in which the point values determine which voxels will be turned off (opacity set to 0), which will be completely opaque (opacity set to 1.0), and everything in between. My suggestion would be to turn off anything that is not bone (for this dataset, grayscale values of 40 or less are a good approximation) and create a ramp that gives you nice detail. It is more of an art than a science. You can then adjust the scalar color mapping and material properties to give you a photorealistic rendering.
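
If you prefer to set this up from the Python Interactor, here is a minimal sketch using the Volume Rendering module logic; method names reflect recent Slicer releases, and the cutoff of 40 is the suggestion from above:

```python
import slicer

volumeNode = slicer.util.getNode('Rat_scan')
logic = slicer.modules.volumerendering.logic()
displayNode = logic.CreateDefaultVolumeRenderingNodes(volumeNode)

# Opacity ramp: transparent below ~40 (non-bone), ramping up to opaque
opacity = displayNode.GetVolumePropertyNode().GetVolumeProperty().GetScalarOpacity()
opacity.RemoveAllPoints()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(40, 0.0)
opacity.AddPoint(120, 0.3)
opacity.AddPoint(255, 1.0)

displayNode.SetVisibility(True)    # same as clicking the eye icon
```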

You can clip the specimen (to reveal the interior, for example) using the Crop settings under the Display tab. Make sure to enable the Region of Interest (ROI), and then use the colored spheres on the clip box to adjust the region to be clipped. If you want to revert, hit Fit to Volume again. We will use this ROI tool to downsample or generate subsets of our datasets in the next section.

At this point you can try collecting landmarks or taking linear measurements directly from the 3D renderer window (as we previously did from the cross-sections). Make sure you are in orthographic projection while measuring (Hint: the scale bar in 3D will disappear if the rendering is not set to orthographic).

5. Downsampling the data

Segmentation and some other image processing tasks (such as isosurfacing and model building) are computationally intensive. For expediency, we will downsample our dataset to cut down processing times.

First, adjust the ROI from the previous step so that it covers the entire specimen. Then go to the Crop Volume module (not to be confused with the inaptly named Crop setting of the Volume Rendering module). Check that the Input volume and Input ROI are correctly set, then choose Create a New Volume As and call it Rat_ds4. Set the scaling to 4.00x, which means the data will be downsampled by a factor of 4 in each dimension, for a total reduction of 4^3 = 64-fold in size. If you expand the Volume Information tab, you will see the current spacing and image dimensions, as well as what they will be after the downsampling. This step takes about a minute on my laptop. Once it is completed, you may have to adjust the Window/Level settings for the newly created volume.
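
The same downsampling can be scripted via the Crop Volume parameter node. A sketch; class and method names reflect recent Slicer releases and may differ in older ones, and the ROI node name is hypothetical:

```python
import slicer

inputVolume = slicer.util.getNode('Rat_scan')
roi = slicer.util.getNode('AnnotationROI')     # hypothetical name; use your ROI node

params = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLCropVolumeParametersNode')
params.SetInputVolumeNodeID(inputVolume.GetID())
params.SetROINodeID(roi.GetID())
params.SetSpacingScalingConst(4.0)             # 4x downsampling in each dimension
slicer.modules.cropvolume.logic().Apply(params)

cropped = slicer.mrmlScene.GetNodeByID(params.GetOutputVolumeNodeID())
cropped.SetName('Rat_ds4')
```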

Another use for this module is to crop unwanted data out of a scan without downsampling. Let's say our project calls for measuring/segmenting just the tooth row, and the rest of the data is simply a nuisance. We can adjust the ROI to retain the region we are interested in (I decided to keep the left upper and lower tooth rows along with their alveolar processes), and again use Crop Volume, this time with slightly different settings. We are no longer interested in downsampling our volume, so we will uncheck interpolated scaling, which will automatically turn off the spacing scale. Enter Left_tooth_row as the new output volume name. Compare the cropped volume's spacing to the original's in the Volume Information tab.

6. Segmentation and Image Masking

There are two modules in Slicer for interactive segmentation. The more robust and stable one is the Editor module. The Segment Editor module has more features (such as propagating segmentation through slices and interpolation) and is in active development; it is expected that the Segment Editor will eventually replace the Editor. Both tools are very similar to each other in terms of functionality. In this tutorial we will use the Editor, as it is more stable and has all the features we need.

Again, our goal is to extract the mandible and the skull as separate elements for our downstream analysis. With a bit of planning and thinking ahead, this can be accomplished effectively with the tools in the Editor module in about 3 minutes. Basically, we will segment the downsampled volume, since it will be faster, and we will use that as a guide to mask out the regions of the skull and mandible separately from the full-resolution volume. The 3D rendering of the specimen shows that the skull and mandible are only in contact at the temporomandibular joint (TMJ). So if we remove the articulation between these two structures at the TMJ, we can then use the Island tool in the Editor to convert them into separate structures with a single click.

Go to the Editor module, set the downsampled volume (Rat_ds4 in my case) as your master volume, and accept to generate the label map. Label maps are integer-valued images in which each label value defines a separate structure. First, we will use the Threshold effect to isolate the bone from everything else. A quick evaluation with the Threshold effect tells us that at a value of about 40 we get good separation between bone and non-bone. After thresholding, our first labels are 0 (background) and 1 (bone).

Now find the TMJ, which is around the 33-35mm range in the axial (red) plane. Switch to the Paint effect, make sure the Paint over option is checked (because we do want to modify the existing labels), and that the Sphere option is also checked. With the Sphere option, instead of modifying a single voxel, we get to choose a radius (in mm) that will modify a cluster of voxels. For this downsampled data, 0.2184mm (a 2-voxel radius) is sufficient. Set the tissue label to 0, so that we will be using the paint brush tool as an eraser (selected voxels will be removed from bone label 1 and assigned to background label 0). If you are familiar with the anatomy, it should take you a minute to disconnect all the voxels in the TMJ. You don't have to be entirely accurate, as this will serve as a mask to remove data, and we have other tools at our disposal to fix it. Once you confirm that the TMJ on both sides is disconnected, click the Island effect, set the minimum size to 10,000, and hit apply. In about 10 seconds, you should see the mandible and skull as separate labels. If you don't, it means there are still some connected voxels, and the mandible and skull are not separate islands. Find and remove these voxels, and repeat the Island effect.
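
For the curious, here is an illustrative numpy/scipy sketch of what the Threshold and Island effects are doing. It is not the Editor's own code, and it assumes a recent Slicer where scipy is available and slicer.util.arrayFromVolume exists (older releases used slicer.util.array):

```python
import numpy as np
from scipy import ndimage
import slicer

voxels = slicer.util.arrayFromVolume(slicer.util.getNode('Rat_ds4'))

# Threshold at ~40: bone becomes label 1, everything else background (0)
bone = (voxels >= 40).astype(np.uint8)

# Find connected components ("islands") and count those above the minimum size
components, count = ndimage.label(bone)
sizes = ndimage.sum(bone, components, range(1, count + 1))
keep = np.flatnonzero(sizes >= 10000) + 1      # component ids start at 1
print("{} islands found, {} above 10,000 voxels".format(count, len(keep)))
```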

This would be a good time to save your work.

Now, use the Dilate effect to expand the skull label (#1). Make sure the Eight Neighbors option is checked (this means it will include the diagonal pixels in the calculation), and click Apply. Next, find the Mask Scalar Volume module. The input volume will be our full-resolution dataset (in my case Rat_scan), the mask will be the label map we just created from the downsampled volume (Rat_ds4-label), and the masked volume will be a new volume called Rat_Skull. Make sure the label is set to 1 (skull) and hit Apply.
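
This masking step can also be run from the Python Interactor with slicer.cli.run. A sketch; the parameter names are my reading of the CLI module's interface, so double check them against your version:

```python
import slicer

inputVolume = slicer.util.getNode('Rat_scan')        # full-resolution dataset
labelMap = slicer.util.getNode('Rat_ds4-label')      # mask from the Editor
outputVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode')
outputVolume.SetName('Rat_Skull')

# Parameter names assumed from the CLI module definition
params = {'InputVolume': inputVolume.GetID(),
          'MaskVolume': labelMap.GetID(),
          'OutputVolume': outputVolume.GetID(),
          'Label': 1}                                # keep voxels under label 1 (skull)
slicer.cli.run(slicer.modules.maskscalarvolume, None, params, wait_for_completion=True)
```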

To segment the mandible, we should first remove the dilation from the skull label. You can either Undo, or apply the Erode effect to the skull label with the same options as the Dilate effect. Then repeat the dilation and the Mask Scalar Volume step, this time on the mandible label (#2). You should now have the skull and the mandible of the rat scan as two separate volumes. There will be large amounts of black space, especially in the mandible volume. You can use the ROI tool and Crop Volume (without interpolated scaling), as described in the downsampling section, to remove those and further reduce your dataset in size.

7. Saving

Slicer uses a markup language called Medical Reality Markup Language (.mrml) to save your 'scenes'. You can think of the scene as a web page, and everything you loaded or created in Slicer as a component of it. It may get a bit confusing, so I suggest reading the saving documentation at https://www.slicer.org/wiki/Documentation/4.6/SlicerApplication/SavingData.

For 3D data that you imported into or created in Slicer, I suggest using either NRRD or NII (Nifti) formats.
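
Individual volumes can also be saved from the Python Interactor; a minimal sketch with a hypothetical output path (the file extension selects the format):

```python
import slicer

volumeNode = slicer.util.getNode('Rat_Skull')
# Hypothetical output path; use .nrrd or .nii.gz as you prefer
slicer.util.saveNode(volumeNode, '/path/to/Rat_Skull.nrrd')
```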

Getting started with 3D Slicer

3D Slicer is an imaging suite that supports almost all the functionality a biologist would need to process volumetric datasets (CT/CBCT, MR, OCT, OPT, EM, …). Some of its key features are: reading image stacks (DICOM and others), format conversion, segmentation, image processing (filters), 3D visualization, making surface models, and measurements (distances, angles, surface areas, volumes, and recording landmark coordinates). It works on all major OSes (Windows, Mac, and Linux). More features outside of the core functionality are available through the Extension Manager. Computationally apt people can design their own functionality and extend its feature set by using the built-in Python Interactor.

A few words of caution: Slicer is research software that's under constant and active (and I do mean active, as in daily feature additions) development. As such, some of its state-of-the-art functionality can be experimental and may not be refined. Having said that, I have been using Slicer for more than 5 years, and the core modules I use (Volume Rendering, Volumes, Transforms, segmentation) are as solid as any commercial package. When it fails, it is usually due to lack of sufficient RAM.

How to get started with 3D Slicer?

Once you download the software, go to the New Users Wiki and familiarize yourself with the UI. These two tutorials are a must:

  1. https://www.slicer.org/wiki/Documentation/4.6/Training#Slicer_Welcome_Tutorial
  2. https://www.slicer.org/w/images/5/51/3DDataLoadingandVisualization_Slicer4.5_SoniaPujol.pdf

Depending on the task, there are useful videos and tutorials on YouTube. Make sure to check out the FAQ.

There is an excellent introduction to Slicer by the Open Source Paleontologist. Unfortunately, given the pace of development in Slicer, it is now quite dated. While the workflow and processing steps are still valid, the menus and modules are quite different from the version used for that tutorial, and there are now more tools which weren't available at the time.

Slicer contains some example datasets (brain MR, facial CBCT, chest CT, etc.) for you to play with right away.

In addition to these, I will be adding step-by-step tutorials for common tasks using publicly available datasets from imaging repositories such as DigiMorph and MorphoSource.

Need more help? Once you have completed the tutorials and made a reasonable effort to search the FAQ and documentation on the Slicer website, and yet still have some unanswered questions, you might get help from the community. You can browse the archives of the user list at http://slicer-users.65878.n3.nabble.com/. If you still can't find an answer, feel free to post a question at the Slicer forum (https://discourse.slicer.org/).

Your likelihood of getting answers will be much higher if you ask a well-framed question with sufficient (but not excessive) specifics about what you want to accomplish and the issues you encountered ("How can I read my CT scans into Slicer?" would be a poor question).

If you end up publishing research that you conduct with the help of Slicer, please cite it, and add your publication to the DB. As an open-source publicly funded project, this is the only way to secure its future support and development.