Using landmarks to manually superimpose volumes

While microCT can provide excellent detail from mineralized tissue, some sort of diffusible radio-contrast agent is usually necessary to obtain detail from soft tissue. Here, I will use landmarks to approximately superimpose three scans of a mouse kidney that has been stained with phosphotungstic acid (PTA) for 48, 72 and 96h, using Fiducial Registration in 3D Slicer. We need the superimposition to evaluate when the whole kidney is fully stained. All data for this tutorial are available at https://app.box.com/s/q5qntejysg8v9dxz9l6yl8o0pqm0r8ep. Datasets are courtesy of Dr. David Beier (Seattle Children’s). In addition to Fiducial Registration, you will also become familiar with the Volume Rendering, Markups, Volumes and Transforms modules of Slicer. You should be generally familiar with the Slicer UI and how to navigate modules. If not, you can review my introductory 3D Slicer tutorial, Getting and Visualizing Data from DigiMorph.

Drag and drop all three scans into Slicer. From the Volumes module, set the Lookup Table (under Display Properties) such that the 96h volume is red, the 72h volume is green and the 48h volume is yellow. Also, adjust the slice viewers to display 96h, 72h, and 48h in the red, green and yellow viewers, respectively (Figure 1). Quickly review the datasets by looking at the cross-sections: while they are from the same specimen, the scan orientations and the diffusion of contrast are variable. Such datasets are very challenging to align automatically with image registration tools, because the image contents are very different. Instead, we will find 6 features that are common to all scans and use them to align the volumes manually into a common orientation, so that we can more readily compare stain differences.

Figure 1.

Now switch to Volume Rendering and enable rendering for the 72h volume by clicking the eye button next to it. You need to adjust the Scalar Opacity Mapping to a ramp that gets rid of the noise surrounding the kidney (a ramp from 55 to 70 seems to do the trick). Review the 3D rendering, and note that there are many bumps and holes on the surface that can be used as references. The concern is whether they are visible in the other scans as well. But first determine your six references and annotate them in the 72h volume.

Figure 2.

To do that, switch to the Markups module (blue circle in Figure 1) and create a new Markups node called F_72h (Figure 3). Then, using the fiducial tool (red circle in Figure 1), record your reference points (you can find my landmarks in the same folder as the volumes). You can adjust the size of the spherical glyphs using the Scale slider in the Markups module. If you are choosing your own landmarks, make sure the points are scattered uniformly around the sample.

Figure 3

Once you finish annotating the 72h volume, you need to switch to the Triple 3D view layout so that you can visualize the other two scans. This is a bit tricky: first enable rendering for all three volumes (don’t forget to adjust the ramps as well) in the Volume Rendering module. Then, for each of the volumes, expand the Inputs tab and adjust the settings such that 72h is in view 1, 96h is in view 2, and 48h is in view 3 (see the highlighted sections in Figure 2). Spin each volume independently and roughly align it with the orientation of the 72h volume in its 3D rendering window.

At this point, the fiducials from the 72h volume are visible in all three render windows, which is very confusing. We need to adjust the Display properties of F_72h so that it is only visible in view 1, where the 72h volume is rendered (Figure 3).

Now you are ready to annotate the remaining volumes. First create a new Markups node called F_96h to store the coordinates of those six fiducials on the 96h volume. Make sure to adjust its Display properties as well (so that its points are only visible in view 2). Using 72h as your visual guide, annotate the same set of 6 landmarks. Order is important: make sure you follow the same sequence as your reference set. Create a new fiducial node for 48h called F_48h, and repeat the procedure. In the end you should have three sets of fiducials (F_48h, F_72h, F_96h) that go with their respective volumes (48h, 72h, 96h), with an identical sequence of landmarks in each list.

Now we are ready to align them. For that, we will use the Fiducial Registration module (Figure 4). I chose F_72h as my fixed landmarks and F_96h as the moving ones. I created a new linear transform called 96h_to_72h that I will use to align my volume (keep in mind this procedure only aligns points; it is the resulting transform that we will use to align the volumes). I do not expect my specimen to change in size, so I use the rigid transform type, which has six degrees of freedom (three axes of translation + three axes of rotation). A similarity transform would have one more degree of freedom: uniform scaling (all axes scaled the same way). Repeat the same procedure for the 48h landmarks as well.

Figure 4
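If you prefer scripting, the same registration can be run from the Python Interactor. This is a minimal sketch, assuming the node names used in this tutorial; the parameter names are those of the Fiducial Registration CLI module as I know them, so double-check them in your Slicer version:

# Minimal sketch: run Fiducial Registration from the Python Interactor.
# Assumes the fiducial lists F_72h and F_96h already exist in the scene.
fixed = slicer.util.getNode('F_72h')      # fixed landmarks
moving = slicer.util.getNode('F_96h')     # moving landmarks
transform = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLLinearTransformNode', '96h_to_72h')
params = {
    'fixedLandmarks': fixed.GetID(),
    'movingLandmarks': moving.GetID(),
    'saveTransform': transform.GetID(),
    'transformType': 'Rigid',             # or 'Similarity' to allow uniform scaling
}
slicer.cli.run(slicer.modules.fiducialregistration, None, params, wait_for_completion=True)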

To superimpose the 48h and 96h volumes onto 72h, we need to apply these transforms to their volumes. Go to the Transforms module, select the active transform (e.g., 96h_to_72h), scroll down to find the 96h volume under the Transformable list, and move it under Transformed. Now switch to the 48h_to_72h transform and repeat the same step for the 48h volume (Figure 5).
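The equivalent scripted step is a one-liner per volume. A minimal sketch, again assuming the tutorial's node names:

# Minimal sketch: apply the computed transform to a volume.
volume96 = slicer.util.getNode('96h')
transform = slicer.util.getNode('96h_to_72h')
volume96.SetAndObserveTransformNodeID(transform.GetID())
# To make the transform permanent (resample the volume), harden it:
# slicer.vtkSlicerTransformLogic().hardenTransform(volume96)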

Now both volumes should be in the same orientation as 72h. You can switch to the 4-up view, set each pair of volumes as background/foreground, and set the alpha channel (opacity) to 0.5 to see them superimposed. Your final view should look something like this:

Figure 5

Importing microCT data from Scanco into Slicer

In this tutorial, I will show how to import ‘raw image’ data into Slicer directly using the Nearly Raw Raster Data (NRRD) format specification. I will be using a microCT dataset from a Scanco scanner, courtesy of Dr. Chris Percival; however, the steps are equally applicable to raw image data from any other scanner vendor. For this to work, we need to know the dimensions, data type and other metadata associated with the imaging session. These are typically provided as a log file. All files associated with this tutorial can be found here.

The log file contains all sorts of potentially useful information about the scanning session, but the things we are interested in are located in the very first section: where the image data start in the raw volume (byte offset), the XYZ dimensions (622 642 1085), the data type (short, or 16-bit), and the voxel size (0.02, 0.02, 0.02). These are the key pieces of information necessary to describe the image data correctly through a ‘detached header’ (for more information see http://teem.sourceforge.net/nrrd/format.html).

Once you download the sample .aim file, open a text editor, paste the following lines, and save the file in the same folder as the file you downloaded, with the extension .nhdr (e.g., test.nhdr).

NRRD0004
type: short
dimension: 3
sizes: 622 642 1085
spacings: 0.02 0.02 0.02
byteskip: 3195
data file: ./PERCIVAL_370.AIM
endian: big
encoding: raw

If you drag and drop the nhdr file into Slicer, you should see the sample data being loaded (this is a fairly large dataset and may take some time). If this is the first time you are working with this dataset, you are done. However, if you have additional data (e.g., landmarks) collected outside of Slicer, it is important to check whether the coordinate systems are consistent between the two programs. Dr. Percival also provided a set of landmark coordinates that were collected from this specimen using Amira. If you drag and drop the landmarks.fcsv into Slicer, you will see that the landmarks do not line up correctly.

We need to identify the coordinate system in which the original landmarks were collected, and describe it in the NHDR file. This step will involve some troubleshooting (e.g., collecting a few of the landmarks in Slicer and comparing the reported coordinates). In this case, the all-positive values of the original landmark coordinates suggested to me that the original coordinate system was likely Right-Anterior-Superior (RAS for short). I modified the header file to include this as the space tag. One other difference is that you need to remove the spacings tag and redefine it as space directions.

NRRD0004
type: short
dimension: 3
sizes: 622 642 1085
byteskip: 3195
data file: ./PERCIVAL_370.AIM
endian: big
encoding: raw
space: RAS
space directions: (0.02,0,0) (0,0.02,0) (0,0,0.02)

If you replace the content of the NHDR file with the code above, then reload your volume and render it, you should now see all the landmarks line up correctly. It is always good to familiarize yourself with the coordinate systems of the digitization program you are using. You can find a discussion of coordinate systems here.

Saving and exchanging data with Slicer

Any dataset loaded into Slicer, or derived from existing data (e.g., a segmentation), becomes part of the Slicer ‘scene’. In an active project, where you might be doing segmentation, measurements, or landmarking, the scene can get complex quickly. Saving (or rather, forgetting to save) these different components is a typical point of frustration for the end user. Here, I will use one of the nightly builds (r27390) of Slicer to demonstrate the complex Save As dialog box.

First use the Sample Data module to obtain the MRHead dataset, and visualize it in the 3D renderer using the Volume Rendering preset MR-Default. Then place some landmarks using the Fiducial tool, and finally take a 3D distance measurement. This will generate enough components to create a semi-complex scene like this:

Now, go to File->Save As to see the file save dialog box, which looks like this for my scene:

As you can see, there is a volume (MR-head.nrrd), a volume rendering property (MR-Default.vp), a markups list that contains the four landmarks (F.fcsv), and a measurement annotation file (M.acsv) that contains the 3D distance. Notice that all files except the volume have a check mark next to them. That’s because they have not been saved before. The reason MR-head does not have a check mark is that it hasn’t been changed by the user since it was loaded. So any file that was previously saved in the session and remains unchanged will not have a check mark in the Save As dialog box the next time you invoke the save command. You can change these manually. Every time you save your scene, make sure the files you care about have a check mark next to them.
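If you script your workflows, you can also save individual nodes without going through the dialog. A minimal sketch, assuming a fiducial list named F exists in the scene (the path is hypothetical):

# Save a single node to disk, bypassing the Save As dialog.
fiducials = slicer.util.getNode('F')
slicer.util.saveNode(fiducials, '/path/to/F.fcsv')  # path is an example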

A complex scene may have tens of components. If you find yourself in a situation where you need to share your scene and all of its components, or you are looking for a convenient way to move data from one computer to the next, consider trying the Medical Reality Bundle (MRB):

It is the rightmost icon (the gift-wrapped one) on the top row. As you click it, it grays out all the fields and only lets you specify the name of the MRB file and its location. In essence, an MRB is a self-contained zip file with all the data loaded into the scene. Once it is saved, you can send the mrb file to your collaborator, or move it to another computer, without having to worry about where each of those component files is located on your hard drive. You can drag’n’drop MRB files into Slicer, and it will automatically load the scene.

The downside is that you will have to unpack the MRB if you need to access the components directly (e.g., you want to read your landmark coordinates into R).
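Since an MRB is just a zip archive, you can unpack it without Slicer. A minimal sketch in Python (the file names are hypothetical):

import zipfile

# An MRB is a zip archive; extract it to reach individual components (e.g., fcsv files).
with zipfile.ZipFile('myscene.mrb') as mrb:   # hypothetical file name
    mrb.extractall('myscene_unpacked')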

NSF Advances in Biological Informatics grant to Maga lab to develop 3D morphometrics toolkit for 3D Slicer

We (along with UW Friday Harbor Labs and Duke University colleagues) were recently awarded an NSF ABI grant to develop an integrated toolkit to retrieve, visualize and analyze 3D specimens within the open-source 3D Slicer visualization and image analysis software.

With the addition of the new functionality, biologists will be able to do all image cleanup and processing, visualization, data capture and basic landmark-based shape analysis of their 3D data within 3D Slicer. The toolkit will also be able to query and retrieve imagery from the 3D specimen repository Morphosource.org.

The grant will also allow us to offer NSF-sponsored intensive short summer workshops in 3D image analytics and morphometrics at the UW Friday Harbor Labs that will focus on the toolkit.

Stay tuned for future updates!

Scissors tool is awesome

The new Segment Editor in 3D Slicer has features that the older Editor module lacks. One of these features is the awesome Scissors tool, which lets you remove or retain any portion of the data by drawing an outline either in the slice views or in 3D.

Our goal in this tutorial is to quickly (in about 2 minutes) remove the foam and other support structures that usually show up in CT scans. We will use a dataset from the Terry Collection (TC 815), courtesy of Dr. Lynn Copes. For this tutorial I am using Slicer release r25914 (4.7.0-2017-04-10).

  • Download the nifti file and drag it into Slicer.
  • Go to the Segment Editor and create a new segment.
  • Use the Threshold tool with the 410 – 4096 range to segment bone. Note that this picks up a fair amount of the scanning platform, as well as the foam.
  • If you want to visualize it, you can click Create Surface to see the segment in 3D (takes a minute).

  • Switch to the Scissors tool and, in the axial plane (red slice view), trace the scanning platform. Your Scissors setting should be ‘erase inside’. This will remove the scanning platform from segment #1 and assign it to the background.
  • The remaining foam particles can be easily dealt with using the Island tool: just choose the option ‘Keep only the largest island‘.
  • You should now have the skull fully segmented, free of the scanning platform and foam particles.

The Segment Editor uses a different data structure than the regular Editor. If you want to save a new dataset without the platform and the foam, you need to:

  1. Convert the segments into a label map. Right-click on Segment 1 and choose ‘convert visible segments to binary label map‘, or switch to the Segmentations module and scroll down to find the Export/Import models and label maps tab, where you can export the segmentation either as a label map or as a surface model. Note that there is an option to directly export a surface model as well, if that’s what you prefer to work with.
  2. To remove the platform, use the Mask Scalar Volume module: specify TC815 as your input volume and the label map from the previous step as your mask volume, and create a new output volume (a scripted version is sketched below).
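For batch work, the same masking step can be run from the Python Interactor. This is a minimal sketch assuming the node names from this tutorial (TC815, plus a hypothetical exported label map name); check the parameter names against the Mask Scalar Volume module in your Slicer version:

# Minimal sketch: run Mask Scalar Volume from the Python Interactor.
inputVolume = slicer.util.getNode('TC815')             # full-resolution scan
maskLabel = slicer.util.getNode('Segmentation-label')  # hypothetical label map name
outputVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode', 'TC815_masked')
params = {
    'InputVolume': inputVolume.GetID(),
    'MaskVolume': maskLabel.GetID(),
    'OutputVolume': outputVolume.GetID(),
    'Label': 1,  # keep voxels under label 1
}
slicer.cli.run(slicer.modules.maskscalarvolume, None, params, wait_for_completion=True)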

Edit on 9/25: There is now an extension that adds masking functionality to the Segment Editor. It is called “Segment Editor Extra Effects” and can be installed through the Extension Manager using a nightly build (it isn’t available for the latest stable, 4.6). There is a video tutorial demonstrating the functionality.

You can use the Scissors tool directly in the 3D view as well. Remember it is going to carve into the screen plane (or carve out everything else, depending on your setting), so orienting your specimen correctly is very important.

MorphoSource data and dealing with DICOM series in Slicer

DICOM is a standard image data format designed to facilitate data exchange between medical imaging facilities. While it is a standard, manufacturers of imaging equipment have implemented the format in different ways, which can be a major headache. Slicer has a robust DICOM import/export module and should read almost all well-structured DICOM series. For the ones that fail, the DICOM Patcher module can be used to strip the metadata portion of the DICOM files and just load the images.

For this tutorial, I will use a DICOM series from the excellent MorphoSource repository (http://morphosource.org/Detail/MediaDetail/Show/media_id/4963). You may have to register on the site to download the data. Registration is well worth the hassle, as the site provides thousands of scanned vertebrate specimens in a very nicely curated manner. I also chose MorphoSource because a user recently had issues getting the data into Slicer. The solution here is based on the response from Andras Lasso (Thanks Andras!).

Download the zip file (1.9GB) and unzip it to a folder. Open the DICOM module, click Import, navigate a few folders down to the Saimiri_20187 folder, and hit OK. You can either choose to incorporate the DICOM sequence into the DICOM database or simply link the files. If you choose the latter and later move or remove the Saimiri_20187 folder, you won’t be able to reload your data. This is really a moot point, as in the end we will save the dataset as NRRD or NII, which are far more suitable for our purposes than image sequences.

Slicer will begin loading the DICOM series, and eventually will display a message saying 0 Patient/Studies/Series were imported. We can dig further into the root cause of the problem by hitting CTRL+0 to bring up the Log Messages dialog box. If you scroll down, you should see error messages labeled critical. Slicer complains about the dataset missing the patient and study names, as well as an unknown DICOM tag. So nothing is loaded.

To fix this problem, open the DICOM Patcher module, specify the Saimiri_20187 folder as the DICOM input, and choose an output folder. Make sure ‘Generate missing patient/study/series IDs’ is checked and hit Patch. This will recreate the DICOM series in the specified folder.

Once patching is completed, use the DICOM browser to import the patched dataset. Notice how slow the import is now (as it should be; this is a massive 3GB dataset). Once the DICOM browser completes the import, you should have a record called Saimiri_201870000.dcm; click on it and hit Load. Once it is loaded, you can cross-reference the metadata (namely image dimensions and spacing) from MorphoSource against the values reported by the Volumes module. They should be identical.
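The same cross-check can be done from the Python Interactor. A minimal sketch; the node name is whatever appears in your scene after loading:

# Minimal sketch: report dimensions and spacing of the loaded volume,
# to cross-check against the metadata listed on MorphoSource.
volume = slicer.util.getNode('Saimiri_201870000*')  # wildcard pattern; adjust to your node name
print('dimensions:', volume.GetImageData().GetDimensions())
print('spacing (mm):', volume.GetSpacing())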

Remember to save your resultant dataset as Nifti or NRRD.

A worked example: Getting and Visualizing Data from DigiMorph

Learning goal: We will use Slicer to look at, segment and measure a high-resolution microCT scan of a rat from the public specimen repository Digimorph (http://www.digimorph.org/specimens/Rattus_norvegicus/). Our goals are:

  1. Visualize the data in 3D
  2. Take length measurements and record 3D coordinates of landmarks
  3. Save the skull and mandible as separate datasets
  4. Export the skull and mandible as separate 3D models

We will learn and use the following Slicer modules: Load/Save Data, Volumes, Crop Volume, Annotations, Markups, Editor, Volume Rendering, Mask Scalar Volume, Model Maker, and Models. The tutorial takes about 20 minutes to complete.

Getting data: Find the ‘Download Data’ link on the Digimorph website and click the CT scan. The zip file is 0.6GB in size. Unzip the files into a folder and find the metadata that describe the scan settings in the PDF file. This is a crucial piece of information if you want to get quantitative data out of your specimen.

The first question you should ask yourself is whether you have the computational resources available to look at the full-resolution dataset. Based on the description, this is a 1024×1024, 8-bit dataset with 1571 slices.

Since 8 bits equal 1 byte, when loaded into memory this dataset will take

1024 × 1024 × 1571 × 1 = 1,647,312,896 bytes, or about 1.5GB of RAM.

Now, that’s just the memory needed to store the dataset itself. As a rule of thumb, you need 4-6 times more memory than the size of the dataset you are working on. So if you have a computer with 8GB or more RAM, you should be fine. With anything less you will struggle, and you will definitely need the data downsampling step. We will come to the graphics card issue later.
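If you want to run the numbers for your own datasets, the estimate is a few lines of Python (the values here are from the rat scan):

# Estimate RAM needed to hold a volume: X * Y * Z * bytes per voxel.
x, y, z, bytes_per_voxel = 1024, 1024, 1571, 1  # 8-bit = 1 byte per voxel
raw_bytes = x * y * z * bytes_per_voxel
print(f'{raw_bytes:,} bytes = {raw_bytes / 1024**3:.2f} GB')
# Rule of thumb: budget 4-6x this for comfortable processing.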

For this tutorial I am using v4.7.0-2017-02-28 r25734 of 3D Slicer on Windows. Your menus may look slightly different if you are using a different OS. I am assuming you have completed the general introductory tutorials for Slicer, are comfortable with the UI, and know how to navigate or find modules (HINT: use the search tool).

1. Getting Image Stacks into Slicer

Go to the folder where you unzipped the data, and drag ANY one of the TIF files into Slicer.

Click “show options” and uncheck ‘single file’. You can also enter a new description (in this case I called it Rat_scan). Loading the dataset takes about 30 seconds on my laptop with an SSD.

As you can see, the aspect ratios look weird in two planes, and the scale bar is off for those views. That’s because this dataset has an anisotropic voxel size, i.e. the XY (in-plane) resolution is different from the spacing between slices, and TIFF files do not carry that information. We can fix that by telling Slicer the correct slice spacing, which is in the PDF file that came with the dataset. Go to the Volumes module and enter the correct spacing, which is 0.02734×0.02734×0.02961mm. Make sure to adjust the field of view (the little square to the left of the sliders in each slice view control bar) to bring the views back into the center.
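The same fix can be applied from the Python Interactor. A minimal sketch, assuming you named the volume Rat_scan as above:

# Minimal sketch: set the correct anisotropic voxel spacing (in mm).
volume = slicer.util.getNode('Rat_scan')
volume.SetSpacing(0.02734, 0.02734, 0.02961)  # X, Y in-plane, then slice spacing from the PDF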

Slicer guesses a window/level appropriate for your data. For a variety of reasons, this typically fails for CT data in my experience, so go to the Display tab of the Volumes module, set the Window/Level editor to Manual Min/Max, and enter 0 for Min and 255 for Max, which are the extreme values in this dataset. This will fix the contrast issue. If you prefer to change the color scheme from the traditional Grayscale lookup table, you can do that here as well.

All should look good now.

Note: Slicer treats PNG/JPG/BMP stacks differently from TIFF. For those, it assumes the data is a multichannel (RGB) image. You can see the difference in the Volume Information tab of the Volumes module: the number of scalars field for the current rat dataset is 1; if it were a PNG/JPG stack, this would be 3. If you are working with such stacks, you need to use the Vector to Scalar Volume module to convert the data to a single-channel volume before you can do the following steps.

2. Taking a linear measurement in 2D.

Let’s say we want to measure the skull length. I used the slice at -13.80mm (sagittal view).

Click on the Mouse Interactions icon and switch to Ruler (highlighted above). Then click two points on the sagittal view to obtain the measurement. You can move the points to fine-tune their placement.

3. Collecting landmarks in 2D.

You can use the same tool to collect 3D coordinates of landmarks in Slicer: just switch back to Fiducial in the Mouse Interactions tool, then simply click on any point in the cross-sections to record its coordinates. One thing to keep in mind: Slicer records the coordinates in physical-world coordinates (the RAS coordinate system) instead of image coordinates (the ijk coordinate system). There are ways to convert them back and forth, and I will cover them in another tutorial.
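As a preview, here is a minimal sketch of that conversion, using the volume’s RAS-to-IJK matrix (the coordinate values are placeholders):

import vtk

# Minimal sketch: convert a point's RAS coordinates to ijk (voxel) indices.
volume = slicer.util.getNode('Rat_scan')
ras_to_ijk = vtk.vtkMatrix4x4()
volume.GetRASToIJKMatrix(ras_to_ijk)
ras = [10.0, -5.0, 20.0, 1.0]           # placeholder RAS point (homogeneous coordinates)
ijk = ras_to_ijk.MultiplyPoint(ras)     # returns homogeneous (i, j, k, 1)
print([round(c) for c in ijk[:3]])      # voxel indices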

The resultant fiducial (landmark) will be saved under a list in the Markups module; linear measurements will be recorded in the Annotations module. You can customize your list names, measurement labels, sizes etc. using either the Markups or Annotations module, depending on whether it is a ruler measurement or a landmark. Starting with Slicer v4.5, fiducial lists are saved in fcsv format, which is a comma-separated (csv) file with some custom headers. These text files can be easily opened in Excel, and the Morpho package in R supports reading coordinates from a Slicer fcsv file into an array (Morpho::fcsv.read). I also suggest locking each landmark after digitizing it, to avoid accidentally moving it after it has been placed.
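Since fcsv is just csv with commented header lines, it is also easy to parse outside of R. A minimal sketch with pandas (the file name is hypothetical; x, y, z follow the label id per the fcsv header):

import pandas as pd

# fcsv = csv with '#'-prefixed header lines; x, y, z are the 2nd-4th columns.
lms = pd.read_csv('F.fcsv', comment='#', header=None)  # hypothetical file name
coords = lms.iloc[:, 1:4].to_numpy()                   # N x 3 array of landmark coordinates
print(coords)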

It is also possible to take these measurements directly on 3D renderings. You can use either the Volume Rendering module or a surface model to achieve a 3D rendering of the specimen. We will first do a 3D rendering using the Volume Rendering module, as this gives the highest detail possible with the volumetric data.

4. Volume Rendering

Slicer supports two different methods of volume rendering (VR): CPU-based and GPU-based. CPU-based rendering will always work, whether you are on a computer without a dedicated graphics card or on a remote connection (which typically doesn’t support hardware-accelerated graphics), but it is slow, excruciatingly slow. GPU rendering is much faster, but requires a dedicated graphics card with 1GB or more video RAM. On my laptop, CPU rendering of this dataset takes 30 seconds to bring up a view, and another 30 seconds whenever I adjust the colors or opacity or move the specimen; basically every operation has a 30-second delay. GPU rendering, on the other hand, is real-time. I have an Nvidia Quadro K4100M with 4GB of VRAM, but any recent GeForce card from Nvidia would work equally well. I have no experience with AMD GPUs, but there is no reason for them not to work, barring driver issues.

If you have a dedicated graphics card, enable GPU rendering and set the amount of available VRAM from the Edit->Preferences menu. From now on, I will only be using the GPU option. If you can’t use GPU rendering because your computer doesn’t have a dedicated GPU, skip ahead to the ‘Downsampling the data’ section, do those tasks, and then come back and try these steps with your downsampled data. Otherwise, these steps will take far too long with CPU rendering to keep your sanity.

Go to the Volume Rendering module, adjust the highlighted settings, and then turn on rendering for the volume by clicking the eye next to the volume name. The most important setting here is the Scalar Opacity Mapping, in which the point values determine which voxels will be turned off (opacity set to 0), which will be completely opaque (opacity set to 1.0), and everything in between. My suggestion would be to turn off anything that is not bone (for this dataset, grayscale values of 40 or less are a good approximation) and create a ramp that gives you nice detail. It is more of an art than a science. You can then adjust the scalar color mapping and material properties to get a photorealistic rendering.
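For reproducibility, the same opacity ramp can be set up in the Python Interactor. A minimal sketch, assuming the Rat_scan volume and the 40-and-below cutoff discussed above; the exact API may differ slightly between Slicer versions:

# Minimal sketch: enable volume rendering and set a scalar opacity ramp.
volume = slicer.util.getNode('Rat_scan')
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volume)

opacity = displayNode.GetVolumePropertyNode().GetVolumeProperty().GetScalarOpacity()
opacity.RemoveAllPoints()
opacity.AddPoint(40, 0.0)    # anything at or below ~40 (non-bone) is transparent
opacity.AddPoint(100, 0.8)   # ramp up; tweak to taste
opacity.AddPoint(255, 1.0)   # densest voxels fully opaque
displayNode.SetVisibility(True)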

You can clip the specimen (to reveal the interior, for example) using the Crop settings under the Display tab. Make sure to enable the Region of Interest (ROI), and then use the colored spheres on the clip box to adjust the region to be clipped. If you want to revert, hit Fit to Volume again. We will use this ROI tool to downsample or generate subsets of our datasets in the next section.

At this point you can try collecting landmarks or taking linear measurements directly from the 3D renderer window (as we previously did from the cross-sections). Make sure to be in orthographic projection while measuring (hint: the scale bar in 3D will disappear if the rendering is not set to orthographic).

5. Downsampling the data

Segmentation and some other image processing tasks (such as isosurfacing and model building) are computationally intensive. For expediency, we will downsample our dataset to cut down processing times.

First, adjust the ROI from the previous step so that it covers the entire specimen. Then go to the Crop Volume module (not to be confused with the inaptly named Crop setting of the Volume Rendering module). Check that the input volume and input ROI are correctly set, then choose Create New Volume As and call it Rat_ds4. Set the scaling to 4.00x, which means the data will be downsampled by a factor of 4 in each dimension, for a total reduction of 4^3 = 64-fold in size. If you expand the Volume Information tab, you will see the current spacing and image dimensions, as well as what they will be after the downsampling. This step takes about a minute on my laptop. Once it is completed, you may have to adjust the Window/Level settings for the newly created volume.

Another use for this module is to crop unwanted data out of a scan without downsampling. Let’s say our project calls for measuring/segmenting just the tooth row and the rest of the data is simply a nuisance. We can adjust the ROI to retain the region we are interested in (I decided to keep the left upper and lower tooth rows along with their alveolar processes) and again use Crop Volume, this time with slightly different settings. We are no longer interested in downsampling our volume, so we will uncheck interpolated scaling, which automatically turns off the spacing scale. Enter Left_tooth_row as the new output volume name. Compare the cropped volume’s spacing vs. the original’s in the Volume Information tab.

6. Segmentation and Image Masking

There are two modules in Slicer for interactive segmentation. The more robust and stable one is the Editor module. The Segment Editor module has more features (such as propagating segmentation through slices, and interpolation) and is in active development; it is expected that Segment Editor will eventually replace Editor. The two tools are very similar in functionality. In this tutorial we will use the Editor, as it is more stable and has all the features we need.

Again, our goal is to extract the mandible and the skull as separate elements for our downstream analysis. With a bit of planning and thinking ahead, this can be accomplished with the tools in the Editor module in about 3 minutes. Basically, we will segment the downsampled volume, since it will be faster, and use that as a guide to mask out the regions of skull and mandible separately from the full-resolution volume. The 3D rendering of the specimen shows that the skull and mandible are only in contact at the temporomandibular joint (TMJ). So if we remove the articulation between these two structures at the TMJ, we can then use the Island tool in the Editor to convert them into separate structures with a single click.

Go to the Editor module, set the downsampled volume (Rat_ds4 in my case) as your master volume, and accept to generate the label map. Label maps are images in which each label value defines a separate structure. First, we will use the Threshold effect to isolate bone from everything else. A quick evaluation with the Threshold effect tells us that at about 40 we get good separation between bone and non-bone. After thresholding, our first labels are 0 (background) and 1 (bone).
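The thresholding logic itself is easy to express in code. A minimal sketch using numpy and slicer.util.arrayFromVolume (available in recent Slicer versions); the 40 cutoff is the one established above:

import numpy as np

# Minimal sketch: build a bone label array by thresholding at 40.
volume = slicer.util.getNode('Rat_ds4')
voxels = slicer.util.arrayFromVolume(volume)   # numpy view of the voxel data
labels = (voxels >= 40).astype(np.uint8)       # 0 = background, 1 = bone
print('bone voxels:', int(labels.sum()))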

Now find the TMJ, which is around the 33-35mm range in the axial (red) plane. Switch to the Paint effect; make sure the ‘Paint over’ option is checked (because we do want to modify the existing labels) and the sphere option is also checked. With the sphere option, instead of modifying a single voxel, we get to choose a radius (in mm) that will modify a cluster of voxels. For this downsampled data, 0.2184mm (a 2-voxel radius) is sufficient. Set the tissue label to 0, so that the paint brush acts as an eraser (selected voxels are removed from bone label 1 and assigned to background label 0). If you are familiar with the anatomy, it should take you a minute to disconnect all the voxels in the TMJ. You don’t have to be entirely accurate, as this will serve as a mask to remove data, and we have other tools at our disposal to fix it. Once you confirm that the TMJ on both sides is disconnected, click the Island effect, set the minimum size to 10,000, and hit Apply. In about 10 seconds, you should see the mandible and skull as separate labels. If you don’t, it means there are still some connected voxels, and the mandible and skull are not separate islands. Find and remove these voxels, and repeat the Island effect.

This would be a good time to save your work.

Now use the Dilate effect to expand the skull label (#1). Make sure the ‘Eight Neighbors’ option is checked (this means it will include the diagonal pixels in the calculation), and click Apply. Next, find the Mask Scalar Volume module. The input volume will be our full-resolution dataset (in my case Rat_scan), the mask will be the label map we just created from the downsampled volume (Rat_ds4-label); specify the masked volume as Rat_Skull. Make sure the label is set to 1 (skull) and hit Apply.

To segment the mandible, first we should undo our dilation of the skull label. You can either use Undo, or apply the Erode effect to the skull label with the same options as the Dilate effect. Then repeat the dilation and the Mask Scalar Volume steps, this time on the mandible label (#2). You should now have the skull and the mandible of the rat scan as two separate volumes. There will be large amounts of black space, especially in the mandible volume. You can use the ROI tool and Crop Volume (without interpolated scaling), as described in the downsampling section, to remove those and further reduce your dataset in size.

7. Saving

Slicer uses a markup language called Medical Reality Markup Language (.mrml) to save your ‘scenes’. You can think of the scene as a web page, and everything you loaded or created in Slicer as a component of it. It may get a bit confusing, so I suggest reading the saving-data documentation at https://www.slicer.org/wiki/Documentation/4.6/SlicerApplication/SavingData.

For 3D data that you imported into or created in Slicer, I suggest using either NRRD or NII (Nifti) formats.

Getting started with 3D Slicer

3D Slicer is an imaging suite that supports almost all functionality a biologist would need to process volumetric datasets (CT/CBCT, MR, OCT, OPT, EM, …). Some of its key features: reading image stacks (DICOM and others), format conversion, segmentation, image processing (filters), 3D visualization, making surface models, and measurements (distances, angles, surface areas, volumes and recording landmark coordinates). It works on all major OSes (Windows, Mac and Linux). More features outside of the core functionality are available through the Extension Manager. Computationally apt people can design their own functionality and extend its feature set using the built-in Python Interactor.

A few words of caution: Slicer is research software under constant and active (and I do mean active, as in daily feature additions) development. As such, some of the state-of-the-art functionality can be experimental and may not be refined. Having said that, I have been using Slicer for more than 5 years, and the core modules I use (Volume Rendering, Volumes, Transforms, segmentation) are as solid as any commercial package. When it fails, it is usually due to lack of sufficient RAM.

How to get started with 3D Slicer?

Once you download the software, go to the New Users Wiki and familiarize yourself with the UI. These two tutorials are a must:

  1. https://www.slicer.org/wiki/Documentation/4.6/Training#Slicer_Welcome_Tutorial
  2. https://www.slicer.org/w/images/5/51/3DDataLoadingandVisualization_Slicer4.5_SoniaPujol.pdf

Depending on the task, there are useful videos and tutorials on YouTube. Make sure to check out the FAQ.

There is an excellent introduction to Slicer by the Open Source Paleontologist. Unfortunately, given the pace of development in Slicer, it is now quite dated. While the workflow and processing steps are still valid, the menus and modules are quite different from the version used for that tutorial, and there are now more tools that weren’t available at the time.

Slicer contains some example datasets (brain MR, facial CBCT, chest CT, etc.) for you to play with right away.

In addition to these, I will be adding step-by-step tutorials for common tasks using publicly available datasets from imaging repositories such as DigiMorph and MorphoSource.

Need more help? Once you have completed the tutorials and made a reasonable effort to search the FAQ and documentation on the Slicer website, and you still have unanswered questions, you can get help from the community. You can browse the archives of the user list at http://slicer-users.65878.n3.nabble.com/. If you still can’t find an answer, feel free to post a question on the Slicer forum (https://discourse.slicer.org/).

Your likelihood of getting answers will be much higher if you ask a well-framed question with sufficient (but not too many) specifics about what you want to accomplish and the issues you encountered (“How can I read my CT scans into Slicer?” would be a poor question).

If you end up publishing research conducted with the help of Slicer, please cite it, and add your publication to the DB. As an open-source, publicly funded project, this is the only way to secure its future support and development.