A worked example: Getting and Visualizing Data from DigiMorph

Learning goal: We will use Slicer to look at, segment, and measure a high-resolution microCT scan of a rat from the public specimen repository DigiMorph (http://www.digimorph.org/specimens/Rattus_norvegicus/). Our goals are:

  1. Visualize Data in 3D
  2. Take Length measurements and record 3D coordinates of landmarks
  3. Save the skull and mandible as separate datasets.
  4. Export the skull and mandible as separate 3D models (to be edited)

We will learn and use the following Slicer modules: Load/Save Data, Volumes, Crop Volume, Annotations, Markups, Editor, Volume Rendering, Mask Scalar Volume, Model Maker, Models. The tutorial should take about 20 minutes to complete.

Getting data: Find the ‘Download Data’ link on the DigiMorph website and click the CT scan. The zip file is 0.6GB in size. Unzip the files into a folder and find the metadata that describes the scan settings in the PDF file. This is a crucial piece of information if you want to get quantitative data out of your specimen.

The first question you should ask yourself is whether you have the computational resources available to look at the full-resolution dataset. Based on the description, this is a 1024×1024 8-bit dataset with 1571 slices.

Since 8-bit is equal to 1 byte, when loaded into the memory, this dataset will take

1024 × 1024 × 1571 × 1 = 1,647,312,896 bytes, or about 1.5GB of RAM.

Now, that’s just the memory needed to store the dataset itself. As a rule of thumb, you need 4-6 times more memory than the dataset you are working on. So, if you have a computer with 8GB or more RAM, you should be fine. With anything less, you will struggle and will definitely need a data downsampling step. We will come to the graphics card issue later.
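The arithmetic above can be sketched in a few lines of Python:

```python
# Back-of-the-envelope RAM estimate for the image stack described above.
width, height, slices = 1024, 1024, 1571
bytes_per_voxel = 1  # 8-bit grayscale

raw_bytes = width * height * slices * bytes_per_voxel
raw_gb = raw_bytes / 1024**3
print(f"{raw_bytes:,} bytes (~{raw_gb:.1f} GB)")  # 1,647,312,896 bytes (~1.5 GB)

# Rule of thumb from the text: budget 4-6x the raw size as working RAM.
low, high = 4 * raw_gb, 6 * raw_gb
print(f"suggested RAM: {low:.1f}-{high:.1f} GB")  # suggested RAM: 6.1-9.2 GB
```

A 16-bit dataset of the same dimensions would double these numbers, which is worth keeping in mind when downloading other DigiMorph specimens.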

For this tutorial I am using v4.7.0-2017-02-28 r25734 of 3D Slicer on Windows. Your menus may look slightly different if you are using a different OS. I am assuming you have completed the general introductory tutorials for Slicer, are comfortable with the UI, and know how to navigate or find modules (HINT: use the search tool).

1. Getting Image Stacks into Slicer

Go to the folder where you unzipped the data, and drag ANY one of the TIF files into Slicer.

Click “Show Options” and uncheck ‘Single File’. You can also enter a new description (in this case I called it Rat_scan). Loading the dataset takes about 30 seconds on my laptop with an SSD.

As you can see, the aspect ratios look weird in two planes, and the scale bar is off for those views. That’s because this dataset has anisotropic voxel size, i.e. the XY -or in-plane- resolution is different than the spacing between slices, and TIFF files do not carry that information. We can fix that by telling Slicer the correct slice spacing, which is in the PDF file that came with the dataset. For that, go to the Volumes module and enter the correct spacing, which is 0.02734×0.02734×0.02961mm. Make sure to adjust the field of view (the little square left of the sliders in each slice view control bar) to bring the views back into the center.
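As a sanity check, the physical extent of the volume along each axis is simply voxel count times voxel spacing. A quick sketch using the numbers above:

```python
# Sanity check: physical extent along each axis = voxel count x spacing.
dims = (1024, 1024, 1571)              # voxels in x, y, z
spacing = (0.02734, 0.02734, 0.02961)  # mm per voxel, from the PDF metadata

extent_mm = [n * s for n, s in zip(dims, spacing)]
print([round(e, 2) for e in extent_mm])  # [28.0, 28.0, 46.52]

# The anisotropy itself is mild (slice spacing differs from in-plane
# spacing by only ~8%), but without any spacing information the slice
# views and the scale bar cannot be correct.
anisotropy = spacing[2] / spacing[0]
print(round(anisotropy, 3))  # 1.083
```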

Slicer guesses the window/level appropriate for your data. For a variety of reasons, this typically fails for CT data in my experience, so go to the Display tab of the Volumes module, set the Window/Level editor to Manual Min/Max, and enter 0 for Min and 255 for Max, which are the extreme values in this dataset. This will fix the contrast issue in the dataset. If you prefer to change the color scheme from the traditional Grayscale lookup table, you can do that here as well.

All should look good now.

Note: Slicer treats PNG/JPG/BMP stacks differently from TIFF. For those, it assumes the data is a multichannel (RGB) image. You can see the difference in the Volume Information tab of the Volumes module. The number of scalars field for the current rat dataset is 1; if it were a PNG/JPG stack, this would be 3. If you are working with such stacks, you need to use the Vector to Scalar Volume module to convert it to a single-channel volume, so that you can do the following steps.
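Conceptually, that conversion collapses three channel values to one scalar per pixel. A toy sketch (the tiny 2×2 “image” and the plain channel average are illustrative; the actual module may offer other conversion methods):

```python
# Toy sketch of what Vector to Scalar Volume does: collapse an RGB
# (3-scalar) image to one value per pixel. A 2x2 nested list stands in
# for a PNG/JPG slice; the real conversion happens inside Slicer.
rgb_image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (90, 90, 90)],
]

def to_scalar(pixel):
    r, g, b = pixel
    return (r + g + b) // 3  # simple average of the three channels

scalar_image = [[to_scalar(p) for p in row] for row in rgb_image]
print(scalar_image)  # [[85, 85], [85, 90]]
```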

2. Taking a linear measurement in 2D.

Let’s say we want to measure the skull length. I used the slice at -13.80mm (sagittal view).

Click on the Mouse Interactions icon and switch to Ruler (highlighted above). Then click two points on the sagittal view to obtain the measurement. You can move the points to fine tune their placement.

3. Collecting landmarks in 2D.

You can use the same tool to collect 3D coordinates of landmarks in Slicer. Just switch back to Fiducial in the Mouse Interactions tool. Then simply click on any point in the cross-sections to record its coordinates. One thing to keep in mind: Slicer records the coordinates in physical world coordinates (RAS coordinate system) instead of image coordinates (ijk coordinate system). There are ways to convert them back and forth, and I will cover that in another tutorial.
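To give a flavor of the difference, the ijk-to-world mapping is essentially origin plus index times spacing. This is a hedged sketch only: real Slicer volumes also carry a direction (orientation) matrix, and RAS flips the sign of the first two axes relative to LPS, all of which is ignored here.

```python
# Hedged sketch of the ijk -> physical-world mapping. The direction
# matrix is assumed to be identity and the origin is assumed to be
# zero, purely for illustration.
spacing = (0.02734, 0.02734, 0.02961)  # mm per voxel
origin = (0.0, 0.0, 0.0)               # assumed for illustration

def ijk_to_world(i, j, k):
    return tuple(o + idx * s for o, idx, s in zip(origin, (i, j, k), spacing))

print(ijk_to_world(100, 0, 0))  # ~2.734 mm along the first axis
```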

The resultant fiducial (landmark) will be saved under a list in the Markups module. Linear measurements will be recorded in the Annotations module. You can customize your list names, measurement labels, sizes etc. using either the Markups or Annotations module, depending on whether it is a ruler measurement or a landmark. Starting with Slicer v4.5, fiducial lists are saved in fcsv format, which is a comma-separated (csv) file with some custom headers. These text files can be easily opened in Excel, and the Morpho package in R supports reading coordinates from a Slicer fcsv file into an array (Morpho::fcsv.read). I also suggest locking each landmark after digitizing it to avoid accidentally moving it after it has been placed.
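Because fcsv is just csv with comment-style headers, it is easy to read outside of Slicer too. A minimal sketch with plain Python; the made-up two-landmark snippet follows the v4.5-era column layout, which may differ in other Slicer versions:

```python
import csv, io

# A made-up two-landmark fcsv snippet; the header lines and the column
# order (id,x,y,z,...,label,...) are assumptions based on the v4.5 format.
fcsv_text = """\
# Markups fiducial file version = 4.5
# CoordinateSystem = 0
# columns = id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,associatedNodeID
F-1,12.5,-3.2,41.0,0,0,0,1,1,1,1,bregma,,
F-2,10.1,-2.8,39.6,0,0,0,1,1,1,1,lambda,,
"""

landmarks = {}
for row in csv.reader(io.StringIO(fcsv_text)):
    if not row or row[0].startswith("#"):
        continue  # skip the comment-style header lines
    x, y, z = map(float, row[1:4])
    landmarks[row[11]] = (x, y, z)  # label -> world coordinates

print(landmarks["bregma"])  # (12.5, -3.2, 41.0)
```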

It is also possible to do these measurements directly on the 3D renderings. You can either use the Volume Rendering module or create a surface model to achieve a 3D rendering of the specimen. We will first do a 3D rendering using the Volume Rendering module, as this gives the highest detail possible with the volumetric data.

4. Volume Rendering

Slicer supports two different methods of Volume Rendering (VR): CPU based and GPU based. CPU-based rendering will always work, whether you are on a computer without a dedicated graphics card or on a remote connection (which typically doesn’t support hardware-accelerated graphics), but it is slow, excruciatingly slow. GPU rendering is much faster, but requires a dedicated graphics card with 1GB or more video RAM. On my laptop, CPU rendering of this dataset takes 30 seconds to bring up a view, and another 30 seconds if I adjust the colors or opacity or move the specimen. Basically, every operation has a 30-second delay. GPU rendering, on the other hand, is real-time. I have an Nvidia Quadro K4100M with 4GB of VRAM, but any recent GeForce product from Nvidia would work equally well. I have no experience with AMD GPUs, but there is no reason for them not to work, barring driver issues.

If you have a dedicated graphics card, enable GPU rendering and set the amount of available VRAM from the Edit->Preferences menu. From now on, I will only be using the GPU option. If you can’t use GPU rendering because your computer doesn’t have a dedicated GPU, skip to the Downsampling your data section, do those tasks, and come back and try these steps with your downsampled data. Otherwise, these steps will take far too long with CPU rendering to keep your sanity.

Go to the Volume Rendering module, adjust the highlighted settings, and then turn on rendering for the volume by clicking the eye next to the volume name. The most important setting here is the Scalar Opacity Mapping, in which the point values determine which voxels will be turned off (opacity set to 0), which will be completely opaque (opacity set to 1.0), and everything in between. My suggestion would be to turn off anything that is not bone (for this dataset, grayscale values of 40 or less are a good approximation) and create a ramp that gives you nice detail. It is more of an art than a science. You can then adjust the scalar color mapping and material properties to give you a photorealistic rendering.
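A ramp like the one described is just a piecewise-linear opacity transfer function. In this sketch, the lower control point (40) comes from the text, while the upper control point (120) is an illustrative choice, not a value taken from the module:

```python
# Sketch of a scalar opacity ramp: fully transparent at or below 40,
# fully opaque from 120 upward, linear in between. The upper control
# point is illustrative; tune it by eye as the text suggests.
def opacity(value, lo=40, hi=120):
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

print(opacity(40), opacity(80), opacity(200))  # 0.0 0.5 1.0
```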

You can clip the specimen (to reveal the interior, for example) using the Crop settings under the Display tab. Make sure to enable the Region of Interest (ROI) and then use the colored spheres on the clip box to adjust the region to be clipped. If you want to revert, hit Fit to Volume again. We will use this ROI tool to downsample or generate subsets of our datasets in the next section.

At this point you can try collecting landmarks or taking linear measurements directly in the 3D renderer window (as we previously did in the cross-sections). Make sure to be in orthographic projection while measuring (Hint: the scale bar in 3D will disappear if the rendering is not set to orthographic).

5. Downsampling the data

Segmentation and some other image processing tasks (such as isosurfacing and model building) are computationally intensive. For expediency, we will downsample our dataset to cut down processing times.

First, adjust the ROI from the previous step so that it covers the entire specimen. Then go to the Crop Volume module (not to be confused with the inaptly named Crop setting of the Volume Rendering module). Check that the Input volume and Input ROI are correctly set, then choose Create a New Volume As and call it Rat_ds4. Set the scaling to 4.00x, which means the data will be downsampled by a factor of 4 in each dimension, for a total reduction of 4^3 = 64-fold in size. If you expand the Volume Information tab, you will see the current spacing and image dimensions as well as what they will be after the downsampling. This step takes about a minute on my laptop. Once it is completed, you may have to adjust the Window/Level settings for the newly created volume.
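What the 4.00x setting works out to for this dataset can be checked with quick arithmetic (exact rounding of the output dimensions is up to Slicer; integer division is assumed here):

```python
# Effect of the 4.00x Crop Volume setting on dimensions and spacing.
dims = (1024, 1024, 1571)
spacing = (0.02734, 0.02734, 0.02961)  # mm
factor = 4

new_dims = tuple(d // factor for d in dims)           # assumes floor rounding
new_spacing = tuple(round(s * factor, 5) for s in spacing)
reduction = factor ** 3

print(new_dims)     # (256, 256, 392)
print(new_spacing)  # (0.10936, 0.10936, 0.11844)
print(reduction)    # 64
```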

Another use for this module is to crop out unwanted data from the scan without downsampling. Let’s say our project calls for measuring/segmenting just the tooth row and the rest of the data is simply a nuisance. We can adjust the ROI to retain the region we are interested in (I decided to keep the left upper and lower tooth row along with their alveolar processes), and again use Crop Volume, this time with slightly different settings. We are no longer interested in downsampling our volume, so we will uncheck the interpolated scaling, which will automatically turn off the spacing scale. Enter Left_tooth_row as the new output volume name. Compare the cropped volume’s spacing with the original in the Volume Information tab.

6. Segmentation and Image Masking

There are two modules in Slicer for interactive segmentation. The more robust and stable one is the Editor module. The Segment Editor module has more features (such as propagating segmentation through slices and interpolation) and is in active development. It is expected that Segment Editor will eventually replace Editor. Both tools are very similar in terms of functionality. In this tutorial we will use the Editor, as it is more stable and has all the features we need. (Update 3/20/2020: The Editor module is now deprecated and will soon be removed from Slicer. Segment Editor has many more features, including the ability to allow overlap between different segments. The workflow below is still the same for the Segment Editor. If you would like to review the capabilities of Segment Editor, please take a look at our segmentation tutorial at https://github.com/SlicerMorph/W_2020/tree/master/Lab03_Slicer_3_Segmentation)

Again, our goal is to extract the mandible and the skull as separate elements for our downstream analysis. With a bit of planning and thinking ahead, this can be accomplished effectively with the tools in the Editor module in about 3 minutes. Basically, we will segment the downsampled volume, since it will be faster, and use that as a guide to mask out the regions of skull and mandible separately from the full-resolution volume. The 3D rendering of the specimen shows that the skull and mandible are only in contact at the temporomandibular joint (TMJ). So if we remove the articulation between these two structures at the TMJ, we can then use the Island tool in the Editor to convert them into separate structures with a single click.

Go to the Editor module, set the downsampled volume (Rat_ds4 in my case) as your master volume, and accept to generate the label map (Update 03/20/2020: Segment Editor generates Segmentations, which is a different type of data representation than a label map. Segmentations can be converted to label maps and vice versa. See the section on segmentation vs labelmap representation of our tutorial: https://github.com/SlicerMorph/W_2020/tree/master/Lab03_Slicer_3_Segmentation#segmentations-module). Label maps are binary images, in which each label defines a separate structure. First, we will use the Threshold effect to isolate the bone from everything else. A quick evaluation with the Threshold effect tells us that at about 40 we get good separation between bone and non-bone. After thresholding, our first labels are 0 (background) and 1 (bone).
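The Threshold effect amounts to a simple per-voxel comparison. A toy sketch, with made-up sample values and the cutoff of 40 from the text:

```python
# Toy version of the Threshold effect: voxels at or above the cutoff
# become label 1 (bone), everything else label 0 (background).
# The sample values are invented for illustration.
slice_values = [5, 12, 44, 200, 38, 90, 41]
THRESHOLD = 40

labels = [1 if v >= THRESHOLD else 0 for v in slice_values]
print(labels)  # [0, 0, 1, 1, 0, 1, 1]
```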

Now find the TMJ, which is around the 33-35mm range in the Axial (red) plane. Switch to the Paint effect, and make sure the Paint Over option is checked (because we do want to modify the existing labels) and the Sphere option is also checked. With the Sphere option, instead of modifying a single voxel, we get to choose a radius (in mm) that will modify a cluster of voxels. For this downsampled data, 0.2184mm (a 2-voxel radius) is sufficient. Set the tissue label to 0, so that we will be using the paint brush as an eraser (selected voxels will be removed from bone label 1 and assigned to background label 0). If you are familiar with the anatomy, it should take you a minute to disconnect all the voxels in the TMJ. You don’t have to be entirely accurate, as this will serve as a mask to remove data, and we have other tools at our disposal to fix it. Once you confirm that the TMJ on both sides is disconnected, click the Island effect, set the minimum size to 10,000, and hit Apply. In about 10 seconds, you should see the mandible and skull as separate label maps. If you don’t, it means there are still some connected voxels, and the mandible and skull are not separate islands. Find and remove these voxels, and repeat the Island effect.
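Under the hood, the Island effect is a connected-components pass followed by a size filter. A toy 2D version (pure-Python BFS on a tiny invented grid; the real tool works on the full 3D label map and uses a much larger minimum size):

```python
from collections import deque

# Toy Island effect: label 4-connected components in a tiny binary
# "slice", then keep only islands at or above a minimum size.
grid = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0],
]

def islands(grid, min_size=1):
    rows, cols = len(grid), len(grid[0])
    seen, found = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                # Flood-fill one island with BFS.
                queue, comp = deque([(r, c)]), []
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    found.append(comp)
    return found

print([len(c) for c in islands(grid, min_size=2)])  # [3, 4]
```

If the mandible and skull were still touching, they would come out as a single large island here too, which is exactly the failure mode the text describes.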

This would be a good time to save your work.

Now, use the Dilate effect to expand the skull label (#1). Make sure the Eight Neighbors option is checked (this means it will include the diagonal pixels in the calculation), and click Apply. Next, find the Mask Scalar Volume module. The Input volume will be our full-resolution dataset (in my case Rat_scan), the mask will be the label map we just created from the downsampled volume (Rat_ds4-label), and the Masked Volume will be a new volume called Rat_Skull. Make sure the label is set to 1 (skull) and hit Apply.

To segment the mandible, we should first remove our dilation from the skull. You can either Undo, or apply the Erode effect to the skull label with the same options as the Dilate effect. Then repeat the dilation and the Mask Scalar Volume steps, this time on the mandible label (#2). You should now have the skull and the mandible of the rat scan as two separate volumes. There will be large amounts of black space, especially in the mandible volume. You can use the ROI tool and Crop Volume (without interpolated scaling), as described in the downsampling section, to remove those and further reduce your dataset in size.
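Conceptually, Mask Scalar Volume keeps the intensity wherever the label matches the chosen value and zeroes it elsewhere. A toy 1-D sketch with invented values (the real module operates on the full 3D arrays):

```python
# Toy sketch of Mask Scalar Volume: keep the intensity where the label
# matches the chosen value, zero it elsewhere. Tiny 1-D "volumes" stand
# in for the real 3-D arrays; the numbers are made up.
intensities = [12, 200, 180, 30, 220, 15]
labelmap    = [ 0,   1,   1,  0,   2,  0]  # 1 = skull, 2 = mandible

def mask(volume, labels, keep):
    return [v if l == keep else 0 for v, l in zip(volume, labels)]

print(mask(intensities, labelmap, keep=1))  # [0, 200, 180, 0, 0, 0]
print(mask(intensities, labelmap, keep=2))  # [0, 0, 0, 0, 220, 0]
```

This is also why the dilation step matters: expanding the label slightly before masking keeps full-resolution bone voxels near the boundary from being clipped by the coarser, downsampled mask.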

7. Saving

Slicer uses a markup language called Medical Reality Markup Language (.mrml) to save your ‘scenes’. You can think of the scene as a web page, and everything you loaded or created in Slicer as a component of it. It may get a bit confusing, so I suggest reading the Saving Scene documentation at https://www.slicer.org/wiki/Documentation/4.6/SlicerApplication/SavingData.

For 3D data that you imported into or created in Slicer, I suggest using either NRRD or NII (Nifti) formats.

8 thoughts on “A worked example: Getting and Visualizing Data from DigiMorph”

  1. Ramon

    Dear Dr. Murat Maga: Usually the XYZ values from Digimorph are clearly exposed. Nevertheless, I can’t find these at Thorius pinicola.
    Please, could you help me to find these XYZ values? http://digimorph.org/specimens/Thorius_pinicola/
    (University of Texas High-Resolution X-ray CT Facility Archive 2470

    136429A: Scans of the head and forelimbs of Thorius sp. (MVZ 136429; Mexico, Oaxaca, 1.7 km N of San Miguel Suchixtepec [VERBATIM ELEVATION:2630m); G. Parra-Olea, J. Hanken, T. Hsieh, M. Garcia-Paris) for James Hanken of Harvard University and David Wake of the University of California Berkeley. Specimen scanned by Jessie Maisano 6 June 2011. Scanned in two parts and matched.

    scan parameters: Xradia. 4X objective, 80kV, 10W, 6s acquisition time, detector 17.9 mm, source -37.5 mm, XYZ [-2, 16070, 18], camera bin 2, angles ±96, 577 views, no filter, dithering, no sample drift correction. End reference (45 frames, each for 6s). Reconstructed with center shift -5.75 to -5, beam hardening 0, theta –97, byte scaling [-40, 450], binning 1, recon filter smooth (kernel size = 0.8). Total slices = 1817.

    16bit: 16bit TIFF images reconstructed by Xradia Reconstructor. Voxels are 4.53 microns.)

    1. maga Post author

      I assume you tried using the specified spacing, 4.53 microns, and it didn’t work?
      As you said, data from Digimorph usually has some metadata (a Word or a PDF document) that contains the voxel spacing information. If this is missing, the best way to get the information is to contact the Digimorph folks and see if they can provide you with what you need.

      1. Ramon

        Dear Dr. Murat Maga.
        Thanks for your kind reply. Maybe a naive question:
        if voxels are 4.53 microns, does that mean the Z value is 0.453 mm?

        1. maga Post author

          It may or may not. Depends whether they acquired images using isotropic voxels (all voxels have the same spacing) or anisotropic voxels (would have different spacing along different axes). Normally Digimorph is very careful reporting these values. Given that they only reported 4.53 microns, I would assume this was indeed an isotropic dataset.

  2. Dexter

    Dear Murat,

    At the beginning of the article you mention one of the goals is learning how to export the skull and mandible as separate objects to be edited, but then never actually talk about this later. This is the part of the article that I’m most interested in so any help would be appreciated. (In my work I’m trying to just look at the pharyngeal jaws of fish but the original volume contains the whole skull!)

    Dexter Summers

    1. maga Post author

      Dear Dexter,
      This tutorial entry is almost three years old. In the new versions of Slicer, Editor is replaced by the Segment Editor module. The workflow is identical to the one presented in this tutorial. When you are done with segmentation, all you need to do is go to the ‘Segmentations’ module and navigate to the import/export models and labelmaps tab. There you can export your segmentations as individual models into your current scene. Then you can use the regular save dialog to save them in ply/obj/stl formats. Hope this helps,

