Monday 29 November 2010

Associating NIfTI as file type in iOS

I'm currently working on a study and I have been using the NeuroPub Visualizer to review the results. It works great and it's easy to switch between different images. Uploading images through iTunes is easy too, but it requires me to hook up the iPad to my computer with a USB cable.

I would prefer to download my images wirelessly from my local web server. The next step is therefore to associate the app with NIfTI as a file type. This would open up a number of possibilities:
  1. If you get a NIfTI image as an attachment in an email, you can load it into the app directly from the Mail app. This way, you can either send NIfTI images from your computer by email to load them into the visualizer, or receive images from colleagues directly on your iPad. You don't have to go through a computer at all.
  2. It will also be possible to download NIfTI images to the app through Safari. Your research group could keep a local repository of NIfTI images on your intranet as a web page that you can access from your iPad.
  3. You will be able to send NIfTI images by email from the app, provided they are not too big. I only need to figure out how to support compressed NIfTI (.gz), because uncompressed NIfTI would be too big; see the sketch at the end of this post.
In the end, this makes data sharing easier than ever before. Sharing your statistical results with the scientific community is the next step from here, where you can upload your results and download others' results. More about that later.
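
As a side note, a .nii.gz file is just a gzip-compressed .nii file, so you can prepare compressed images on your computer with a few lines of code. Here is a minimal Python sketch (the file names are just examples):

import gzip
import shutil

# Compress an uncompressed NIfTI file into .nii.gz (example file names).
with open("zstat1.nii", "rb") as src, gzip.open("zstat1.nii.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)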

Thursday 25 November 2010

About next versions

I thought I should mention something about the next versions that I'm currently working on.

  1. The version that has been submitted to the App Store (version 1.0) is iPad-only, but version 1.1 will be universal and support both iPhone and iPad. It will make use of the Retina display if you have an iPhone 4, and it will also work on the iPhone 3GS. I will probably not support the iPhone 3G.
  2. I will support more resolutions, so you can look at any image with a 1x1x1 resolution or lower. The current version only supports 2x2x2. This will be added in version 1.2.
  3. You will also be able to upload and use your own study specific templates in version 1.2.
  4. The current version only allows you to look at one image at a time, but it will be possible to load two or more images as overlays on the template in a future version. This means you will be able to compare different activation images in the same view. This feature will probably be added in version 1.3.
  5. I'm planning to create a version for Mac OS X and release it on the Mac App Store, but my focus is to add the above features first.
Do you have any ideas? Feel free to comment and discuss.

Saturday 20 November 2010

Version 1.0 is submitted to App Store!

Exciting news! I have finally submitted the app to the App Store. I'm calling it NeuroPub Visualizer. The review process usually takes up to two weeks, and I will let you know when it's available.

This is what the icon looks like:

Friday 19 November 2010

Version 1.0 is finally finished!!!

I'm happy to announce that version 1.0 is finally finished. It has been a long ride and I had hoped to release it sooner, but good things sometimes take time. I will submit it to the App Store during the weekend. Then I will start working on the help text that you will find on this blog.





Help

About NeuroPub

NeuroPub is a visualizer for statistical brain images (fMRI, VBM, etc.) and other kinds of images (atlases, etc.) that can be visualised as an overlay on top of the standard MNI brain. It keeps a list of all the images you have imported into the app, so you always have immediate access to your research. This makes it a great tool to bring to conferences and meetings. You can have your own library of statistical images that you always carry with you, so you never miss a chance to demonstrate your latest results when meeting others in the neuroimaging field.

This help text describes version 1.2.

Starting the app for the first time

The iPhone and iPad versions have exactly the same features, but there are some minor differences, which will be described below.

iPhone

When you start the app for the first time on your iPhone, you will see an image list containing one file (example.nii.gz). If you tap on that image, you will get into the visualizer, which will visualise the example image as an overlay on the standard brain.

There are two buttons at the bottom of the image list: Reload and Help. The Help button brings you to this page. The Reload button will be further discussed under the section Controls and buttons.

iPad

When you start the app on your iPad, you will immediately get into the visualizer. If you start the app in landscape mode, you will also see the image list on the left side. Just like on the iPhone, this list will contain one file (example.nii.gz), and the image will be loaded when you tap on it. The image list will not be visible if you start the app in portrait mode, but you can then invoke the list by tapping on the Image List button that you find in the upper left corner.

The iPad version has only one button at the bottom of the image list (Reload). The Help button is located at the upper right corner.

Both iPhone & iPad


The example image (example.nii.gz) is included with the app and listed in the image list when you start the app for the first time. You cannot delete this image, but it will disappear from the image list as soon as you upload your own images.


Uploading images

Requirements

NeuroPub supports images in both .nii and .nii.gz (compressed) format. Images you upload must have the same resolution as the 2x2x2 mm^3 template that comes with SPM and FSL, and they need to be in float format. Only images satisfying these requirements will be listed in the image list. The coordinate transformation matrix must also be the same as the template's. If you have FSL, you can use the fslhd command to check whether your file fulfils the requirements. You should get these values:

data_type      FLOAT32
dim0           3
dim1           91
dim2           109
dim3           91
dim4           1

The data type must be FLOAT32 and the images must have the size 91x109x91. You cannot upload 4D files. Only 3D files are accepted.

pixdim1        2.0000000000
pixdim2        2.0000000000
pixdim3        2.0000000000

The voxel size must be 2x2x2 mm^3.

sto_xyz:1      -2.000000  0.000000  0.000000  90.000000
sto_xyz:2      0.000000  2.000000  0.000000  -126.000000
sto_xyz:3      0.000000  0.000000  2.000000  -72.000000
sto_xyz:4      0.000000  0.000000  0.000000  1.000000
sform_xorient  Right-to-Left

Finally, the sform matrix must be equal to the values above. Notice that the diagonal is -2, 2, 2. If your diagonal for some reason reads 2, 2, 2, your x-orientation is Left-to-Right and the image will not be accepted. All images must be in the Right-to-Left format shown above.

Please note that if your image does not conform to these requirements, you won't see it in the image list and it will be deleted from the app to save space!
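
If you prefer to check a file from a script rather than reading the fslhd output by hand, here is a minimal sketch using the nibabel Python package (nibabel is not part of NeuroPub, and the file name is just an example):

import numpy as np
import nibabel as nib

# Load the image (works for both .nii and .nii.gz).
img = nib.load("zstat1.nii.gz")
hdr = img.header

print("data type :", img.get_data_dtype())    # should be float32 (FLOAT32)
print("dimensions:", img.shape)               # should be (91, 109, 91)
print("voxel size:", hdr.get_zooms()[:3])     # should be (2.0, 2.0, 2.0)
print("sform:")
print(img.get_sform())                        # should match the matrix above

ok = (img.get_data_dtype() == np.float32
      and img.shape == (91, 109, 91)
      and np.allclose(hdr.get_zooms()[:3], (2.0, 2.0, 2.0))
      and np.allclose(img.get_sform(),
                      [[-2, 0, 0, 90], [0, 2, 0, -126], [0, 0, 2, -72], [0, 0, 0, 1]]))
print("Looks compatible with NeuroPub." if ok else "Does not meet the requirements.")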


There are two ways to get images to NeuroPub:

Import images from other apps

This is the preferred method for getting images into NeuroPub. You can import images from apps such as Safari, Mail, Dropbox, etc. For instance, you can email your NIfTI files (both .nii and .nii.gz) to yourself on your iPhone/iPad. You can then tap on the attachment and get a list of apps that can read NIfTI; NeuroPub will be one of them. Just select NeuroPub and the image will be added to the image list.

It doesn't matter which app you use to upload images to NeuroPub. You can use Mail, Safari, Dropbox or any other app that can export files.

Upload images through iTunes File Sharing

An alternative way to upload images is to use iTunes File Sharing. This is the same procedure as for uploading documents to Apple's Pages app. Please look at Apple's support page to see how this is done: http://support.apple.com/kb/HT4088

The viewer

If you tap on example.nii.gz (or any image you have uploaded), it will be loaded into the viewer and visualised as an overlay on the MNI template. You will get into a 2x2 view mode, where you can see the brain in an axial, coronal, sagittal, and 3D view at the same time.

You can now drag the red cursor in the different views to change the coordinate and slice locations. The slice locations will change as you move the cursor. Thus, to change the slice location in the axial view, you have to move the cursor up or down in the coronal or sagittal view. You can also rotate the 3D view by dragging your finger over the brain. Multi-touch is not included in this version of the app, so you won't be able to pinch to zoom in any of the views.

Change view by double tapping

You can go from 2x2 mode to any of the sub-views by double tapping on the view you want to see in more detail. This view will then take over the whole screen. For instance, try double tapping on the axial view. You should now see the axial view over the whole screen. In single view mode, you can change the slice location by moving your finger up and down along the far right edge of the screen. You can go back to 2x2 mode by double tapping on the screen again. This way, you can move quickly between the different views.

The 3D brain

The viewer performs volume rendering in real time. This is done by downsampling the brain to 64x64x64 voxels. The viewer displays the volume as a stack of 2D slices in each direction, which is why you might see some artefacts at an angle of 45°, where you are in between two directions (e.g. in between axial and sagittal).
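
The downsampling happens inside the app, but roughly the same result can be produced offline with Python (this is only an illustration of the idea, assuming nibabel and scipy are installed; it is not the app's actual code):

import nibabel as nib
from scipy.ndimage import zoom

# Load the 2 mm standard brain (91 x 109 x 91 voxels) and downsample it to 64 x 64 x 64.
vol = nib.load("avg152T1_brain.nii.gz").get_fdata()
factors = [64.0 / s for s in vol.shape]
small = zoom(vol, factors, order=1)   # linear interpolation
print(small.shape)                    # (64, 64, 64)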

Left is left and right is right - neurological convention

I did not add labels indicating left and right in the viewer, but you can tell which side is which by looking at where the cursor is in the 3D view. I have implemented the neurological convention, which means left is left and right is right. Labels will come in the next version.

Controls and buttons


Min/Max Apply

This is where you can enter your own threshold levels for your image. Tap on Apply (or hit return on the keyboard) to apply the new threshold settings. Voxels below the min level will be removed. Voxels above the max level will by default still be visible, but will get the same colour as voxels at the max level. You can make the voxels above the max level disappear if you like by turning on the Upper threshold tool (read more about this below).
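
In other words, the thresholding behaves roughly like this (a Python/numpy sketch of the behaviour described above, not the app's actual code):

import numpy as np

def apply_threshold(values, vmin, vmax):
    # Voxels below the min level are hidden entirely.
    visible = values >= vmin
    # Voxels above the max level stay visible but are clamped,
    # so they get the same colour as voxels exactly at the max level.
    clamped = np.minimum(values, vmax)
    return visible, clamped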

Reset

The tool will automatically set default min and max threshold values the first time you load an image. This is not done when you switch between images: if you have set a min value of 2.3 in one image and you switch to another image, the same min value will be applied to that image too.

However, you can reset the settings to the min and max values of the image by tapping the Reset button, just like when you load an image the first time.

X Y Z

The MNI coordinate and the corresponding voxel value of your selected image are printed next to the Reset button on the iPad and next to the Tools button on the iPhone.


Edit 

The Edit button is found in the left corner of the image list. It allows you to delete images from your image list. This button will be disabled if you haven't uploaded any of your own images. The example.nii.gz image cannot be deleted, but it will be hidden once you have added your own images.

Reload

If you have uploaded your own NIfTI images and you don't see them in the list, tap on the reload button to make them appear. If they still don't appear, the reason is probably that they don't fulfil the requirements.

View

This button gives you a menu of the different views (2x2, Axial, Coronal, Sagittal, 3D). This is an alternative way of moving between the views. However, as explained in The Viewer section, the quickest way is to move between the views by double tapping.

Colour

This button gives you a menu with different colour maps (Hot, Jet, Autumn, Cool). You can thus change the colour of the overlay by choosing any of these colour maps.

Tools

This button gives you a menu with a number of different tools, which will be described in detail below:

Cursor on/off

You can turn the cursor on or off with this tool. The cursor can still be moved when it is turned off; it is simply invisible. This is useful if you want to take a screenshot without the cursor overlapping the image.

3D Transparency on/off

This tool allows you to make the 3D brain transparent so you can see regions (e.g. statistical activations) inside the brain. If you have this feature turned off, you will only see the overlay image rendered on the surface of the brain.

This is how it looks with transparency turned off.

With transparency turned on, you can see the overlay image inside the brain.


3D Overlay alpha on/off

Normally, all voxels above the minimum threshold level are 100% opaque and all voxels below the min level are not shown at all. If you turn on this tool, the transparency is changed so that voxels close to the min level are more transparent and voxels close to the max level are more opaque. The image will look softer with the alpha turned on, which might look better for some images. It is best to use this feature with 3D Transparency turned on.
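
Assuming a linear ramp between the min and max levels (the exact mapping in the app may differ), the idea can be sketched like this:

import numpy as np

def overlay_alpha(values, vmin, vmax):
    # With Overlay alpha off, every visible voxel is fully opaque (alpha = 1).
    # With Overlay alpha on, opacity ramps from 0 at the min level to 1 at the max level.
    alpha = (values - vmin) / (vmax - vmin)
    return np.clip(alpha, 0.0, 1.0)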

This image shows how the brain looks without the Overlay alpha feature turned on.

This image shows how the brain looks with the Overlay alpha feature turned on.




Overlay mask on/off

If you haven't masked your images, you might have voxels outside the standard brain that are above your minimum threshold value. These voxels will not be shown in the 3D view, but they will be visible in the 2D views. You can mask the overlay image so that only voxels inside the brain are visible in the 2D views by using this tool.
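
Conceptually, the masking just zeroes out everything outside the template brain, along these lines (a sketch assuming you have a binary brain mask with the same dimensions as the overlay):

import numpy as np

def mask_overlay(overlay, brain_mask):
    # brain_mask is assumed to be 1 inside the brain and 0 outside.
    return np.where(brain_mask > 0, overlay, 0.0)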

Upper threshold on/off

This feature allows you to make voxels above the max level disappear. You can thus remove all positive voxels in a statistical image if you only want to look at negative voxels. If you have loaded an atlas, you can remove all voxels below and above a certain value to make sure you only see the region associated with that value. For instance, if you have voxels with values higher than 2.0 and you enter a max value of 2.0, you will not see those voxels.

Seed voxel in Neurosynth (available in v1.2.2, which has been submitted to the App Store)

Neurosynth is a database of peak coordinates and corresponding meta-data. With this tool, you can automatically look up co-activation maps in Neurosynth given the MNI coordinate of the cursor. NeuroPub will open Safari with the corresponding Neurosynth page, from which you can download a NIfTI file and open it in NeuroPub. Neurosynth stores its seed-voxel maps on a 4 mm grid, so NeuroPub will locate the nearest map given the current coordinate. The maps themselves are in 2x2x2 mm^3 resolution.
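
As an illustration, the nearest seed location for a given cursor coordinate can be found by simple rounding (this assumes the 4 mm grid points fall on multiples of 4 mm, which may not exactly match Neurosynth's grid):

def nearest_seed(mni):
    # Snap an MNI coordinate (in mm) to the nearest point on a 4 mm grid.
    return tuple(int(round(c / 4.0)) * 4 for c in mni)

print(nearest_seed((-43, 19, 29)))   # (-44, 20, 28)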

Email image

This tool allows you to send the selected image by email. Both compressed and uncompressed files are supported. For instance, if you are at a conference presenting your latest brain study, you might want to share your statistical results with people you meet. You can then easily send your NIfTI file to their email address, and they can immediately view it on their iPhone if they have installed NeuroPub.

Feel free to comment on this help text if you have any questions.

Progress update

I have come quite far with the viewer over the last few days. The screenshot below shows the latest version:



The viewer now displays a jet colour map instead of a single blue colour, which I think looks nice. You can control the upper and lower thresholds by typing them into the text fields above the viewer. The tool now also prints the MNI coordinates and the value of the selected voxel.

There are still a few things to do, but I should be able to submit the app to the App Store really soon!!!

Thursday 18 November 2010

Template and other things

I have decided which standard brain will be included with the tool. It will be the ICBM 2009a non-linear asymmetric template (Copyright (C) 1993-2004 Louis Collins, McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University), subsampled to 2x2x2 mm^3 resolution. This standard brain is included in FSL (avg152T1_brain.nii.gz).

Some limitations for the first version:
  1. The dimensions of the standard template are 91x109x91, subsampled to 2x2x2 mm^3. The coordinate system is Right-to-Left. Images you add have to be in the same space and have the same dimensions as the template. This is the standard resolution that comes with SPM and FSL, so this shouldn't cause any problems.
  2. All images you add have to be in float format. You need to convert them to float if they are in byte format or 16-bit format (see the sketch at the end of this post).
  3. The app will check for this in the image header and only list images that fulfil this requirement. Only uncompressed .nii files will be supported.
One change is that the tool will support thresholding of images. Instead of visualising all voxels above 0 in the same colour, the app will show the images with a jet colour map and you will set the thresholds yourself. I'm currently adding this feature and I think it will be quite useful.
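
If your images are stored as bytes or 16-bit integers, one way to convert them to float is with Python and nibabel (a sketch with example file names; fslmaths or SPM can do the same conversion):

import numpy as np
import nibabel as nib

img = nib.load("my_image.nii")                 # example file name
data = img.get_fdata().astype(np.float32)      # convert the voxel data to 32-bit float
out = nib.Nifti1Image(data, img.affine, img.header)
out.header.set_data_dtype(np.float32)          # make sure the header says FLOAT32
nib.save(out, "my_image_float.nii")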

Friday 12 November 2010

iTunes upload works

It's now possible to upload images to the app through iTunes. A release is getting closer.

Friday 5 November 2010

Version 1.0 getting closer - but will not support compressed NIfTI.

I don't have any problems reading uncompressed NIfTI, but I can't get it to work when the image has been compressed. Support for compressed NIfTI will have to wait for this reason. It will come in version 1.1 instead.

The first version will be quite simple anyway. You will be able to upload NIfTI images that have the same format as the 2x2x2 mm^3 standard brain, but only voxels above zero will be visualised and they will all have the same colour. It's better to get this version out soon, even with limited features.

These are the things that I have left to do:
  1. Change the standard brain. I have been using a custom-made standard brain. It will be changed to the MNI brain.
  2. Limit the user from doing "stupid" things, like moving the cursor outside the brain.
  3. Make it possible to upload NIfTI files through iTunes and list them in the files table. This is the part that still requires some development, but it should be quite easy to implement.
I'll keep you posted on the progress.