This image processing algorithm arises from the need to build a 3D volume from the slices of a CT scan. Several algorithms are available online, but most require DICOM images, whereas the reference dataset here consists of 970 PNG images. Moreover, although each image is actually grayscale, the file stores all three RGB channels; we will see later how it is converted. Not finding a suitable tool or algorithm, I decided to write one myself.
To build the algorithm I used MATLAB and its image processing tools. The images come from a dataset published by ESA following a study on the effects of gravity on bone mass; some of the extracted data will be reused in a trabecular bone homogenization project.
In outline, the algorithm pre-processes the individual images to filter the noise and smooth the outlines of the trabeculae. Each image is then binarized, the slices are stacked to render the 3D volume, and the resulting volume is processed further. I also added a meshing example in Mathematica with related features, which in principle allows the mesh to be used in a FEM structural analysis as well. The obvious limits are computing power and the very large number of degrees of freedom generated, but I discuss this in more detail in the project mentioned above.

Image Processing
Unfortunately, the images in the dataset are not of excellent quality: the resolution is good but there is a lot of noise. Moreover, they are plain raster images, which makes it harder to move to a 3D volume. To obtain the volume, it is better to work with a set of binary images in which pixel == 0 represents the void and pixel == 1 the bone trabeculae. Each image has a resolution of 2048×2048, obtained with a Scanco μCT scanner, and a single voxel measures 37 μm.
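Incidentally, the loading loop below calls mat2gray directly on the output of imread; if the PNGs really carry three identical RGB channels, a small conversion step like the following (a minimal sketch, with a placeholder file name) reduces each slice to a single grayscale plane first.
fname = 'slice.png';               % placeholder name: any one of the dataset PNGs
raw = imread(fname);
if ndims(raw) == 3                 % the file stores three (identical) RGB channels
    raw = rgb2gray(raw);           % collapse them to a single grayscale channel
end
slice = mat2gray(raw);             % rescale to [0,1], as done in the loading loop below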
Before starting the pre-processing, we can import all the images and view them in sequence:
To do this, we scan the 10 folders, load every file into a three-dimensional matrix and then export the sequence as a video.
selpath = uigetdir;                                 % select the image processing folder
folderpath = strcat(selpath,'/file_raw/slice*');
folderlist = dir(folderpath);
for k = 1:length(folderlist)
    temp_list = dir(strcat(selpath,'/file_raw/',folderlist(k).name,'/ESA*.png'));
    if k == 1
        file_list = append(strcat(selpath,'/file_raw/',folderlist(k).name,'/'),{temp_list.name});
    else
        file_list = [file_list, append(strcat(selpath,'/file_raw/',folderlist(k).name,'/'),{temp_list.name})];
    end
end
xlsfiles = sort(file_list);                         % sort by name, ascending
file1 = imread(xlsfiles{1});                        % first file, used to get the image size
imm = zeros(size(file1,1),size(file1,2),length(xlsfiles)); % preallocate the volumetric matrix
imload = imm;
parfor i = 1:length(xlsfiles)                       % load the images
    imload(:,:,i) = mat2gray(imread(xlsfiles{i}));
end
% export the sequence as a video, with tic-toc timing (for benchmarking)
video = VideoWriter(strcat(selpath,'/flow.mp4'),'MPEG-4'); % default 30 fps
open(video);                                        % open the file for writing
tic
for i = 1:length(imload(1,1,:))
    writeVideo(video,imload(:,:,i));
end
close(video);
toc
In this way all the data end up in the imload(i,j,k) matrix, where each i,j plane holds a single image (ESA29-99-L3-Byte.0***.png).
Preprocessing
Given the goal of the algorithm, we can start by increasing the contrast. Considering the three contrast indices known in the literature, it is clear that only CI2 is useful here: the first and the third depend on the maximum and minimum intensities, which are dominated by the saturated pixels visible throughout the set, so they remain essentially constant from image to image.
\operatorname{CI_1}=\max(I)-\min(I)\\
\operatorname{CI_2}=\sigma(I)\\
\operatorname{CI_3}=\frac{\max(I)-\min(I)}{\max(I)+\min(I)}
The second index therefore lets us pick one or more representative images on which to work.
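As a minimal sketch (it assumes the imload stack built above; simply taking the slice with the highest CI2 is my own choice here), the index of the test slice, called best_contrast_index in the code below, can be obtained like this:
% Rank the slices by CI2 = sigma(I) and take the most contrasted one as the
% test image for the parameter tuning that follows.
CI2 = zeros(1,size(imload,3));
for k = 1:size(imload,3)
    sl = imload(:,:,k);
    CI2(k) = std(sl(:));
end
[~,best_contrast_index] = max(CI2);        % slice index used in the tuning code
ima = imload(:,:,best_contrast_index);     % test slice for the parameter search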
A loop then searches for the optimal contrast-enhancement values. The chosen method is histogram stretching, to which we pass the minimum and maximum thresholds, computed from the quantiles, together with the degree of non-linearity (gamma).
g(x,y)=\begin{cases} 0 & \text{if }I(x,y)< th_1\\ 1 & \text{if }I(x,y)> th_2\\ \left(\dfrac{I(x,y)-th_1}{th_2-th_1}\right)^\gamma & \text{if }th_1\leq I(x,y)\leq th_2 \end{cases}
In doing so we cannot rely on the standard deviation alone; above all we must keep the desired result in mind. This guides the choice of the starting parameters for the optimization.
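For example, quantile-based starting thresholds can be obtained with stretchlim and passed straight to imadjust; the 5th/98th percentiles and gamma = 1.5 below are purely illustrative values, not the optimized ones.
% Starting guess: thresholds from the quantiles of the test slice.
th = stretchlim(ima,[0.05 0.98]);          % ~5th and 98th percentile limits
ima_q = imadjust(ima,th,[0 1],1.5);        % gamma chosen only as an example
figure; imshowpair(ima,ima_q,'montage');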


Clearly, the size of the filtering window has also been optimized.
%%
clear CII2 row col val val_g gamma m M MAT val_g_i
% ima and best_contrast_index come from the slice selected via CI2 above
gamma = 1.2:0.2:2;
m = 1:2:40;                                  % candidate lower thresholds (%)
M = 97:1:99;                                 % candidate upper thresholds (%)
CII2 = zeros(m(end),M(end),length(gamma));
for i = 1:length(m)                          % lower threshold
    for j = 1:length(M)                      % upper threshold
        for k = 1:length(gamma)              % non-linearity coefficient
            ima_c = imadjust(ima,[m(i)/100 M(j)/100],[0 1],gamma(k));
            CII2(m(i),M(j),k) = std(ima_c(:))/std(ima(:));
        end
    end
end
% search for the best index
for k = 1:length(gamma)
    MAT = CII2(:,:,k);
    [row(k),col(k)] = find(MAT==max(max(MAT)));
end
for k = 1:length(gamma)
    val_g(k) = CII2(row(k),col(k),k);
end
val_g_i = find(val_g==max(val_g));
val1 = row(val_g_i)
val2 = col(val_g_i)
val3 = gamma(val_g_i)
% test the contrast enhancement on the selected slice
contrastim = imadjust(ima,[row(val_g_i)/100 col(val_g_i)/100],[0 1],gamma(val_g_i));
figure;
imshowpair(imload(:,:,best_contrast_index),contrastim,'montage')
mex = strcat('m=',string(val1),', M=',string(val2),', gamma=',string(val3));
title(mex)
%% apply the contrast enhancement to the whole stack
imcontr = imm;
for k = 1:length(imload(1,1,:))
    imcontr(:,:,k) = imadjust(imload(:,:,k),[row(val_g_i)/100 col(val_g_i)/100],[0 1],gamma(val_g_i));
end
%% filtering test
L = 6;                                       % filtering window size
mask1 = fspecial('average',L);
mask2 = fspecial('gaussian',L,(L-1)/6);
ima_f1 = imfilter(ima,mask1);
figure;
imshowpair(ima,ima_f1,'montage')
ima_f2 = imfilter(ima,mask2);
figure;
imshowpair(ima,ima_f2,'montage')
%% filtering of the whole stack
imfilt = imm;
for k = 1:length(imload(1,1,:))
    imfilt(:,:,k) = imfilter(imcontr(:,:,k),mask1);
end
This increase in contrast works against the second operation, denoising. To remove some of the noise we can try a Gaussian filter or an averaging filter. Once the masks are generated, it turns out that the averaging filter gives the better result. The filtering should not be overdone; exaggerating it would cancel the contrast enhancement. Although it may seem counterintuitive, it is worth underlining that combining these two procedures gives a smoother, less angular 3D rendering and also leaves more margin on the binarization threshold.
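A quick way to quantify this trade-off is to compare how much of the enhanced contrast each filter preserves on the test slice; the window sizes tried below are just examples, not tuned values.
% Residual contrast after smoothing: values close to 1 mean the filter
% barely erodes the CI2 gain obtained with imadjust.
for L = [4 6 10]
    avgk  = fspecial('average',L);
    gausk = fspecial('gaussian',L,(L-1)/6);
    r_avg  = std(reshape(imfilter(contrastim,avgk),[],1))/std(contrastim(:));
    r_gaus = std(reshape(imfilter(contrastim,gausk),[],1))/std(contrastim(:));
    fprintf('L=%2d  average: %.3f   gaussian: %.3f\n',L,r_avg,r_gaus);
end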


Binarization
It is then possible to binarize the image using a threshold that guarantees both a good rendering of the trabeculae and a sufficient removal of the background. A good way to tune this threshold is to compare against the literature data on trabecular mass, which corresponds to about 20% of the volume (see the homogenization project).
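A minimal sketch of this idea on the test slice: sweep the threshold and keep the value whose foreground fraction is closest to the literature target. Note that the fraction here is computed over the whole frame, so it is only indicative unless the hand-drawn bone mask below is applied first; the fixed 0.46 used in the loop that follows can be justified with a check of this kind.
% Sweep the binarization threshold and compare the trabecular (foreground)
% fraction with the ~20% reported in the literature. Sketch only.
target = 0.20;
ths = 0.30:0.01:0.60;
frac = zeros(size(ths));
for k = 1:numel(ths)
    bw = imbinarize(imfilt(:,:,best_contrast_index),ths(k));
    frac(k) = nnz(bw)/numel(bw);               % fraction over the whole frame
end
[~,kbest] = min(abs(frac-target));
th_opt = ths(kbest)                            % candidate threshold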
figure()
imshow(imfilt(:,:,best_contrast_index));
taglio = drawfreehand(gca);          % hand-drawn mask to cut away the border
BW = createMask(taglio);
figure;
imshow(BW);
%...%
imbin = imm;
parfor i = 1:length(imload(1,1,:))
    imbin(:,:,i) = imbinarize(imfilt(:,:,i),0.46);
    imbin(:,:,i) = imbin(:,:,i).*BW;
    imwrite(imbin(:,:,i),strcat(selpath,'/binarized/',string(i),'.png'))
end
In the same loop we also apply a manually drawn mask to remove the border elements and export the individual images, to be reused later in Mathematica.

3D rendering
At this point, having optimized all the previous steps, it is possible to proceed with the 3D rendering using volshow and the Volume Viewer app. The tool is quite intuitive and lets you set the various rendering parameters through the GUI.
figure()
volshow(imbin,config)   % config: rendering settings, e.g. exported from the Volume Viewer app
Pore fraction
We can also verify the actual fraction of pores with respect to the trabeculae. To do this we can use the Image Region Analyzer tool, which reports the properties of the individual regions in the binarized image. So let's take slice no. 600, clean it up with a hand-drawn BW mask, read its properties with the tool and sum the individual areas it identifies: this gives the total number of trabecular bone pixels (area).
slice = imread('binarized/600.png');
sliceb = imbinarize(slice);                  % back to an actual logical image
figure()
imshow(sliceb);
taglio = drawfreehand(gca);
BW = createMask(taglio);
sliceb = sliceb.*BW;                         % cleaned image
area = sum(propsTable.Area);                 % total area from the tool's exported properties
AA = imclose(sliceb,strel('disk',20));
AA1 = imfill(AA,8,'holes');
figure; imshowpair(AA,AA1);
total = nnz(AA1);                            % total pixels inside the bone envelope
fraction = area/total
Commands are then used to fill the area enclosed by the trabeculae, which gives the total number of pixels of the bone envelope (total). The resulting ratio of 27.98% is in line with the data in the literature.
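The same measurement can also be done without the interactive tool; as a sketch, regionprops returns the per-region areas directly, so the exported propsTable is not strictly needed.
% Programmatic version of the pore-fraction measurement.
stats = regionprops(logical(sliceb),'Area');   % per-region areas of the trabeculae
area2 = sum([stats.Area]);                     % trabecular pixels
env   = imfill(imclose(logical(sliceb),strel('disk',20)),'holes');
total2 = nnz(env);                             % pixels of the filled bone envelope
fraction2 = area2/total2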

Mesh
The same result can also be obtained in Mathematica. A simpler, though less precise, alternative is to go straight to binarization and skip the preprocessing: the result is certainly different and less accurate, but it can be a good way to save time.
files = FileNames["*.png",
"C:\\Users\\bigba\\OneDrive\\Documenti\\GitHub\\bone\
homogenization\\image_processing\\file_raw\\slices600-699\\"];
DDD = Table[Import[files[[n]]], {n, 1, 90}];
(* resume from the binarization done in MATLAB *)
Do[DDD[[n]] = Binarize[ColorConvert[DDD[[n]], "Grayscale"], 0.8], {n, 1, 90}];
A = Image3D[DDD]
res = Timing[MESH = ImageMesh[A]];
Export[FileNameJoin[{NotebookDirectory[], "mesh.png"}], MESH]
NotebookDirectory[]
FileNameJoin[{NotebookDirectory[], "mesh.png"}]
res[[1]]/60 (* minutes taken to generate the mesh *)
Timing[Export[FileNameJoin[{NotebookDirectory[], "mesh.png"}], MESH,ImageResolution -> 300]]
Timing[Export[FileNameJoin[{NotebookDirectory[], "mesh.svg"}], MESH]]
Timing[Export[FileNameJoin[{NotebookDirectory[], "mesh.obj"}], MESH]]
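For reference, a rough MATLAB counterpart of the same meshing step, only a sketch and not the route actually used here, builds a surface mesh of the binarized stack with isosurface; the slice range roughly mirrors the Mathematica example above, and the resulting faces/vertices could then be exported or handed to an FEM preprocessor.
% Surface mesh of part of the binarized volume via isosurface (MATLAB sketch).
sub = imbin(:,:,600:699);                % roughly the same slices as the Mathematica example
sub = smooth3(sub,'box',3);              % light smoothing of the voxel data
fv  = isosurface(sub,0.5);               % faces/vertices of the bone surface
figure; p = patch(fv);
p.FaceColor = [0.8 0.7 0.6]; p.EdgeColor = 'none';
daspect([1 1 1]); camlight; lighting gouraud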


Benchmarking
These algorithms are not exactly light, given the amount of data analyzed (2048×2048 pixels per slice, about 1000 slices) and the number of operations. For example, creating the video took 150.25 s, the contrast enhancement 225.40 s and the filtering 220.11 s. The computationally heaviest parts were the loading, at 714.27 s, and the reassignments; some variables also had to be cleared several times to free up memory.
In Mathematica it took 10.07 seconds to create the mesh on 200 slices (500 to 700), but over 20 minutes to generate the graphic render and export it, because the video card was continuously saturated.
These data refer to a workstation with an i7-10850H, 32 GB of DDR4-2933 RAM, an NVMe SSD and an NVIDIA Quadro P620.
More about trabecular bone
Full code in project repo: