"Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has."

Margaret Mead
Original article
peer-reviewed

Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network



Abstract

Introduction

Cone beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT) but suffers from severe shading artifacts caused by reconstruction from scatter-contaminated and truncated projections. The purpose of this study was to develop a deep convolutional neural network (DCNN) method for improving CBCT image quality.

Methods

CBCT and planning computed tomography (pCT) image pairs from 20 prostate cancer patients were selected. Each pCT volume was first pre-aligned to the corresponding CBCT volume by image registration, yielding registered pCT data (pCTr). A 39-layer DCNN model was then trained to learn a direct mapping from the CBCT images to the corresponding pCT images. The trained model was applied to new CBCT data sets to obtain improved CBCT (i-CBCT) images. The resulting i-CBCT images were compared to pCTr using the spatial non-uniformity (SNU), the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM).

Results

The i-CBCT images showed a substantial improvement in spatial uniformity compared to the original CBCT images, and significant improvements in the PSNR and the SSIM compared to both the original CBCT and the CBCT enhanced by an existing pCT-based correction method.

Conclusion

We have developed a DCNN method for improving CBCT image quality. The proposed method may be directly applicable to CBCT images acquired by any commercial CBCT scanner.

Introduction

Cone beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT) but suffers from severe shading artifacts caused by reconstruction from scatter-contaminated and truncated projections [1-4]. Scatter correction for CBCT has been studied extensively [3]. To remove scattered photons from the projection images, some attempts were made by changing the irradiation arrangement [5] or by inserting an anti-scatter grid [6]. A more effective correction method was proposed that directly estimates the scatter component using a lattice-shaped lead beam stopper positioned in front of the X-ray tube, thus partly blocking the X-ray beams [7]; the scatter component can then be estimated from the pixel values in the blocked regions of the acquired projection image. Several methods for estimating the scatter component without modifying the X-ray hardware have also been proposed, based on either analytical [8, 9] or Monte Carlo modeling approaches [10, 11]. The analytical modeling approach has been widely employed for scatter correction in current clinical CBCT systems: the detected image is modeled as the convolution of a direct transmission component with a point spread function, so the direct transmission component can be restored by deconvolution [9]. An iterative method for more accurate scatter correction was also proposed, in which the deconvolution is repeated until the voxel values of the reconstructed image converge [12].

Despite the existing correction techniques mentioned above, CBCT image quality still has room for improvement [2]. For example, planning computed tomography (pCT) has been used as prior information for further improvement, since the image quality of pCT is higher than that of CBCT [13-15]. A known problem of this technique is that the scatter correction may generate false structures in the CBCT images by incorporating anatomical structures that appear only in the pCT images. To address this issue, scatter correction has been restricted to low spatial frequencies, based on the fact that anatomical structures are dominated by high-frequency components [13-15]. However, image errors still emerge in areas where pCT and CBCT have different anatomies, causing contrast deterioration and tissue alteration [14]. To suppress the false structures in the CBCT images, a method using sparsely sampled low-frequency components has been proposed [15], but high-frequency artifacts such as streaks still cannot be removed.

Meanwhile, several deep learning algorithms have been proposed to improve image quality using different network models [16-18]. In particular, deep convolutional neural networks (DCNN) are powerful techniques for feature extraction and have been applied to image denoising, deblurring, and super-resolution [17, 18]. In the field of medical image processing, DCNNs have already been applied to lesion detection [19] and image segmentation [20]; however, there are few studies on image quality improvement.

In this work, we propose a DCNN method for producing high-quality CBCT images. Magnetic resonance imaging (MRI)-based synthetic computed tomography (CT) generation has been reported by applying a DCNN to learn a direct mapping from MRI images to their corresponding CT images [21]. Low-dose CT image restoration has also been realized by applying a DCNN to learn a mapping from low-dose CT images to corresponding normal-dose CT images [22]. We thus aim to produce high-quality CBCT images by applying a DCNN to learn a direct mapping from the original CBCT images to their corresponding pCT images. To our knowledge, this is the first report proposing a DCNN method for improving CBCT image quality.

Materials & Methods

Data acquisition and image processing

In this study, CBCT and pCT image pairs were used from 20 prostate cancer patients who underwent stereotactic radiotherapy with an Elekta Synergy linear accelerator (Elekta AB, Stockholm, Sweden). The pCT images had a matrix size of 512 by 512 on the axial plane, a pixel size of 1.074 mm by 1.074 mm, and a slice thickness of 1 mm. Five CBCT data sets per patient were obtained during the course of treatment using a kV on-board imager (XVI) with a tube voltage of 120 kV and an exposure of 350 mAs, and were automatically resampled by XVI to match the resolution and slice thickness of the pCT images.

The preprocessing workflow for creating the CBCT/pCT image pairs used to train the DCNN is shown in Figure 1. To avoid any adverse impact of non-anatomical structures on the CBCT-to-pCT registration and the model training, binary masks separating the pelvic region from non-anatomical regions were created for each pair of CBCT and pCT images. The masks were generated by applying Otsu's auto-thresholding method to each CBCT and pCT image, and the voxel values outside the mask region were replaced with a Hounsfield unit (HU) value of -1000. The masked pCT volume of each patient was then three-dimensionally pre-aligned to each of the five masked CBCT volumes by rigid and deformable image registration (DIR) using the open-source software Elastix [23], resulting in a registered pCT hereinafter referred to as pCTr. Finally, the binary mask of each CBCT was applied to the corresponding pCTr, and the voxel values outside the mask region were replaced with -1000 HU. A minimal sketch of this masking step is given below.
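As a hedged illustration, the following sketch shows how the masking step could be implemented with NumPy and scikit-image; the function name `mask_non_anatomy` and the use of scikit-image's Otsu implementation are our assumptions, not the authors' code.

```python
import numpy as np
from skimage.filters import threshold_otsu  # Otsu auto-thresholding

def mask_non_anatomy(volume_hu, background_hu=-1000.0):
    """Keep the anatomical (pelvic) region found by Otsu thresholding;
    replace everything outside the mask with air (-1000 HU)."""
    thresh = threshold_otsu(volume_hu)             # global Otsu threshold
    mask = volume_hu > thresh                      # binary body mask
    masked = np.where(mask, volume_hu, background_hu)
    return masked, mask

# The CBCT mask is re-applied to the registered pCT (pCTr) in the same way:
# pctr_masked = np.where(cbct_mask, pctr, -1000.0)
```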

 

DCNN model

The U-Net convolutional network architecture was recently developed for fast and precise object segmentation in 2D images [24]. We modified the network as shown in Figure 2. The DCNN we designed has 39 layers in total and approximately 125.8 million parameters. These parameters \(\Theta\) are optimized by minimizing the loss between the predicted images and the corresponding ground-truth pCTr images \(Y\). Given a set of CBCT images \(X\) and their corresponding pCTr images \(Y\), the mean absolute error (MAE) loss function is defined as follows:

\(MAE\left ( \Theta \right )= \frac{1}{N}\frac{1}{M}\sum_{i= 1}^{N}\sum_{j= 1}^{M}\left | Y_{i, j} - P\left ( X_{i, j};\Theta \right ) \right |,\)

where \(N\) is the number of training 2D images and \(M\) is the number of pixels per 2D image. The MAE loss function makes the training robust to outliers caused by noise, artifacts, and misalignment between the training CBCT and pCTr data. The proposed DCNN model was implemented on a graphics processing unit (GPU) using the publicly available Keras package [25]. The model was trained using the Adam stochastic optimization method [26]. The learning rate \(\alpha\) was set to 0.001, and the exponential decay rates \(\beta _{1}\) and \(\beta _{2}\) were set to 0.9 and 0.999, respectively, the default values suggested in the original paper [26]. The batch size (the number of training samples used per optimization iteration) was set to 10, as permitted by the GPU memory. Training took nearly a day on a single NVIDIA Titan X GPU with 3,584 cores and 12 GB of memory. A sketch of how such a model can be assembled and compiled is given below.
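The sketch below, written against the Keras API the authors cite [25], shows how a small U-Net-style network can be compiled with the MAE loss and the reported Adam settings. It is a minimal stand-in: the encoder/decoder depth and filter counts are illustrative and do not reproduce the authors' 39-layer, 125.8-million-parameter model.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_unet(input_shape=(512, 512, 1)):
    """Minimal U-Net-style network; far shallower than the 39-layer model."""
    inputs = keras.Input(shape=input_shape)
    # Encoder: two downsampling stages
    c1 = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(128, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)
    b = layers.Conv2D(256, 3, padding="same", activation="relu")(p2)
    # Decoder with U-Net skip connections
    u2 = layers.Concatenate()([layers.UpSampling2D(2)(b), c2])
    c3 = layers.Conv2D(128, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D(2)(c3), c1])
    c4 = layers.Conv2D(64, 3, padding="same", activation="relu")(u1)
    outputs = layers.Conv2D(1, 1, activation="linear")(c4)  # predicted pCTr slice
    return keras.Model(inputs, outputs)

model = build_unet()
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
    loss="mean_absolute_error",  # the MAE loss defined above
)
```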

 

The performance of the proposed DCNN was evaluated using a five-fold cross-validation procedure. The twenty cases were randomly divided into five groups of equal size, with the five CBCT/pCTr pairs of each patient kept within the same group. One group was retained as test data, and the remaining four groups were used for training. Each CBCT and pCTr volume had approximately 180 slices in all 20 cases. Each training fold thus comprised 16 cases with five image pairs per patient, for a total of approximately 14,400 training slices. If the MAE of the test data did not improve for more than 20 epochs while the MAE of the training data kept falling, this was regarded as over-fitting and the training was stopped. Finally, the test CBCT images were fed into the trained DCNN model to obtain the improved CBCT (i-CBCT) images. A sketch of this procedure follows.
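A sketch of the patient-wise five-fold split and the early-stopping rule is shown below; `build_unet()` is the sketch above, and `load_slices()` is a hypothetical helper that returns the (CBCT, pCTr) slice arrays for a set of patients. The epoch cap of 200 is our assumption, not a reported value.

```python
import numpy as np
from tensorflow import keras

patients = np.arange(20)
folds = np.array_split(np.random.permutation(patients), 5)  # five groups of four

for test_patients in folds:
    train_patients = np.setdiff1d(patients, test_patients)
    x_train, y_train = load_slices(train_patients)  # hypothetical data loader
    x_test, y_test = load_slices(test_patients)

    model = build_unet()
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
        loss="mean_absolute_error",
    )
    # Stop if the test MAE does not improve for more than 20 epochs
    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=20, restore_best_weights=True)
    model.fit(x_train, y_train, batch_size=10, epochs=200,
              validation_data=(x_test, y_test), callbacks=[early_stop])

    i_cbct = model.predict(x_test)  # improved CBCT slices for the held-out fold
```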

Evaluation method

To evaluate the performance of the proposed method, a comparison was made with an existing planning CT-based correction method that operates in the image domain rather than the projection domain [22]. The existing method has demonstrated great success in enhancing CBCT image quality and is considered a benchmark method in clinical studies [27]. The CBCT corrected by the existing method is hereinafter referred to as "enhanced CBCT".

In this study, pCTr was considered the ground truth, and image quality was evaluated on selected slices in terms of spatial non-uniformity (SNU) and mean pixel values. SNU was defined as follows [15]:

\(SNU= \bar{\mu }_{max} - \bar{\mu }_{min}\)

where \(\bar{\mu }\) denotes the mean pixel value inside a square region of interest (ROI) of 10 by 10 pixels, and \(\bar{\mu }_{max}\) and \(\bar{\mu }_{min}\) are the maximum and minimum mean pixel values among five such ROIs, arbitrarily positioned at distant locations within the same soft-tissue type (fat or muscle). Consistency of the pixel values of the same soft tissue at distant locations reflects the spatial uniformity of the entire image.

The mean pixel value among the five ROIs giving the maximum difference between the pCTr and original CBCT images (defined as ROIm) was also used as an index of image quality improvement. For this evaluation, fat ROIs were used for patients 1-10 and muscle ROIs for patients 11-20. The root mean square differences (RMSD) of the SNU and of the mean pixel value of ROIm between pCTr and CBCT were calculated; both quantities transcribe directly into code, as sketched below.
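The following is a direct NumPy transcription of the SNU and RMSD definitions above; `rois` is assumed to be a list of five 10 x 10 HU arrays from the same tissue type.

```python
import numpy as np

def snu(rois):
    """SNU = max - min of the mean HU over five 10x10 soft-tissue ROIs."""
    means = [np.mean(roi) for roi in rois]
    return max(means) - min(means)

def rmsd(values_a, values_b):
    """Root mean square difference between per-patient values
    (e.g., SNU or mean ROIm pixel value) of pCTr and a CBCT variant."""
    d = np.asarray(values_a, dtype=float) - np.asarray(values_b, dtype=float)
    return np.sqrt(np.mean(d ** 2))
```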

The peak signal-to-noise ratio (PSNR) was measured to capture the reduction of noise, and the structural similarity index measure (SSIM), a human visual system-based metric, was measured to comprehensively evaluate attributes such as luminance, contrast, and structure. The PSNR and the SSIM of the CBCT images were measured against pCTr in the five ROIs used in the SNU calculation for each patient, and the resulting five values were averaged. Letting \(x\) denote a CBCT image (original CBCT, enhanced CBCT, or i-CBCT) and \(y\) the corresponding pCTr image, we defined PSNR and SSIM as follows:

\(PSNR(x, y)= 20\cdot log_{10}(MAX) - 10\cdot log_{10}(\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left [x(i,j)-y(i,j) \right ]^{2})\)

where \(MAX= 65535\), \(M=10\), \(N=10\).
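The PSNR definition above transcribes directly into NumPy, with MAX = 65535 and 10 x 10 ROIs as stated:

```python
import numpy as np

def psnr(x, y, max_val=65535.0):
    """PSNR between a CBCT ROI x and the corresponding pCTr ROI y."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)
```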

\(SSIM(x,y)= \frac{(2\mu _{x}\mu _{y}+C_{1})(2\sigma _{xy}+C_{2})}{(\mu {_{x}}^{2}+\mu {_{y}}^{2}+C_{1})(\sigma {_{x}}^{2}+\sigma {_{y}}^{2}+C_{2})}\)

where \(C_{1} = (K_{1}L)^{2}\), \(C_{2}= (K_{2}L)^{2}\),

\(\mu _{x}=\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}x(i,j)\),

\(\sigma _{x}= \sqrt{\frac{1}{MN-1}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}(x(i,j)-\mu _{x})^{2}}\),

\(\sigma _{xy}= \frac{1}{MN-1}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}(x(i,j)-\mu _{x})(y(i,j)-\mu _{y})\),

where \(L=3000\), \(K_{1}=0.01\), \(K_{2}=0.03\), \(M=10\), and \(N=10\); \(\mu _{y}\) and \(\sigma _{y}\) are defined analogously to \(\mu _{x}\) and \(\sigma _{x}\).
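Likewise, the SSIM of a single 10 x 10 ROI pair follows directly from the definitions above, with L = 3000, K1 = 0.01, and K2 = 0.03 (note the MN - 1 normalization of the variances and covariance):

```python
import numpy as np

def ssim(x, y, L=3000.0, K1=0.01, K2=0.03):
    """SSIM between a CBCT ROI x and the corresponding pCTr ROI y."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = np.mean(x), np.mean(y)
    var_x, var_y = np.var(x, ddof=1), np.var(y, ddof=1)   # MN - 1 normalization
    cov_xy = np.sum((x - mu_x) * (y - mu_y)) / (x.size - 1)
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```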

To highlight the differences among the images (pCTr, original CBCT, enhanced CBCT, and i-CBCT), subtraction images between selected pairs were calculated.

Results

Once the model had been trained, approximately 20 seconds were required to convert the roughly 180 slices of a new patient's CBCT data set into the corresponding i-CBCT images. The axial, sagittal, and coronal slices of eight patient cases are shown in Figures 3-4. The quantitative analysis on the axial slices is summarized in Table 1. As mentioned earlier, five ROIs were selected on the pCTr for calculating the SNU of the pCTr, original CBCT, enhanced CBCT, and i-CBCT. The RMSD of the SNU between pCTr and CBCT was reduced by the proposed method from 109 to 13 HU for the fat ROIs and from 57 to 11 HU for the muscle ROIs, suggesting that the spatial uniformity of CBCT was markedly improved to a level close to that of pCT. The RMSD of the SNU between pCTr and enhanced CBCT was 14 HU for fat and 7 HU for muscle, indicating that the proposed method and the existing method improved the spatial uniformity of CBCT to a similar extent. In addition, the RMSD of the mean pixel values of ROIm between pCTr and CBCT was reduced by the proposed method from 216 to 11 HU for fat and from 247 to 14 HU for muscle, suggesting that the pixel values of CBCT were brought substantially closer to those of pCT. The RMSD of the mean pixel values of ROIm between pCTr and enhanced CBCT was 10 HU for both fat and muscle, again indicating a similar degree of improvement by the two methods. The i-CBCTs were significantly better than the enhanced CBCTs in both PSNR (p < 0.01) and SSIM (p < 0.01), suggesting that the proposed method reduced noise more effectively and brought the image quality closer to that of pCT than the existing method.

 

 

 
Table 1: SNU and mean pixel value of ROIm (both in HU), PSNR (dB), and SSIM of pCTr, original CBCT, enhanced CBCT, and i-CBCT; fat ROIs for patients 1-10 and muscle ROIs for patients 11-20.

| Patient | pCTr SNU | pCTr ROIm | CBCT SNU | CBCT ROIm | CBCT PSNR | CBCT SSIM | enh. SNU | enh. ROIm | enh. PSNR | enh. SSIM | i-CBCT SNU | i-CBCT ROIm | i-CBCT PSNR | i-CBCT SSIM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 (fat) | 34 | -118 | 135 | -333 | 32.9 | 0.915 | 52 | -132 | 48.5 | 0.930 | 54 | -119 | 49.3 | 0.944 |
| 2 | 10 | -106 | 94 | -324 | 31.7 | 0.909 | 9 | -109 | 49.5 | 0.937 | 5 | -111 | 51.8 | 0.961 |
| 3 | 7 | -100 | 69 | -302 | 32.0 | 0.937 | 8 | -102 | 50.5 | 0.954 | 20 | -104 | 51.4 | 0.968 |
| 4 | 4 | -103 | 160 | -342 | 32.4 | 0.940 | 28 | -103 | 50.1 | 0.959 | 24 | -95 | 52.2 | 0.976 |
| 5 | 5 | -110 | 91 | -333 | 32.0 | 0.940 | 15 | -118 | 51.0 | 0.960 | 19 | -92 | 51.3 | 0.974 |
| 6 | 19 | -114 | 131 | -303 | 34.3 | 0.941 | 38 | -121 | 49.5 | 0.955 | 29 | -109 | 53.0 | 0.977 |
| 7 | 6 | -121 | 194 | -395 | 31.5 | 0.908 | 9 | -119 | 48.1 | 0.912 | 18 | -95 | 48.4 | 0.955 |
| 8 | 19 | -100 | 88 | -287 | 34.4 | 0.941 | 10 | -101 | 48.8 | 0.945 | 23 | -93 | 49.3 | 0.965 |
| 9 | 8 | -120 | 103 | -332 | 32.6 | 0.906 | 22 | -137 | 47.7 | 0.922 | 24 | -117 | 49.5 | 0.967 |
| 10 | 23 | -120 | 89 | -304 | 33.1 | 0.928 | 38 | -140 | 49.3 | 0.944 | 17 | -111 | 50.4 | 0.964 |
| RMSD (fat) | | | 109 | 216 | | | 14 | 10 | | | 13 | 11 | | |
| Average (fat) | | | | | 32.7 | 0.926 | | | 49.3 | 0.942 | | | 50.6 | 0.965 |
| 11 (muscle) | 29 | 34 | 98 | -219 | 29.9 | 0.930 | 19 | 35 | 50.1 | 0.953 | 22 | 46 | 52.2 | 0.974 |
| 12 | 14 | 53 | 62 | -209 | 29.5 | 0.932 | 16 | 51 | 50.0 | 0.948 | 18 | 30 | 49.6 | 0.973 |
| 13 | 10 | 48 | 76 | -200 | 29.4 | 0.942 | 17 | 37 | 51.4 | 0.965 | 30 | 29 | 52.8 | 0.977 |
| 14 | 19 | 47 | 43 | -124 | 32.7 | 0.954 | 20 | 43 | 51.7 | 0.966 | 27 | 34 | 52.9 | 0.976 |
| 15 | 11 | 54 | 70 | -185 | 30.5 | 0.940 | 22 | 31 | 49.3 | 0.955 | 30 | 33 | 51.8 | 0.977 |
| 16 | 28 | 46 | 32 | -163 | 30.7 | 0.943 | 18 | 43 | 50.9 | 0.962 | 16 | 53 | 52.1 | 0.974 |
| 17 | 16 | 41 | 112 | -272 | 23.7 | 0.910 | 16 | 27 | 48.9 | 0.940 | 16 | 37 | 51.4 | 0.966 |
| 18 | 4 | 53 | 41 | -194 | 29.2 | 0.937 | 3 | 51 | 50.6 | 0.957 | 9 | 38 | 49.4 | 0.975 |
| 19 | 24 | 40 | 99 | -222 | 29.5 | 0.892 | 29 | 39 | 47.4 | 0.909 | 15 | 52 | 47.5 | 0.931 |
| 20 | 20 | 59 | 52 | -183 | 30.0 | 0.917 | 30 | 47 | 48.6 | 0.934 | 13 | 54 | 52.7 | 0.971 |
| RMSD (muscle) | | | 57 | 247 | | | 7 | 10 | | | 11 | 14 | | |
| Average (muscle) | | | | | 29.5 | 0.930 | | | 49.9 | 0.949 | | | 51.3 | 0.969 |
| RMSD (total) | | | 87 | 232 | | | 11 | 10 | | | 12 | 13 | | |
| Average (total) | | | | | 31.1 | 0.928 | | | 49.6 | 0.945 | | | 50.9 | 0.967 |
 

Subtraction images highlighting the differences are shown in Figure 5: pCTr minus original CBCT (Figures 5e, 5E), i-CBCT minus original CBCT (Figures 5f, 5F), enhanced CBCT minus original CBCT (Figures 5g, 5G), pCTr minus i-CBCT (Figures 5h, 5H), and pCTr minus enhanced CBCT (Figures 5i, 5I). These difference images were calculated in regions where the registration errors between the original CBCT and the pCT were relatively large for patients 2 and 8. The structural difference of the rectum is clearly visible in Figures 5e and 5h but not in Figure 5f for patient 2, and that of the bladder is clearly visible in Figures 5E and 5H but not in Figure 5F for patient 8. This suggests that the proposed method successfully prevented false structures from the pCTr from appearing on the i-CBCT. However, some structures (e.g., the small intestines and the right gluteus maximus muscle) on the original CBCT slices tended to be deformed or to disappear on the corresponding i-CBCT slices in both cases.

 

Discussion

A major advantage of the DCNN method is its fast computation time. Training the network required about a day, but once trained, the model generated i-CBCT images in approximately 20 seconds per patient on a GPU. The accuracy of the DCNN method is expected to improve further with more training data. Although training time increases with larger training data, the size of the final model and the speed of processing a new test image do not.

Table 1 shows that the RMSD of the SNU between the pCTr and the original CBCT differed considerably between the fat (109 HU) and muscle (57 HU) ROIs. A possible interpretation is that the proportion of fat near the body surface is greater than that of muscle, so the shading artifacts are observed more strongly in the fat region. In contrast, for the i-CBCT, little difference was observed in the RMSD of the SNU between fat (13 HU) and muscle (11 HU), or in the RMSD of the mean pixel values of ROIm between fat (11 HU) and muscle (14 HU). These findings suggest that the spatial uniformity and pixel values of the CBCT images were brought close to those of pCT regardless of position and anatomical structure. The improvement of the PSNR and the SSIM of the enhanced CBCT over the original CBCT is probably due mainly to the improvement of spatial uniformity and pixel values, whereas the further improvement of the i-CBCT over the enhanced CBCT is probably due mainly to the suppression of high-frequency artifacts such as streaks.

Since pCT and CBCT images are typically acquired several days or weeks apart, differences in anatomical structure are often observed. We noticed residual deformation in the pairs of pCTr and CBCT even after DIR, which might be due to the poor image quality of CBCT leading to inaccurate DIR. Registration errors in the training data may degrade the model, since it may be trained to make false image predictions. The proposed method successfully prevented false structures from the pCTr from appearing on the i-CBCT, even where the registration errors between CBCT and pCT were large. However, deformation and elimination of some structures (e.g., the small intestines) were seen on the corresponding i-CBCT slices in some cases, as shown in Figure 5, which may introduce registration errors in IGRT. One potential approach to better preserve the anatomical structures on CBCT is to improve the CBCT/pCT alignment of the training data (e.g., hybrid DIR [28]). Our future study will examine the correlation between the alignment of the training data and the ability of the network to preserve the anatomical structures on CBCT. In addition, sequentially increasing the training data to construct a highly generalized network may also contribute to preserving the anatomical structures on CBCT.

In the proposed DCNN model, a 2D CBCT slice and the corresponding 2D pCTr slice were used for learning. Employing multiple pairs of adjacent slices as input and output images may be effective for learning a more accurate mapping from CBCT to pCTr. Alternatively, three DCNN models could be trained separately on the axial, sagittal, and coronal slices of the 3D volume, and the three results averaged to produce the final 3D i-CBCT. The evaluation of these extended approaches will be our future work.

All the training data used for learning the DCNN model in this study were obtained from a single pair of pCT and CBCT scanners. A DCNN model trained on single-scanner data may not be applicable to test data acquired from a different scanner. To address this issue, it may be necessary to standardize the CBCT images by preprocessing such as histogram matching [29]; a minimal sketch of such a step is given below.
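As a hedged illustration, scikit-image's standard histogram matching could serve as such a preprocessing step; note that reference [29] describes dynamic histogram warping, a more elaborate variant than the plain matching sketched here.

```python
from skimage.exposure import match_histograms

def standardize_cbct(new_volume, reference_volume):
    """Map the intensity distribution of a CBCT from a different scanner
    onto that of the training scanner before feeding it to the DCNN."""
    return match_histograms(new_volume, reference_volume)
```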

Conclusions

We have developed a DCNN method for producing high-quality CBCT images. The proposed method may be directly applied to CBCT images acquired from a commercial CBCT scanner with high computational efficiency, allowing easy handling of large training data without sacrificing the speed of processing test data. Future work includes better preservation of the anatomical structures of CBCT during the improvement process, examination of the correlation between the alignment of the training data and the performance of the network, and further evaluation of the proposed method on larger data sets including other anatomical regions.


References

  1. Bissonnette JP, Moseley DJ, Jaffray DA: A quality assurance program for image quality of cone-beam CT guidance in radiation therapy. Med Phys. 2008, 35:1807-1815. 10.1118/1.2900110
  2. Siewerdsen JH, Jaffray DA: Cone-beam computed tomography with a flat-panel imager: magnitude and effects of x-ray scatter. Med Phys. 2001, 28:220-231. 10.1118/1.1339879
  3. Niu TY, Zhu L: Overview of X-ray scatter in cone-beam computed tomography and its correction methods. Curr Med Imaging Rev. 2010, 6:82-89. 10.2174/157340510791268515
  4. Ruhrnschopf EP, Klingenbeck K: A general framework and review of scatter correction methods in x-ray cone-beam computerized tomography. Part 1: scatter compensation approaches. Med Phys. 2011, 38:4296-4311. 10.1118/1.3599033
  5. Siewerdsen JH, Jaffray DA: Optimization of x-ray imaging geometry (with specific application to flat-panel cone-beam computed tomography). Med Phys. 2000, 27:1903-1914. 10.1118/1.1286590
  6. Siewerdsen JH, Moseley DJ, Bakhtiar B, Richard S, Jaffray DA: The influence of antiscatter grids on soft-tissue detectability in cone-beam computed tomography with flat-panel detectors. Med Phys. 2004, 31:3506-3520. 10.1118/1.1819789
  7. Siewerdsen JH, Daly MJ, Bakhtiar B, Moseley DJ, Richard S, Keller H, Jaffray DA: A simple, direct method for X-ray scatter estimation and correction in digital radiography and cone beam CT. Med Phys. 2006, 33:187-197. 10.1118/1.2148916
  8. Wiegert J, Bertram M, Rose G, Aach T: Model based scatter correction for cone-beam computed tomography. Proc SPIE. 2005, 5745:271-282. 10.1117/12.594520
  9. Sun M, Star-Lack JM: Improved scatter correction using adaptive scatter kernel superposition. Phys Med Biol. 2010, 55:6695-6720. 10.1088/0031-9155/55/22/007
  10. Zbijewski W, Beekman FJ: Efficient Monte Carlo based scatter artifact reduction in cone-beam micro-CT. IEEE Trans Med Imaging. 2006, 25:817-827. 10.1109/TMI.2006.872328
  11. Xu Y, Bai T, Yan H, et al.: A practical cone-beam CT scatter correction method with optimized Monte Carlo simulations for image-guided radiation therapy. Phys Med Biol. 2015, 60:3567-3587. 10.1088/0031-9155/60/9/3567
  12. Yao W, Leszczynski KW: An analytical approach to estimating the first order scatter in heterogeneous medium: II. A practical application. Med Phys. 2009, 36:3157-3167. 10.1118/1.3152115
  13. Niu T, Sun M, Star-Lack J, Gao H, Fan Q, Zhu L: Shading correction for on-board cone-beam CT in radiation therapy using planning MDCT images. Med Phys. 2010, 37:5395-5406. 10.1118/1.3483260
  14. Marchant TE, Moore CJ, Rowbottom CG, MacKay RI, Williams PC: Shading correction algorithm for improvement of cone-beam CT images in radiotherapy. Phys Med Biol. 2008, 53:5719-5733. 10.1088/0031-9155/53/20/010
  15. Shi L, Tsui T, Wei J, Zhu L: Fast shading correction for cone beam CT in radiation therapy via sparse sampling on planning CT. Med Phys. 2017, 44:1796-1808. 10.1002/mp.12190
  16. Xie J, Xu L, Chen E: Image denoising and inpainting with deep neural networks. Adv Neural Inf Process Syst. 2012, 350-352.
  17. Jain V, Seung H: Natural image denoising with convolutional networks. Adv Neural Inf Process Syst. 2008, 769-776.
  18. Dong C, Loy C, He K, Tang X: Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell. 2016, 38:295-307. 10.1109/TPAMI.2015.2439281
  19. Cicero M, Bilbily A, Colak E, Dowdell T, Gray B, Perampaladas K, Barfett J: Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs. Invest Radiol. 2017, 52:281-287. 10.1097/RLI.0000000000000341
  20. Cha KH, Hadjiiski L, Samala RK, Chan HP, Caoili EM, Cohan RH: Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets. Med Phys. 2016, 43:1882-1896. 10.1118/1.4944498
  21. Han X: MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017, 44:1408-1419. 10.1002/mp.12155
  22. Chen H, Zhang Y, Kalra MK, et al.: Low-dose CT with a residual encoder-decoder convolutional neural network. IEEE Trans Med Imaging. 2017, 36:2524-2535. 10.1109/TMI.2017.2715284
  23. Klein S, Staring M, Murphy K, Viergever MA, Pluim JPW: Elastix: a toolbox for intensity-based medical image registration. IEEE Trans Med Imaging. 2010, 29:196-205. 10.1109/TMI.2009.2035616
  24. Ronneberger O, Fischer P, Brox T: U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Lecture Notes in Computer Science. Navab N, Hornegger J, Wells W, Frangi A (ed): Springer, Cham; 2015. 9351:10.1007/978-3-319-24574-4_28
  25. Keras: deep learning for humans. (2015). Accessed: July 25, 2017: https://github.com/fchollet/keras.
  26. Kingma D, Ba J: Adam: a method for stochastic optimization. Proc Int Conf Learning Representations. 2014, 1-13.
  27. Park YK, Sharp GC, Philips J, Winey BA: Proton dose calculation on scatter-corrected CBCT image: feasibility study for adaptive proton therapy. Med Phys. 2015, 42:4449-4459. 10.1118/1.4923179
  28. Takayama Y, Kadoya N, Yamamoto T, et al.: Evaluation of the performance of deformable image registration between planning CT and CBCT images for the pelvic region: comparison between hybrid and intensity-based DIR. J Rad Res. 2017, 58:567-571. 10.1093/jrr/rrw123
  29. Cox IJ, Roy S, Hingorani SL: Dynamic histogram warping of image pairs for constant image brightness. Proc Int Conf Image Proc. 1995, 2:366-369. 10.1109/ICIP.1995.537491

Author Information

Satoshi Kida Corresponding Author

Radiology, The University of Tokyo Hospital

Takahiro Nakamoto

Radiology, The University of Tokyo Hospital

Masahiro Nakano

Radiation Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research

Kanabu Nawa

Radiology, The University of Tokyo Hospital

Department of Radiology, University of Tokyo Hospital

Akihiro Haga

Radiology, Biomedical Sciences, Tokushima University Graduate School

Jun'ichi Kotoku

Graduate School of Medical Care and Technology, Teikyo University

Medical Technology, Teikyo University

Hideomi Yamashita

Radiology, The University of Tokyo Hospital

Keiichi Nakagawa

Radiology, The University of Tokyo Hospital


Ethics Statement and Conflict of Interest Disclosures

Human subjects: Consent was obtained from all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: S. K. wishes to thank Kiyoshi Yoda (Elekta K. K.) for his advice regarding the on-board CBCT system (XVI).
K. N. received research funding from Elekta K. K. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

