Speed and convergence properties of gradient algorithms for optimization of IMRT

Cited by: 43
Authors
Zhang, XD
Liu, H
Wang, XC
Dong, L
Wu, QW
Mohan, R
Affiliations
[1] Univ Texas, MD Anderson Canc Ctr, Dept Radiat Phys, Houston, TX 77030 USA
[2] William Beaumont Hosp, Dept Radiat Oncol, Royal Oak, MI 48073 USA
Keywords
IMRT; optimization; multi-objective; local minima; gradient
DOI
10.1118/1.1688214
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained by trial and error, starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions that produced IMRT plans satisfying the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, results for a prostate case are also presented. For both dose-volume- and EUD-based objective functions, Newton's method far outperforms the other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but with different objective function parameters, the solution frequently gets trapped in local minima. We found that an initial intensity distribution obtained from IMRT optimization using objective function parameters that favor a specific anatomic structure would lead to a local minimum corresponding to that structure. Our results indicate that, among the gradient algorithms tested, Newton's method is the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity-modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of a previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
(C) 2004 American Association of Physicists in Medicine.
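
The abstract's two central ingredients, an EUD-based objective and a Newton update that keeps only the diagonal of the Hessian, can be sketched concretely. A common form of EUD (the record does not specify which the authors used) is Niemierko's generalized EUD, EUD = ((1/N) * sum_i d_i^a)^(1/a), where the d_i are the voxel doses in a structure and a is a structure-specific parameter. The Python sketch below shows one diagonal-Newton step on a toy quadratic dose objective; the dose-deposition matrix, the prescription values, and the projection onto nonnegative intensities are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only -- not the paper's implementation.
# Toy beamlet model: voxel doses d = D @ x, where D is a fixed
# dose-deposition matrix (assumed here) and x >= 0 are beamlet
# intensities. The objective penalizes deviation from a prescription.

def objective(x, D, d_pres):
    """Quadratic dose objective f(x) = sum_i (d_i - d_pres_i)^2."""
    return np.sum((D @ x - d_pres) ** 2)

def gradient(x, D, d_pres):
    """Analytic gradient: 2 * D^T (D x - d_pres)."""
    return 2.0 * D.T @ (D @ x - d_pres)

def hessian_diagonal(D):
    """Diagonal of the Hessian 2 * D^T D -- the approximation the
    abstract describes for its Newton variant."""
    return 2.0 * np.sum(D * D, axis=0)

def newton_diag_step(x, D, d_pres, eps=1e-12):
    """One Newton step with the Hessian replaced by its diagonal,
    then projection onto x >= 0 (intensities must be nonnegative)."""
    x_new = x - gradient(x, D, d_pres) / (hessian_diagonal(D) + eps)
    return np.maximum(x_new, 0.0)

# Toy usage: 200 voxels, 50 beamlets, uniform 60 Gy prescription
# (all numbers hypothetical).
rng = np.random.default_rng(0)
D = rng.random((200, 50))
d_pres = np.full(200, 60.0)
x = np.ones(50)                      # uniform initial intensities
for _ in range(20):
    x = newton_diag_step(x, D, d_pres)
print(f"objective after 20 steps: {objective(x, D, d_pres):.3f}")
```

Dividing the gradient componentwise by the Hessian diagonal gives an individual step size per beamlet, which is one plausible reading of why the paper's Newton variant can dispense with the per-iteration "line minimization" that slows the standard CG algorithm.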
Pages: 1141 - 1152
Page count: 12
Related Papers (50 in total)
  • [21] Convergence analysis of gradient descent stochastic algorithms
    Shapiro, A
    Wardi, Y
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 1996, 91 (02) : 439 - 454
  • [22] The convergence of Broyden algorithms for LC gradient function
    Pu, Dingguo
    ACTA MATHEMATICAE APPLICATAE SINICA, 2000, 16 (03) : 313 - 319
  • [25] New convergence aspects of stochastic gradient algorithms
    Nguyen, Lam M.
    Nguyen, Phuong Ha
    Richtárik, Peter
    Scheinberg, Katya
    Takáč, Martin
    van Dijk, Marten
    JOURNAL OF MACHINE LEARNING RESEARCH, 2019, 20
  • [27] ON THE CONVERGENCE RATE OF INCREMENTAL AGGREGATED GRADIENT ALGORITHMS
    Gurbuzbalaban, M.
    Ozdaglar, A.
    Parrilo, P. A.
    SIAM JOURNAL ON OPTIMIZATION, 2017, 27 (02) : 1035 - 1048
  • [28] Convergence properties of genetic algorithms
    Lee, HK
    Lee, GK
    COMPUTERS AND THEIR APPLICATIONS, 2004 : 172 - 176
  • [29] CONVERGENCE PROPERTIES OF RELAXATION ALGORITHMS
    MEYER, GGL
    MATHEMATICAL PROGRAMMING, 1985, 31 (01) : 15 - 24
  • [30] Convergence Properties of E-Optimality Algorithms for Many Objective Optimization Problems
    Kang, Zhuo
    Kang, Lishan
    Li, Changhe
    Chen, Yuping
    Liu, Minzhong
    2008 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION, VOLS 1-8, 2008, : 472 - +