Statistical learning in control and matrix theory

Cited: 0
Author
Vidyasagar, M [1]
Affiliation
[1] Ctr Artificial Intelligence & Robot, Bangalore 560001, Karnataka, India
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
By now it is known that several problems in control and matrix theory are NP-hard. These include matrix problems that arise in control theory, as well as other problems in the robustness analysis and synthesis of control systems. These negative results force us to modify our notion of "solving" a given problem. If we cannot solve a problem exactly because it is NP-hard, then we must settle for solving it approximately. If we cannot solve all instances of a problem, we must settle for solving "almost all" instances. An approach that has recently been gaining popularity is the use of randomized algorithms. The notion of a randomized algorithm as defined here is somewhat different from that in the computer science literature, and enlarges the class of problems that can be efficiently solved. We begin with the premise that many problems in robustness analysis and synthesis can be formulated as the minimization of an objective function with respect to the controller parameters. It is argued that, in order to assess the performance of a controller as the plant varies over a prespecified family, it is better to minimize the controller's average performance rather than its worst-case performance, since the worst-case objective function usually leads to rather conservative designs. It is then shown that a property from statistical learning theory known as uniform convergence of empirical means (UCEM) plays an important role in allowing us to construct efficient randomized algorithms for a wide variety of controller synthesis problems. In particular, whenever the UCEM property holds, there exists an efficient (i.e., polynomial-time) randomized algorithm. Using very recent results in VC-dimension theory, it is shown that the UCEM property holds in several problems, such as robust stabilization and weighted H-infinity-norm minimization. Hence it is possible to solve such problems efficiently using randomized algorithms.
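As an illustration (not taken from the paper itself), the empirical-mean guarantee underlying such randomized algorithms can be made concrete with the Hoeffding bound: for a cost bounded in [0, 1], roughly ln(2/δ)/(2ε²) plant samples suffice for one empirical mean to be within ε of the true mean with confidence 1 − δ, and a union bound extends this to a finite family of m candidate controllers. A minimal Python sketch, with function names of our own choosing:

```python
import math

def hoeffding_sample_size(eps, delta):
    """Samples needed so the empirical mean of one [0, 1]-valued cost is
    within eps of its true mean with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def finite_family_sample_size(m, eps, delta):
    """Union bound over m candidate controllers: all m empirical means are
    simultaneously eps-accurate with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 * m / delta) / (2.0 * eps ** 2))
```

For example, `hoeffding_sample_size(0.05, 0.01)` gives 1060 samples, and covering 100 candidate controllers at the same accuracy and confidence raises this only to `finite_family_sample_size(100, 0.05, 0.01)` = 1981 — the logarithmic dependence on m and 1/δ is what makes such sample sizes polynomial.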
The paper is concluded by showing that the statistical learning methodology is also applicable to some NP-hard matrix problems.
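The methodology described in the abstract — draw random plant samples, evaluate each candidate controller's average cost over them, and return the empirical minimizer — can be sketched on a toy scalar example. Everything below (the plant family, cost function, and names) is our own illustrative assumption, not the paper's construction:

```python
import math
import random

def empirical_mean_cost(controller, plants, cost):
    """Empirical average of the cost over the sampled plants."""
    return sum(cost(controller, p) for p in plants) / len(plants)

def randomized_min(controllers, sample_plant, cost, n_samples, seed=0):
    """Randomized algorithm: draw one shared set of random plants, then
    minimize the empirical mean cost over a finite set of controllers."""
    rng = random.Random(seed)
    plants = [sample_plant(rng) for _ in range(n_samples)]
    return min(controllers, key=lambda c: empirical_mean_cost(c, plants, cost))

# Toy plant family: scalar dx/dt = a*x + u with uncertain a ~ Uniform[0.5, 1.5],
# static feedback u = -k*x, giving closed-loop pole a - k.
def sample_plant(rng):
    return rng.uniform(0.5, 1.5)

def cost(k, a):
    pole = a - k
    if pole >= 0:                        # closed loop unstable: infinite cost
        return math.inf
    return 0.05 * k ** 2 - 1.0 / pole    # gain penalty plus slow-decay penalty
```

Here `randomized_min([0.0, 1.0, 2.0, 3.0, 4.0], sample_plant, cost, 2000)` selects the gain k = 3.0: gains of 0 and 1 are destabilized by some sampled plants, while k = 3 balances the gain penalty against the decay rate better than k = 2 or k = 4 on average.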
Pages: 177 - 207 (31 pages)