Analysis of fine-tuning measures in models with extended Higgs sectors

Cited by: 2
Authors
Boer, Daniel [1]
Peeters, Ruud [1]
Zeinstra, Sybrand [2]
Affiliations
[1] Univ Groningen, Van Swinderen Inst Particle Phys & Grav, Nijenborgh 4, NL-9747 AG Groningen, Netherlands
[2] Westfalische Wilhelms Univ Munster, Inst Theoret Phys, Wilhelm Klemm Str 9, D-48149 Munster, Germany
Keywords
BOUNDS
DOI
10.1016/j.nuclphysb.2019.114695
Chinese Library Classification (CLC)
O412 [relativity and field theory]; O572.2 [particle physics]
Abstract
In the literature, measures of fine-tuning have been discussed as one of the tools for assessing the feasibility of beyond-the-Standard-Model theories. In this paper we focus on two specific measures and investigate what kind of fine-tuning they actually quantify. First we apply both measures to the two-Higgs-doublet model, for which the numerical results can be analyzed in terms of available analytic expressions. After drawing various conclusions about the fine-tuning measures, we investigate a particular left-right symmetric model for which it has been claimed that it suffers from a high amount of fine-tuning already at tree level. We reach a different conclusion, although left-right symmetric models may require a modest amount of fine-tuning once phenomenological constraints are imposed. Our analysis shows that the two considered measures probe different aspects of fine-tuning and that both are useful if applied and interpreted in the appropriate way. (C) 2019 The Author(s). Published by Elsevier B.V.
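For orientation, a standard form that such fine-tuning measures take in the literature is the Barbieri-Giudice sensitivity measure sketched below in LaTeX. This generic definition is given for illustration only; the notation (observable \(\mathcal{O}\), parameters \(p_i\)) is an assumption of this sketch and not necessarily the exact form of either measure analyzed in the paper.

\[
  \Delta_{\mathrm{BG}}(\mathcal{O})
  \;=\; \max_{i}\,\left| \frac{\partial \ln \mathcal{O}}{\partial \ln p_i} \right|
  \;=\; \max_{i}\,\left| \frac{p_i}{\mathcal{O}}\,\frac{\partial \mathcal{O}}{\partial p_i} \right| ,
  % \mathcal{O}: an observable, e.g. a scalar mass squared (illustrative choice);
  % p_i: the model's input parameters (illustrative notation).
\]

Here \(\Delta_{\mathrm{BG}} \gg 1\) signals that the observable depends delicately on the underlying parameters, i.e. that reproducing its measured value requires finely balanced parameter choices.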
Pages: 17
Related papers
50 items in total
  • [31] Symmetry-breaking and restoration in models with extended Higgs sectors
    Farris, T. H.
    Kephart, T. W.
    JOURNAL OF MATHEMATICAL PHYSICS, 1991, 32 (08): 2219 - 2223
  • [32] Another Solution to the Fine-Tuning Problem in the extended Standard Model?
    Hwang, W-Y. Pauchy
    UNIVERSE-TAIPEI, 2014, 2 (01): 41 - 50
  • [33] Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models
    Basu, Sourya
    Sattigeri, Prasanna
    Ramamurthy, Karthikeyan Natesan
    Chenthamarakshan, Vijil
    Varshney, Kush R.
    Varshney, Lav R.
    Das, Payel
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023: 6788 - 6796
  • [34] Fine-tuning fine chemicals
    Royse, S.
    EUROPEAN CHEMICAL NEWS, 1995, 64 (1693): 28 - &
  • [35] Phased Instruction Fine-Tuning for Large Language Models
    Pang, Wei
    Zhou, Chuan
    Zhou, Xiao-Hua
    Wang, Xiaojie
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024: 5735 - 5748
  • [36] How fine can fine-tuning be? Learning efficient language models
    Radiya-Dixit, Evani
    Wang, Xin
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108: 2435 - 2442
  • [37] Improve Performance of Fine-tuning Language Models with Prompting
    Yang, Zijian Gyozo
    Ligeti-Nagy, Noemi
    INFOCOMMUNICATIONS JOURNAL, 2023, 15: 62 - 68
  • [38] HackMentor: Fine-Tuning Large Language Models for Cybersecurity
    Zhang, Jie
    Wen, Hui
    Deng, Liting
    Xin, Mingfeng
    Li, Zhi
    Li, Lun
    Zhu, Hongsong
    Sun, Limin
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024: 452 - 461
  • [39] Robust fine-tuning of zero-shot models
    Wortsman, Mitchell
    Ilharco, Gabriel
    Kim, Jong Wook
    Li, Mike
    Kornblith, Simon
    Roelofs, Rebecca
    Lopes, Raphael Gontijo
    Hajishirzi, Hannaneh
    Farhadi, Ali
    Namkoong, Hongseok
    Schmidt, Ludwig
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 7949 - 7961
  • [40] CONVFIT: Conversational Fine-Tuning of Pretrained Language Models
    Vulic, Ivan
    Su, Pei-Hao
    Coope, Sam
    Gerz, Daniela
    Budzianowski, Pawel
    Casanueva, Inigo
    Mrksic, Nikola
    Wen, Tsung-Hsien
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 1151 - 1168