Towards the holistic design of alloys with large language models

Cited: 2
Authors
Pei, Zongrui [1 ]
Yin, Junqi [2 ]
Neugebauer, Joerg [3 ]
Jain, Anubhav [4 ]
Affiliations
[1] NYU, New York, NY 10012 USA
[2] Oak Ridge Natl Lab, Oak Ridge, TN USA
[3] Max Planck Inst Eisenforschung, Dusseldorf, Germany
[4] Lawrence Berkeley Natl Lab, Berkeley, CA USA
Source
NATURE REVIEWS MATERIALS | 2024, Vol. 9, No. 12
Keywords
DOI
10.1038/s41578-024-00726-6
CLC Number
TB3 [Engineering Materials Science];
Subject Classification Codes
0805; 080502;
Abstract
Large language models are very effective at solving general tasks, but they can also be useful in materials design and in extracting and using information from the scientific literature and unstructured corpora. In the domain of alloy design and manufacturing, they can expedite the materials design process and enable the inclusion of holistic criteria.
Pages: 840-841
Number of Pages: 2
Related Papers
50 records
  • [1] WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models
    Tu, Shangqing
    Sun, Yuliang
    Bai, Yushi
    Yu, Jifan
    Hou, Lei
    Li, Juanzi
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 1517 - 1542
  • [2] Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models
    Ma, Ziqiao
    Sansom, Jacob
    Peng, Run
    Chai, Joyce
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 1011 - 1031
  • [3] Towards Trustworthy Large Language Models
    Koyejo, Sanmi
    Li, Bo
    PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024, : 1126 - 1127
  • [4] TOWARDS A CONVERSATIONAL ETHICS OF LARGE LANGUAGE MODELS
    Kempt, Hendrik
    Lavie, Alon
    Nagel, Saskia K.
    AMERICAN PHILOSOPHICAL QUARTERLY, 2024, 61 (04) : 339 - 354
  • [5] Towards Reasoning in Large Language Models: A Survey
    Huang, Jie
    Chang, Kevin Chen-Chuan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 1049 - 1065
  • [6] Towards Safer Large Language Models (LLMs)
    Lawrence, Carolin
    Bifulco, Roberto
    Gashteovski, Kiril
    Hung, Chia-Chien
    Ben Rim, Wiem
    Shaker, Ammar
    Oyamada, Masafumi
    Sadamasa, Kunihiko
    Enomoto, Masafumi
    Takeoka, Kunihiro
    NEC Technical Journal, 2024, 17 (02): : 64 - 74
  • [7] Quartet: A Holistic Hybrid Parallel Framework for Training Large Language Models
    Zhang, Weigang
    Zhou, Biyu
    Wu, Xing
    Gao, Chaochen
    Liu, Zhibing
    Tang, Xuehai
    Li, Ruixuan
    Han, Jizhong
    Hu, Songlin
    EURO-PAR 2024: PARALLEL PROCESSING, PART II, EURO-PAR 2024, 2024, 14802 : 424 - 438
  • [8] Holistic Evaluation of Language Models
    Bommasani, Rishi
    Liang, Percy
    Lee, Tony
    ANNALS OF THE NEW YORK ACADEMY OF SCIENCES, 2023, 1525 (01) : 140 - 146
  • [9] Towards an Explorable Conceptual Map of Large Language Models
    Bertetto, Lorenzo
    Bettinelli, Francesca
    Buda, Alessio
    Da Mommio, Marco
    Di Bari, Simone
    Savelli, Claudio
    Baralis, Elena
    Bernasconi, Anna
    Cagliero, Luca
    Ceri, Stefano
    Pierri, Francesco
    INTELLIGENT INFORMATION SYSTEMS, CAISE FORUM 2024, 2024, 520 : 82 - 90
  • [10] Towards Concept-Aware Large Language Models
    Shani, Chen
    Vreeken, Jilles
    Shahaf, Dafna
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 13158 - 13170