Zero-Shot Learning With Large Language Models Enhances Drilling-Information Retrieval

Cited by: 0
Source: 2025, Vol. 77, Issue 01
Keywords: Petroleum engineering; Question answering; Wages
DOI: 10.2118/0125-0092-JPT
Abstract
Finding information across multiple databases, formats, and documents remains a manual job in the drilling industry. Large language models (LLMs) have proven effective in data-aggregation tasks, including answering questions. However, using LLMs for domain-specific factual responses poses a nontrivial challenge. The expert-labor cost of training domain-specific LLMs prevents niche industries from developing custom question-answering bots. The complete paper tests several commercial LLMs on information-retrieval tasks over drilling data using zero-shot in-context learning. In addition, model calibration is tested with a few-shot multiple-choice drilling questionnaire. © 2025 Society of Petroleum Engineers (SPE). All rights reserved.
Pages: 92 - 95
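A minimal sketch of the zero-shot in-context setup described in the abstract is given below: a retrieved drilling-report excerpt and a question go directly into the prompt with no task-specific training, alongside a few-shot multiple-choice prompt of the kind used for the calibration test. The excerpt, question, choice items, model name, and OpenAI client call are illustrative assumptions, not details taken from the paper, which evaluates several (unnamed here) commercial LLMs.

```python
"""Sketch of zero-shot in-context question answering over drilling text.

Assumptions (not from the paper): the report excerpt, the questions, the
multiple-choice items, and the use of the OpenAI chat API with "gpt-4o"
are all illustrative placeholders.
"""
import os


def zero_shot_prompt(context: str, question: str) -> str:
    # Zero-shot: no task-specific examples, only retrieved context plus the question.
    return (
        "Answer the question using only the drilling report excerpt below.\n\n"
        f"Excerpt:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


def few_shot_mc_prompt(examples: list[tuple[str, str]], item: str) -> str:
    # Few-shot multiple choice: a handful of worked items precede the test item,
    # mirroring the calibration questionnaire mentioned in the abstract.
    shots = "\n\n".join(f"{q}\nAnswer: {a}" for q, a in examples)
    return f"{shots}\n\n{item}\nAnswer:"


if __name__ == "__main__":
    context = "Well A-12: 12-1/4 in. section drilled to 8,450 ft MD; mud weight 11.2 ppg."  # placeholder
    question = "What mud weight was used in the 12-1/4 in. section of Well A-12?"
    prompt = zero_shot_prompt(context, question)
    print(prompt)

    mc_examples = [
        ("Which device prevents uncontrolled flow from the well?\n(a) Kelly (b) BOP (c) Swivel", "(b)"),
    ]
    mc_item = "What does ROP stand for?\n(a) Rate of penetration (b) Rig operating pressure (c) Riser offset point"
    print(few_shot_mc_prompt(mc_examples, mc_item))

    # Illustrative only: dispatch the prompt to a commercial LLM if a key is configured.
    if os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI

        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name; the paper tests several commercial LLMs
            messages=[{"role": "user", "content": prompt}],
        )
        print(resp.choices[0].message.content)
```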
Related Papers (50 in total)
  • [21] Improving Zero-Shot Text Matching for Financial Auditing with Large Language Models
    Hillebrand, Lars
    Berger, Armin
    Deusser, Tobias
    Dilmaghani, Tim
    Khaled, Mohamed
    Kliem, Bernd
    Loitz, Ruediger
    Pielka, Maren
    Leonhard, David
    Bauckhage, Christian
    Sifa, Rafet
    PROCEEDINGS OF THE 2023 ACM SYMPOSIUM ON DOCUMENT ENGINEERING, DOCENG 2023, 2023,
  • [22] Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models
    Alsentzer, Emily
    Rasmussen, Matthew J.
    Fontoura, Romy
    Cull, Alexis L.
    Beaulieu-Jones, Brett
    Gray, Kathryn J.
    Bates, David W.
    Kovacheva, Vesela P.
    npj Digital Medicine, 6
  • [23] Combining Small Language Models and Large Language Models for Zero-Shot NL2SQL
    Fan, Ju
    Gu, Zihui
    Zhang, Songyue
    Zhang, Yuxin
    Chen, Zui
    Cao, Lei
    Li, Guoliang
    Madden, Samuel
    Du, Xiaoyong
    Tang, Nan
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (11): 2750 - 2763
  • [24] Towards Cognition-Aligned Visual Language Models via Zero-Shot Instance Retrieval
    Ma, Teng
    Organisciak, Daniel
    Ma, Wenbao
    Long, Yang
    ELECTRONICS, 2024, 13 (09)
  • [25] Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors
    Zhang, Kai
    Gutierrez, Bernal Jimenez
    Su, Yu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 794 - 812
  • [26] CLAREL: Classification via retrieval loss for zero-shot learning
    Oreshkin, Boris N.
    Rostamzadeh, Negar
    Pinheiro, Pedro O.
    Pal, Christopher
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3989 - 3993
  • [27] ZVQAF: Zero-shot visual question answering with feedback from large language models
    Liu, Cheng
    Wang, Chao
    Peng, Yan
    Li, Zhixu
    NEUROCOMPUTING, 2024, 580
  • [28] Learning exclusive discriminative semantic information for zero-shot learning
    Mi, Jian-Xun
    Zhang, Zhonghao
    Tai, Debao
    Zhou, Li-Fang
    Jia, Wei
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (03) : 761 - 772
  • [29] The unreasonable effectiveness of large language models in zero-shot semantic annotation of legal texts
    Savelka, Jaromir
    Ashley, Kevin D.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [30] Improving Zero-Shot Stance Detection by Infusing Knowledge from Large Language Models
    Guo, Mengzhuo
    Jiang, Xiaorui
    Liao, Yong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XIII, ICIC 2024, 2024, 14874 : 121 - 132