Optimizing Image Enhancement: Feature Engineering for Improved Classification in AI-Assisted Artificial Retinas

Cited by: 0
Authors
Mehmood, Asif [1]
Ko, Jungbeom [2]
Kim, Hyunchul [3]
Kim, Jungsuk [1,4]
Affiliations
[1] Gachon Univ, Coll IT Convergence, Dept Biomed Engn, 1342 Seongnamdaero, Seongnam Si 13120, South Korea
[2] Gachon Univ, Gachon Adv Inst Hlth Sci & Technol GAIHST, Dept Hlth Sci & Technol, Incheon 21936, South Korea
[3] Univ Calif Berkeley, Sch Informat, 102 South Hall 4600, Berkeley, CA 94720 USA
[4] Cellico Co, Res & Dev Lab, Seongnam Si 13449, South Korea
Funding
National Research Foundation of Singapore;
Keywords
classification; deep neural network; image processing; artificial intelligence; artificial retina; AI-enabled sensors; smart sensors;
DOI
10.3390/s24092678
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Artificial retinas have transformed the lives of many blind people by enabling them to perceive vision through an implanted chip. Despite significant advancements, some limitations cannot be ignored. Presenting every object captured in a scene makes identification difficult, and addressing this limitation is necessary because the artificial retina has only a very limited number of pixels with which to represent visual information. In multi-object scenarios, this problem can be mitigated by enhancing images so that only the major objects are shown. Although simple techniques such as edge detection are commonly used, they fall short of producing identifiable objects in complex scenes, which motivates the idea of integrating only the edges of primary objects. To support this idea, the proposed classification model identifies the primary objects based on a suggested set of selective features. The classification model can then be integrated into the artificial retina system to filter multiple primary objects and enhance vision. Its ability to handle multiple objects enables the system to cope with complex real-world scenarios. The proposed model is a multi-label deep neural network specifically designed to leverage the selective feature set. Initially, the enhanced images proposed in this research are compared with those produced by an edge detection technique for single-, dual-, and multi-object images, and the enhancements are verified through an intensity profile analysis. Subsequently, the classification model's performance is evaluated to show the significance of the suggested features, including its ability to correctly classify the top five, four, three, two, and one object(s), with respective accuracies of up to 84.8%, 85.2%, 86.8%, 91.8%, and 96.4%. Comparisons of training/validation loss and accuracy, precision, recall, specificity, and area under the curve indicate reliable results. Based on the overall evaluation, using the suggested set of selective features not only improves the classification model's performance but also directly addresses the challenge of correctly identifying objects in multi-object scenarios. The proposed classification model, designed on the basis of selective features, is therefore a useful tool for supporting the idea of optimizing image enhancement.
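The record gives no implementation details, so the following is only a minimal sketch of the kind of setup the abstract describes: a multi-label deep neural network over a hand-crafted selective feature vector, trained with a binary cross-entropy objective and checked with a top-k criterion in the spirit of the reported top-five to top-one accuracies. It assumes a PyTorch environment; the feature dimension, layer sizes, class count, and helper names (MultiLabelDNN, top_k_hits) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a multi-label DNN mapping a
# hand-crafted "selective feature" vector to per-object-class scores, plus a
# top-k check of whether the k highest-scoring classes are all true labels.
# Feature dimension, layer sizes, and class count are assumed values.
import torch
import torch.nn as nn

NUM_FEATURES = 32   # assumed size of the selective feature vector per image
NUM_CLASSES = 20    # assumed number of candidate object classes

class MultiLabelDNN(nn.Module):
    def __init__(self, in_dim=NUM_FEATURES, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),  # raw logits, one per object class
        )

    def forward(self, x):
        return self.net(x)

def top_k_hits(logits, targets, k):
    """Fraction of samples whose k highest-scoring classes are all labelled positive."""
    topk = logits.topk(k, dim=1).indices        # (batch, k) predicted class indices
    hits = targets.gather(1, topk)              # 1.0 where a picked class is a true label
    return (hits.sum(dim=1) == k).float().mean().item()

if __name__ == "__main__":
    model = MultiLabelDNN()
    criterion = nn.BCEWithLogitsLoss()          # standard multi-label objective
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch: random feature vectors and random multi-hot labels.
    x = torch.randn(16, NUM_FEATURES)
    y = (torch.rand(16, NUM_CLASSES) > 0.8).float()

    for _ in range(5):                          # tiny training loop for illustration only
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    print("top-3 hit rate on dummy data:", top_k_hits(model(x), y, k=3))
```

In this sketch the selective features are treated as a fixed-length input vector; how those features are extracted from the enhanced images is not specified in the record and would replace the random dummy inputs in practice.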
Pages: 22