Playing optical tweezers with deep reinforcement learning: in virtual, physical and augmented environments

Cited by: 10
Authors
Praeger, Matthew [1 ]
Xie, Yunhui [1 ]
Grant-Jacob, James A. [1 ]
Eason, Robert W. [1 ]
Mills, Ben [1 ]
Affiliations
[1] Univ Southampton, Optoelect Res Ctr, Southampton SO17 1BJ, Hants, England
Source
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
optical tweezers; laser trapping; machine learning; reinforcement learning; NEURAL-NETWORKS; LEVEL; DESIGN; GO;
DOI
10.1088/2632-2153/abf0f6
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning was carried out in a simulated environment to learn continuous velocity control over multiple motor axes. This was then applied to a real-world optical tweezers experiment with the objective of moving a laser-trapped microsphere to a target location whilst avoiding collisions with other free-moving microspheres. The concept of training a neural network in a virtual environment has significant potential in the application of machine learning for experimental optimization and control, as the neural network can discover optimal methods for problem solving without the risk of damage to equipment, and at a speed not limited by movement in the physical environment. As the neural network treats both virtual and physical environments equivalently, we show that the network can also be applied to an augmented environment, where a virtual environment is combined with the physical environment. This technique may have the potential to unlock capabilities associated with mixed and augmented reality, such as enforcing safety limits for machine motion or as a method of inputting observations from additional sensors.
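The abstract describes training an agent in a simulated environment to issue continuous velocity commands that steer a trapped microsphere to a target while avoiding free-moving microspheres. A minimal sketch of such a virtual environment is shown below; the class name, state layout, obstacle dynamics, and reward shaping are all illustrative assumptions, not the setup used in the paper.

```python
import math
import random

class TweezerSimEnv:
    """Minimal 2D sketch of a virtual optical-tweezers environment.

    Illustrative only: the agent outputs a continuous (vx, vy) velocity
    for the trapped bead, free microspheres take small random
    (Brownian-like) steps, and the reward encourages approaching the
    target while heavily penalising collisions.
    """

    def __init__(self, dt=0.1, arena=10.0, n_obstacles=3, seed=0):
        self.dt = dt                    # time step per control action
        self.arena = arena              # side length of the square arena
        self.n_obstacles = n_obstacles  # number of free-moving microspheres
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Trapped bead starts near one corner, target near the opposite one.
        self.bead = [1.0, 1.0]
        self.target = [self.arena - 1.0, self.arena - 1.0]
        self.obstacles = [[self.rng.uniform(2.0, 8.0), self.rng.uniform(2.0, 8.0)]
                          for _ in range(self.n_obstacles)]
        return self._obs()

    def _obs(self):
        # Observation: bead position, target position, obstacle positions.
        flat = list(self.bead) + list(self.target)
        for o in self.obstacles:
            flat += o
        return flat

    def step(self, velocity):
        # Continuous action: (vx, vy) velocity command, clipped to [-1, 1].
        vx = max(-1.0, min(1.0, velocity[0]))
        vy = max(-1.0, min(1.0, velocity[1]))
        self.bead[0] += vx * self.dt
        self.bead[1] += vy * self.dt
        # Free microspheres drift with small Gaussian (Brownian-like) steps.
        for o in self.obstacles:
            o[0] += self.rng.gauss(0.0, 0.05)
            o[1] += self.rng.gauss(0.0, 0.05)
        dist = math.hypot(self.target[0] - self.bead[0],
                          self.target[1] - self.bead[1])
        collided = any(math.hypot(o[0] - self.bead[0], o[1] - self.bead[1]) < 0.5
                       for o in self.obstacles)
        reward = -dist - (10.0 if collided else 0.0)
        done = dist < 0.3 or collided
        return self._obs(), reward, done
```

Because the agent only ever sees the observation vector, the same policy interface can be driven by simulated state, by positions extracted from microscope images, or by a mixture of both, which is the property the augmented-environment experiments in the paper rely on.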
Pages: 11
Related Papers
50 records in total
  • [1] Eye Control and Motion with Deep Reinforcement Learning: In Virtual and Physical Environments
    Arizmendi, Sergio
    Paz, Asdrubal
    Gonzalez, Javier
    Ponce, Hiram
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, MICAI 2023, PT I, 2024, 14391 : 99 - 109
  • [2] Deep learning for optical tweezers
    Ciarlo, Antonio
    Ciriza, David Bronte
    Selin, Martin
    Maragò, Onofrio M.
    Sasso, Antonio
    Pesce, Giuseppe
    Volpe, Giovanni
    Goksör, Mattias
    NANOPHOTONICS, 2024, 13 (17) : 3017 - 3035
  • [3] Playing FPS Games with Deep Reinforcement Learning
    Lample, Guillaume
    Chaplot, Devendra Singh
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2140 - 2146
  • [4] Deep Reinforcement Learning for General Game Playing
    Goldwaser, Adrian
    Thielscher, Michael
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 1701 - 1708
  • [5] Bridging the Reality Gap Between Virtual and Physical Environments Through Reinforcement Learning
    Ranaweera, Mahesh
    Mahmoud, Qusay H.
    IEEE ACCESS, 2023, 11 : 19914 - 19927
  • [6] Playing and learning in digitally-augmented physical worlds
    Rogers, Y
    Price, S
    LEARNING ZONE OF ONE'S OWN: SHARING REPRESENTATIONS AND FLOW IN COLLABORATIVE LEARNING ENVIRONMENTS, 2004, : 173 - 192
  • [7] Deep Reinforcement Learning for Conversational Robots Playing Games
    Cuayáhuitl, Heriberto
    2017 IEEE-RAS 17TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTICS (HUMANOIDS), 2017, : 771 - 776
  • [8] ROBOPIANIST: Dexterous Piano Playing with Deep Reinforcement Learning
    Zakka, Kevin
    Wu, Philipp
    Smith, Laura
    Gileadi, Nimrod
    Howell, Taylor
    Peng, Xue Bin
    Singh, Sumeet
    Tassa, Yuval
    Florence, Pete
    Zeng, Andy
    Abbeel, Pieter
    CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [9] Deep Reinforcement Learning for Procedural Content Generation of 3D Virtual Environments
    Lopez, Christian E.
    Cunningham, James
    Ashour, Omar
    Tucker, Conrad S.
    JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING, 2020, 20 (05)
  • [10] Perception and Skill Learning for Augmented and Virtual Reality Learning Environments
    Weng, Ng Giap
    Sing, Angeline Lee Ling
    COMPUTATIONAL SCIENCE AND TECHNOLOGY, 2019, 481 : 391 - 400