Learning to Paint With Model-based Deep Reinforcement Learning

Cited by: 74
Authors
Huang, Zhewei [1 ,2 ]
Heng, Wen [1 ]
Zhou, Shuchang [1 ]
Affiliations
[1] Megvii Inc, Beijing, China
[2] Peking University, Beijing, China
DOI
10.1109/ICCV.2019.00880
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We show how to teach machines to paint like human painters, who can use a small number of strokes to create fantastic paintings. By employing a neural renderer in model-based Deep Reinforcement Learning (DRL), our agents learn to determine the position and color of each stroke and make long-term plans to decompose texture-rich images into strokes. Experiments demonstrate that excellent visual effects can be achieved using hundreds of strokes. The training process does not require the experience of human painters or stroke tracking data. The code is available at https://github.com/hzwer/ICCV2019-LearningToPaint.
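The idea in the abstract can be illustrated with a toy sketch: a "model" renders a candidate stroke onto the canvas, and an agent picks stroke parameters that reduce the distance to the target image. The paper's actual method uses a neural renderer for differentiable Bezier strokes and a DDPG-trained agent; the disc-stamping renderer and greedy random-search "agent" below are stand-ins chosen only to keep the sketch self-contained.

```python
import numpy as np

def render_stroke(canvas, x, y, r, gray):
    """Toy renderer: stamp a filled disc of intensity `gray` at (x, y).
    (The paper trains a *neural* renderer to rasterize parameterized
    strokes differentiably; this disc is a simple stand-in.)"""
    h, w = canvas.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= r ** 2
    out = canvas.copy()
    out[mask] = gray
    return out

def paint(target, n_strokes=50, n_candidates=64, seed=0):
    """Greedy stand-in for the DRL agent: each step, sample candidate
    stroke parameters, render each with the model, and keep the stroke
    that most reduces the L2 distance to the target image."""
    rng = np.random.default_rng(seed)
    h, w = target.shape
    canvas = np.zeros_like(target)
    for _ in range(n_strokes):
        best, best_loss = canvas, ((canvas - target) ** 2).mean()
        for _ in range(n_candidates):
            x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
            r = int(rng.integers(1, max(h, w) // 4))
            cand = render_stroke(canvas, x, y, r, rng.random())
            loss = ((cand - target) ** 2).mean()
            if loss < best_loss:
                best, best_loss = cand, loss
        canvas = best  # reward = decrease in reconstruction loss
    return canvas

# A bright square on a dark background as the "painting" target.
target = np.zeros((32, 32)); target[8:24, 8:24] = 1.0
before = ((np.zeros_like(target) - target) ** 2).mean()
after = ((paint(target) - target) ** 2).mean()
```

The greedy search makes the loss monotonically non-increasing, so `after` ends below `before`; in the paper this per-step improvement is instead learned as a policy, which is what enables the long-term stroke planning the abstract describes.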
Pages: 8708-8717 (10 pages)