An Efficient and Resource-Aware Hashtag Recommendation Using Deep Neural Networks

Cited by: 5
Authors
Kao, David [1 ]
Lai, Kuan-Ting [2 ]
Chen, Ming-Syan [1 ]
Affiliations
[1] Natl Taiwan Univ, Taipei 10617, Taiwan
[2] Natl Taipei Univ Technol, Taipei 10608, Taiwan
Keywords
Hashtag recommendation; Zero-shot learning; Deep learning
DOI
10.1007/978-3-030-16145-3_12
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The goal of this research is to design a system that predicts and recommends hashtags to users when new images are uploaded. The proposed hashtag recommendation system is called HAZEL (HAshtag ZEro-shot Learning). Selecting the right hashtags can increase exposure and attract more followers on a social media platform. With the help of state-of-the-art deep learning technologies such as Convolutional Neural Networks (CNNs), image recognition accuracy has improved significantly; however, hashtag prediction remains an open problem because of the sheer volume of media content and the number of hashtag categories, and a single machine learning method is not sufficient. To address this issue, we combine image classification with semantic embedding models to expand the set of recommended hashtags. We show that not all hashtags are equally meaningful, and some are unsuitable for recommendation. In addition, by periodically updating the semantic embedding model, we ensure that the recommended hashtags follow the latest trends. Since the recommended hashtags require no training examples of their own, the system realizes the concept of zero-shot learning. We demonstrate that HAZEL successfully recommends the hashtags most relevant to each input image by applying our design to a large-scale collection of image-hashtag pairs from Instagram.
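The abstract outlines how HAZEL couples an image classifier with a semantic embedding space so that hashtags with no training images of their own can still be recommended. The Python sketch below illustrates that general idea under our own assumptions, not the authors' actual implementation: a CNN's softmax output is taken as given, class labels and hashtags are looked up in a shared embedding space, and hashtags are ranked by cosine similarity. Every function name, variable name, and embedding here is a hypothetical placeholder.

import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def recommend_hashtags(class_probs, label_vecs, hashtag_vecs, top_k=5):
    # Build a query vector as the probability-weighted average of the embeddings
    # of the classifier's predicted labels, then rank candidate hashtags (which
    # need no image training examples) by cosine similarity to that query.
    query = sum(p * label_vecs[label] for label, p in class_probs.items())
    scored = [(tag, cosine(query, vec)) for tag, vec in hashtag_vecs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy example: random vectors stand in for a trained embedding model, and the
# class probabilities stand in for a CNN's softmax output on one uploaded image.
rng = np.random.default_rng(0)
label_vecs = {l: rng.normal(size=50) for l in ["dog", "beach", "sunset"]}
hashtag_vecs = {h: rng.normal(size=50)
                for h in ["#dogsofinstagram", "#beachlife", "#sunsetlover", "#foodporn"]}
class_probs = {"dog": 0.7, "beach": 0.2, "sunset": 0.1}
print(recommend_hashtags(class_probs, label_vecs, hashtag_vecs, top_k=3))

In a deployed system the label and hashtag embeddings would come from a semantic model that is periodically retrained, which is how the abstract's trend-following and zero-shot expansion of the hashtag vocabulary would be realized.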
Pages: 150 - 162
Page count: 13
Related Articles
50 records in total
  • [1] Resource-Aware Saliency-Guided Differentiable Pruning for Deep Neural Networks
    Kallakuri, Uttej
    Humes, Edward
    Mohsenin, Tinoosh
    PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2024, GLSVLSI 2024, 2024, : 694 - 699
  • [2] Exploring Resource-Aware Deep Neural Network Accelerator and Architecture Design
    Li, Baoting
    Liu, Longjun
    Liang, Jiahua
    Sun, Hongbin
    Geng, Li
    Zheng, Nanning
    2018 IEEE 23RD INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2018
  • [3] Instagram Hashtag Prediction Using Deep Neural Networks
    Beketova, Anna
    Makarov, Ilya
    ADVANCES IN COMPUTATIONAL INTELLIGENCE (IWANN 2021), PT II, 2021, 12862 : 28 - 42
  • [4] Efficient Resource-Aware Neural Architecture Search with a Neuro-Symbolic Approach
    Bellodi, Elena
    Bertozzi, Davide
    Bizzarri, Alice
    Favalli, Michele
    Fraccaroli, Michele
    Zese, Riccardo
    2023 IEEE 16TH INTERNATIONAL SYMPOSIUM ON EMBEDDED MULTICORE/MANY-CORE SYSTEMS-ON-CHIP, MCSOC, 2023, : 171 - 178
  • [5] Efficient Resource-aware Neural Architecture Search with Dynamic Adaptive Network Sampling
    Yang, Zhao
    Sun, Qingshuang
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021
  • [6] Resource-aware speculative prefetching in wireless networks
    Tuah, NJ
    Kumar, M
    Venkatesh, S
    WIRELESS NETWORKS, 2003, 9 (01) : 61 - 72
  • [7] FPGA Resource-aware Structured Pruning for Real-Time Neural Networks
    Ramhorst, Benjamin
    Loncar, Vladimir
    Constantinides, George A.
    2023 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY, ICFPT, 2023, : 282 - 283
  • [8] Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition
    Balaji, Adarsha
    Song, Shihao
    Das, Anup
    Krichmar, Jeffrey
    Dutt, Nikil
    Shackleford, James
    Kandasamy, Nagarajan
    Catthoor, Francky
    IEEE EMBEDDED SYSTEMS LETTERS, 2021, 13 (03) : 142 - 145
  • [9] Resource-aware Speculative Prefetching in Wireless Networks
    N.J. Tuah
    M. Kumar
    S. Venkatesh
    Wireless Networks, 2003, 9 : 61 - 72
  • [10] RACC: Resource-Aware Container Consolidation using a Deep Learning Approach
    Nanda, Saurav
    Hacker, Thomas J.
    PROCEEDINGS OF THE 1ST WORKSHOP ON MACHINE LEARNING FOR COMPUTING SYSTEMS (MLCS 2018), 2018