50 items in total
- [1] Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations. RecSys 2019: 13th ACM Conference on Recommender Systems, 2019: 269-277
- [2] RecJPQ: Training Large-Catalogue Sequential Recommenders. Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM 2024), 2024: 538-547
- [3] Cross-Batch Negative Sampling for Training Two-Tower Recommenders. SIGIR '21: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021: 1632-1636
- [4] Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, Vol. 32
- [5] Training Large-Scale News Recommenders with Pretrained Language Models in the Loop. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 4215-4225
- [6] Importance Sampling in Neural Detector Training Phase. Soft Computing with Industrial Applications, Vol. 17, 2004: 43-48
- [7] Enhancing Siamese Networks Training with Importance Sampling. Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART), Vol. 2, 2019: 610-615
- [8] Counterexamples in Importance Sampling for Large Deviations Probabilities. Annals of Applied Probability, 1997, 7(3): 731-746
- [9] The Norwegian Colossal Corpus: A Text Corpus for Training Large Norwegian Language Models. LREC 2022: Thirteenth International Conference on Language Resources and Evaluation, 2022: 3852-3860
- [10] Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. Proceedings of the 17th ACM Conference on Recommender Systems (RecSys 2023), 2023: 890-896