In recent years, the use of asynchronous discussion forums in online learning has grown rapidly. In earlier studies, content analysis has been the most widely used method for exploring discussion transcripts. However, conventional content analysis is time-consuming and requires experienced coders. There is therefore a need for an automated approach to analysing online discussion transcripts to help instructors optimise learners' learning experiences. This article presents a systematic literature review of automated content analysis approaches for online discussion transcripts. Fifty-four relevant peer-reviewed conference and journal papers published between January 2016 and October 2021 were identified using the PRISMA guidelines and snowball sampling. Eight measurement dimensions were examined in online learning transcripts: cognitive, social, relevance/importance, summary, pattern, behaviour, topic, and learning resources. Six theoretical frameworks were applied in the selected studies, namely the Community of Inquiry, Stump's Post Classification, Arguello's Speech Acts, ICAP, Wise's Speaking Behaviour, and Bloom's Taxonomy. All selected studies were experimental, and 93% used machine learning to analyse discussion transcripts. English was the most common dataset language, used in 78% of the studies. The studies reported promising results in accuracy and precision. However, this research area still has room for improvement, particularly in reliability and in generalisability across domains.