Crowdsourcing contests have become increasingly important and prevalent with the ubiquity of the Internet. Designing efficient crowdsourcing contests is not possible without a deep understanding of the factors that affect individuals' continued participation and their performance. Prior studies have mainly focused on identifying the effects of task-specific, environment-specific, organisation-specific, and individual-specific factors on individuals' performance in crowdsourcing contests; to our knowledge, few, if any, studies have evaluated the effect of individuals' participation history on their performance. This paper addresses this research gap using a data set from TopCoder. The study derives competitors' participation history factors, namely participation frequency, participation recency, winning frequency, winning recency, tenure, and last performance, and constructs models of the effects of these factors on competitors' performance in online crowdsourcing contests. The findings demonstrate that most of competitors' participation history factors have a significant effect on their performance. The paper also shows that competitors' participation frequency and winning frequency positively moderate both the relationship between last performance and current performance and the relationship between tenure and performance. In contrast, individuals' participation recency and winning recency negatively moderate the relationship between last performance and current performance, but have no significant effect on the relationship between tenure and performance.
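
To make the moderation structure described above concrete, the sketch below shows one way such a moderated regression could be specified, with participation-history factors entering both as main effects and as interaction terms with last performance and tenure. This is an illustrative sketch only, not the authors' actual model: the variable names, the simulated data, and the OLS estimator are all assumptions introduced here for clarity.

```python
# Illustrative moderated-regression sketch (NOT the paper's actual specification).
# All variable names and the simulated data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "performance":        rng.normal(size=n),            # current contest performance
    "last_performance":   rng.normal(size=n),            # performance in previous contest
    "tenure":             rng.integers(1, 120, size=n),  # months since registration
    "participation_freq": rng.integers(1, 50, size=n),   # number of contests entered
    "participation_rec":  rng.integers(1, 365, size=n),  # days since last participation
    "winning_freq":       rng.integers(0, 20, size=n),   # number of contests won
    "winning_rec":        rng.integers(1, 365, size=n),  # days since last win
})

# Main effects of participation history, plus interaction (moderation) terms:
# frequency and recency variables moderating last_performance and tenure.
model = smf.ols(
    "performance ~ last_performance + tenure"
    " + participation_freq + participation_rec + winning_freq + winning_rec"
    " + last_performance:participation_freq + last_performance:winning_freq"
    " + last_performance:participation_rec + last_performance:winning_rec"
    " + tenure:participation_freq + tenure:winning_freq",
    data=df,
).fit()
print(model.summary())
```

Under this kind of specification, the signs and significance of the interaction coefficients correspond to the moderation effects reported in the abstract, for example a positive coefficient on `last_performance:participation_freq` would indicate that participation frequency strengthens the link between last performance and current performance.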