The increasing demand for big data processing has driven the adoption of commercial off-the-shelf (COTS) and cloud-based big data analytics services. Major cloud service vendors provide customized big data processing systems (BDPS), which are more cost-effective to operate and maintain than self-owned platforms, and end users can rent big data analytics services under a pay-as-you-go cost model. However, as users' data sizes grow, they must scale their rented BDPS to maintain approximately the same performance, measured by metrics such as task completion time and normalized system throughput. Unfortunately, there is no effective way to help end users choose between the scale-up and scale-out directions when expanding an existing rented BDPS, nor is there any metric for measuring the scalability of a BDPS. Furthermore, the performance of BDPS services varies across time slots due to co-location and workload placement policies in modern Internet data centers. To address these issues, this paper proposes a scalability metric for BDPS in clouds that quantifies scalability consistently under different system expansion configurations. This paper also conducts experiments on real BDPS platforms and derives optimization approaches for better BDPS scalability, such as file compression during the Shuffle phase of MapReduce. The experimental results demonstrate the validity of the proposed optimization strategies.
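For context, the Shuffle-phase compression mentioned above is typically enabled in Hadoop MapReduce through cluster configuration. The sketch below is not taken from the paper; it uses standard Hadoop 2.x property names, and the choice of Snappy as the codec is an illustrative assumption:

```xml
<!-- mapred-site.xml: compress intermediate map outputs before they are
     shuffled to reducers, trading CPU for reduced disk and network I/O -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```

The same properties can also be passed per job (e.g., `-D mapreduce.map.output.compress=true`) when evaluating the effect of compression on scalability without changing the cluster-wide defaults.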