Argumentation skills are emphasized by the Common Core State Standards (CCSS) and are viewed as essential for success in college, career, and life. Our project aims to develop formative assessment tasks that measure students’ argumentative reading and writing skills. We used the framework of the Discuss and Debate Ideas key practice, which describes the key argumentation skills, to guide task development and to gather evidence of students’ argumentation skills. Specifically, we designed 27 tasks targeting various argumentation skills and spanning three learning progression (LP) levels aligned with the grade-level expectations for argumentation in the CCSS. The goal was to evaluate the potential utility of an LP-based approach to task design for assessing argumentation skills. We conducted a study with 786 seventh- and eighth-grade students to examine how well these tasks function and to identify patterns of student performance. We also examined whether student performance patterns aligned with predicted LP levels, using task progression maps derived from item response theory (IRT) models. Results suggested that the majority of tasks were reliable and that the LP-based tasks were significantly related to students’ reading skills. Student LP performance was most strongly correlated with reading comprehension as measured by the RISE assessment and was also significantly associated with foundational reading skills such as word recognition, decoding, and vocabulary. However, some lower-level LP tasks proved unexpectedly difficult. We found evidence that factors such as topic, task format, and scoring could have contributed to item difficulty and affected student task performance. Implications for future LP research are provided.