In 2021, the National Institute of Justice - the research arm of the United States Department of Justice - released the "Recidivism Forecasting Challenge" ("the Challenge") with the stated goals of "increas[ing] public safety and improv[ing] the fair administration of justice across the United States," providing "critical information to community corrections departments...," and ultimately "improv[ing] the ability to forecast recidivism using person- and place-based variables" [68]. The Challenge was also designed, in part, to encourage "non-criminal justice forecasting researchers to compete against more 'traditional' criminal justice researchers" [68]. Contestants competed for a share of the $723,000 in prize money awarded to submitted models. In this work, we highlight how the Challenge was underpinned by a technosolutionist framing (emphasizing technical interventions without addressing underlying structural problems) [66] and plagued by serious ethical and methodological issues, including (1) the choice of training data and of an outcome variable extracted from racially biased and inaccurate law enforcement data systems, (2) data leakage that may have seriously compromised the Challenge, (3) the choice of a faulty fairness metric, which prevented submitted models from accurately surfacing bias issues in the data selected for the Challenge, (4) the inclusion of candidate variables that created the potential for feedback loops, (5) a Challenge structure that arguably incentivized gaming the evaluation metrics, leading to trivial solutions that could not realistically work in practice, and (6) the participation of contestants who demonstrated a lack of understanding of basic aspects of the structure and functions of the U.S. criminal legal system. We analyze the Challenge and its shortcomings through the lens of participatory design, applying emerging principles for robust participatory design practices in artificial intelligence (AI) and machine learning (ML) development to evaluate the Challenge's structure and results. We argue that if the Challenge's designers had adhered to these principles, the Challenge would have looked dramatically different or would not have occurred at all. We highlight several urgent needs and potential paths forward for any future efforts of this nature, recognizing the real and significant harms of recidivism prediction tools and the need to center communities directly impacted by policing and incarceration when deciding whether to develop risk assessment tools at all.