Conducting survival analysis on distributed healthcare data is an important research problem, as privacy laws and emerging data-sharing regulations prohibit sharing sensitive patient data across institutions. Distributed healthcare survival data often exhibit heterogeneity and non-uniform censoring, and involve patients with multiple health conditions (competing risks), all of which can result in biased and unreliable risk predictions. To address these challenges, we propose employing federated learning (FL) for survival analysis with competing risks. This work makes two main contributions. First, we propose a simple algorithm for estimating consistent federated pseudo values (FPV) for survival analysis with competing risks and censoring. Second, we introduce Fedora, a novel and flexible FPV-based deep learning framework that jointly trains our proposed transformer-based model, TransPseudo, tailored to the participating institutions (clients), without accessing clients' data, thus preserving data privacy. We conducted extensive experiments on real-world distributed healthcare datasets characterized by non-IID and non-uniform censoring properties, as well as on synthetic data with various censoring settings. Our results demonstrate that the Fedora framework with the TransPseudo model outperforms federated learning frameworks employing state-of-the-art survival models for competing risks analysis.
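For context, the federated pseudo values referenced above build on the classical jackknife pseudo-observation construction for the cumulative incidence function under competing risks; the display below is a minimal sketch of that centralized construction, not the federated algorithm itself, and the notation ($n$, $\hat{F}_k$, the leave-one-out estimate) is assumed here purely for illustration:
\[
\hat{\theta}_{i,k}(t) \;=\; n\,\hat{F}_k(t) \;-\; (n-1)\,\hat{F}_k^{(-i)}(t),
\]
where $\hat{F}_k(t)$ is the Aalen--Johansen estimate of the cumulative incidence of cause $k$ at time $t$ based on all $n$ subjects, and $\hat{F}_k^{(-i)}(t)$ is the same estimate computed with subject $i$ removed. The pseudo value $\hat{\theta}_{i,k}(t)$ can then serve as a per-subject regression target, including for censored observations.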