Federated learning (FL) is a promising approach to collaborative learning over cross-silo data. Participants benefit from FL because heterogeneous data improve the generalization of the global model while private data remain local. However, practical issues of FL, such as security and fairness, keep emerging and impede its further development. One of the most threatening security issues is the poisoning attack, which corrupts the global model according to the adversary's will. Recent studies have demonstrated that elaborate model poisoning attacks can breach existing Byzantine-robust FL solutions. Although various defenses have been proposed to mitigate poisoning attacks, the strict restrictions they impose force participants to sacrifice learning performance and fairness. Since fairness is no less important than security, it is crucial to explore alternative solutions that secure FL while ensuring both robustness and fairness. This paper introduces Romoa-AFL, a robust and fair model aggregation solution for cross-silo FL in an agnostic data setting. Unlike the previous study Romoa and other similarity-based solutions, Romoa-AFL ensures robustness against poisoning attacks and learning fairness in agnostic FL, which makes no assumptions about participants' data distributions or about an auxiliary dataset held by the server.