Recent developments in computational machine ethics have largely assumed a fully observable environment. However, such an assumption is unrealistic for ethical decision-making. Epistemic reasoning is one approach to dealing with partially observable environments and non-determinism. Current approaches to computational machine ethics require careful design of aggregation functions (strategies): different strategies for consolidating non-deterministic knowledge yield different sets of actions deemed ethically permissible. However, recent studies have not formalised a principled evaluation of these strategies. On the other hand, strategies for partially observable settings are also studied in the game theory literature, which provides axioms, such as Linearity and Symmetry, for evaluating strategies in situations where agents must act under the uncertainty of nature. Despite this resemblance, game-theoretic strategies have not been applied to machine ethics. Therefore, in this study, we adapt four game-theoretic strategies to three approaches to machine ethics with epistemic reasoning so that machines can navigate complex ethical dilemmas under partial observability. With our formalisation, we can also evaluate these strategies against the proposed axioms and show that a given aggregation function is more volatile in some situations but more robust in others.
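
As an illustration of why the choice of aggregation strategy matters, the minimal sketch below contrasts two classic decision criteria under uncertainty, Wald's maximin and the Laplace criterion, and shows that they can disagree on which action is permissible. The action names, goodness values, and the choice of these two criteria are illustrative assumptions, not the strategies or scenarios formalised in this work.

```python
# Hypothetical sketch: two decision criteria from game theory aggregate an
# action's possible outcomes under epistemic uncertainty and may disagree
# on which action comes out as permissible.

from statistics import mean

# Illustrative ethical "goodness" of each action across the states the agent
# considers possible (higher is better); the numbers are made up.
outcomes = {
    "tell_truth":  [0.9, 0.1, 0.8],   # good in most states, bad in one
    "stay_silent": [0.5, 0.5, 0.5],   # mediocre but safe in every state
}

def maximin(values):
    """Wald's criterion: judge an action by its worst possible outcome."""
    return min(values)

def laplace(values):
    """Laplace's criterion: treat all possible states as equally likely."""
    return mean(values)

for strategy in (maximin, laplace):
    best = max(outcomes, key=lambda a: strategy(outcomes[a]))
    print(f"{strategy.__name__}: permissible action is {best}")

# Output:
# maximin: permissible action is stay_silent
# laplace: permissible action is tell_truth
```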