Autonomy is the ability of a system to make decisions and take actions in the presence of uncertainty. Validating autonomy is therefore a matter of validating that the system makes intelligent decisions even when the observed state of the world differs from the expected one, and when the outcomes of the possible courses of action are uncertain. During the requirements engineering phase, the system's functions and their expected behavior are specified. For autonomous systems, however, it is challenging to specify all the scenarios they may encounter. Since autonomous behaviors cannot always be predetermined, it is difficult to reason about their completeness and correctness. Non-autonomous systems face these challenges as well, but they are more prevalent for autonomous systems because of their complexity and emergent behavior.

Our efforts have been threefold. First, we developed a goal-based method for requirements decomposition. Second, we embedded the method into a systems engineering framework and developed methods for decision and reliability analysis during operations. Third, we used simulation analysis to validate the theoretical methods and provide feedback. The decision analysis (DA) approach helps identify and mitigate run-time risks by bringing to the forefront the uncertainties, decisions, interactions, and other factors that may cause autonomous software to make erroneous decisions. The DA quantifies, for each decision the software can make, the failure risk due to uncertainties. The information about key decisions, and the circumstances that can cause autonomous systems to make incorrect decisions, is used to generate test scenarios that are run in a simulator to ensure the system can handle error-prone circumstances. The simulated decision outcomes are then leveraged to gain further insight into the assured system.
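The per-decision failure-risk quantification can be illustrated with a Monte Carlo estimate: when the observed state is a noisy version of the true state, a decision rule near a threshold can choose the wrong action. The following is a minimal sketch under our own simplifying assumptions (a scalar state, Gaussian sensor noise, and a toy braking rule); the function names and model are hypothetical illustrations, not the paper's actual tooling:

```python
import random

def decision_failure_risk(decide, true_state, sensor_noise, n_trials=10_000, seed=0):
    """Estimate, by Monte Carlo, the probability that the decision rule
    chooses a different action than it would under perfect observation,
    because the observed state differs from the true state."""
    rng = random.Random(seed)
    correct_action = decide(true_state)
    failures = 0
    for _ in range(n_trials):
        # Observed state = true state perturbed by sensor uncertainty.
        observed = true_state + rng.gauss(0.0, sensor_noise)
        if decide(observed) != correct_action:
            failures += 1
    return failures / n_trials

# Toy decision rule: brake if the estimated obstacle distance is short.
decide = lambda distance: "brake" if distance < 5.0 else "continue"

# Near the 5 m threshold, sensor noise makes erroneous decisions likely.
risk = decision_failure_risk(decide, true_state=5.2, sensor_noise=0.5)
```

Decisions whose estimated risk exceeds a tolerance would then be flagged as candidates for targeted simulation scenarios.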
We combine a goal-based approach with simulation analysis to facilitate requirements development for autonomous systems, and we provide a method for system-level reliability considerations at run time during the autonomous software development process. The DA and reliability analysis methods provide a formalism for treating uncertainty in the run-time decision-making process of autonomous software. This formalism accounts for the uncertainty in the outcomes of different courses of action, in terms of performance and cost, and includes a method for considering the system-level reliability implications. Incorporating system-level reliability into autonomous software decision making contributes to satisfying the system-level goals over the lifecycle of a given mission. We demonstrate our approach using a hypothetical rover path-planning requirements example based on the literature about Mars rovers and on lessons learned from real-world observations at the Jet Propulsion Laboratory (JPL) [2]. The specifics of the Mars rovers, which have not been cleared for external release, are not used in our demonstrations.
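One generic way to formalize a choice among courses of action, weighing performance and cost against system-level reliability, is an expected-utility rule that discounts each option by its failure probability and penalizes the reliability consequence of failure. The sketch below is our illustrative assumption of such a formalism applied to rover path planning; the class, field names, and numbers are hypothetical and do not reproduce the specific formulation used at JPL:

```python
from dataclasses import dataclass

@dataclass
class Course:
    name: str
    expected_benefit: float   # e.g. science value of reaching the target
    expected_cost: float      # e.g. energy or time spent on the traverse
    p_failure: float          # probability the traverse fails
    failure_penalty: float    # system-level reliability impact of failure

def expected_utility(c: Course) -> float:
    # Net performance (benefit minus cost) weighted by success probability,
    # minus the expected reliability penalty of failure.
    return ((1.0 - c.p_failure) * (c.expected_benefit - c.expected_cost)
            - c.p_failure * c.failure_penalty)

# Two candidate paths to the same target: short but risky vs. long but safe.
courses = [
    Course("direct-rocky", expected_benefit=10.0, expected_cost=2.0,
           p_failure=0.30, failure_penalty=20.0),
    Course("long-smooth", expected_benefit=10.0, expected_cost=4.0,
           p_failure=0.05, failure_penalty=20.0),
]
best = max(courses, key=expected_utility)
```

With these illustrative numbers the longer, smoother path wins: its lower failure probability outweighs its higher traversal cost once the system-level penalty of a failed traverse is accounted for.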