Purpose: The great filter and an unfriendly artificial general intelligence might each pose an existential risk to humanity, but the two risks are anti-correlated: at most one of them can be the peril that strikes us. The purpose of this paper is to consider the implications of having evidence that mankind is at significant peril from both risks.

Design/methodology/approach: The paper creates Bayesian models under which one might obtain evidence of being at risk from two perils while knowing that at most one of them can strike.

Findings: Humanity should perhaps be more optimistic about its long-term survival if it has convincing evidence that both risks are real than if it has equally convincing evidence that only one of the perils is likely to strike.

Originality/value: The paper derives the implications of being greatly concerned about both an unfriendly artificial general intelligence and the great filter.
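To make the Findings claim concrete, here is a minimal numerical sketch, not the paper's actual model, of how mutually exclusive perils can make combined evidence reassuring. It assumes three hypotheses (filter peril, AGI peril, safe), a hypothetical "reliability" variable capturing whether our risk-detection methods track the truth, and illustrative priors and likelihoods chosen purely for demonstration. Because the perils are mutually exclusive, strong signals for both are most plausibly explained by unreliable evidence, which dilutes both warnings at once:

```python
# Minimal Bayesian sketch (illustrative, not the paper's model) of how
# evidence for two mutually exclusive perils can leave us MORE optimistic
# than evidence for just one.
#
# Hypotheses: "F" = great filter ahead, "A" = unfriendly AGI, "S" = safe.
# reliable = hypothetical variable: do our warning signals track truth?
# All priors and likelihoods below are made-up assumptions.

from itertools import product

PRIOR_H = {"F": 0.3, "A": 0.3, "S": 0.4}   # assumed priors over hypotheses
PRIOR_R = 0.7                              # assumed prior that signals are reliable
NOISE_RATE = 0.3                           # chance a signal fires when unreliable

def signal_likelihood(signal_fires, targets_h, h, reliable):
    """P(one signal's outcome | hypothesis h, reliability)."""
    if reliable:
        p_fire = 0.9 if h == targets_h else 0.05  # reliable signals track truth
    else:
        p_fire = NOISE_RATE                       # unreliable signals are noise
    return p_fire if signal_fires else 1.0 - p_fire

def posterior_safe(s_f, s_a):
    """P(safe | observed filter-signal s_f and AGI-signal s_a)."""
    joint = {}
    for h, reliable in product(PRIOR_H, [True, False]):
        p = PRIOR_H[h] * (PRIOR_R if reliable else 1.0 - PRIOR_R)
        p *= signal_likelihood(s_f, "F", h, reliable)  # filter-peril signal
        p *= signal_likelihood(s_a, "A", h, reliable)  # AGI-peril signal
        joint[(h, reliable)] = p
    total = sum(joint.values())
    return sum(p for (h, _), p in joint.items() if h == "S") / total

print(f"P(safe | filter signal only): {posterior_safe(True, False):.3f}")
print(f"P(safe | AGI signal only):    {posterior_safe(False, True):.3f}")
print(f"P(safe | both signals):       {posterior_safe(True, True):.3f}")
```

With these made-up numbers, P(safe | both signals) comes out near 0.25, above the roughly 0.15 obtained when only one signal fires, matching the direction of the paper's finding; the size of the effect depends on the assumed priors and noise rates, not only on the mutual exclusivity of the perils.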