In 'Against Longtermism', Eric Schwitzgebel writes: "I accept much of Ord's practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now." He offers four objections, which are interesting and well worth pondering, but I think ultimately unpersuasive. Let's ponder them in turn.

(1) There's no chance humanity will survive long-term:

"All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk, perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can't survive existential risks of 1/100 per century for a million years. If this reasoning is correct, it's very unlikely that there will be a million-plus-year future for humanity that is worth worrying about and sacrificing for."

This seems excessively pessimistic. Granted, there's certainly some risk that we will never acquire resilience against x-risk. But it's hardly certain. Two possible routes to resilience include: (i) fragmentation, e.g. via interstellar diaspora, so that different pockets of humanity could be expected to escape any given threat; or (ii) universal surveillance and control, e.g. via a "friendly AI" with effectively god-like powers relative to humans, to prevent us from doing grave harm.

Maybe there are other possibilities. At any rate, I think it's clear that we should not be too quick to dismiss the possibility of long-term survival for our species. (And note that any non-trivial probability is enough to get the astronomical expected-value arguments off the ground.)

(2) "The future is hard to see." This is certainly true, but doesn't undermine expected-value reasoning. Schwitzgebel writes:

"It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to almost destroy humanity right. . ."
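The arithmetic on both sides of this exchange can be made concrete. A minimal sketch: the 1/100-per-century risk figure comes from the quoted objection, while the probability and population figures in the second half are purely illustrative assumptions, not numbers from either author.

```python
# Objection (1): compound a constant 1/100-per-century existential risk
# over a million years (10,000 centuries).
risk_per_century = 0.01
centuries = 10_000
p_survival = (1 - risk_per_century) ** centuries
print(f"Survival probability: {p_survival:.2e}")  # ~2e-44: effectively zero

# The expected-value reply: even a tiny chance of a resilient long-term
# future can carry astronomical stakes. Both numbers below are
# hypothetical placeholders chosen only to illustrate the structure.
p_resilient_future = 1e-6   # assumed chance we ever achieve resilience
future_lives = 1e16         # assumed number of lives in such a future
expected_lives = p_resilient_future * future_lives
print(f"Expected future lives: {expected_lives:.1e}")
```

The point of the second calculation is structural: so long as the probability of a resilient future is non-trivial rather than exactly zero, multiplying it by an astronomically large payoff yields a large expected value, which is all the longtermist argument needs.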
News source: Philosophy, et cetera
Original content at: www.philosophynews.com…