Sometimes, the solution is right in front of you.
I’m a science fiction fan; I can still recall the set of four or five vertical rotating racks of paperbacks at the Paddy Hill Library. I read most of those books, I’m sure – dozens and dozens.
Among all the science fiction themes, time travel is probably my favorite. I always enjoyed, and still really do, finding out how an author was going to resolve all the questions that people have about how time travel might work.
Most of my life, I’d heard about the Hitler question: if you were a time traveler and found yourself in the right time and place, would you kill Hitler? One answer is yes, of course, he’s evil; the tension is between that and “no, I wouldn’t, because it could unleash something much worse.”
Admittedly, I was stuck on this one for a long time (to be fair, the debate was already in play when I was a kid, so I can perhaps be forgiven for assuming it was a genuinely tough problem). Then I realized that the whole premise of the question is flawed.
Taking a decisive approach, I realized that there’s no information difference between someone from 2016 going back in time and someone in 1939 deciding to kill Hitler. In neither case does anyone know what the future will bring. Any (and every) argument that supposedly counsels “don’t change the past” applies with equal force to the decision made in the present, i.e., in 1939. Or, for that matter, to a 2016 decision to kill the butchers of Daesh in Syria and Iraq. Sure, defending the weak and helpless victims of torture, slavery, and genocide might lead to some worse future, but that doesn’t mean we should abandon every moral urge we have to protect each other.
The principle of decisiveness tells us that if a piece of information doesn’t affect the choice we would make, it is irrelevant and can be discarded. The argument that “the future might, in some generic and unforeseen way, be worse” applies to every choice we make in the present, so we disregard it and focus instead on the actual better-or-worse specifics of the choice at hand. Only corporate America refuses to make decisions in the face of unknown unknowns (uncertainty, à la Frank Knight), and that’s because of a funky punishment/reward system skewed to the left: we punish people for bad outcomes without regard to whether they were the result of well-placed bets.
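The irrelevance claim has a simple arithmetic core, which can be sketched in a few lines of Python (a toy payoff table of my own invention, not anything from the original debate): a “the unknown future might be worse” penalty that attaches equally to every option can never change which option comes out on top, so it carries no decision-relevant information.

```python
# Hypothetical payoffs for each choice; the numbers are illustrative only.
options = {"act": 10.0, "do_nothing": 2.0}

# The generic "unknown future" worry applies to EVERY option equally.
UNKNOWN_FUTURE_PENALTY = -1000.0

def best(payoffs):
    """Return the option with the highest payoff."""
    return max(payoffs, key=payoffs.get)

# Apply the same penalty to all options.
with_penalty = {k: v + UNKNOWN_FUTURE_PENALTY for k, v in options.items()}

# A uniform shift leaves the ranking, and therefore the choice, unchanged.
assert best(options) == best(with_penalty)
```

A constant added across the board shifts every payoff but not their order, which is exactly why the generic worry can be set aside in favor of the specific merits of each choice.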
I can’t believe that in all the years I’ve read about this debate in scifi, no one’s ever raised this issue. THAT is why I do what I do: I untangle problems to create and structure opportunities for decisions.
The illustration below, from Jessica Hagy’s wonderful site Indexed, drove me to actually write this down.