NaClhv

Theology, philosophy, math, science, and random other things
2015-11-23

How to think about the future (Part 4)

What I am about to say in this post is less certain than what I have said in the previous posts; it's a half-formed thought, which I only post because I think it may be more important than my other thoughts, especially regarding the future of human society. In short, this post will be more nebulous than the others in the series.

The thought here is simple:

I wonder if humans only learn through experiments.

That doesn't seem important? Let me rephrase that:

I wonder if humans only learn through experiences. Or, to put it in a little more detail: I wonder if humans only learn through experiencing mistakes.

Note that I intend for this to apply to human societies as a whole rather than just to individuals. You see, when we start talking about the future, we sometimes have this idea that we'll reach some kind of utopia, where all our social problems will be solved. We just have to learn to treat each other as fellow humans, right? Surely with the right education and policies, and with our ever increasing powers of science and technology, we'll eventually reach that eternally perfect society?

If there were some cap on the amount of beauty in the universe, or a limit to human potential, perhaps. Then we would eventually reach that maximum and stay there. We could then perhaps be like those space-faring aliens whom I mentioned before, who are fated merely to rule the stars and then perish with the universe. But I don't think that this is the case. I don't think that we are bounded in that way.

So, if we have infinite potential, and we can only learn through experiencing mistakes, what will the future be like? Well, here is one possible future history:

One thing that it's popular to wonder about nowadays is whether we should make a computer that's smarter than a human (whatever that means). Now, this is a question of immense importance, but if we can only learn through experiencing mistakes, the only way to settle the question would be to try it, then see the results: there is no historically analogous situation to draw on. And getting the question wrong may be disastrous.

So then, after the disastrous Skynet wars of 2050, humanity might finally agree on what kind of artificial intelligence to build. This will allow us to colonize the other planets in the solar system - and we'll then ask questions like "how should we share the resources between the different planets?" This type of question has never been asked on a planetary scale, so there will be no historically analogous situation to draw on. Even a super-intelligent AI might not be able to reason without data. So again, we'll just have to try out different models of planetary economics, and learn by trial and error. Mars and Jupiter may prosper, but the citizens of Mercury and Earth may suffer for generations under an oppressive system of planetary trade.

Eventually, that too will get sorted out, after a great deal of human suffering. But at this point, the differences between the haves and the have-nots at the planetary scale, subject to different planetary conditions and combined with the genetic tinkering that's been going on, may threaten to tear apart our conception of "the human race". What should we do about this? Should we allow a portion of humanity to evolve separately from the rest? Again, this will be the first time in history that we can meaningfully ask this question, and the only recourse may be to try different policies and make our mistakes. And given the momentous nature of the question, the consequences of getting it wrong will be correspondingly tragic.

Of course, these scenarios are only projections from the mind of an early 21st-century human. More likely, the kinds of issues faced by these future people will be completely incomprehensible to us; just as, say, the 20th century's nuclear doctrine of Mutually Assured Destruction would be unimaginable to a prehistoric human. And each time, because of this unimaginable newness and lack of historical precedent, mistakes may have to be made in order to learn from them. And each time, because of humanity's increased power and the larger scale and scope of the problem, the mistakes will be more costly, even as the benefits of getting it right propel us further into the future.

What kind of future is this? It's a strangely hopeful yet tragic one. Our powers and glory will perpetually increase - but many of the steps will be paid for with ever more costly mistakes. You think that slavery or the killing fields were bad? Such tragedies will only increase in enormity. On the other hand, you think that modern democracies and smartphones are good? The overall trajectory of history as a whole will continue to be upwards, as all these things are perpetually superseded by something better.

As I said, I'm unsure about all this; but it seems likely, as it has been the pattern of history thus far.


You may next want to read:
How to think about the future (Conclusion) (Next post of this series)
The biblical timeline of the universe
The lifetime of evil (part 2)
Another post, from the table of contents
