On the difficulty of coming up with new ideas

It’s pretty hard to come up with a completely original idea these days.

It might be because almost all of the basic ideas, the low-hanging fruit, have already been thought of, and anything that remains to be stumbled upon by humankind is far more complex. But the problem with that theory is that it has probably always felt like this. A fair bit of knowledge had already been accumulated by the time Newton and Einstein came along. They still went ahead and changed the world as they knew it.

It’s probably fair to assume that at any given time, the knowledge we possess is infinitesimal compared to what we are yet to discover.

As science becomes more and more complex, an ever-larger share of existing knowledge becomes a prerequisite for developing a new idea. This means it now takes scientists longer to make breakthroughs than it did in previous eras.

But that is the relative calm before the storm. What if we keep progressing down this path? One day the prerequisite knowledge for new ideas will be so vast that a normal human lifetime will not be long enough to both assimilate it and then apply it in the search for something new. In short, the human race will stagnate.

There are two ways we could potentially avoid this stagnation.

One is to extend the average lifespan of a human. But that will present its own problems, as people who live longer will need more resources and energy to sustain themselves.

The other option, which is becoming increasingly likely, is that we cede the base level of knowledge to machines. This would not be dissimilar to how knowledge of the internal workings of a calculator is not a prerequisite to using one to solve complex mathematical problems. That’s a rather simplistic, limited example, but Artificial Intelligence could take on a far heavier load of base knowledge than simple number-crunching, freeing up humans to focus on pushing the envelope.

But that solution too seems incomplete. A scientist who offloads some of her tasks to an AI would still need, at some level, a good understanding of those tasks. That means she has not really skipped the process of studying and comprehending them, merely the act of executing them repetitively. And that may not go far enough as the years advance and the sum total of human knowledge multiplies exponentially.