The half-eaten carcass floats to the seabed. By the time the red mess grounds, the Chinstrap penguin’s other half is already decomposing in the Leopard seal’s stomach. Thirty seconds later, the first episode of the BBC’s “Life” series ends with an orangutan guiding her child through the Malaysian treetops. Attenborough makes it clear: successfully evading circumstantial threats is no get-out-of-jail-free card against the eventual unconditional surrender to death.
Two million years ago, it was this reality that drove our ancestors, Homo habilis, to use tools to crack open nuts and replenish their energy for the next day’s hunt. It’s why, seventy thousand years ago, a band of our ancestors left Africa to escape the harsh climate that followed the Toba eruption and resettled in the Arabian peninsula. This evolutionary history explains why your most primitive behaviours (colloquially called the four F’s: feeding, fighting, fear and, unsurprisingly, fucking) arise from a small region of neural tissue called the hypothalamus that’s found in all mammals. Your basic instincts are the sum total of a shared family history, a string of traumatic memories detailing why 99.9% of all species have gone extinct.
What makes Homo sapiens so remarkable is that we’ve been afforded the opportunity to surpass nature’s original restrictions. From the primitive stone tools of two million years ago, we’ve steadily advanced our engineering abilities to reshape nature to our liking. As a result, a growing majority of our population is no longer in poverty, and world hunger has fallen along with infant and maternal mortality rates. While our ancestors’ original fears may still weigh differently across segments of today’s world, the day-to-day concerns for most of us are no longer about survival, starvation or even avoiding viruses.
But the danger of accomplishment is the possible afterbirth — complacency.
“It is not the strongest of the species that survives, nor the most intelligent. It is the one that is most adaptable to change”.
Accept that assessment, often attributed (likely apocryphally) to Charles Darwin, the mastermind behind evolution’s decoding, and you start to wonder what he would say about today, given that 43% of newborns in the 1800s never reached their fifth birthday.
Examining the impact of changes in our world today delivers two ideas:
- Any species faces constant mortal threats.
- Tactical changes can only postpone the inevitable defeat against those threats.
So when we look to the changes we’re integrating into our lives today, we’re left asking — how far away is that end-date? How much further have we pushed it away?
That’s assuming you ask the right question. If you were asked to assess modern innovation, it’s unlikely you’d answer with the questions above.
To some this could be a sign that there’s no need to worry anymore, that the innovation of today brings enough medical and cultural benefit that being concerned about mortality is just a waste of time.
I’ve always thought “it is what it is” could be the most useless phrase in history.
The problem with using a comparison to determine value is that it unquestioningly assumes there is a value to be measured. Imagine you’re in a store that sells everything that can be bought. Nothing in that store is free, but something may well cost $0.01. By default, everything has a price.
So if there’s no default value to all innovation, then can we be sure we’re headed in the right direction? In fact, if we believe our world is at its optimal performance because it is better than before, are we biased towards only resolving issues that affected us in the past?
Of course, this isn’t an absolute statement. Every day we make strides across vast problems that are by-products of modern innovations: data security, lithium-ion battery leakage, minimising voltage drops in data centres as AC power is converted to DC for each server blade, and even the more intrusive robotic surgery. The real question, though, is whether we prioritise improving individual comfort or societal comfort.
The possibility that modern innovation doesn’t prioritise societal comfort might come as a jolt. Surely the listed examples of modern innovation prove that our society is improving in broad areas, and as noted earlier, our general livelihood is a far-removed cousin of life a few centuries ago.
The concern arises from the fact that innovation’s quality is judged by the rate of change from the past. Innovation is fixated on correcting our past mistakes and accidents.
Obsessing over outperforming history has to be the fastest way to walk backwards. It is simply being brainwashed by the past. It’s literally living in a simulation.
To demonstrate just how easy it is to have an experience that is so far removed from what is actually right, you’re going to need to count.
What is 1 + 2 + 3 + 4 + … to infinity?
It takes about 150–200 milliseconds for you to consciously recognise a thought after your brain has already generated it. But even if you gave your brain far longer, if you were unfamiliar with the answer, you’re almost certainly wrong.
The answer is -1/12. Here’s the logic:
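One standard sketch of that logic (deliberately non-rigorous, and only meaningful under summation methods like Ramanujan summation) uses two helper series, S₁ and S₂:

```latex
\begin{align*}
S_1 &= 1 - 1 + 1 - 1 + \dots = \tfrac{1}{2} \quad \text{(averaging the partial sums } 1, 0, 1, 0, \dots) \\
S_2 &= 1 - 2 + 3 - 4 + \dots \\
2S_2 &= (1 - 2 + 3 - 4 + \dots) + (0 + 1 - 2 + 3 - \dots) = 1 - 1 + 1 - 1 + \dots = S_1
    \;\Rightarrow\; S_2 = \tfrac{1}{4} \\
S - S_2 &= (1 - 1) + (2 + 2) + (3 - 3) + (4 + 4) + \dots = 4 + 8 + 12 + \dots = 4S \\
-3S &= S_2 = \tfrac{1}{4} \;\Rightarrow\; S = -\tfrac{1}{12}
\end{align*}
```

Every step here abuses rules that only hold for convergent series, which is exactly the point: the manipulation feels reasonable while leading somewhere your intuition refuses to follow.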
I won’t explain the maths in full, despite how much I’d like to, partly because the fun comes from the journey to understanding the answer, so treat this as your starting step. What’s important about the demonstration is that it’s representative of an idea: what if you’re conceptually blind? What if you just can’t register an idea, no matter how pertinent or fundamental it is to your livelihood?
There are entire fields dedicated to demonstrating the vulnerabilities of the human mind, and they’re not limited to cognitive psychology or economics. Neuro-computing, the behavioural sciences and, in general, any social science can highlight the shortcomings of human cognition. I highly recommend “Thinking, Fast and Slow” by Daniel Kahneman to anybody who wants to know just how fraudulent they’ve been to themselves.
This is by no means a conclusive effort to show, firstly, that survival of our species is our top priority and, secondly, that our species can easily be disenchanted, but I do hope it at least kickstarts some of your own thoughts. A personal dialogue will always last longer than any text on a screen.
Taken together, these two ideas produce a concerning thought: what if our species is disenchanted about survival? The best way to predict the direction of our species is to look at innovation, the process that breaks and builds new paths for us to follow. Where is innovation leading us after bringing us out of so much initial mess?
Questions like this often return sweeping attacks on Marxism or neo-liberalism, or some abstract swipe at Millennials and their apparent obsession with identity politics. While there may be some truth in either of these, they generally relate to societal constructions and how we should collectively function. Surprisingly, the one thing all of these movements fail to consider is defining individual purpose. Any movement that heralds individualism and self-autonomy will refrain from collectively defining purpose. But the irony is that this inaction translates to the idea that anyone’s purpose is undefined. Which is just as defining.
I’ve written before about the dangers of the modern world, and they’re not limited to climate change and nuclear warfare. Arguably the biggest dangers are the ones you don’t know about.
The danger with modern innovation is that it’s unashamedly biased towards championing only the user’s choice. There is no space for communal action. Sure, a collective of individual actions may precipitate a group effort, but it’s each to their own at every step of the way. Any bindings between users have been completely eradicated. This is the most dangerous consequence of modern innovation.
While some readers may think of how Facebook might be brainwashing kids or how iPads at the kitchen table damage somebody’s social skills, they aren’t the problem. Modern products are designed to provide the perfect user experience. Consumer products began with home appliances, were compressed into portable devices, and now channel that philosophy through an ever-more-attractive mobile operating system that’s only improving.
Just as Apollo 11’s landing on the Moon in July 1969 opened a new chapter in human history, so too has the smartphone. Before the age of modern transistors, appliances used vacuum tubes; a 1950s television may have used a couple of dozen of these to generate a black-and-white image. An iPhone with the A11 processor has 4.3 billion transistors in a chip measuring just 87.66 mm². These machines are only becoming more powerful. And much more luminous.
The smartphone is the conduit between what you want and what you receive. On average, an American adult spends 2 hours and 51 minutes a day on their smartphone. If that same adult sleeps 7 hours, nearly 17% of their waking hours go to their smartphone. Across all forms of screen, an American adult averages 10 hours and 39 minutes of screen time per day. In other words, roughly 63% of the time somebody is awake is spent in front of a screen.
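Those percentages are easy to sanity-check yourself. A quick back-of-the-envelope calculation, assuming the 7-hours-asleep figure (so 17 waking hours a day):

```python
# Back-of-the-envelope check of the screen-time shares,
# assuming 7 hours of sleep, i.e. 17 waking hours per day.
AWAKE_MINUTES = 17 * 60          # 1020 waking minutes

smartphone = 2 * 60 + 51         # 2 h 51 min on a smartphone
all_screens = 10 * 60 + 39       # 10 h 39 min across all screens

smartphone_share = smartphone / AWAKE_MINUTES
screen_share = all_screens / AWAKE_MINUTES

print(f"Smartphone:  {smartphone_share:.1%} of waking hours")  # ~16.8%
print(f"All screens: {screen_share:.1%} of waking hours")      # ~62.6%
```

The exact decimals depend on how precisely you treat the sleep figure, but the order of magnitude is the point: well over half of every waking day, mediated by a screen.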
At what point has software ever encouraged you to make a compromise? Software isn’t designed for that. Software provides the options and users’ preferences kill off the unwanted choices. There is no alternative to a user’s preference.
An interesting rebuttal is that this is clearly what users want. They want complete independence, to be unchained from any commitment or responsibility to what somebody else wants. Theoretically this sounds brilliant: do as you please.
Do as you please.
At what point does a user start developing a blanket expectation that what they want is what they get? Does the best software today promote anarchist ideals? To make sure we’re all on the same page, I’m describing anarchism as the absence of any civil structures, with individuals free to voluntarily choose whether or not to be governed.
It’s important to recognise that most universities are the petri dishes of future movements. The growth of Marxism and identity politics, along with neo-liberalism, in these universities is actually a weak categorisation of what I think could be representations of anti-categorical autonomy.
To unpack that idea, let’s consider its two components, “anti-categorical” and “autonomy”. The first half refers to the growing resistance to any communal identification and, instead, the growing fractionation of somebody’s identity. The pitfall with fractionating an identity is that there is no limit, so at what point does anyone have common ground with one another? At what point do we have a shared purpose? When are we allies in any future fights?
The latter half is the self-directed nature of somebody’s reasoning: “the decision is mine” sort of thinking.
To be clear, I’m not blasting social-justice issues; they are not the focus. I’m concerned about users’ growing expectation to receive whatever they want, and in a world where these experiences are taking up more and more of users’ time, at what point are we still inclined to fight the future together?
In the 1990s, Carl Sagan (this video captures his essence if you’re unfamiliar with him) wrote about a society dependent on engineering and science in which the majority of the population were unaware of the intricacies and nuances of how such a system operated. Given that the U.S. is a historic leader in technical innovation, it’s important to assess how its workforce, and future entrants, perceive STEM today. So it’s worrying that only 16% of U.S. high-schoolers are interested in STEM subjects, and that only 1 in 4 STEM graduates actually enters a STEM-related job.
By now I hope the picture positioned in front of you suggests there aren’t enough troops for the future wars. And “wars” is the perfect word to describe what we’re going to get ourselves into.
Firstly, in a literal sense, it’s undeniable that there will be future military engagements at some point, and that these conflicts will be orchestras of the engineering and scientific achievements of their times. But figuratively, our next few “wars” need to resolve the challenges of mass AI integration into society; the threats of viruses and other diseases that are only becoming easier to spread thanks to advancements in transportation; interplanetary civilisation; the emergence of a sub-species defined by their blurred bio-tech and biological composition; and so on. If you want to write a list of what may “complicate” the 21st century, just find the worst possible scenario for each invention and work backwards. Any actuary will tell you it’s a matter of probability, not possibility.
It’s a pretty interesting idea that getting what we want could be toxic. The danger with software is that its physical properties, the qualities most influential on our thoughts, are products of industrial design. Asbestos and mercury are easy to avoid; in general, there’s no commercial reason to advocate their use. Alluring and ingenious user experiences seem harmless, look harmless and are immediately harmless.
In “The Iron Lady” Margaret Thatcher says:
“Watch your thoughts for they become words. Watch your words for they become actions. Watch your actions for they become…habits. Watch your habits, for they become your character. And watch your character, for it becomes your destiny! What we think we become.”
Justifying any of your decisions because you “want” it is the same reasoning a child gives their parents. A parent’s rationale for disciplining their child is that what’s uncomfortable today is so their child can be comfortable tomorrow.
And that’s because parents know what’s ahead for their child. Do the products we use today know what’s ahead? Do they know where they’re leading us?