"Computers have been given the ontological kid glove treatment." - Jaron Lanier
"TED Talks don't know what to do with spirit, because it can't be proven on a spreadsheet, it can't be tested in a lab. But spirit, that which is inaccessible to the five senses and yet is more real than that which you can hold in your hand—it's the giant elephant in the room. 'We can't talk about spiritual things.' Well, you're going to have to at some point." - Rob Bell
A few months ago, I caught on to this idea of becoming conscious, of becoming aware of the effect that passive consumption of media and use of technology is having on me. George Orwell wrote, "Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious.”
I think the point, carried into our moment, is that allowing intrusive technology to govern us is dehumanizing. In our zombified state of passive consumption we're ceding our dignity bit by bit, and in our unquestioning reliance on technology we discredit the value of human intuition. Not that I'm the picture of self-reliance—I'd last maybe a day in the wilderness—but am I willing to wholesale trade my gut for my Google?
Jaron Lanier echoed that sentiment in "You Can't Argue With A Zombie," an essay that's had a profound effect on my thinking about technology. Lanier includes among zombies those who can't zoom out to see the bigger picture because of their belief that all we are is a physical body lacking consciousness or a soul.
These people devalue human intuition too much to see that technology devoid of consideration for the human experience is a bad, bad thing. They're the trending sort of empiricists or positivists we sometimes call the "new atheists": Dennett, Dawkins, Hitchens, and the like. In his essay Lanier is specifically responding to a work by Daniel Dennett denying the existence of a transcendental human consciousness. Consciousness, Dennett holds, comes from something purely chemical.
Remember those old commercials, "This is your brain on drugs"? Or no, what was it… oh yeah, the "Above The Influence" commercials. "Sarah? Sarah?" There was that kid who just deflated and sank into the couch. Am I remembering that right? "Sarah hasn't been the same since she started smoking pot."
I want you to think about your brain on Instagram. As you're scrolling, passively browsing, what's your brain doing? It's turning to mush, isn't it? Don't you feel that? You've trained your brain to coast on a sliver of its capacity—you've been conditioned to mindlessly bob and weave through the barrage of advertisements and bot posts to get to the good stuff, the meat, the eye-catchers, the wonderful distractions.
In a series of mere instants you sort each post (lame, boring, passable, invaluable) and decide which ones deserve a little more of your attention. And you manage to do it effortlessly. Again, mindlessly. You're a well-oiled distraction-finding machine. A zombie.
This is your brain on drugs, and the drug is distraction. What are you doing with social media, when you’re mindlessly consuming? You’re turning your attention away from what’s happening IRL, away from what’s right there in front of you, to peek through the window of your smartphone into a different world. The world of the fake.
Fake news, fake friends, fake followers, Russian bots, sponsored posts disguised as actual personal thoughts, advertisers disguising commercials as friends. And you’re good at spotting fakes, but not good enough. As you’re getting better at spotting the fake, the fake is getting better at tricking you.
This is not a new idea. Read: “Active Choice, Passive Consumption,” or "Facebook says ‘passively consuming’ the News Feed will make you feel worse about yourself,” or “Millennials Spend 18 Hours a Day Consuming Media—And It's Mostly Content Created By Peers," or “The Lucrative Business of Fake Social Media Accounts."
In “Weeding Out Fake News,” the Wilfried Martens Centre states that "80% of middle school pupils who took part in tests could not see the difference between a genuine news story and sponsored content, even if the latter was visibly labeled as ‘sponsored.’”
The Centre goes on to say, "Many students, despite their presumed online fluency, were unaware of the basic indications of verified information.” The solution they propose is to bolster ‘e-literacy.’ They assert, “Education and improving the e-literacy of citizens are probably the best ways to limit the influence of lies.”
We're good but not good enough at determining the real from the fake as we passively, mindlessly scroll. That's a problem. We're losing our experience of experience itself. Let me unpack that. We adopt a state of distraction and thoughtlessness, such that we no longer feel what it is to experience things. Think about it. While you casually browse Reddit, are you mindful? Are you aware of the moment? Of your breathing? Of your posture?
I know I'm not. This funny thing happens when I'm watching YouTube videos. My brain goes numb, like a kid watching cartoons, mouth agape, and after 30 or 45 seconds I realize I haven't been breathing. All of a sudden I suck in air like a vacuum to make up for the breaths I wasn't taking. My mind was so goopified that I neglected basic human functions. I have to remind myself to practice mindfulness, to reject that level of pure consumerism that is mindlessness.
There are organizations that want you to stop thinking. The easiest way to control a group of people is to limit their agency. Stifle their independence. Once their brains are mush you can sell them anything.
Convince them to give up their humanity, or trick them out of it, or simply never acknowledge they were humans at all and you can get away with all sorts of things. As consumers of social media, we have to remain mindful. Intentional. Honest. We have to remember to stay conscious, stay awake, stay human.
We simply must demand more humanity in the social media economy. Reject the fake. The sponsored ad masquerading as a personal status update, the bots that by many accounts influenced voter thinking ahead of the 2016 election, the algorithms that sort and categorize us, the crawlers that gather our private information and compile psychological profiles from our activities and interests. Humanity is increasingly absent.
In computer science, there's something called the halting problem: given a program and its input, can we predict in advance whether it will eventually stop or run on forever? Some computations would take longer to finish than any of us will live—so we'd love a shortcut that tells us, without running the program, whether it halts. Alan Turing proved that no such general shortcut can exist, and he did it using a construct now named for him: the Turing machine.
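Turing's argument runs on self-reference, and you can sketch the heart of it in a few lines. Here's a toy Python sketch; the `halts` oracle below is a fake stand-in, because the whole point is that a real one cannot be written:

```python
def halts(program):
    """Pretend halting-oracle. Any fixed answer it gives gets defeated,
    which is a glimpse of why no real oracle can exist."""
    return True  # it claims: "yes, that program halts"

def contrary():
    # Do the opposite of whatever the oracle predicts about contrary itself.
    if halts(contrary):
        return "runs forever"  # we *report* the loop instead of looping, to keep the demo finite
    else:
        return "halts"

# The oracle said contrary halts; contrary then does the opposite.
print(contrary())
```

Whatever answer you hard-code into `halts`, the self-referential program does the reverse, so no oracle can ever be right about it.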
A Turing machine is a conceptual computer that works from a finite set of rules, reading and writing symbols on an endless tape. It takes an input X and produces an output Y. Because it is conceptual (a math object, not hardware), we can use it to reason about what any computer could ever compute.
It's the foundational model of computer science. It's not a real machine, but if it were (if we could actually build a machine with the resources needed to compute what we need our Turing machine to compute) it would be the greatest computer ever built. No one has found a mechanical procedure a Turing machine couldn't, in principle, carry out. If we kept feeding it time and memory and whatever resources it needs, it could compute anything that is computable at all. That makes this conceptual computer the greatest computer.
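To make the idea concrete, here's a minimal Turing machine simulator in Python. The rule-table format and the `increment` machine are my own toy examples, not anything from Turing or Lanier:

```python
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=10_000):
    """Run a one-tape Turing machine. rules maps (state, symbol) ->
    (new_symbol, move, new_state); move is -1 (left) or +1 (right).
    Stops when the state is 'halt' (or when max_steps is exceeded)."""
    tape = dict(enumerate(tape))  # sparse tape; unvisited cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A machine that adds 1 to a binary number: walk right to the last digit,
# then carry back toward the left.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", -1, "done"),   # 0 + carry -> 1, stop carrying
    ("carry", "_"): ("1", -1, "done"),   # overflow: write a new leading 1
    ("done", "0"): ("0", -1, "done"),
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", +1, "halt"),
}

print(run_turing_machine("1011", increment))  # 1011 (11) + 1 -> 1100 (12)
```

The same simulator runs any rule table you feed it; the machine never knows or cares what the tape "means." That indifference is exactly what makes the model universal.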
The Church-Turing thesis states that any function a human could work out by following a strict set of orders (an algorithm), disregarding the resources needed to keep that human working, could also be performed by a Turing machine. In other words, any problem a human can solve by rote calculation, a computer can solve. This is the basis for artificial intelligence. Computers can be used to simulate humanity. See 'bots.'
Computers solve all kinds of problems for us by simulating the outcome of a particular set of events: automobile crash physics, how a bullet travels, the prediction of weather patterns, the structural integrity of bridges and buildings. We can simulate flight and use flight physics to train airline pilots; we can simulate electrical component interactions before they're ever soldered together.
I'm a guitarist, and it intrigues me that we can even simulate tube guitar amplifiers; using my computer I can sound just like a young Eric Clapton ripping through a pair of '60s Marshall full-stacks, the coveted 'woman tone' of Cream. Every single day there are more things we're able to simulate with a computer.
So if a human can solve a problem, and a computer can solve the problem, and we've not yet found the limit to the problems either can solve, then there could theoretically come a time when a computer is indistinguishable in any way from a human. In its ability to solve problems and in its appearance, it could have true artificial intelligence. Think Westworld.
Now, the empiricist or positivist claims that nothing exists outside of what is measurable, observable, physical. The scientific method restricts itself to the measurable; the positivist goes a step further and insists the measurable is all there is. If we can't measure it, it simply doesn't exist. This is the position of the "new atheists" I mentioned before—Dawkins and Dennett and Hitchens and the like.
If you go to Barnes & Noble, the new atheists' books are all bestsellers. Your garden-variety sophomore-in-college atheist is borrowing the arguments of the new atheists when he or she returns to an evangelical home over the summer. "I don't want to go to church anymore, mom."
If nothing exists outside the realm of what we can observe and measure, then your mind, being purely chemical (neurons and proteins and synapses firing and nerve endings), is purely matter. It's measurable. There is no "soul," no consciousness. The you-ness making you you is only goo. You are just a sack of cells, and when you die you'll return to the earth. Organic.
Let's follow that logic. If you're a sack of cells and measurable, and there's no known limit to a Turing machine's ability to compute, then one day we could take all the measurements that are you and distill you down to a number, a source code, a key.
In other words, let's say your entire body and whole being is measured and designated the number 10010001010101001010011101. That number is your source code—the measurement of all that makes you you. When a computer reads 10010001010101001010011101 back, you appear! You're born. The computer executes the source code that is you.
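That's less exotic than it sounds: serializing a description into bits and then executing those bits back into a perfect copy is what computers do all day. A toy sketch in Python (the "measurements" here are a made-up dict, obviously not a person):

```python
import json

def to_bits(measurements):
    """Flatten a description into one long string of 0s and 1s."""
    raw = json.dumps(measurements, sort_keys=True).encode("utf-8")
    return "".join(f"{byte:08b}" for byte in raw)

def from_bits(bits):
    """Execute the 'source code': rebuild the description from the bits."""
    raw = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return json.loads(raw.decode("utf-8"))

# Hypothetical "measurements" standing in for a complete physical description.
you = {"height_cm": 178, "neurons": 86_000_000_000, "favorite_chord": "E7#9"}
code = to_bits(you)            # a long run of 1s and 0s
assert from_bits(code) == you  # the reconstruction matches the original exactly
```

For a toy dict the round trip is trivial; the positivist wager is that, given enough bits, nothing about you escapes the same treatment.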
There’s a problem with this. When that computer executes the code that is you, when that computer is indistinguishable in any way from you, then how are we going to know the difference between you and the computer? It’s like in cartoons where the evil twin is standing right next to the hero, and the hero’s got to convince you that he’s the real hero and not the evil twin… you know? It’s an epistemic problem.
If all is physical and there is nothing transcendent, nothing spiritual, nothing more than cells and synapses to the consciousness, then the only way to know the difference between you and a computer would be to build a computer-detecting machine, a tool that says, “THIS is the computer, and THAT is the human.”
But a computer-detecting machine would have to have sufficient information to pronounce one a computer and one a human. Under the all-is-physical premise, the only information available to it is physical and behavioral, and that is exactly what the computer reproduces perfectly. And the detector, being itself just another computer running a program, will likewise be indistinguishable from the computer and from the human. So we'd need a third computer. Then a fourth. Then a fifth.
We’d have to keep building computers to try to identify computers from computers from computers from the human—they’re all executing the same code, the code that is you. You're all performing the function of being you. See the problem?
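The trap, in code. If every test a detector can run is built from observable behavior, and the copy reproduces that behavior exactly, then every possible detector returns the same verdict for both. The names here are hypothetical, just to make the regress concrete:

```python
def human(question):
    """Stand-in for a person: all we can observe are their answers."""
    return {"2+2?": "4", "are you conscious?": "yes"}[question]

def perfect_copy(question):
    # By assumption, the copy computes exactly the same outputs as the human.
    return human(question)

def detector(agent):
    """Any test whatsoever, so long as it's built from observable answers alone."""
    return (agent("2+2?"), agent("are you conscious?"))

# Every behavior-based detector hands back identical verdicts for both.
assert detector(human) == detector(perfect_copy)
```

A "better" detector would need information beyond behavior, but under the all-is-physical premise there is no such information, and the detector itself is just one more program that can be copied. Hence the regress.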
Without human intuition, there will come a day when we might face the obsolescence of man. Or the rejection of human relationships in favor of human-to-robot relationships. Aren't companies in Japan racing to perfect the sex-bot right now? Intimacy will be destroyed when we opt for the drama-free subservience of a human-like mate.
Perhaps worse than all of that, there may come a great existential crisis with regard to the value of a man's life. A sentient robot military conquering the innocent men and women of a smaller nation by force—it sounds like science fiction but these three things are undeniably true: artificial intelligence exists, computers are getting better and better at faking humanity, and men will make a weapon out of anything.
I'm trying to prove that science simply must presuppose the sovereignty and dignity of the human consciousness, the soul, in order to determine what is real and to have any trust in its determination. Am I making myself clear?
There's a problem in epistemology which basically asks: how can I know that I wasn't just created five minutes ago with the appearance of age? How can I trust my perception of the world? Lanier invokes Occam's razor: the simpler answer is that I was born and lived as I remember having lived.
I see no other way to trust our perception of reality than to assume the trustworthiness of human intuition. The experiential "meta-feeling," the sense of being, the sense of knowing, is vital to our perception of reality and in turn our study of the known universe. Of course there are people whose perception of reality is certainly not trustworthy, but they are just that: exceptions.
Science needs humanness. This, being human, is a dignified and sovereign appointment. We lose so much of what it means to be human when we diminish the importance of human experience in favor of the compulsory presence of technology in all areas of life, in our passive consumption of social media, in our addiction to distraction, and in our thoughtless approaches to a social internet.
We can use computational theory to argue the need for the soul. If talks of artificial intelligence and virtual reality and advancements in technology don’t include discussions of what it is that makes us human, body and soul, then we’ll continue down this path we’re on right now—where social media and access to information is making us more sad, more divided, more zombified and distracted.
It's a problem we can already see: social media has been polluted by bots so that fake news is elevated over the real; studies show it's making us more depressed; Snapchat turns conversations into streaks, redefining how our children measure friendship; Instagram glorifies the picture-perfect life, eroding our self-worth; Facebook segregates us into echo chambers, fragmenting our communities.
And speaking of Facebook, Facebook's business model succeeds in proportion to how much of our attention it captures. So does YouTube's, which compensates creators based on the amount of time we watch. YouTube autoplays the next video within seconds, even if it eats into our sleep, and it doesn't seem to mind that it's turning out a generation addicted to the glowing screen.
The Attention Economy is funded and fueled by our eyeballs, and its operators use us to sell products, sell services, sell advertisements, exploit our private information, and influence our elections. The antidote to all of this is humanity. We have to become conscious again. We need an injection of intuition into our information. We need soul. We need to reclaim our humanity. Technology and science need the soul to thrive in this post-postmodern age.
I think a minimalist approach to tech is one that uses technology to serve humanity—to connect us and to enhance our experience of the world—but one that also encourages awareness and mindfulness: knowing when to shut the screen off, when our ability to experience the world fully is being eroded.
This turned into a rant fast. Ok, rant over. Where my neo-transcendentalists at?