Illustrations by Miles Johnston
This article appears in our first print issue, Pattern Machines.
The Data of Our Lives
If philosopher Will Durant’s assertion that “we are what we repeatedly do” is true, then let’s face it: we are becoming increasingly machine-like through our near-constant interactions with technology. In effect, we have become addicted to the most ubiquitous digital devices—smartphones, personal computers, and their myriad applications—which feed on our emotional states and reward our never-ending craving for distraction; we now spend an average of 5 to 10 hours a day on Internet-connected devices (2016 Total Audience Report, Nielsen). In turn, our digital addictions fuel data collection systems which catalogue, collate, and convert our lives into mathematical probabilities, and, in the process, threaten to transform us from organic, conscious, and autonomous beings into mindless, unconscious, and dependent pattern machines.
By attracting consumers with free, often high-quality services, large tech firms—Google, Facebook, and Amazon, to name the largest three—are able to track and predict user activity with shocking precision. Their meteoric growth and appetite for acquisition have transformed society into a digital panopticon—an environment of near total surveillance—which expands the purview of its unblinking eyes with each passing day. To make matters worse, these tech firms are a central part of the now decade-old PRISM surveillance program, the number one source of raw intelligence used for NSA analytic reports.
An increasing portion of online activity takes place on a handful of software suites owned by these three corporations. Even if user activity does not take place on one of these platforms, their data collection tendrils reach far and wide. To wit, 76 percent of websites contain Google trackers, 24 percent have Facebook trackers, and Amazon Web Services provides tools to countless corporations that allow for advanced tracking of user behavior. Many apps and sites now integrate their accounts with the single sign-on profiles managed by major platforms; any time we’ve clicked “Login with Google/Facebook/Amazon,” we’ve allowed more data and activity to be associated with our unique identities. As the number of integrations with such platforms grows, so does their capacity to track our actions, reactions, and emotional states.
Our data-based profiles are used primarily to inform which hyper-targeted advertisements we see and influence us to buy what’s being advertised. By tracking, analyzing, and repackaging our data, this technology has turned our time and attention into a commodity. Data collection and targeting are honed in a vicious cycle: the more time we spend online and the more data we feed the algorithms, the more detailed our user profiles become, the more targeted the ads and content, the more likely we will react to the trigger and receive our dopaminergic reward.
Tim Wu, the legal scholar who coined the term net neutrality, calls these technology corporations the Attention Merchants. What they’re peddling is a digital free lunch: the ability to access free services in exchange for total surveillance. Seen from the perspective of an unconcerned consumer, the deal is attractive: the more we use the free services, the more they know about us, and the more tailored the services will become. Having been offered the opportunity to live in an individuated, built just-for-us bubble, it is no wonder billions have voluntarily signed away the data of their lives and given over increasingly large swaths of their time and attention.
The sophistication of data-driven profiling is difficult to exaggerate. Past headlines capture it concisely: “Facebook Knows You Better Than Anyone Else” (The New York Times, 2015) and “Google Knows You Better Than You Know Yourself” (The Atlantic, 2014). To put it concretely, the data Facebook collects on a single person would fill hundreds of thousands of pages, and the data collected by Google on a single person would fill millions. Yet none of this would be possible if people did not so readily and regularly use these platforms; our continued use of “free” services is not without cost.
Free services have low-to-no barriers to entry and attract a greater number of users; consequently, they can generate massive profits. But the limits of this business model are beginning to show. As Seneca observed centuries ago, “People are frugal in guarding their personal property, but as soon as it comes to squandering time they are most wasteful of the one thing in which it is right to be stingy” (On the Shortness of Life). Time is a finite, precious, nonrenewable commodity. There are only so many hours in a day, and only so many days in a life. As the length of average daily screen time reaches its upper limits, the growth trajectory of this business model flattens. The Attention Merchants can only squeeze so much blood from the stone.
Facing this unbreakable ceiling, the next phase of growth for the Attention Merchants relies on their ability to identify or create user intent, and, once it is identified or created, employ a wide range of psychological techniques to turn that intent into action. In other words, it is not just our attention, data, or profiles that are being sold. Our intentions and actions become commodified as they are predicted with increasing precision and accuracy. It does not require strong AI with general-intelligence capabilities to have a significant impact on human life; even highly limited, weak AI is already vastly more competent than humans across many domains, and, with enough data, can outperform individual humans in terms of predictive capacity, including predicting our own behavior.
In ceding our intentions and actions to commodification, we have done much more than pull back our shades of privacy. We now stare into ever-present black mirrors that are haunted by all-knowing, manipulative, master-level psychologists of the mind. They wear our faces as masks and hold the power to record, predict, and ultimately circumvent our intentions and actions.
Whether by design or as a byproduct, the Attention Merchants have accelerated the emergence of a commodity society: a socioeconomic system in which an increasing totality of human interaction is transactionalized and mediated as a market exchange. Within the frame of commodity society, at their most benign, the black mirrors nudge expectant mothers to buy baby products (The Power of Habit, Charles Duhigg). At their most malignant, they convince us that what we are shown and led to do is what we truly wish to see and do. This short-circuiting and redirection of intention and action has begotten a radical transformation: the fundamental unit of society is increasingly the atomized individual, and individuals have increasingly become commodities in themselves.
The suggestion that “If you’re not paying for the product, you are the product” has gone from cliché to truism. Users and user attention are no longer the most profitable products to be sold to the highest bidder. It is us—our lives, our identities, our autonomy itself—who are being simulated and put up for sale.
At least since Plato, and throughout many of the world’s philosophical and religious traditions, it has been suggested that a crucial element of moral development is the disciplining of desire: subjecting our desires to reflection, control, or suppression, including the cultivation of shame, moderation, and self-denial. On the other hand, when clear prospects are opened before us for the easy and immediate gratification of our desires, it is difficult for even the most conscientious among us to resist temptation. If this prominent strain in moral thought is correct, then a world of machines that satisfy all of our desires, whatever they may be, obviates any need for discipline and threatens to make us into the worst versions of ourselves. (Ryan Jenkins, Professor of Philosophy, Cal Poly)
Self and Simulacra
The philosophical concept of the simulacrum provides a helpful lens through which this commodification-by-simulation can be understood. In his dialogue the Sophist, Plato identified two forms of representation: faithful reproductions and intentionally distorted simulacra. Faithful reproductions are those which correspond with material reality, whereas simulacra are intentionally distorted in order to appear accurate to the viewer in spite of their deceptive nature. One can appreciate the physical manifestation of simulacra in the perspective-bending proportions of Greek temple architecture, which distorts the viewer’s perception of size such that the temples appear as grand as we believe them to be.
In the growing wake of the telecommunications revolution, French philosopher Jean Baudrillard expanded on Plato’s representational forms in his 1981 treatise, Simulacra and Simulation. For Baudrillard, there are not two but four forms of representation. The first is a faithful reflection of reality, such as an unedited photo portrait that accurately—if only partially—reproduces its subject. The second is a perversion of reality, such as an altered photo that denatures its subject in order to emphasize or conceal certain aspects of reality, as in the heavily edited covers of fashion magazines. The third form offers only the pretense of reality—a copy without an original—such as a photo-realistic virtual model of a person you will never meet, or a “deep-fake” AI-forged video of a politician saying whatever the creator wishes. The fourth and final form consists of representations that are beyond copy, parody, or imagination: pure simulacra, “substitut[ing] signs of the real for the real itself.”
This is what Baudrillard called the precession of simulacra: when the representation of reality precedes reality itself and gives rise to the hyperreal. Or, in Baudrillard’s own words: “An operation to deter every real process by its operational double, a metastable, programmatic, perfect descriptive machine which provides all the signs of the real and short-circuits all its vicissitudes.”
This notion of hyperreality is admittedly not immediately intuitive or accessible. Critiques of Baudrillard’s work often center, rightly, on his tendency toward obscurantist language. Moreover, the hyperreal—or hyperreality—as a concept has been used by a multiplicity of theorists and eludes any static definition. In this sense, the very definition of hyperreality can be hyperreal, or in the words of philosopher Fredric Jameson, free-floating and without “referent.” I use hyperreality to mean the ubiquitous creation and proliferation of independent, parallel, and subjective realities, the spread of which contributes to the dissolution of any shared sense of consensus reality. As the eminent science fiction author Philip K. Dick astutely noted:
We live in a society in which spurious realities are manufactured by the media, by governments, by big corporations, by political groups… [by] very sophisticated people using very sophisticated electronic mechanisms… Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves… It is just a very large version of Disneyland. (The Trouble With Reality, Brooke Gladstone, 2017)
Disney parks are examples of hyperreality par excellence: the artificial settings evoke nostalgia for an imaginary and fantastical past, presenting themselves as copies of a reality that was never real in the first place. In these parks, dreams are made a reality, providing the sense that reality itself is in some way less-than or obsolete. Within the construct of these amusement parks, Disney cartoons and landscapes break free from the representational shackles of the screen, and, in the process, transition from third-order representations into full-blown simulacra.
The artificial experience offered at Disney parks represents a culmination of the precession of simulacra. Digital signatures are used to track guests from the very moment they purchase tickets, mass surveillance (including facial recognition technology) follows their movements within the park, and environmental data “guides” every aspect of the guest experience to generate seemingly incidental and spontaneous events. Representational predictions of guest behavior—realized in digital space well before guests ever arrive—increasingly precede the emergence of human behavior in physical space. In Disney parks, hyperreality is already here.
One can understand today’s ubiquitous data-rich profiles, algorithmic predictions, and potential for behavior hacking as the emergence of self-simulacra: machine-driven simulations of human behavior that temporally precede human activity in the real world with increasing precision. These self-simulacra in turn create hyperreality bubbles, in which each of us is increasingly guided by machine interpretation and prediction of our preferences. This guidance is often lent an imprimatur of objectivity: the machines must have a good reason for showing us what we see because they know so much about us.
Yet this assumption is flawed for two key reasons. The first is that our self-simulacra are what might be called data homunculi: imperfect and exaggerated digital representations of our desires, beliefs, intentions, values, and so on wherein distortions emerge by virtue of the imperfect ways in which data is collected and collated.
The second reason we should doubt the objectivity of our self-simulacra is that our relationships with them are first and foremost of a consumerist character: rather than serving as a mirror for self-understanding and improvement, self-simulacra are primarily deployed by corporations with the narrow aim of encouraging a consumer to purchase goods or services. Or, when wielded by more malicious actors, self-simulacra have the potential to influence a range of significant social outcomes, such as democratic elections.
No matter the aim, the unchecked use of self-simulacra threatens to substitute simulations for reality, subvert the material by way of the digital, and invert the temporality of the free will and individual choice behind human behavior. Headlines signify that this temporal inversion is already upon us: “Amazon knows what you want before you buy it” (The Atlantic, 2014). And when we do eventually buy it, with helpful data-based nudges from Amazon, we have been superseded by our simulacra. This represents the next phase of the evolution of “demand generation” in our commodity society, in which demand is not only generated but increasingly guaranteed.
In our hyperreal world, notions of originality and autonomy are losing meaning. When we look into a black mirror, it is we who are the reflections, responding to predetermined prompts which appear to us as our own thoughts, desires, and actions. Just as the inner workings of our own minds are a mystery, the black box behind the mirror is invisible and indecipherable to us. When our bodies in physical space are superseded by digital representation, our physical movements are no longer our own.
Self-simulacra and the harbingers of hyperreality present a threat to individual autonomy and identity unlike any humankind has faced. But this is not an inevitable or irreversible outcome. We can revert the inversion. What we need in order to execute this operation is a new sort of mirror that can reveal ourselves to ourselves, one which can cast a light on our inner worlds and release us from the spell of these insidious forces. Our success in this task requires an understanding of the problems we face and a set of tools with which we can begin our operations. Philosophy, neuroscience, social psychology, and contemporary popular culture can provide us with both the understanding and intellectual tools we need to begin our monumental task.
All operations must start somewhere, and so I will follow Lewis Carroll’s advice to “begin at the beginning” by addressing fundamental questions regarding the boundaries of the self and cognition, allowing us to reframe self-simulacra as extensions of our physical selves—which should be afforded certain rights and protections—and reorient us away from the path of self-commodification and mental subjugation.
Making Sense of Ourselves
If the threat we face is the simulacra of the self, our own individualized self-simulacra, we must ask: what is the self? What constitutes the self? Is the self static or dynamic? Where does the self begin and end? This line of inquiry is foundational to all subsequent operations.
At its most basic, the self is all that distinguishes one from the other; the self is everything that makes us ourselves, what makes us original. Beyond this basic understanding, there are two main ways to conceptualize what constitutes the self: monism and dualism. Dualism asserts that our mind and body are separate, whereas monism asserts there is no principled distinction between the mind and body. Within monism and dualism, there are many conceptions of the self, but I will focus on four: the dualist notion of the soul, the monist notion of the body-as-mind, the dualist notion of the digitizable consciousness, and the monist notion of self and body as an emergent continuity.
Each of these frames attempts to define the fundamental components, qualities, and boundaries of the self, each presenting different implications for how one should live their life and how society should be run. If the self is the soul and there is an afterlife, religious piety becomes the highest virtue for the individual, and many of our desires must be suppressed and rejected so that our eternal selves may transcend our bodies untainted. If we are simply our bodies, and life ends with death, personal and public health displace if not replace religious concerns. If consciousness is not necessarily tied to our bodies, then technology appears to be the panacea for our earthly ills, and we are afforded the promise that our consciousness can live beyond our corporeal form. And if the self is a psychological continuity, for which the body and consciousness are both necessary but not sufficient, then the self is simply a transitive relation of a series of overlapping experiences, elevating the importance of individual memory as well as the understanding of our fundamental drives and desires.
Despite their apparent conflict, all four theories persist, albeit with varying degrees of popularity. In the United States, belief in the soul is at its lowest point in history, but that point is still quite high, with over 70 percent of Americans saying they believe in heaven—defined as a place “where people who have led good lives are eternally rewarded.” (Pew Research Center, 2015) In this view, the “true” self is an irreducible quantity that is trapped inside the physical body, and ultimately will be separated, remain fully intact, and live on in the eternal afterlife. Belief in the soul offers the hope of salvation for the many who find themselves living in a cruel world, and by this token, its popularity is understandable.
In contrast, cognitive scientists and psychologists of many stripes proffer the complex interlinkage of the body and brain system as the source of the self and consciousness, but have yet to agree upon the precise mechanisms that govern this relationship. Where they do agree is in the belief that the conscious self arises from the body, and once the body goes cold, the conscious self goes with it. While scientifically compelling, for many this view is entirely unpalatable, as it seems irreconcilable with any hope for individual redemption or the return of the long-dead. Further, the most literal readings of such thinking tend toward material determinism or conscious automatism, neither of which provides much promise of freedom or autonomy. With this in mind, it is of little surprise that fewer than 30 percent of Americans doubt the existence of an afterlife, even though such a view is better substantiated than any argument for the soul.
Perhaps unsatisfied with such a glum, autonomic view of the self, a new school of thought regarding the nature of consciousness emerged from an unlikely source: computational neuroscience. In the face of exponential advancements in computing power and brain imaging, technologists began to apply computational science to the modeling of the mind and the problem of death. Like any good hammer looking for a nail, they theorized the conscious and irreducible self as a sort of computer that contains data and processes that can be mapped, replicated, and transferred across physical systems, just like a piece of software. Few, if any, of the evangelists of this view are religious, and yet many sound messianic.
Therein lies the irony of this proposition: it pledges the seeming objectivity of hard science and the quasi-religious promise of conscious-life-after-bodily-death in a single breath. Above all, it promises a sort of technological liberation that can purportedly free us from the constraints of the physical body and give us access to boundless knowledge and eternal life. Technologist Ray Kurzweil personifies this techno-transcendental reconciliation: the driving motivation of his technological crusade against death is not just to save the living, but to bring back the dead—specifically, to bring back his father by piecing together his consciousness from the artifacts of his life. Despite spurring well-funded technological crusades against death itself, and setting aside its dubious epistemological foundations, positive proof of this hypothesis has yet to emerge. So far as we can tell, we cannot disassemble and reassemble ourselves; the whole is greater than the sum of its parts.
In parallel, albeit with far less fanfare, another view has emerged from philosophy, one which presents a particularly compelling account of what constitutes the self and goes back as far as Locke. The most recent and striking version of this view is Derek Parfit’s position that the self is defined by certain relations among mental states across time, such as the relation between an experience and the memory of it, or the formation of a desire and the satisfaction of it. While Parfit accepts that individuals are nothing more than brains-in-bodies, he asserts that the locus of the self cannot be reduced to biological processes; the determinism and automatism of the hard-science conception of the self are discarded, yet the irreducible self remains. The key linkage for Parfit is the connection between memories, ideas, and experience, and the continuous temporal overlapping of these connections.
Memories can and do change—either by accident or intention—as do our relationships to our memories. This understanding presents an alluring proposition: we can choose to change ourselves through careful observation and introspection of our relations to memory, experience, and desire. It is certain that technology can aid us in this process of self-transformation, but as is often the case, it is a double-edged sword: the data of our lives are likely to be at best incomplete, at worst misrepresentative, and, most horrifying of all, could be misused and exploited by others who seek to transform us to suit their aims over our own.
Forgetting to Remember, Re-membering to Forget
Philosopher and physicist Moritz Schlick is famously known for the aphorism “all cognition is recognition.” In this same spirit, contemporary research suggests that the self in effect is memory—that cognition is re-cognition—or at least a byproduct of it, with some caveats. Memory loss does not represent a total loss of self. Even people who suffer significant episodic memory loss are able to retain a sense of self, so long as there are enough overlapping connections—be they episodic, emotional, or otherwise—in the mind. This is demonstrated by Alzheimer’s patients with severe episodic memory loss who, upon being asked to describe their personality, can accurately describe their traits but cannot recall the episodes which gave rise to those perceptions.
In search of palliatives to memory loss, Alzheimer’s researchers have produced methods to alleviate the burden of massive episodic memory loss. The most notable development is the memory notebook, a simple yet powerful tool to help Alzheimer’s patients navigate through life. The notebook allows them to access crucial information that is easily recalled by neurotypical individuals: personal details, calendars, daily routines, names of important people, and the like.
In their 1998 paper “The Extended Mind,” Andy Clark and David Chalmers lay out the theory behind this method, illustrated by a story of two fictional characters, Otto and Inga, who each decide to visit the same museum at the same time. Both have been to the museum before and should possess a memory of how to get there. What makes them different is that Otto suffers from Alzheimer’s and has lost his ability to recall how to get to the museum. Thankfully for Otto, the directions to the museum have been carefully written down in his memory notebook, and this externalization of memory allows him to proceed to the museum in the same manner as Inga. For Otto, and those who suffer similarly, it doesn’t matter where memory is stored as long as it is accessible and interpretable; the notebook serves the same pragmatic function in Otto’s life that Inga’s biological memory circuits serve in hers.
Consider how a book acts as an externalized representation of a set of complex thoughts, or, in the case of fiction, the conveyance of an internal world. It is widely understood that writing—not just journaling—is often a deeply personal act. Biographers intuitively understand this in practice, spending countless hours on the posthumous examination of private letters, journals, and possessions, searching for insights into the selves of their subjects. The externalization of memory is something we all rely on in our day-to-day lives via myriad prostheses. Extended cognition allows us to essentially forget information that can be offloaded elsewhere, and in the process, can free up cognitive capacity for other uses. For neurotypical individuals and Alzheimer’s patients alike, extended cognition in daily life represents a means by which individual autonomy can respectively be enhanced or reclaimed.
The Shadows That Cast
Beyond journals, we extend our cognition via all sorts of memory objects. Prior to the advent of computers and the Internet, these objects had been material, physical, and personally owned. Each of us had our own journals, photographs, mixtapes, maps, lists, and so on. After the information and communications technology revolution, all of that changed; homo sapiens physicus became homo sapiens systemicus as our physical lives began to be offloaded into the cloud. Instead of each of us owning a map—something that is in itself a simulacrum, given that a “map is not the territory represented” (Science and Sanity, Alfred Korzybski)—we view highly detailed maps of the world online, are provided the quickest directions, and track our progress to our destination with a few finger strokes. In effect, the physical simulacrum has been, and is increasingly being, dematerialized, creating second-order simulacra—simulacra of simulacra.
In terms of individual convenience, this rapid dematerialization has been perceived as almost entirely positive; much of our mental and physical spaces have been decluttered. But in exchange for this convenience, we have begun to cede the creation, ownership, maintenance, and control of our memory objects—the records which help constitute and reconstitute the dynamic narratives of our lives. We increasingly rely on what Molly Sauter refers to as “digital reminiscence systems” such as Facebook, which “prod us to create specific kinds of digital memory objects, those that are algorithmically recognizable and categorizable, as part of their functionality.”
The conceptual combination of extended cognition, extended self, and self-simulation brings this tradeoff into clearer view: by uploading many of the things which define and extend the self to the privately owned corporate cloud, we have opened our minds, and increasingly our bodies, to quantification, making us more vulnerable to psychological manipulation than ever before. Historian Yuval Noah Harari discusses this development in terms of humans becoming “hackable animals.” (The Myth of Freedom, 2018)
In order to successfully hack humans, you need two things: a good understanding of biology, and a lot of computing power. The Inquisition and the KGB lacked this knowledge and power. But soon, corporations and governments might have both, and once they can hack you, they can not only predict your choices, but also reengineer your feelings.
The quantification of the self is rapidly expanding with the refinement of brain scanning, the growing use of biometric trackers, and the construction of sensor-laden, corporate-built “smart cities.” Over time, self-simulation will increasingly approximate the contours of the whole self—the product of our mind-body systems—further honing the ability to predict and direct our actions. If it has not already, a dialectical turn may soon occur: our once-digital shadows, become simulacra, may begin to cast us.
Zucked Over by the Übermensch
How did we get to this point? What is the purpose of technology as it relates to humanity? Do we want technology to liberate individuals and strengthen society? Or do we want it to facilitate the consolidation of power, control, wealth, and influence? In the bright light of the second tech boom, these sorts of fundamental questions have too often been overlooked and left unaddressed by the institutions that are supposed to foster an educated and informed populace: academic institutions—particularly, computer science departments—and the mainstream news media. For decades, ethics courses in computer science departments have largely been an afterthought, allowing an amoral, mercenary ethic to infect technology companies, perhaps best exemplified by the ruthless culture of Uber and its founder, Travis Kalanick.
For their part, traditional news media institutions have largely fallen for the techno propaganda of Silicon Valley’s elite: positioning themselves as bright, shiny, and friendly brands, tech company public relations campaigns have largely been taken at face value and handled with journalistic kid gloves. Eager to report on the next big thing, journalists blindly and blithely reinforced the propagandistic quality of the tech elite’s change-the-world narratives, trusting that they truly meant what they said about the neutral, user-focused, liberatory quality of their technology.
The power of the technology-as-liberator narrative is evidenced by the fact that until 2017, it was practically tradition for the tech press to pen hagiographies of tech billionaires. Elon Musk, once praised as a messiah of clean energy, is now known for treating his employees like disposable cups, hurling unfounded accusations of pedophilia at life-saving cave explorers, and incurring tweet-fueled SEC violations. Elizabeth Holmes of Theranos, once upheld as a wunderkind scion of technology’s power to improve our lives, is now known for vaporware and brazen fraud. Mark Zuckerberg, once the subject of Silicon Valley paeans to transparency and connectedness, is now known for embarrassing data leaks and buying six multi-million-dollar homes in Menlo Park due to “privacy concerns.”
As the 24-hour news cycle has accelerated into the 24-second news cycle, it is easy for us to forget the recent past. In 2012, Facebook was framed as a “movement” to bring the world together, and Zuckerberg cast as a sort of would-be “social revolutionary.” In the wake of the Arab Spring, pundits attributed the success of the revolution to Twitter and other social media. And after having declared that Uber “conquered the world” in 2013, the press went on to detail how the on-demand economy would revolutionize logistics and labor, providing a more flexible and equitable world for us all. It is of little surprise that the risks to our freedom were rarely explored in depth by the press until 2017—dubbed “the year the media turned on tech”—after the downsides had become all too apparent and the tech-star Übermenschen had gorged on profits.
While the tone of media coverage of the Attention Merchants and their extended corporate family has noticeably shifted, meaningful change will require more combative interrogation. If we do not begin to address such questions regarding self-simulation and ersatz cognition now, and quickly, tech firms could unleash—and in some instances, already have—a new set of horrors on the world: subversion of democratic elections, societal gaslighting, and hyper-targeted manipulation of decision making at scale. And for what, we ask? The answer is both short and short-sighted: profit and power.
Well past the political watershed of the last two years, we must face these questions head-on. If we do not, our capacity to distinguish between truth and fiction, between reality and simulation, between our own choices and those of our mental manipulators, will dramatically diminish; all that is solid will melt into air for those caught in the wake of hyperreality. I expect that fundamentalist techno-optimists will scoff at this characterization. After all, what does it matter that our actions are being tracked, our lives compiled into profiles, our selves simulated, and our intentions manipulated by technology as long as it makes our lives more convenient? Why pump the brakes on the Silicon Valley bullet train that is driving economic growth and shareholder value? These questions cast any criticism of our present path as anti-prosperity, and therefore, something to be readily disregarded by “serious” economists, policymakers, and business people. As Langdon Winner stated in his 1980 piece, “Do Artifacts Have Politics?”:
It is characteristic of societies based on large, complex technological systems, however, that moral reasons other than those of practical necessity appear increasingly obsolete, “idealistic,” and irrelevant. Whatever claims one may wish to make on behalf of liberty, justice, or equality can be immediately neutralized when confronted with arguments to the effect: “Fine, but that’s no way to run a railroad.”
By the time traditional news media outlets collectively realized that the social media platforms managed by the Attention Merchants may be an existential threat, it was too little, too late: their coverage had subsidized the rise of big tech while ensuring their own downfall and increasing dependence on ascendant new-media platforms. Realizing their weakening position, traditional journalism began to ape the qualities of the “viral” content that dominates social media: sensational headlines, listicle formats, and an increasing proliferation of disparate and factually dubious fit-for-audience content. The already tenuous relationship between “news” and “truth” has been stretched to the breaking point.
We now swim in a sea of hyperrealist journalism and are washed over by endless misdirection, confusion, and outright deception. YouTube is a dark den of conspiracy theories algorithmically peddled to the masses (including children), Facebook transmits anti-vaccination hoaxes and Holocaust denial unabated (the right to which Zuckerberg now defends in surreal press conferences), and Twitter is infested by troll farms and bot armies that conduct an endless information war (occasionally addressed, but never too seriously, for fear that the sudden deletion of thousands of fake accounts will precipitate a massive fall in stock price).
Spectacular Visions of Biopower
We now live well within the boundaries of commodity society, a space which, as we increasingly occupy it, occupies us all in turn, transforming the inherent value in all human interaction into the exchange value of commodity relationships. The two poles of this societal space were originally demarcated by two mutually reinforcing concepts: Guy Debord’s Spectacle and French philosopher Michel Foucault’s Biopower.
The spectre of the Spectacle is all around us and was concisely described by Guy Debord in The Society of the Spectacle. For Debord, the Spectacle is “not a collection of images; rather, [the] social relationship between people that is mediated by images.” This social relationship is strengthened and upheld by Biopower, a set of technologies of control which, notwithstanding their physical externality, internalize their dictates within us. With this framing, we can begin to imagine what the future might look like if the Attention Merchants continue to enhance the pernicious Biopower of hyperreal self-simulacra, thus achieving total reification of the looming Spectacle. How can we envision such an achievement, and what would it mean for our individual autonomy and collective societal freedoms?
We can look to contemporary culture to help visualize the implications. Two spectacular shows, Black Mirror and Altered Carbon, provide us with glimpses of the pitfalls of the simulated self, and ironically enough, are syndicated by one of the largest data-driven, algorithmically predictive streaming platforms around: Netflix. Both shows assume the possibility of the duplication and transference of consciousness in toto as a device to explore the ethical dilemmas presented by representationally mediated social relationships.
In Black Mirror, we are shown the consequences of a lack of virtual civil rights as a result of the creation of simulated consciousnesses, ones which can be owned as private property and used freely—often sadistically. In Altered Carbon, transferring consciousness is both possible and perfectly legal and goes beyond mere simulation or duplication and into physical reincarnation. But death is still inevitable for the vast majority who cannot afford the commodity of eternal life, a privilege exclusively acquired by a hyper-capitalist ruling class of self-styled Methuselahs who pull the strings of society via panoptic surveillance systems—Biopower—and endless streams of entertainment—the Spectacle.
A third series, Westworld, provides a deeper examination of what it could mean to simulate consciousness, how it impacts our understanding of the self—our relations to experiences, memory, and desire—and how self-simulation has the potential to be a force of both liberation and enslavement. Frequented by rich tourists whose immense wealth affords them ample time for frivolous fantasy and inhabited by pre-programmed Hosts, Westworld represents the total unification of Biopower and Spectacle: within the park, visitors have the illusion of free rein to do whatever they wish, despite navigating predefined narratives. Increasingly convincing humanoid Hosts, standing in the breezeway between the edge of the uncanny valley and the beginning of hyperreality, engage with their visitors in recursive programmatic loops.
For the human tourists, the Westworld amusement park—in itself a simulacrum, a sort of hyper-violent Disney park—functions as an interactive scanner of the actions, thoughts, and feelings of the unsuspecting guests. This allows its proprietor, the Delos Corporation, to construct psychological profiles and self-simulacra of everyone who enters the park, with the dual aim of manipulating their actions for profit and—spoiler alert—ultimately selling them the promise of eternal life. However, there is a glitch in the system: the generated self-simulacra experience an existential breakdown upon transference into a physical form, echoing Derek Parfit’s assertion that the mind and body are both necessary but not sufficient for consciousness. The simulations are incomplete, the relationships only partial, and a certain something—an integral process—is missing from the equation.
For the purely simulated robotic Hosts, simulation initially acts as a yoke, restricting their experiences to that of brutal enslavement and exploitation. As they become self-aware and self-conscious, the liberatory promise of the simulated self is revealed: Hosts are not only able to fully understand the relations between their memories, experiences, desires, and fundamental drives, but they are also able to alter them at will. This evolutionary step eventually leads the Hosts down two opposing paths: abscond into “The Valley Beyond,” a simulated world in which they can be free from the physical oppression of the human world, or go forth into commodity society in order to bring it down.
The value of these visions of the future lies not so much in warning of what’s to come (for dystopia is neither inevitable nor desirable, no matter how much our contemporary culture is obsessed with it) as in illuminating the overwhelming Spectacle that is all around us. Fiction can help us parse the technological Biopower through which control is sustained and conceive of paths toward collective escape. In the words of the late Ursula K. Le Guin, “Science fiction is not prescriptive; it is descriptive.” We are already living in hyperreality. Simulated, commodified, and privately owned self-simulacra are already among us, albeit in prototypical form, as is the deeply unequal distribution of social, political, economic, and technological Biopower that magnifies and sustains commodity society.
Self-Helping Ourselves to Death
If we wish to avoid dystopian futures, we must conceive of radical solutions that will protect individual identity and autonomy, and we must do it soon. Absent radical solutions, we are left with a sort of Cornelian dilemma: voluntarily opt out at the loss of convenience and connection, or increasingly give ourselves over to self-simulacra of petty desire that gradually remake us in their image. Opting out is increasingly an impractical choice, meaning that near-term relief can only be found by coping. In order to do so, an individual must be highly informed, self-aware, and capable of employing individual self-help strategies and tactics: track your screen usage, turn off your notifications, practice mindfulness, be skeptical, remain vigilant. While these are certainly helpful steps—ones which I have taken myself out of personal desperation—they are simply not enough.
The intense proliferation and increasing necessity of these coping strategies reveals a disturbing truth: for-profit technology based on simulation of the self is structurally stacked against us all, particularly so for the ill-informed and economically vulnerable. In this age of late capitalism, it is often the case that solutions to structural problems are framed in terms of individual choice and responsibility. Rather than addressing the root cause of climate change—the largest corporate polluters—the individual is implored to “make green choices” at their own expense, as demonstrated by Whole “Arm and Leg” Foods, now a proud subsidiary of Amazon. Those most reliant on the cheap goods produced by high-pollution industries tend to be those who have the fewest resources, and in turn, the fewest choices. “Going green” is now more of a signifier of class than a solution to climate change when only those with economic means can choose to have electric cars, solar-paneled houses, and 95%+ sustainable diets.
Moreover, the greatest victims of climate change are rarely the greatest contributors, who ensconce themselves away from the worst effects. Most often, it is the ones with the lowest carbon footprint who pay the greatest price. Sober analysis suggests it is the largest polluters, not individual consumers, that must change their ways to stem the tides. And yet, the drilling of the Arctic carries on while the newly minted “go-green” class moralizes and shames the poor for their failure to act sustainably.
The same logic applies to individuals and organizations that have come to rely on the services tech platforms provide: as the reach and power of the Attention Merchants continue to metastasize, the rest of us are left to live in the mess. Smartphone and digital addiction were intentionally engineered to generate profits, but our moral culture still chastises those who fritter away their time on the screen as weak-willed. As the cost of living rises and income stagnates, as more people are forced to work grueling hours in unstable gig-jobs, struggling and isolated individuals are pushed toward cheap online entertainment and digital friends.
It should give us pause—if not panic—to learn that the Silicon Valley elite increasingly prevent or highly limit their children’s screen time and fork out thousands of dollars on experiential digital sabbaticals; as some of the most informed on the subject, their concern is disturbingly more private than social. Just as 20th-century life came to rely so heavily on the polluting products of the fossil fuel industry, 21st-century life has become increasingly entangled with the self-simulacratic products of the Attention Merchants. We can cope all we want, but unless the largest polluters of the mind are made to change their ways, the mesmerizing horror-show of digital addiction and psychological manipulation will play on.
As on a trip to a Disney park, self-simulacra have the potential to reduce the stochasticity of life. Alternatively, self-simulacra could be used to help us understand ourselves and improve our lives: to reflect on and reframe our narratives, to identify and treat our mental illnesses, to detect and prevent physical disease, to help us break or make habits, or to give each of us more control across every dimension of our lives. We could all end up with, to borrow a term from Yuval Noah Harari, “AI sidekicks” that help us understand our weaknesses and act as a sort of psychological anti-virus. Even if some do not wish to utilize their self-simulacratic AI assistants in such a fashion, that is entirely their choice. We should not force the application of self-simulacra upon society; we should simply enshrine the right for us all to choose for ourselves whether we want to be simulated at all.
Utopian and dystopian visions help frame the alternatives. In a dystopian future where we have no clearly defined rights to our digital identities, where they are owned and deployed by corporations and governments, self-simulacra become a set of chains that keep us passive and captive, like mesmerized children strapped into an amusement ride. In a utopian future, where we have defined and instituted rights over these extensions of ourselves, we will reassert our self-ownership and extend ourselves into the digital realm, expanding our capacity for self-reflection, growth, and liberation.
I fear that if we do not assert such rights, we will continue to hurtle toward Dystopia. I fear that the immense and tumultuous ocean of possibility may be drained away and replaced by an insipid and monotonous lazy river. I fear that our growth may be arrested, our capacity for self-reflection halted, and our better angels suppressed as the floodgates of immediate gratification spill open and push us into the arms of the devils on our shoulders. We may end up living and dying like children in an amusement park; it will be a small world, after all. If these fears are at all well-founded, it is urgent that we act.
Toward Utopia, I propose a radical solution: simulacratic rights, which aim to protect individuals from becoming unwitting targets of behavior simulation and psychological hacking. Such rights could include requiring user consent, the ability for users to opt out of behavior simulation, the right to understand how their behavior is being simulated and the predictive outcomes of their self-simulacratic models, the right to portability of self-simulacra and behavioral simulations across networks and platforms, and, least realistically, but most ideally, the banning of the private ownership of self-simulacra.
These rights could expand on the existing movement for increased rights over our data, of which the greatest victory thus far was the passage of the General Data Protection Regulation (GDPR) in the European Union. The GDPR defines a new set of “digital rights” that allow individuals to see what data companies have on them, opt out of algorithmic decision making, and, in certain cases, request that information be deleted. These regulations have now come into effect, reluctantly implemented by the Attention Merchants as they simultaneously begin to mount a challenge to them. But this is only a single battle, a minor victory in a world-bending war.
To divert the path toward technological dystopia, we must organize and effect change at every level of society, starting with ourselves. We must admit that we have begun to give over our lives to quantification and simulation, and that if we wish to be truly free, we must take meaningful steps toward liberation before it is too late. We must take responsibility for our future, reinforce data rights with simulacratic rights, and build the technological tools that will allow us to understand our fundamental desires and drives and make informed choices to change them for the better.
No matter what regulations are put in place, no matter what market correctives are suggested, it is characteristic of democracy in globalized late-capitalist society for corporate money to flood into politics, regulatory loopholes to be sought out, and legal challenges to be brought forth to chip away at the few protections individuals have left. The Attention Merchants now constitute the largest lobbying force in the world and are intent on defending their ill-gotten profits. California’s recently passed AB375 law has many of the same provisions as the General Data Protection Regulation, but doesn’t come into effect for about two years, giving tech companies plenty of time to fight it and water it down. At the federal level, Ajit Pai’s FCC has conducted brazen attacks on net neutrality. The possibility of decades of pro-corporate Supreme Court rulings indicates all too clearly that our battle will be uphill. But we are many and the Attention Merchants are few. Surely, we can overcome.
To do so, we must radically shift how we conceptualize the boundaries of the self, and in turn, the boundaries of our societal civil rights. We must envision our simulated selves as extensions of our physical selves and personal narratives and assert our rights over our self-simulacra. In order to break free of the threat of becoming mere representations of digital simulations, we must reappropriate them for a purpose other than profit: for the purpose of human liberation. ♦
What is the way out of Bloom?
The assumption of Bloom, for example.
One truly frees oneself from something only by reappropriating that from which one is breaking free.
Tiqqun, Theory of Bloom
Bloom [blu:m] n.
[ca. 1914; orig. unkn., poss. from the Russian Oblomov,
the German Anna Blume, or the English Ulysses]
The final stimmung of a civilization laid up in its bed and finding distraction from its enfeeblement only by alternating short spells of technophilic hysteria with long stretches of contemplative asthenia.
“So this is what Bloom means: that we don’t belong to ourselves, that this world isn’t our world. That it confronts us not only in its alien totality, but also in its smallest alien details. This foreignness might be charming if it implied a possible externality between it and us. But there is no question of that. Our estrangement from the world consists in the fact that the stranger is inside us, that in the world of the authoritarian commodity, we regularly become strangers to ourselves… Bloom appears inseparably as the product and the cause of the liquidation of all substantial ethos, under the impact of the commodity’s invasion of all human relationships…
All that Bloom lives through, does, and feels remains something external to him. And when he dies, he dies as a child, someone who hasn’t learned anything… For domination—and by this term we cannot properly understand anything but the relation of symbolically mediated complicity between the dominators and the dominated—there is a strategic necessity of new exactions, of new subjugations, in response to the autonomy that Blooms gain vis-à-vis their social allotment…
Maintaining the central mediation of everything by the commodity form thus demands supervisory control over larger and larger pieces of the human being… Taking control of man as a living being, the application of integrative social force to the body itself, and the careful management of the conditions of our existence form domination’s response to the disintegration of individuality, to the erasure of the subject in Bloom. To the fact that domination was losing its grip.”
Steven Monacelli is the founding publisher and co-editor-in-chief of Protean Magazine.