There is a question that circulates in certain corners of the internet, usually posed with more anxiety than curiosity: how do you know that the person you are talking to is real? It is asked about social media accounts, about customer service chatbots, about the author of the article you are reading. The question is rarely answered satisfactorily, because the satisfactory answer is uncomfortable: in most cases, you do not know. And increasingly, it does not matter whether you know or not, because the systems you are using were not designed to help you find out.

This essay is not about the threat of AI-generated content. That framing — threat, danger, contamination — assumes a prior condition of purity that never existed. It is about something more interesting and more permanent: the functional transformation of the internet from a medium that aspired to represent reality into something it was perhaps always better suited to be.

I. The Representation Myth

The early internet was animated by a powerful myth: that it was, or could become, a faithful mirror of the world. The dream had many versions. In the utopian register, it was the Library of Alexandria reconstituted — every text, every image, every fact made universally accessible. In the democratic register, it was the public square without gatekeepers — every voice equal, every perspective available. In the journalistic register, it was the end of information asymmetry — no government, no corporation, no institution could hide what the network had distributed.

These dreams were never fully realized, and their failure was not primarily technological. The internet was never a neutral conduit for reality. From its earliest commercial instantiation, it was a medium shaped by incentive structures — attention economics, advertising models, platform architectures — that systematically distorted what appeared on it. The content that spread was not the most accurate but the most engaging. The voices that were amplified were not the most credible but the most provocative. The information that persisted was not the most truthful but the most linked, the most shared, the most algorithmically favored.

What AI-generated content has done is not introduce distortion into a previously undistorted medium. It has made the distortion visible, and made the question of authenticity — which was always present, always unresolved — impossible to ignore.

II. The Scale Problem

In 2024, researchers estimating the proportion of online content generated by AI arrived at figures that varied wildly depending on methodology, but converged on a shared implication: the volume of AI-generated text, image, audio, and video on the public internet is now sufficient to constitute a qualitatively distinct condition. Not a contamination of a human-generated baseline, but a new baseline in which the origin of any given piece of content is, by default, unknown.

The scale matters because trust in any medium is partly a function of how much of it has been verified. When most content on a platform is human-generated, the occasional fake is a detectable anomaly — something that can, in principle, be identified and removed. When the ratio shifts, the anomaly becomes the norm, and the detection apparatus — already inadequate — becomes structurally overwhelmed. At that point, the epistemic relationship between the medium and its users has changed not in degree but in kind.

Consider what it means to read a product review, a scientific abstract, a news article, a personal testimony, knowing that any of them might have been generated by a system that has no experience, no accountability, and no relationship to the events it describes. The rational response is not to disbelieve everything — that is paralysis — but to recalibrate. To treat online content not as a report on reality but as a signal of uncertain provenance, useful for some purposes and unreliable for others. This recalibration is already happening, largely unconsciously, in the behavior of people who spend significant time online. It represents a fundamental shift in the cognitive relationship between users and the medium.

III. The Verification Collapse

The mechanisms that the internet developed to establish credibility — the verified account, the institutional byline, the citation trail, the backlink network — were always imperfect proxies for truth. They were, in essence, systems for distributing trust from known entities to unknown content: if the New York Times published it, or if a credentialed expert endorsed it, or if it was cited by a hundred other sources, then it was probably reliable. These systems rested on the assumption that the known entities themselves were trustworthy — an assumption that was always contested but at least coherent.

Generative AI destabilizes these proxies at multiple levels simultaneously. Institutional bylines can be attached to AI-generated text. Credentialed experts can be simulated, or real experts can be quoted saying things they never said. Citation networks can be seeded with plausible-sounding but fictitious sources. The backlink economy can be gamed at scale. None of this is categorically new — all of it existed before generative AI — but the cost of doing it has dropped by orders of magnitude, which means the volume of credibility-mimicking but unverifiable content has increased proportionally.

The result is not that nothing can be trusted. It is that trust can no longer be delegated to the medium's own verification systems. It must be rebuilt, where it can be rebuilt, through channels that are harder to automate: direct personal relationship, physical presence, institutional accountability with meaningful consequences for failure. For most online content, these channels are unavailable. For most online content, therefore, trust is no longer rationally available either.

"The internet did not become unreliable when AI arrived. It became transparently unreliable. The difference is not in the condition but in our ability to pretend otherwise."

IV. What Cannot Be Stopped

It is worth being precise about what is and is not reversible in this situation, because much of the discourse around AI-generated content is framed in terms of prevention, regulation, or detection — as though the condition were a problem to be solved rather than a transition to be navigated.

The generation of convincing synthetic content will not stop. The economic incentives for producing it are too strong, the technological barriers too low, and the detection systems too slow. Watermarking, provenance systems, and content authentication technologies will be developed, and they will be partially effective, and they will be circumvented. This is not defeatism; it is pattern recognition. Every information technology that has been used for deception has generated a corresponding detection technology, and the detection technology has always lagged. There is no reason to expect generative AI to be different.

The recalibration of epistemic trust will therefore continue. Users will adapt, partly by developing new heuristics, partly by shifting their trust toward different kinds of sources — embodied, relational, locally verified — and partly by simply accepting a higher baseline of uncertainty as the normal condition of being online. This acceptance is not comfortable, but it is not entirely unprecedented. Historians, journalists, and lawyers have always operated in conditions of uncertain provenance. The novelty is the universalization of that condition to everyday media consumption.

What this means practically is that the internet's implicit contract with its users — the claim to be, however imperfectly, a window on the actual world — is not renegotiable. It is already broken. The question is not how to restore it but what replaces it.

V. The Narrative Turn

Here the analysis becomes speculative, which is appropriate for a medium in transition. But speculation grounded in pattern is more useful than no speculation at all.

When a medium loses its claim to represent reality, it does not disappear. It finds new functions. The functions it finds are typically those for which its actual properties — rather than its aspirational ones — are best suited. The internet's actual properties, stripped of the representation myth, are remarkable: it is a medium of simultaneous global reach, bidirectional communication, persistent archiving, and infinite reproducibility. It connects people across geography, enables coordination at scale, and creates lasting records of communication. None of these properties require the content to be true. All of them are well-suited to fiction.

This is not a claim that the internet will become a platform for explicit fiction — novels, films, games — in some straightforward sense. It is a more structural claim: that as the epistemic contract between the internet and its users dissolves, the space opens for different kinds of engagement with content. Engagement that does not depend on the question "is this real?" because that question has been suspended — not through deception, but through a shared recalibration of expectations.

The analogy that suggests itself is cinema. Before cinema normalized the fictional film, the question "is this real?" was structurally important to how people engaged with projected images. The early audiences who fled from the Lumière train were not naive; they were applying a reasonable heuristic to a new medium. As the medium matured and the fictional contract became established, the question was suspended — not because audiences became incapable of distinguishing film from reality, but because they agreed, tacitly, that the question was not the right one to ask. The relevant question became not "is this true?" but "is this meaningful? is this moving? is this worth attending to?"

A post-AI internet in which the epistemic contract has dissolved may be moving toward an analogous condition. Not a medium that reports on reality, but a medium that generates experience — narrative, emotional, social — whose value is not contingent on its correspondence to any external state of affairs.

VI. The Opportunity in the Ruins

This transition creates, among other things, a specific opportunity for narrative form.

The representation myth was, for storytelling, a double-edged condition. On one hand, it meant that fiction presented online could achieve a kind of credibility unavailable to fiction in clearly marked fictional media — a story told as though it were real, on a platform people used to find out what was real, could achieve effects unavailable to the same story told in a novel or a film. On the other hand, it meant that this technique was a kind of fraud — a manipulation of epistemic trust that the audience had not consented to.

In a post-epistemic-contract internet, this calculation changes. When users have already adjusted their expectations — when they have already suspended the question "is this real?" as a default posture toward online content — the use of the internet as a narrative medium is no longer a manipulation. It becomes, instead, a natural extension of what the medium has become: a space for constructed experience, for designed encounter, for narrative that inhabits the texture of daily life rather than being separated from it by the frame of the screen.

This is not a small shift. Every narrative medium that has ever existed has been defined by its frame — the covers of the book, the darkness of the cinema, the edge of the screen. The frame tells the audience when the story begins and when it ends, and what kind of attention to bring. A medium that has lost its claim to reality, but that is woven into daily life through constant use, offers something no previous narrative medium could: a story without a frame. Not because the audience is deceived, but because the frame — the border between story and world — has dissolved, mutually and knowingly, for everyone.

What narrative forms are adequate to this condition, we do not yet fully know. What we know is that the condition exists, that it is irreversible, and that the question of what to build in it is among the most interesting questions available to anyone who cares about how stories work.

We are, in this respect, in a position structurally analogous to the early cinema operators who understood that they had a medium but did not yet have a form. The medium is here. The form is waiting.