What Happens to “True” When Anyone Can Build Anything?
A lot of people are calling AI-generated code “slop”. That word stuck with me. Not because of the debate around code quality, but because I've heard that word before - just about different things.
This pattern does not seem unique to software. A similar tension appeared earlier in photography. When digital photography emerged, it was often treated as less “real” than film. Film photography meant a limited number of shots, careful framing, and a slower, more intentional process. Digital photography removed many of those constraints, allowing thousands of images to be taken at almost no cost. Yet this shift did not simply change tools - it changed what people considered authentic practice.
What connects these cases is not the technology itself, but the way “truth” becomes attached to the method of production. “True” is often used to describe not just the result, but the perceived legitimacy of how the result was achieved. In programming, this now appears in the distinction between hand-written code and AI-generated code. In photography, it appeared between film and digital. In both cases, “true” becomes a way of evaluating process rather than outcome alone.
However, this also reveals something more subtle: once “true” is tied to process, it immediately becomes tied to people. To say that digital photography is “not true” is implicitly to say that digital photographers are not doing “true” photography. The same applies to AI-assisted developers. What starts as a technical distinction between methods quickly transforms into a social distinction between groups of practitioners.
This suggests that “true” is not only about process or quality, but also about structure - specifically, about how a community preserves boundaries of competence and identity. When a new tool lowers the cost of production, it does not eliminate these boundaries; it shifts where they are drawn. What was once considered mastery in execution may move toward evaluation, architecture, or verification.
If AI systems eventually handle not only generation but also architecture and verification, then the question becomes whether “true vs. not true” in development still refers to anything meaningful in production itself. My intuition is that it does not disappear, but migrates again - from creation to trust, from making to validating, from execution to responsibility.
In that sense, “true” may not be a fixed property of work at all, but a moving marker of where uncertainty, control, and accountability are located in a system. As tools evolve, what counts as “true” shifts upward or sideways, following whatever part of the system remains hardest to fully automate or fully trust.
This raises a final open question: if every layer of production becomes increasingly automated, does “true” eventually stop describing how things are made - and instead become only a question of whether we can still trust what the system produces, regardless of who or what produced it?
May 2026