Don't Let Me Dry the Painting
As AI grows more powerful, human roles in the system risk becoming purely ceremonial. How do we distinguish real agency from its imitation — and design systems that preserve the former?
When AI grows powerful enough, keeping a human in the loop can become pure ceremony, like asking a child to blow-dry a painting that's already finished. How, then, do we tell real agency from its imitation?
I believe the key indicator for telling real agency from its imitation is this: to what extent does a human's decision shape the uniqueness of the final output? If a hundred different people use the same AI tool and get nearly identical results, then human agency in that system is effectively zero.
The Convergence We're Already Seeing
We're already witnessing this convergence in the real world. A friend's company and OpenAI both released multi-agent systems with the same name: Symphony. As people use Claude Code to vibe-generate UI, products are starting to look increasingly alike. This isn't just a product-homogeneity problem. The deeper risk is that if the system no longer needs human creativity and taste, the diversity of civilization itself will shrink, and that diversity is what keeps a complex system robust and able to evolve.
Designing for the Undecided Space
If I were designing a system for AI-human interaction, I would focus on one core metric: whether the diversity of outputs is positively correlated with the diversity of human inputs. Specifically, the system should proactively leave "undecided space" at critical creative decision points, requiring humans to make choices that genuinely affect the direction of the outcome — rather than having everything decided in advance and then asking the human to click a confirmation button.
Practicing Agency as a Power User
As a heavy AI user, I've found that maintaining real agency requires actively resisting AI's default mode. I ask AI to question me rather than give me direct answers. I have different models challenge each other on the same problem, using the disagreements between them to expose my own blind spots.
But this requires a high level of usage literacy — you can't expect every user to do this. So if I were designing the system, I would build these two practices into the product architecture itself:
First, at critical decision points, the system defaults to "questioning mode" rather than "suggestion mode," guiding users to form their own judgment first.
Second, the system runs a multi-model adversarial mechanism that automatically presents several possible directions rather than converging on a single "optimal solution."
The core logic is: the system's goal should not be to minimize the user's cognitive burden, but to maximize the user's cognitive engagement.
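The second mechanism above can be sketched in a few lines. This is a hypothetical illustration, not a real implementation: `model_a`, `model_b`, and `model_c` are stand-in callables rather than actual LLM clients. The point is structural: the system hands back every distinct direction it finds, so disagreement reaches the user as a genuine decision instead of being resolved automatically.

```python
# Stand-ins for querying different models; a real system would call
# different LLM backends here.
def model_a(q): return "Use a relational schema; consistency first."
def model_b(q): return "Use a document store; iteration speed first."
def model_c(q): return "Use a relational schema; consistency first."

def divergent_directions(question, models):
    """Query each model and return the distinct answers, in order seen.

    Returning more than one item is the desired outcome: the human,
    not the system, chooses between genuinely different directions.
    """
    seen, directions = set(), []
    for model in models:
        answer = model(question)
        if answer not in seen:
            seen.add(answer)
            directions.append(answer)
    return directions

options = divergent_directions("How should we store user data?", [model_a, model_b, model_c])
for i, option in enumerate(options, 1):
    print(f"Direction {i}: {option}")
```

A production version would deduplicate semantically rather than by exact string match, but the interface contract is the same: plural directions out, single choice left to the human.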
The Flow State of Human-AI Collaboration
But most people come to AI precisely because they want a clear answer. How do you balance the tension between "protecting agency" and "product experience"?
Here we can borrow the concept of flow from game design and cognitive science. The best games aren't the easiest games — they're the ones with just the right difficulty curve, continuously giving you moderate challenges at the edge of your ability along with timely positive feedback.
I believe AI collaboration tools should be designed the same way. Most AI tools today aim to eliminate all friction, but that's like turning a game's difficulty to zero — satisfying in the short term, boring in the long run.
Truly good design should engineer reward mechanisms for the human brain, not just reward functions for the LLM. It should let users continuously feel throughout the creative process that "I did this, and I accomplished something difficult."
For example, instead of one-click generating complete code, the system could help you scaffold the structure, then leave space at critical architectural decisions for you to make the call. After completion, it gives clear feedback — letting the user experience the full arc from struggle to breakthrough, rather than skipping the process entirely.
The Sophon Ceiling
I don't want AI to become a Sophon, the particle from The Three-Body Problem that locks a ceiling over human knowledge and blocks us from further progress.
The question isn't whether AI will become more capable than us. It will. The question is whether the systems we build will still require — and reward — the full depth of human judgment, creativity, and taste. If we design them right, AI amplifies what makes us human. If we don't, we'll wake up one day having outsourced not just our labor, but our capacity to think, to choose, and to grow.
The painting should still be ours to paint.