9 Comments
Vishal Kataria

Makes me realize AI is not smarter than us. Rather, the taste-makers described above are transferring their expertise to AI models, and that transfer is what makes the technology seem smarter than us.

AI doesn’t know more. It has acquired the knowledge of people who have developed the taste and intuition that most of us chose to ignore or reject.

Paul Sturrock

Great article. I found the framework of capabilities, taste and agency as the drivers of value intriguing. I also found it challenging because I think the fundamental driver of value is curiosity, which creates a flywheel of motivation, capabilities and autonomy.

I found myself asking: what is the difference between curiosity, taste, and agency? Is curiosity the intersection of taste and agency, insofar as it represents the tastes we are prepared to act on, either by learning or by making, as opposed to remaining passive consumers and spectators?

Curiosity is also the original driver of most of our capabilities. So I'm sticking with the curiosity flywheel as the driver of value. But this framework of capabilities, taste and agency adds a lot of depth to how it works.

Rob Brogan

To be fair, the article is not aimed at the latest emphasis on the "excellent taste" or "high craft" that everyone is trying to hire for. I happen to be reading this from the perspective of a frustrated design candidate. I often see job descriptions calling for "taste" as a guise for "really flashy visual design."

Given that lens, what I react to most here is the section "Taste is a fragile fortress." I agree in the abstract, but in a professional market where design is a job, managers seem to forget that most designers work in the context of a company. So when you're evaluating a portfolio, you should of course measure the output against business objectives, but you should also judge the visuals in the context of the rest of that product's visual identity and brand.

As described in the article, it’s all about being a good predictor of what the audience wants.

I personally think I do a great job of designing for my audience (a specific product). When I apply to another company, I assume it will be a new audience for me to learn about and create for, stakeholders and users alike. I wouldn't expect my old projects for a different audience to be an exact match for this new audience. What's important is the ability to *learn the audience.*

Anyway, this is a nice framework for talking about taste. Thank you!

If you were to apply this to an industry like product design, you'd probably be really frustrated with an employee who insists on bucking the design system or creating a new feature that looks and moves in a truly unique way. Sure, they're a brave and bold trendsetter, but they're probably designing for a generalized audience and ignoring the actual audiences: the stakeholders, collaborators, and specific people who use the product.

Daren

Great article. Please forgive the long paragraph; I am forming my thoughts as I write. I am struggling, emotionally and logically, with many of the concepts this article highlights, mainly: what is the "value" of a person, and how do we survive when we have no value? It seems that two of the three value aspects you brought up require permission; the agentic one does not.

This is timely, as I have struggled to find purpose/employment (but first, Substack!) after my doctorate and keep mining our modern "sages" for advice: Ravikant, who supports leverage x circle of competence; Newport and Galloway, who support getting really good at something boring but with 90% employment; and so on. I like the additional view you present here, but I struggle with how one can use it to survive.

It seems there is value independent of reward (satisfying the infinite-growth function, e.g., financial growth), and there is interpersonal value regardless of whether it makes the system more efficient (e.g., charity). A homeless man is in deep need; we have a conversation and make his week, or build a system where he can find some purpose. Acting this way, in accordance with the value of "good will toward all mankind," is agentic and valuable to wider society, but it absolutely will not be rewarded by a growth-first system, and it will not be automated away because there is no incentive for that to happen.

Is agency a moat because the system, which optimizes for a growth subroutine, does not need to reinforce this aspect of human interaction? Does this call for a reassessment of human value altogether, a destruction of the social contract, and a divorce between a human's ability to justify their space and their ability to move matter or information? Or do we simply let those who cannot keep up struggle and not reproduce? Could it even be possible to value the entire cloud of a person's abilities as a package, and not just the vectors that resonate with the growth function?

Shawn Wang

Taste is as unique as it gets; there is almost no objective "good taste." Taste is still part of what makes humans unique, although AI has a taste of its own.

Skye Gill

No one considers curation that closes in on the mean (e.g., the Top 40 charts) to be great taste. The people who have great taste are the people we resonate with, whose taste aligns closely with our own rather than with the average. We follow their curated content and begin to trust them precisely because they stray from the average; perhaps we see ourselves on a path similar to theirs.

In this respect, I'm not convinced that AI could ever embody great taste, unless it truly adapted to each person with enough character and vitality.

Laís Lara

Thanks for the article, Julie. I like how AI is making us question premises we've taken for granted as 'truth.'

I agree that 'great taste' is not a mystical, innate human quality, but a mechanistic process of pattern recognition.

As for the premise that "Agency is the Uniquely Human Moat," my reading of Robert Sapolsky leads me to question: what is "will," and where does it come from?

If every human action—every "choice" we think we make—is the deterministic result of a chain of causes (genes, hormones, culture, neural wiring, our immediate environment), then "agency" is just a post-hoc narrative our brain constructs to make sense of actions that were already determined.

Just because we feel we have free will doesn't mean we actually do.

Through this lens, not even agency is a human moat. The "choice" in both systems is not a spontaneous creation but an inevitable outcome of a causal chain.

Our objectives as humans are internally generated and opaque. Our "programming" comes from millions of years of evolution, decades of cultural conditioning, and a lifetime of unique experiences. An AI's objectives, however, are externally defined and explicit. A human gives it a clear goal; its "will" is a direct command from an outside source.

An AI executes its function without an inner life. It does not feel the "struggle of the game."

I believe the key difference—the 'essence' of the human condition—is that we are self-aware animals, both burdened by and in awe of our mortality and the search for meaning. Our values can be our most precious resource, even though they are 'installed' by long-term "programming" and not chosen from a blank slate.

This, for now, does not seem to be a feature of AI.

Yin Sylvia

Great read, Julie. I’d add that one key edge humans have is embodiment.

We don’t just think—we sense, act, and feel through our bodies. Our brain knows something is hot because our skin does. We understand relationships because we’ve lived them, not because we’ve analyzed a dataset of outcomes.

Yes, AI can simulate feedback with sensors, but embodiment isn’t just about inputs—it’s about context, stakes, and lived continuity. When we run real-world experiments, whether in business, love, or art, we’re affected, changed. That feedback loop is visceral and self-reinforcing.

A machine might observe grief or love, but it doesn’t suffer or long. That gap matters.

So maybe our resilience in the age of AI isn’t just intelligence or creativity—but that we bleed, adapt, and grow through a body.

Rodolfo Diaz Cabello

Hi Julie! Thank you for writing this. Every couple of years I find myself having to overcome bigger and bigger challenges, and I've been lucky to be mostly successful thanks to sheer stubbornness and a great community around me. My questions for you: How do we develop greater agency over time? And how do we keep this moat once AI is actually capable of independent thought?
