Discussion about this post

Daren · 1d (edited)

Great article. Please forgive the long paragraph; I am forming my thoughts as I write. I am struggling, emotionally and logically, with many of the concepts this article highlights, mainly: what is the "value" of a person, and how do we survive when we have no value? It seems that two of the three value aspects you bring up require permission; the agentic one does not.

This is timely, as I have struggled to find purpose/employment (but first, Substack!) after my doctorate and keep mining our modern "sages" for advice: Ravikant, who supports leverage × circle of competence; Newport and Galloway, who support getting really good at something boring but with 90% employment; etc. I like the additional view you present here, but I struggle to see how one can use it to survive.

It seems there is value independent of reward (satisfying the infinite growth function, e.g., financial growth), and interpersonal value regardless of whether it makes the system more efficient (e.g., charity). A homeless man is in deep need; we have a conversation and make his week, or build a system where he can find some purpose. Acting this way, in accordance with the value of "good will toward all mankind," is agentic and valuable to wider society, but it absolutely will not be rewarded by a growth-first system, and it will not be automated away because there is no incentive for that to happen.

Is agency a moat because the system, which optimizes for a growth subroutine, does not need to reinforce this aspect of human interaction? Does this call for a reassessment of human value altogether, a destruction of the social contract, and a divorce between a human's ability to justify their space and their ability to move matter or information? Or do we simply let those who cannot keep up struggle and not reproduce? Could it even be possible to value the entire cloud of a human's abilities, as a package, and not just the vectors which resonate with the growth function?

Vishal Kataria

This makes me realize AI is not smarter than us. Rather, taste-makers like those described above transfer their expertise to AI models, and that is what makes the technology seem smarter than us.

AI doesn't know more. It has acquired the knowledge of people who developed the taste and intuition that most of us chose to ignore or reject.

