Dear readers,
What do you feel when you hear the word judgement?
Is it a hit of cortisol—Oh shit, I hope you’re not judging me.
Is it resigned—I wish we’d learn to accept rather than judge.
Or is it a surge of pride—Yeah, I know exactly how to separate the wheat from the chaff.
The notion of judgement is a wonderful swirl of contradictions. In today’s issue, let us explore its necessity and cultivation.
Warmly,
~Julie
In this issue:
One skill to rule them all
An equation for judgement
For paid subscribers:
How to judge who has good judgement
Improving your decision model
One skill to rule them all
If I ask a room of people, “Do you want to grow your career?” the answer will be “F— yeah I do.”
If I ask the room, “What do you think you need to do that?” the answers will be a cooking pot of skills: Learn Python! Improve my communication! Get better at systems thinking! Access my creativity! Get rizz! The list goes on and on.
These lists can be helpful, but they’re also scattered, endless, and occasionally too specific. Let’s try a simpler approach:
The way to grow your career is simple: get better at making good decisions.
Pretty non-controversial, right? Making better decisions = greater chance of success in whatever you’re trying to do.
What does it take to make better decisions? The underlying skill is judgement.
Judgement is the ability to transmute information into actions that achieve the desired outcome.
So, if your goal is growth, try to see the predominant skill you are focused on as honing your judgement.
Let’s explore how we do that.
An equation for judgement
What makes up excellent judgement? Here is my take:
Judgement = quality of decision model + richness of data + clarity of mind
Quality of decision model
Whenever any of us has a strong feeling about something, it will be based on a framework or algorithm we have in our brains. I call this the “decision model.” Our decision model typically lives in the subconscious, but if we are particularly aware, we manage to pull parts of it into our consciousness to be described or analyzed.
For example, if you are a skilled engineer and I give you working code samples from two new grads and ask, “Which candidate would you hire based on their code?” you will probably have a gut feeling that one code sample is “better” than the other.
If I asked you, “Why?” and the difference is really obvious, you’ll be able to explain your framework to me — “Candidate A put all their code in one function, while Candidate B created simpler, reusable functions which will scale better and are easier to read and debug.” You are giving me a glimpse of your decision model, which is: breaking code into functions → demonstrates proficiency in scaling and debugging → more skilled engineer.
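To make that contrast concrete, here is a hypothetical pair of submissions (both the candidates and the code are invented for illustration). Both compute the same thing; the difference is entirely in structure:

```python
# Candidate A: everything crammed into one function (hypothetical sample)
def process_orders_a(orders):
    total = 0
    for order in orders:
        if order["status"] == "paid":
            price = order["price"] * order["quantity"]
            if order.get("discount"):
                price = price * (1 - order["discount"])
            total += price
    return total

# Candidate B: the same logic broken into small, reusable functions
def is_paid(order):
    return order["status"] == "paid"

def order_total(order):
    price = order["price"] * order["quantity"]
    discount = order.get("discount", 0)
    return price * (1 - discount)

def process_orders_b(orders):
    return sum(order_total(o) for o in orders if is_paid(o))
```

Both return identical results on the same input, but Candidate B’s pieces can be tested, reused, and debugged independently — which is exactly the signal the decision model above rewards.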
Of course, the framework above is coarse and not terribly controversial. Any engineer who has taken an intro computer science class would agree. But let’s say now that the two samples are not that different. They both use functions, and they both work. Does that mean they are equally good?
That n00b engineer fresh off her intro CS class might say, “They both seem fine to me!”
However, a more skilled engineer will be able to spot minor differences that make her believe one code sample is better than the other — everything from variable naming, to granularity of functions, to whether recursion or a ‘while’ loop was used, to specificity of variable typing, to how errors are handled.
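For a flavor of what those subtle signals look like, here are two hypothetical snippets that both “work” on valid input. A novice might call them equivalent; a skilled engineer would notice the naming, the typing, and the breadth of the exception handling:

```python
# Sample 1: works, but opaque names and a catch-all that
# silently swallows every error (hypothetical sample)
def f(d, k):
    try:
        return int(d[k])
    except Exception:
        return 0

# Sample 2: same behavior on valid input, but descriptive names,
# type hints, narrower exceptions, and a fallback the caller controls
def parse_count(record: dict, field: str, default: int = 0) -> int:
    try:
        return int(record[field])
    except (KeyError, ValueError, TypeError):
        return default
```

Neither sample is “broken,” yet the second one quietly answers questions the first one doesn’t: What happens on bad data? Who decides the fallback? Can a future reader tell what this does at a glance?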
In other words, the skilled engineer has a more complex and nuanced decision model. Her framework of “quality code” takes into account more signals than a lesser engineer’s. In fact, a master-level engineer (let’s call her Unicorn Engineer) operates similarly to an advanced LLM — as her eyes scan the code, she is processing thousands of such subtle signals, many of them subconsciously. It would be impossible for her to explain her entire decision model because it’s just so complex.
She might chalk it up to “intuition,” while others gape at her, amazed at her prescience and track record. It feels like magic, but it is not magic. It’s simply that the number of parameters she subconsciously considers, as well as the weights she assigns to each parameter, gets closer and closer to being a true predictor of a great engineer.
The compliment we can give to Unicorn Engineer is that her decision model for assessing code is high quality.
Richness of Data
Having a high quality decision model is awesome, but alas it isn’t sufficient. You also need to have enough relevant data to feed into the model.
Let’s go back to our example of Unicorn Engineer, whose decision model can pick up thousands of subtle signals in assessing code.
What if the code samples we give her are 50 lines long? Clearly her capability to accurately predict the better engineer will be way worse than if we handed her thousands of lines of code!
Not to mention, any engineer worthy of Unicorn status will surely balk at this test in the first place. All Unicorn engineers know, in their Yoda-like ways, that writing quality code does not a good engineer make. How quickly can the candidate think on their feet? How easy is the candidate to collaborate with? How proactive will the candidate be in solving problems? These signals cannot be found in code alone!
So even if one’s decision model is incredibly well-honed, the amount of known context matters tremendously.
Most engineers will make better hiring decisions after seeing a candidate in action for 3 months (internship) versus 3 hours (interview). Most PMs will make better prioritization decisions if they look at the past 6 months of customer requests rather than just last week’s requests. Most designers will design a better user experience after talking to 100 customers versus 10 customers.
Clarity of Mind
The last piece of good judgement is having a clear mind. Our Unicorn Engineer may be brilliant, she may have the deepest wellspring of intuition the world has ever seen, she may have done all her homework in gathering context—but if she is seething mad, I promise she will make poor judgement calls.
It is impossible to make use of an excellent decision model or a ton of rich data if you are not in the right mental state.
Fear, stress, anger, sadness, jealousy — these all cloud our ability to make use of our gifts to the greatest degree.
Let’s take a simple example — “Be kind.” I bet you have a pretty honed decision model around what “being kind” looks like, and you’re resolved to be kind to people you like. And yet, have you ever yelled at a friend or family member? (If you never have, I salute you, please share your wisdom.)
We don’t mean to yell, but we do because we feel overwhelmed. We’ve lost mental clarity.
Mental clarity is just as important to judgement as any of the other components. If you find yourself struggling to see clearly, you should doubt the quality of your judgement and turn your attention instead to regaining some of that lost clarity.
There are two more chapters in this issue for paid subscribers:
How to judge who has good judgement
Improving your decision model
Enjoy what you are reading? Have strong opinions and wanna let me know them? Support The Looking Glass by becoming a paid subscriber!