The Looking Glass: Elephants, Blind Men, and that One 15-Minute Trick
How we err, and how we must learn.
Dear readers,
It is truly impressive how quickly the world’s sands shift nowadays. I mean, November must be a record in tech drama! Since the last letter, we saw FTX crash and burn, leaving a trail of carnage in its wake, more mass layoffs, and Elon rocking the boat hard at Twitter.
I have so many thoughts on Elon’s Twitter leadership that would be fun to debate, but alas, start-up life demands singular focus. At the same time, I’m incredibly curious how the experiment will play out and what collective lessons we’ll all learn.
What we’ve already learned from FTX and SBF: skepticism is important. Reputation is a proxy metric that can be gamed. Double-or-nothing for too many rounds, and eventually you lose everything.
On the writing front, I had fun revisiting a favorite parable and a tactical playbook on a meeting I have now come to love. As always, my inbox is open to your feedback and questions!
Warmly,
~Julie
The Blind Men, The Elephant, and the 3 Data Mistakes
A famous Indian parable describes five blind men encountering an elephant for the first time. They each decide to touch the elephant to understand what the animal is like.
“An elephant is smooth and hard, like bone,” says the man who grasped its tusk.
“Nonsense, an elephant is soft like leather,” says the man who felt its ear.
“You’re both wrong; an elephant is rough like a tree,” says the man who touched its leg.
The five men begin arguing vehemently, each convinced that he is right. None of them manages to convince the others of anything, except that everyone else is untrustworthy.
This is my favorite fable. It says so much in so few words: about the way we reside in our lonely heads, why alignment can feel like scaling Everest, how the shape of truth is diamond-faceted.
I often see this story used to describe organizational dysfunction — See? This is why Sales and Product keep quarreling!
But hear me out. Recently, I’ve started to see this story as a perfect embodiment of another topic: the 3 most common mistakes teams make in using data.
Ready to explore this elephant with me?
Mistake #1: Rejecting Data that Doesn’t Match Your Beliefs
The obvious thing we — who are not newbies to elephants — can recognize is that none of the blind men are wrong!
An elephant is simultaneously soft like leather, smooth as a bone, and rough like tree bark. It’s also a million other adjectives, because it’s a complex, majestic animal! Such is our world, one where we extol the wise words of Walt: Do I contradict myself? Very well then I contradict myself (I am large, I contain multitudes).
But in a team setting, when you’re locked in a tense battle of wills, it’s hard to remember this. Everything becomes binary.
Consider a data analyst sharing experiment results: Unfortunately, this new redesign decreased user engagement by 5%.
No way, retorts the designer. This redesign is way WAY better. Can’t you see for yourself how much simpler it is? People loved it in the lab. And look at this Twitter feedback! <cue visual of many fire emojis>
I’m embarrassed to admit I have made various forms of the above argument. Like the blind men, I turtled into a narrow definition of the truth: I only embraced data that confirmed what I wanted to believe — This redesign is Awesome!
This is the number one killer of data discipline: Instead of testing my intuition with data, I was seeking data that confirmed my intuition.
In most instances where we bring up conflicting pieces of data, the reality is that both viewpoints are true — there are indeed people who love the new redesign, even as it causes people, on average, to use the feature less (a lesson I learned the hard way; see Exhibit A here).
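To make this concrete, here is a toy sketch (every number below is invented purely for illustration) of how a vocal minority can genuinely love a redesign while average engagement still drops:

```python
# Toy numbers (entirely made up) showing how a redesign can delight one
# segment while lowering engagement on average.
segments = [
    # (name, share of users, sessions/week on old design, on new design)
    ("power users", 0.10, 20.0, 24.0),  # the fans tweeting fire emojis: +20%
    ("casual users", 0.90, 5.0, 4.2),   # the quiet majority: -16%
]

old_avg = sum(share * old for _, share, old, _ in segments)
new_avg = sum(share * new for _, share, _, new in segments)

for name, share, old, new in segments:
    print(f"{name}: {old:.1f} -> {new:.1f} sessions/week ({new / old - 1:+.0%})")

# Both "people love it" and "engagement dropped ~5%" are true at once.
print(f"overall: {old_avg:.2f} -> {new_avg:.2f} sessions/week ({new_avg / old_avg - 1:+.0%})")
```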
Instead of trying to prove one’s opinions, we should aim to broaden our view of reality to incorporate all the data. Only then will we more clearly “see” the whole elephant…
Mistake #2: Selecting Ineffective Methods of Measurement
If none of the blind men were technically wrong about the elephant, then the million-dollar question becomes:
What, exactly, is the best way to describe an elephant?
…
Read the rest of the article here
Why Your Team Needs a Weekly Metrics Review
A few weeks ago in our Manifesto for the Data Informed, one of the five beliefs presented was Company-wide familiarity with metrics rather than outsourcing to ‘data people.’
Immediately, we were pummeled with questions: Does this really matter? Is this a realistic expectation? How can an organization achieve this? So for our next few posts, we’ll deep dive into how to make this lofty aspiration practical.
As with anything else in life — whether people, languages or customs — there is no shortcut to gaining familiarity; the only way is through direct and frequent exposure.
With data, most teams inherently understand this — that is why dashboards are built and links are passed around and we are all reminded to “please bookmark it and check it often.”
Unfortunately, unless your job title includes the word data, the practice of loading said bookmark does not frequently rise to the top of your to-do list, even if you really, truly do think data is important! Thus begins the great death spiral of dashboards: because they go unused, they become unmaintained. Because they are unmaintained, when you finally need to look at them, they’re broken and useless.
This is why data-informed teams rely on practices other than sheer will to create data familiarity. The big three are (1) weekly metrics reviews, (2) weekly insight reports, and (3) insights reviews.
In this installment, we’ll tackle one of the single most impactful practices of building a data-informed team: the weekly metrics review.
What is a Weekly Metrics Review?
A weekly metrics review is a synchronous team meeting to review the key metrics for a scaling, post-PMF product, with all functional team members present (i.e., PM, engineering, design, operations). This type of review can (and should!) happen at the executive level, with the CEO and C-level executives, and recurse down to individual product teams.
A weekly metrics review should be short and sweet (think 5–15 minutes, typically at the start of a regular team meeting) and led by the data person, who walks the group through the key metrics for your collective area of work (e.g., new user growth, revenue, conversion rates, tickets resolved).
The group should examine how key metrics have progressed over the past few weeks, ideally by looking at a series of time-series line charts. The presenter can also prepare a few key segments to review, for example if a certain type of user, platform, or market is strategically important to the team, or if the team has launched something that impacts a particular segment (like a new feature in a test market).
It’s best to keep the meeting lightweight. Preparation should be easy, ideally taking no more than 30 minutes. Many great metrics reviews simply start with screenshots of dashboards. The data person shouldn’t have to have all the answers at their fingertips (why did active users spike two weeks ago?). It’s fine to circle back with an answer later.
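If you’d like a starting point for that prep, here is a minimal sketch (the file name and column names are hypothetical assumptions, not a prescribed format) that turns a weekly metrics export into the two kinds of charts described above:

```python
# Minimal prep sketch for a weekly metrics review. Assumes a hypothetical
# CSV export with columns: week, segment, weekly_active_users.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("weekly_metrics.csv", parse_dates=["week"])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Chart 1: the headline metric as a time-series line over recent weeks.
overall = df.groupby("week")["weekly_active_users"].sum()
overall.plot(ax=ax1, marker="o", title="Weekly active users")

# Chart 2: the same metric broken out by a strategically important segment.
by_segment = df.pivot_table(index="week", columns="segment",
                            values="weekly_active_users", aggfunc="sum")
by_segment.plot(ax=ax2, marker="o", title="WAU by segment")

fig.tight_layout()
fig.savefig("metrics_review.png")  # drop into the meeting doc or slides
```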
What is a successful outcome for a weekly metrics review?
Metrics reviews are unlike product reviews or decision meetings; the point is to build a shared understanding of how the team is progressing toward its goals, not to make decisions or create action items.
This is an unusual proposition, especially when ambitious teams are (rightfully!) wary of wasting time in meetings. After reading an article like this and trying out this style of meeting, you may find yourself wondering: Is this meeting actually helping us accomplish anything? It’s common for the weekly metrics review to get cancelled, demoted to an email update, or to become the spawning ground for long lists of follow-up questions just so the group feels it did something.
And yet, we argue that familiarity with the impact of the team’s work is in and of itself an important enough goal to create a synchronous cadence around. Because we work in a team environment, having shared context across different functions is critical. The ideal outcome of a metrics review is that each person develops a shared understanding of the following questions:
1. How are we doing against our goals?
Few things focus a group as effectively as seeing a chart of week-over-week progress inching toward a goal. The ritual of doing this together forces the room to confront questions like Are we likely to hit our goal? and Is our recent work having the impact we expected?, which ultimately ladder up to the Holy Grail question: Is our strategy working?
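As a back-of-the-envelope illustration (the numbers below are invented), even a crude linear projection of recent weeks can turn Are we likely to hit our goal? from a vague feeling into a concrete conversation:

```python
# Hypothetical sketch: project a goal metric to quarter end from its
# recent week-over-week trend. All values are made up for illustration.
weekly_values = [1200, 1260, 1310, 1400, 1450]  # last five weeks
goal, weeks_left = 2000, 8

avg_weekly_gain = (weekly_values[-1] - weekly_values[0]) / (len(weekly_values) - 1)
projection = weekly_values[-1] + avg_weekly_gain * weeks_left

print(f"average gain per week: {avg_weekly_gain:.0f}")
print(f"projected at quarter end: {projection:.0f} vs. goal {goal}")
print("on track" if projection >= goal else "at risk")
```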
2. Which levers are most important in reaching our goals?
…