in: Behavior, Character, Podcast

• Last updated: September 30, 2021

Podcast #471: Using Mental Models to Make Better Decisions

We live in a complex, fast-changing world. Thriving in this world requires one to make fast decisions with incomplete information. But how do you do that without making too many mistakes?

My guest today argues that one key is stockpiling your cognitive toolbox with lots of “mental models.”

His name is Shane Parrish (@farnamstreet). He’s a former Canadian intelligence officer and the owner of the website Farnam Street, which publishes articles about better thinking and decision making and is read by Wall Street investors, Silicon Valley entrepreneurs, and leaders across domains. We begin our conversation discussing how Shane’s background as an intelligence officer got him thinking hard about hard thinking and why the musings of investors Warren Buffett and Charlie Munger have had a big influence on his approach to decision making.

Shane then shares his overarching decision making philosophy and explains what mental models are and why they’re a powerful tool to make better decisions. We then discuss why you should focus on being consistently not stupid instead of trying to be consistently brilliant and tactics you can use to make better decisions.

Show Highlights

  • How to identify your biases when making decisions 
  • How Charlie Munger has influenced Shane’s thinking 
  • 5 guiding principles that can shape how you live and make decisions 
  • Why direction is more important than speed
  • Taking ownership of your life and your decisions 
  • Should you have fewer opinions?
  • Why the internet makes it harder to have thoughtful opinions
  • What are mental models? How can they help us see the world better and more accurately?
  • Examples of mental models that Shane uses on a regular basis 
  • How to apply principles from unrelated domains into your life/career
  • How do you develop mental models?
  • Combining mental models to get ahead, whether in war or in business 
  • Why can very smart people make bad decisions?
  • How using mental models enables you to make better, smarter decisions
  • Why you should strive to be “not stupid” rather than seeking brilliance
  • Tried and true decision-making tactics/frameworks 
  • Making a list of your biases 

Resources/People/Articles Mentioned in Podcast

Connect With Shane and Farnam Street 

Farnam Street website

FS on Twitter

FS on Instagram

Listen to the Podcast! (And don’t forget to leave us a review!)


Google Podcasts.

Listen to the episode on a separate page.

Download this episode.

Subscribe to the podcast in the media player of your choice.

Recorded on

Podcast Sponsors

The Strenuous Life. A platform designed to take your intentions and turn them into reality. There are 50 merit badges to earn, weekly challenges, and daily check-ins that provide accountability in your becoming a man of action. Enrollment is happening right now. Sign up at

The Great Courses Plus. Better yourself this year by learning new things. I’m doing that by watching and listening to ​The Great Courses Plus. Get a free trial by visiting

Indochino. Every man needs at least one great suit in their closet. Indochino offers custom, made-to-measure suits for department store prices. Use code “manliness” at checkout to get a premium suit for just $359. Plus, shipping is free. 

Click here to see a full list of our podcast sponsors.

Read the Transcript

Brett McKay: Welcome to another edition of the Art of Manliness podcast. We live in a complex, fast-changing world. Thriving in this world requires one to make fast decisions with incomplete information, but how do you do that without making too many mistakes? My guest today argues that one key is stockpiling your cognitive toolbox with lots of mental models. His name is Shane Parrish. He’s a former Canadian intelligence officer and the owner of the website Farnam Street, which publishes articles about better thinking and decision making and is read by Wall Street investors, Silicon Valley entrepreneurs and leaders across domains.

We begin our conversation discussing how Shane’s background as an intelligence officer got him thinking hard about hard thinking, and why the musings of investors Warren Buffett and Charlie Munger have had a big influence on his approach to decision making. Shane then shares his overarching decision making philosophy and explains what mental models are and why they’re powerful tools to make better decisions. We then discuss why you should focus on being consistently not stupid instead of trying to be consistently brilliant and tactics you can use to make better decisions. After the show’s over, check out our show notes at

Shane Parrish, welcome to the show.

Shane Parrish: Thanks. I’m glad to be here.

Brett McKay: So, you head up a website called Farnam Street. I don’t know how I discovered you. It was a couple years ago and I’ve been following it religiously ever since then ’cause I love it. It’s a website, a learning community dedicated to learning how to think better and make better decisions, but what’s crazy is this thing, it’s read by all these Wall Street investors and leaders across fields, so how did Farnam Street start and become this phenomenon?

Shane Parrish: Well, in 2007, I made a decision that probably impacted the lives of a lot of other people, and I remember leaving work, and at the time, I worked for an intelligence agency, and it was about 2:00AM and I was walking home and I was struggling because I didn’t know if I had made the right decision, and I went into work the next day on about three hours of sleep ’cause I stayed up all night. You know, the stakes are high, right? You have your country. You have people in theater who are making decisions based on what you’re doing. You have decisions that you’re making that affect them. You have your team, their families. You have your organization, your country’s relationship with other countries, and all of that, you’re making a call on, a judgment call in the wee hours of the morning after not a lot of sleep.

And, I went in the next morning and I said, “Hey.” I went to my boss and I said, “I don’t know if I’m making these decisions right. I mean, they’re working out, but I don’t know if I’m doing it right. I don’t know if I’m comfortable, that I’ve thought about everything. I might be missing something,” and he just laughed at me and said, “Everybody’s in the same boat,” and sort of shrugged it off, and I remember going home that day going, like, “I think people deserve better,” and I started just doing a deep dive into how to make decisions and: how do we learn about the world that we’re living in?

I went back and I ended up doing my MBA, and the MBA proved relatively useless in my case, I think in part because I had six, seven years of work experience at that point, which was really probably 14 ’cause I was working 12-14 hours a day, six days a week, and you just have this different view of the world when you work that much and you’ve done all the different jobs that I’ve done and had all the responsibilities I had, and the world’s not simple. It’s complicated and it’s interconnected, and the MBA is very much like, “Read this chapter and apply this to this case study.” It oversimplifies things to a degree that is unhelpful, and, while I was doing my MBA, I said, “Well, if I’m not going to learn while I’m doing my MBA, I might as well learn on my own.”

So, I created this website. At the time it went by a different name, which we don’t own anymore, and the reason it was called that is that 68131 is the zip code for Berkshire Hathaway, and the website is an homage to Charlie Munger and Warren Buffett and their way of thinking, and I figured nobody would type in five digits for a website. At the time, this was sort of unheard of, and I didn’t want a password on the site, so I didn’t choose a better domain. And then, I just started keeping track of what I was learning, and I started with a lot of academic stuff ’cause I figured I would never have access to academic journals. I stopped doing my homework for my MBA because it became formulaic, right? I knew what they wanted to hear. I knew how they wanted things phrased. I knew how simple the world was to them, and so, I just kind of banged out the essays that they wanted.

But, you didn’t have to really study a lot, so my study became self-study, and I started reading the letters of Berkshire Hathaway. I started reading everything I could on Charlie Munger, and I’m wondering silently in the background about these two guys in Omaha, Nebraska, who have created, by all accounts, one of the biggest business successes in history and think about the world in such a complicated, interconnected way. Why aren’t I learning that in my MBA? And then, as I started to look into it, a lot of these successful people that I admire — Steve Jobs and Elon Musk and all these people — they think about the world in this very messy sort of way. They have a way to bring it back to first principles or to walk around a problem in a three-dimensional way, but they realize that it’s interconnected and every action that you do has a consequence to it.

And, I thought, “Man, this is a much better way to learn,” so, I just started the website. I started writing about it. It was anonymous because I worked at an intelligence agency. I wasn’t exactly about to put my name on a website, and, slowly, I don’t know why or how, but, people started to discover the website. At first, it was like one person and you could see one person following you on your RSS feed at the time, and then, I think it was like two years and I had 500, and I was like, “Oh my God, this is crazy. How did 500 people find this website?” And, it was 2013, I think, when I became unanonymous at 25,000 readers, and that was a big milestone.

Brett McKay: Is that when you changed it to Farnam Street?

Shane Parrish: Yeah, because everybody was sort of like … at the time, it was an easier-to-type version, but it was still weird, and then we went to that year. I became unanonymous and I think we started the email list all in the same year, and that was a major inflection point for us about what we were doing and what I was doing. I was still working full-time for the intelligence agency at the time, but we started to get this audience, and our audience at the time was probably 80% Wall Street, and I would say it’s a lot less Wall Street on a percentage basis now, but, the three main audiences we have are probably Wall Street, Silicon Valley and professional sports.

Brett McKay: That’s really interesting, professional sports. We can talk about that later. For Farnam Street, that’s the address of Berkshire Hathaway, correct?

Shane Parrish: Right, so that’s the street in Omaha, Nebraska where Warren Buffett lives and works, and it’s where the headquarters for Berkshire Hathaway is.

Brett McKay: Okay. So, before we dig into what you write about, I wanna backtrack to talk about that moment when you made that decision when you were working for the intelligence agency and you’re like, “Boy, I don’t know if I made the right decision.” Before that time … it sounds like you had a moment where you took a step back and you started to think, doing metacognition, thinking about how you think. But, before that, how were you making decisions? Was it just sort of, “Okay,” on the fly? What were you doing?

Shane Parrish: Yeah, if you think about it, I started August 28th, 2001. Two weeks later, September 11th happened, and I think, I don’t know, three days after that, I was promoted, and it had nothing to do with me. It had nothing to do with my skills. It had nothing to do with how good a person I was. I was in the right place at the right time to take on a leadership role, and nobody ever taught me how to make decisions. Nobody in school taught me how to look at a problem in a three-dimensional way and walk around it from different perspectives and all the perspectives in the room. Nobody at work taught me how to do that, either. It’s sort of like you’re expected to figure it out and you end up with this ad hoc process, which often works.

But, when it doesn’t work, it’s hard to diagnose why it doesn’t work, and then it’s hard to compensate for your errors through a process, and we all have strengths and weaknesses, and ideally, we want to have a repeatable process that we can use that changes as the environment changes, but adapts to our strengths and weaknesses so it accounts for them or at least allows for us to take into account where we are naturally prone to make good decisions or bad decisions or we’re naturally prone to overconfidence in a certain scenario, and so then we wanna structure something in, if possible, to reduce the biases that we might have in that sort of way.

And I think, you don’t wanna do that for every decision possible. Sometimes you have to make split-second decisions, and that becomes more about preparation and pattern matching and thinking through second order consequences, but, often, you have a lot of time to make decisions and a lot of time can be like 30 minutes, and you wanna sort of structure your thinking. Not a lot of people do, and I think that’s one of the reasons that we don’t get better at making decisions, is we always bring a slightly different approach to the table for how we’re gonna decide whereas if we sat down and we had some sort of process — it doesn’t have to be formal — that process can be, “What are the variables that govern the situation? How do those variables interact with each other, and how might I be fooling myself?” I mean, it can be as simple as that, and it can be more complicated depending on your strengths and weaknesses and the types of decision you’re making.

Brett McKay: Okay, and we’ll get into those specifics here in a bit. So, let’s talk about Charlie Munger. This is a guy that you were drawn to when you first started thinking about these things while you were doing your MBA. For those who aren’t familiar with Munger, what does he do? He works at Berkshire Hathaway, but you don’t hear too much about him because Warren Buffett is the guy that gets most of the attention.

Shane Parrish: Yeah, Buffett gets a lot of the attention. I mean, Munger is an irreverent billionaire at this point. He’s the vice chairman of Berkshire Hathaway, and he just has this very unique, almost Richard-Feynman-esque view of the world and a bit of wit to him in a way that I find intellectually stimulating, right? The world is complicated. I wanna read about it. I wanna understand that things interact, and if I only understand one portion of the world, I’m not gonna understand what’s gonna happen when I make a decision, and he’s very detailed and nuanced about how he thinks about things and how he builds what he calls a latticework of mental models, and I think that that really resonated with me while I was in school because I started seeing each chapter not as something that stands alone in and of itself, like each idea in business school, but as something that interconnects with every other part of the world.

And then, it became, “Oh, I can just add this to my latticework, my framework, but the next time I make a decision, I’m not gonna make it just based on this new model I have. I’m gonna incorporate this old model, or I’m gonna see if this old model still applies, and then I’m gonna check that, and now I have a better, more accurate view of the world.” You can think of it sort of as tracing paper. If you draw lines on each sheet of paper, each sheet gives you a view into the world, but if you put those papers on top of each other, well, now you might be able to see what the picture actually is, and that’s what we’re doing. We’re struggling to sort of go through the world and make these decisions, and if you think about what we do when we make decisions, a lot of us make poor initial decisions, and then we spend so much time correcting them, and it could just be a miscommunication.

It could be that we didn’t think of the second order consequences. It could be that we didn’t have the right models in our head to accurately see the problem for what it was, so we didn’t know what to do, so we’re slightly off course, but then we spend a ton of time chasing that down, which causes stress and anxiety, and it’s part of the reason that we work so long, and there’s a different approach to that, and one of the different approaches is, “Can I learn about the world or intelligently prepare for the decisions I’m likely to make, and what does that intelligent preparation look like? How do I make it a little bit less about luck and make it more about what’s within my control?”

Brett McKay: One of the things I love about Charlie Munger, as you said, he’s very nuanced, and it’s very sophisticated, his thinking, but the way he explains his thinking process, it’s very folksy. It’s very simple, and whenever you read something, you’re like, “Oh, yeah, that makes perfect sense. Why didn’t I think of that before?”

Shane Parrish: Yeah, it’s so hard to disagree with him, even when he’s controversial. One of his opinions is that the US shouldn’t be selling their oil. They should be keeping it and importing oil because oil’s cheap and it’s a finite resource. And, if you think of that at first, you’re like, “Well, that doesn’t make sense,” but the more you dig into it, you’re like, “Oh, that probably actually … if you take a different time horizon, that might actually be the best decision that a nation could make.”

Brett McKay: Right, we’ll get into some more Mungerisms here in a bit. So, before we get into specific heuristics or hacks, whatever you wanna call ’em, to make decisions, ’cause I think that’s what a lot of people want first: they want tactics. Let’s talk about overarching principles that you use that guide pretty much everything, like metaprinciples, first principles that you use to guide decisions in your own life. Or, whenever you consult someone or coach someone, what do you tell them?

Shane Parrish: Well, so, we have five principles listed on the website, and it’s kind of just a guiding framework for how we think about things, and the first one is direction over speed, and, the concept there is, if you’re pointed in the wrong direction, it doesn’t matter how fast you’re traveling, right? Inversely, if you’re locked into your desired destination, all progress is positive, no matter how slow or small it seems. You’re gonna reach your goal eventually, and if you think about this: a lot of us spend a lot of time on speed, and not only do we have subtle cues in organizations that we wanna signal to other people that we’re working fast, that we’re busy, that we’re doing things, but we don’t actually stop and take time and think about it like, “Where are we going?”

I might be really busy in these meetings, but does that mean we’re actually making progress or does it mean I just have these endless calendars of meetings? Does it actually contribute to the work? And, if you think of velocity, velocity is a concept in physics that has not only speed involved in it, but it has displacement, so it has a vector associated with it, whereas speed is … it’s just fast. If you think of a plane leaving New York and going to LA, well, one plane leaves New York and starts flying in circles and the other plane leaves New York and it’s headed for LA. They’re both flying at the same speed, but one of them is going to their destination, and the other is just flying around. It’s going just as fast. And, I think that concept is something that we have to keep in mind not only in our personal lives and our relationships, but in the workplace.

The second principle that we talk about on the website is: live deliberately. And, we settle into habits, and we simply live, often, the same year over and over again, right? We’re waiting for some future event to occur before we start living. We’re waiting for something to happen and we’re not conscious of the decisions that we’re making. We’re not conscious about who we spend our time with. We’re just defaulting to what we’ve done in the past, and so, while we wait for a raise or maybe a career or ideal relationship, life is passing us by, and life is so fragile, and I think we forget that, that there is nothing more fragile than life.

I remember, I was in Hawaii this year and I ended up … somebody drowned on the beach, and they died right in front of me and I was crying and I was like, “Oh my God, this person is the same age as me. They look fit and healthy just like me, and their life is over,” and, maybe they had an aneurysm while they were swimming or a heart attack. I don’t know the medical reason for it, but it was like, “Man, life can go at any point in time,” and if you realize that and you recognize it, you can start setting aside time today to pursue your dreams. You can start today to learn the things that you’d like to know. You can reach out today and repair a relationship that you wanna repair. You can jettison this dead weight that’s holding you down and you can be more free, but, to do that, you have to be conscious. So, living deliberately is about awareness and purposeful action.

The third thing that we talk about on the website from a principles point of view is thoughtful opinions held loosely. The common refrain is strong opinions held loosely, but we prefer thoughtful because, often, you have to look at where we get our opinions, so, how do you respond when you’re faced with facts that contradict a long-held belief of yours? I mean, you should have your ego wrapped up in outcomes, and not necessarily in being right, and I think that’s the key to that part as well. You wanna update your knowledge. You wanna update your database, your mental repository of information, with new facts.

And, I think the fourth principle we talk about is: principles outlive tactics, and the example we use on the website is football, but another example is the chef and the line cook. So, a line cook is really good at maybe following a recipe, but they don’t necessarily know how the ingredients interact with one another to form a recipe, and they don’t necessarily know what that recipe is intended to do, so when something goes wrong, they might not be able to understand what’s happening, and so we wanna understand things. We wanna understand not only the what, which is tactics. We wanna understand the how. Sometimes, we can get the results we want through tactics, but if you want results in a changing environment, you must also understand the why.

By understanding the principles that shape reality, you understand the why, and, alternatively, another way to view this is: tactics might get you what you want, but if you’re not a principled person, you might end up wanting to redo your life, and if you think that sounds crazy, there’s this great example, and this time of year is perfect, right: A Christmas Carol by Charles Dickens. You had Ebenezer Scrooge, and he wanted to be the richest man in the city. He wanted to be respected. He wanted to be well known, and I think that he did all of those things, but he did it in a way that was mutually exclusive from the things that really mattered, and I’ve seen this play out over and over again.

I used to work directly for the deputy minister in the intelligence agency, and you see this sort of stuff happen where people get to their position of power through tactics, and then maybe want a redo at the end of their career. Maybe those tactics are mutually exclusive from the relationships that they want after, and the fifth sort of principle we talk about is owning your actions, and it’s incredibly difficult to do. We’re not programmed to expose our egos or make ourselves vulnerable when we make mistakes or do something stupid, but one of the most powerful ways that I’ve discovered in life to make giant leaps forward is to not only accept that we’ll screw up, but actually seek out, “How do we correct this? How do we get better the next time we’re gonna do this?”

It’s mostly through refusing to accept ownership of our mistakes that we protect our ego. We protect our worldview. We protect the idea that we’re not complicit in why this went wrong. Those things prevent us from learning, and we don’t wanna be prevented from learning. I think it was Stephen Covey who said that proactive people don’t blame circumstances, conditions or conditioning for their behavior. We wanna take ownership of our decisions and our lives.

There is an element of luck. There are a lot of elements of luck that happen in the world, like what country you’re born in, what your socioeconomic status is when you’re born, what your parents are like. You don’t control any of that, but at some point, you grab the steering wheel. You might not be the next Kanye, and maybe that’s an unfair comparison, but there’s a version of you that’s on a trajectory, and what you should be focused on is, “How do I maximize my own personal trajectory given where I am, given where I could be?” And, I think one of the ways that we do that is we try to go to bed smarter every day.

Brett McKay: So, I wanna go back to that principle three, thoughtful opinions held loosely, ’cause that’s related to a Mungerism that really resonated with me. He has this idea that people should have fewer opinions, or you shouldn’t have an opinion until you can argue the other side of the argument as well as they can, and then you can earn your opinion. Is that kind of what you were going for, there?

Shane Parrish: Yeah, we call it the work required to have an opinion, and, so often, what I used to see when I managed a lot of people in organizations was that people would come in; they would have this really strong opinion, but they wouldn’t have really thought about the other side of it, and so they would have a ton of their own ego involved in it, and one of the ways that I used to reduce ego … and, it doesn’t eliminate it, but, I would assign people a role in the meetings. You would argue for or against it, and then your ego comes into, “I’m really gonna argue against it even if I believe in it because I wanna look like I know what I’m doing and I’m competent and I’ve thought about it,” and I wouldn’t tell people what role they would have before the meeting, and that was just a way to encourage people to do the homework that they need to do before they can come up with an opinion.

And, it helps you think about a problem in a three-dimensional way. You should be able to sit down and say, “Here are the common counterarguments about what I think, and here’s what I think about those counterarguments.” You should be able to have that discussion with yourself, and I think that the intellectual playfulness, the intellectual curiosity needed to do that is difficult, and you can’t do that for everything, right? Sometimes you have to let other people think for you, and you can’t think about everything, but you have to acknowledge that maybe that’s not your opinion. Maybe that’s just an idea instead of what should be done.

Brett McKay: Yeah, and I imagine the internet makes having thoughtful opinions difficult ’cause the internet rewards strong opinions, right, that shock people or are very upfront.

Shane Parrish: I think we’re looking for abstractions or heuristics or tactics and we’re not looking for how those are created, and if you think about how we learn, a lot of what we consume online is sort of other people’s abstractions. Our principles would be a great example of that. Those are abstractions that I’ve created over years that I think are valuable, and if you read those, you might understand them and you might be like, “Oh, this makes a lot of sense,” just like when you read a website article like “The Four Things You Need To Do To Master Office Politics,” and those tactics probably do make sense, but what you’re missing is the reflection that went into those abstractions, and what you’re missing from the reflection is the experience that led to that reflection.

And so, you’re missing a lot of fluency, a lot of details that we commonly don’t get. We skim over them as readers or consumers of information, but it’s through those details that we make reflections. It’s through those details that we understand when something is likely to work and when it’s not likely to work, and I think that that is when we draw our own abstractions, and so, if we’re reading other people’s abstractions or we’re consuming information from other people, we’re trying to consume an experience from other people. What we really wanna do to improve our learning is ask them, “How did you come up with that? What variables did you consider relevant? How do those variables interact with each other?” and then we can actually start to learn about the situation because now they’re gonna give us the context that we need to draw our own abstractions, or at the very least, learn when those abstractions are more likely to serve us and when they’re more likely to hurt us.

Brett McKay: So, we’ve talked about principles, first principles, here. Let’s take a step down and talk about how we can look at the world before we actually make decisions, and, something you have written about extensively is this idea of mental models. So, for those who aren’t familiar, what are mental models and how can they help us see the world better?

Shane Parrish: So, mental models describe the way the world works, right? They shape how we think, how we understand and how we form beliefs. They’re largely subconscious. They operate below the surface. We’re not generally aware that we’re using them at all, but we are. They’re the reason that when we look at a problem, we sort of pick out the variables that matter and decide that the others are irrelevant. They’re how we infer causality. They’re how we match patterns, and they’re sort of how we reason, right? And, if you think about it, a mental model is simply a representation of how something works. We can’t keep all the details of the world in our brains, so we use models to simplify something that’s more complex into something that’s organizable and understandable.

Gravity’s a great example of a mental model, and, one example of how that works: it’s super simple, but, if you’re holding a pen like I am right now and I tell you I’m gonna drop this and I ask you what happens, well, you know what happens, and if you hear a click and you see my hand open, you can also retrospectively try to figure out what happened because you understand this concept of gravity, but if I told you to calculate the terminal velocity of something that was falling, most of us wouldn’t be able to do that, so we have this concept of gravity, and it’s useful, but we don’t necessarily need to know all the details about it, right? We don’t need to know that this pen is gonna accelerate at 9.8 meters per second squared. That’s not gonna help us at all, but we understand, if we drop the pen, what’s gonna happen with it.

And so, the idea with mental models is: how do we focus our time on learning mental models that are less likely to change over time so that our knowledge becomes cumulative, and how do we develop a framework for making decisions that incorporates these mental models? How do we think better? And, if you think about thinking, the quality of your thinking is proportional to the models that you have in your head and their usefulness in the situation at hand, right? So, the more models you have — you can think of it as a toolbox — the bigger your mental toolbox, the more likely you are to have the right model to see reality in a given situation.

And, when it comes to improving your ability to make decisions, the variety of models that you have matters, right? Most of us, though, if you think about it, are specialists. We go through high school and we start specializing in high school, increasingly, over and over again, right? So, you go into tracks, science or arts. You go into advanced or non-advanced. Then you go to college or university and you get more specialized. You might get the first year, which is a little more multidisciplinary, but, increasingly, you live in this sort of domain that you’re in, so by default, a typical engineer will think in systems. A psychologist will think in terms of incentives, and a biologist might think in terms of evolution, but it’s only by putting these disciplines together in our head that we can walk around a problem in a three-dimensional way, right?

If we’re only looking at the problem one way, we’ve got a blind spot, and blind spots are how we get into trouble. If you think about a botanist looking at a forest, they may focus on the ecosystem. An environmentalist may see the impact of climate change, whereas a forestry engineer might see the state of tree growth. The businessperson might see the value of the timber and how much it’s gonna cost to extract it. None of those people are wrong, but none of those views are able to describe the full scope of the forest. So, mental models are about: how do we develop those models that we need in our head to get a better view of reality? I think that we don’t get enough of that through college, university, or our own learning, and what we’ve tried to do is develop a system where we talk about unchanging mental models that help give you the big ideas of the world. Munger has said that there are hundreds of mental models, but a relatively few of them carry the bulk of the weight in terms of making better decisions.

You can get esoteric ones just like you can have a chisel in your toolbox that you might pull out on occasion, but you’ll use your hammer a lot more, so there’s tools that are more common than other tools that help us think and solve problems.

Brett McKay: So, what are some examples of the sort of long-lasting ones that you use on a regular basis to make decisions?

Shane Parrish: Well, I think one of my favorites is, “The map is not the territory,” and the concept there is the map of reality is not reality. The best maps are imperfect. Even mental models are imperfect. That’s because they’re reductions of what they represent, and if a map were to represent the territory with perfect fidelity, it would no longer be a reduction, and it wouldn’t be useful to us if it wasn’t a reduction. But a map can also be a snapshot of a point in time, representing something that no longer exists, and this is an important model because we run businesses off maps. We use financial statements to evaluate whether one of our investments is doing well.

Well, the financial statements are a map. They don’t represent what’s actually happening in the business. You can look to Enron as a perfect example of that. The financial statements leading up to the bankruptcy didn’t represent the territory of what was actually happening in Enron. If you think about the business that we’re in, you can think about email lists. Well, the size of your email list is a map, but it doesn’t represent the territory. It doesn’t tell you about the open rates. It doesn’t tell you about the engagement. It doesn’t tell you whether people care about receiving the email, or whether they’d even notice if they missed it.

Just thinking about dashboards and how we run business, we have to run businesses on heuristics, but the more that we run businesses on heuristics, the less in touch we are with the territory. The less we see what’s actually happening, and we wanna keep grounded. We wanna keep an eye on what the territory really looks like because we wanna know when the territory shifts, because a shift in the territory, a shift in the environment, a shift in the conditions under which we’re operating and the way that we’re operating might mean that our map is outdated. If we’re using the wrong map, we’re gonna get to the wrong destination.

Another one that I really like is second-order thinking, which is one we used at the intelligence agency all the time. Almost everybody can anticipate the immediate results of their actions, but that’s first-order thinking, and it’s pretty easy and it’s safe, and it’s a way to ensure that you kind of get the same results as everybody else. Second-order thinking is thinking further ahead and thinking holistically, and it requires us to consider not only our actions and their immediate consequences, but the subsequent effects of those actions as well, and failing to consider the second- and third-order effects can unleash disaster.

If you think about running a business or doing something in life, you wanna look for something where the first-order consequences are negative, but the second-, third-, fourth-, fifth-, sixth-order consequences are positive, and the reason you wanna look for those things specifically is because there aren’t gonna be a lot of people who do them. Delayed gratification is a great example: first order, negative; second order, highly likely positive; third order, highly likely positive. Saving for retirement is another example. You’re suffering now for a later benefit, and those are things you wanna think about, not only in the context of business: “What pain am I willing to suffer now? What can I do now that I know is gonna be negative in the short term, and visibly negative?” You want people to see how negative it is, but the second-, third- and fourth-order consequences are positive, and even better if they’re not super visible positive consequences. Then you can start to do things from a competitive point of view that other people can’t do, and they won’t be able to copy, and they won’t understand what you’re doing. And, I think those things are really just different ways of seeing the world, right?
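The retirement-saving example can be put in rough numbers. As an illustrative sketch — the contribution amount and rate here are assumptions, not figures from the episode — the standard future-value-of-an-annuity formula shows the first-order pain against the later-order payoff:

```python
def future_value(monthly_saving: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of contributions
    # Annuity formula: each payment compounds for the months remaining.
    return monthly_saving * ((1 + r) ** n - 1) / r

paid_in = 500 * 12 * 30              # $180,000 forgone now: the first-order negative
grown = future_value(500, 0.05, 30)  # roughly $416,000: the later-order positive
```

The first-order consequence (money you can’t spend) is visible immediately; the compounding payoff only shows up decades out, which is exactly why fewer people do it.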

Brett McKay: Yeah, so there’s lots more we could talk about, and they’re all on the website. We’ll send people links there so they can go check them out. One of the other interesting things I’ve read is that Munger is a voracious reader. He’s reading about economics. He’s reading about philosophy. He’s reading biology. He’s reading behavioral psychology, and what I find interesting is that, as he’s developing these mental models, he’ll sometimes find ways to apply a mental model, say, from the realm of biology that you would never think to apply to business, but he does that, right?

Shane Parrish: Yeah, definitely. We learn this sort of domain dependence in school, which is really interesting. You get presented with a physics problem in physics class, and you use almost an algorithm to solve it. They’re gonna give you a problem that looks a certain way, and you’re gonna take this formula and apply it. We’re not focused on a core understanding of the underlying concepts, and we’re not focused on how those concepts might apply outside of biology, outside of physics, outside of chemistry, outside of math. Probabilistic thinking is a great example of just probability applied to thinking, and a lot of people don’t even view our thinking as probabilistic, but, inherently, it is. We’re just trying to create better probabilities, and I think we do ourselves a disservice when we learn about these topics in such a one-dimensional way, because the real world doesn’t present you problems that look like your grade-ten chemistry test.
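One concrete way to treat thinking as probabilistic — my illustration, not Shane’s — is a single Bayesian update: start with a prior belief and revise it as evidence arrives. The numbers below are made up for the example:

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start 30% confident a project will succeed; a strong pilot result is four
# times more likely if the project is actually sound (0.8 vs. 0.2).
belief = bayes_update(0.30, 0.8, 0.2)  # about 0.63
```

“Creating better probabilities,” in this framing, just means moving the prior honestly each time new information shows up.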

They’re gonna present you problems where the information you learned in grade ten might be valuable to apply, but you’re not gonna see it because you’re not thinking about it in that way, and I think we’d be better off if we learned all of these basic concepts and then looked at how they might apply in different situations. Munger has sort of been a champion of that. Peter Kaufman is another one, and Peter Bevelin. Those three in particular have been really good at: here are some core concepts, and here is how they apply outside of the domains in which they’re usually presented, or how we can think about them.

And, most of the time, those examples are fairly esoteric or specific, but they give you a sense for how you can think about, say, evolution and apply it to business: things evolve, we have these mutations, and those mutations get beneficial selection in a certain environment. In organizations, we think we don’t wanna try something that’s failed again, but that’s a really simplistic view. When you go to somebody and you’re like, “I have this idea,” and they’re like, “Oh, that failed. We’ve tried that” — that is a really common thing. I talk to my friends who work in organizations. That happens all the time. What you’re missing, though, is the environment in which it failed. You’re not talking about that. You’re not asking, “Did the reason it failed change? Will it succeed now?” Nature is blind in terms of gene mutations. It just keeps trying the same experiments over and over again, and it ends up with different results.

A trait that is valuable today might have been one that is way less valuable hundreds of thousands of years ago, and that’s something that we can apply to business, and you can think about it. It just requires a few extra seconds that you’re not dismissing it out of hand and you’re going, “Oh, that failed because of this but this reason is no longer there, so maybe it will work now,” and that allows us to experiment better, and that’s an example of how we can apply evolution to business.

Brett McKay: So, it sounds like the way you develop mental models is reading a lot and just putting these things into practice. What have you found is the best way to develop these mental models?

Shane Parrish: I think reading and asking, “Could this apply in a different scenario?” is a great way to do that, but we try to distill them for other people because we realize that not everybody has a ton of time to put into reading biology textbooks or reading as much as we do, and so we’re just trying to say, “Here’s a model. Here’s how you can apply it in different ways,” and we’re gonna add to it later, but we give you 80% of it. The problem is, if we give you the whole model, you won’t actually learn anything. You need to do a little bit of mental work. You need to ask, “How does it apply to me? How does this apply to a situation that I’m facing? How can I use this? What are other circumstances?” and it’s those questions that create the reflection for you, personally, and that reflection leads you to your own abstraction of where it’s gonna be useful and where it’s gonna hinder you.

And I think the world would be a much better place if the base level of education just included all the big ideas from most of the major disciplines — not the new, novel stuff, but the stuff that doesn’t change, like incentives in psychology, and randomness, and numeracy, and evolution, and power laws, and systems thinking, and feedback loops, and chaos dynamics. Those are the things that we wanna think about, and those are the things that we wanna learn, and those are the things that you learn in a particular domain but don’t necessarily learn as a general education.

And then, we also wanna overlay that with what we call the general thinking concepts, which are just tools that allow us to think through problems in a different way, and we already talked about a couple of them: the map is not the territory, and second-order thinking. You can also add the thought experiment, which Einstein is famous for. The way that I landed on this was I did a lot of computer programming, and so you end up with this concept called a sandbox, and thought experiments are really like a sandbox. You run the experiment, and it can’t really wreck the system, but you’re trying to think about what will happen, and it’s in a contained sort of unit.
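The sandbox analogy from programming can be shown directly. In this hedged sketch (the inventory scenario is invented for illustration), the “what if” runs on a copy, so the real state can’t be wrecked:

```python
from copy import deepcopy

def sandboxed(state: dict, experiment) -> dict:
    """Run an experiment on a deep copy of the state; the original is untouched."""
    trial = deepcopy(state)
    experiment(trial)
    return trial

inventory = {"pens": 10, "paper": 100}

def give_away_pens(s: dict) -> None:
    s["pens"] -= 15  # what if we hand out more pens than we have?

result = sandboxed(inventory, give_away_pens)
# The trial exposes the problem (pens would go negative) while the
# real inventory is unchanged.
```

That is the thought-experiment move in miniature: play the scenario forward in a contained unit, observe the consequence, and leave reality alone.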

That was how Einstein came up with relativity. I think there’s a lot to that. Thought experiments also help us point out second order thinking. They help us think in first principles. They help us probabilistically think, and all these things reinforce each other, so the more of them you have, the better you’re able to see reality, and the better you’re able to see reality, the fewer blind spots you’re gonna have and the fewer blind spots you have, the better decisions you’re going to make.

Brett McKay: Yeah, another guy who did something similar to what Munger and these other guys are doing was John Boyd, the military strategist with his OODA loop, and he had sort of a similar idea of mental models, but I love this idea that he had. He only published one piece of written work in his whole life, and it was called “Destruction and Creation,” and he had this idea that you can take, I guess, what you would call mental models, and you can take parts of them and combine them together to start something new, so you destruct, and then you create something new. So, that’s another level you can take these mental models to: not just using them discretely by themselves, but actually starting to mash them together to create something new.

Shane Parrish: Yeah, totally, and you can also use other people’s mental models against them. If you’re in the military or you work for an intelligence agency, you wanna think about the cultural influences that affect people’s mental models. You wanna think about their genetic heritage. You wanna think about their ability to analyze and synthesize and how they’re likely to use new information, and you wanna think about how they’re combining models and how they’re taught in schools to combine models. And if you think about organizations and diversity, you also wanna think through the diversity at a different level. It gives a different meaning to diversity.

Diversity becomes applying mental models in a different way. Diversity comes from different backgrounds, different socioeconomic status, different lives, but, so often, we’re getting less and less diverse in organizations. We hire similar sorts of backgrounds, similar people, and then, more and more, we’re training them in very similar ways. Everybody wants to be promoted, so they get into an organization and they’re like, “What’s my path to promotion?” It used to be you’d be like, “It’s your first day, buddy. Calm down,” but now it’s kind of expected, and we give people a path to promotion, but what we’re doing on that path is creating a checklist of people who are going to, A, have the same mental models, and, B, combine them the same way. So, all of those people are more likely in the future to look at a problem in the exact same way, and I think Boyd’s concept of almost combinatory play applied to mental models is really good.

Brett McKay: So, let’s get into making decisions. First of all, let’s talk about why really, really smart people can sometimes make really, really bad decisions. Is it just incorrect mental models or is it a combination of something else?

Shane Parrish: Think about what we were talking about earlier a little bit with the information overload and how we consume information. It’s really easy to tell when we’re physically overloaded. If we go to the gym together and I put too much weight on a bench press, you’re just not gonna be able to lift it, and you know that you’re physically overloaded, but it’s really hard to tell when we’re cognitively overloaded, and when we’re cognitively overloaded, we tend to take shortcuts. Our brain wants to optimize for … this applies to everybody. It wants to optimize for the best solution that fits what we have immediately in our minds, and the more busy we are, the more hurried we are, the less we’re gonna have in our minds, the less that decision has to satisfy, which is also more likely to mean that decision’s not good, especially if it’s not a common decision that we’re making.

And so, I think we get led astray in a couple of ways. One is just: we’re overloaded. We’re overworked. We’re overtired, and one of the reasons all of that happens, which is really weird and perverse, is that we just make poor initial decisions, and the consequence of poor initial decisions is that we have to spend more time correcting those decisions, which increases our anxiety and our stress. One of the ways that we can get out of this spiral is just, counterintuitively, to slow down. Actually schedule thinking time. That alone would improve decisions dramatically, and most of the people I know who make really good decisions on a consistent basis do that, and they’re not people that you would think typically have time, right? They’re people like Patrick Collison, who runs Stripe, and Tobi, who runs Shopify, and those people make time to make decisions. They make time to think about problems, and they think about problems in a different way, and I think that that’s really important.

And then, you also, counterintuitively, you wanna do something that’s first order negative, second order positive. We talked about that earlier, which is like, you wanna intelligently prepare to make decisions. What are the decisions that you’re likely to be making in the next year or two? What information do you need now in advance of those decisions that’s gonna allow you to make better decisions? And, I think, too often, we go searching for information at the point when we’re making a decision, and what happens is, we just end up in this weird state, and the weird state is that we’re seeking out information when we need it, so we’re more likely to overvalue the information to begin with, but that information is also commonly known, so it’s gonna almost guarantee a mediocre decision.

And, that might be great. A mediocre decision might be good if we don’t know what we’re doing. We sorta wanna follow the common wisdom ’cause that’s gonna lead to average performance, and if we do know what we’re doing, we wanna know when to deviate, because deviation, when we’re not following others, is gonna lead to out-performance. But too often, we don’t know what we’re doing and we deviate anyway, and that leads to what I call the lottery ticket. It’s like the Hail Mary pass in football. It might be successful, and it might not, and if it is, it’s not repeatable and you have no idea why it worked, and if it doesn’t work, well, you just sort of absolve yourself because you didn’t know what you were doing, so you don’t actually get better. It’s the worst quadrant to be in if you were to map that on a 2×2 matrix.

And, I think we all suffer from these things, so the keys are: slow down. It seems counterintuitive. You might have to work a little bit longer at first. Make better initial decisions. That’s gonna free up a lot of your time. That time, use that time to invest in intelligently preparing to make better decisions. That’s gonna vary depending on the type of career you have, the type of field that you’re in, but you can start by understanding the big mental models that exist in the world. What are the 101 biggest ideas that I would have learned if I did a university education and just sort of the basic ideas of each discipline, and then think about how those things apply to your specific field, your specific problems, and then get more esoteric.

What information do I need to seek out to make better decisions in my niche? Then you wanna take time to find it and incorporate it, and not a lot of people are gonna do that. Slowly, over time, you’ll be able to leverage those decisions into more and more responsibilities. At first, it’s gonna be small. You might have an incremental advantage over somebody else in making a decision, but it might not even be … or, it’s barely perceptible, but, over time, as you make more and more consistently better decisions, you get more and more responsibilities. As you get more and more responsibilities, that leverage starts to kick in, and now that little advantage turns into a bigger advantage.

Brett McKay: So, it sounds like decision making, a lot of the work isn’t on the front end. It’s not actually when you make the decision. It’s getting the information you use, thinking, using mental models to look at the problem in a 3D way, and then, when it comes time to actually making the decision, is it pretty easy at that point?

Shane Parrish: Well, that’s a really interesting question because I think if you understand the problem, it’s really easy to know what to do, and one of the indications that you don’t understand the problem or you don’t understand the trade-offs, or you don’t understand what you’re optimizing for, you don’t understand the situation the way you want to is that you get stuck in this paralysis of information overload or seeking out information at the time of making a decision in the hopes that it’s just gonna satisfy you. That’s a good state to be in. You just have to be aware of it.

None of these states are good or bad by default. Sometimes they serve you and sometimes they don’t serve you, and your goal as a thoughtful practitioner of decision making is to understand, “When is this likely to serve me and when is this likely to hurt me and do I have to deviate and do I have to have a different process for this situation in particular?” If you’re picking toothpaste, it doesn’t really matter; the consequences of a bad decision are easily remedied, but if you’re making a consequential, irreversible decision, you wanna approach that problem differently. What you don’t wanna be doing is googling other people’s thinking. You don’t wanna be googling information because you’re gonna overvalue it, and when you overvalue it, you’re gonna take risks that you probably shouldn’t take, and, anything on the first page of Google is probably commonly known, so you’re not even getting an information advantage over other people. So, you have to think about all of those things when you’re making a decision. I know it sounds like a lot, but it becomes a bit of a habit after a while.

Brett McKay: So, whenever you make these decisions … one of my favorite Mungerisms is this idea that the goal in life is to try to be consistently not-stupid instead of trying to be very intelligent, ’cause a lot of people focus on making really, really brilliant decisions, but, oftentimes, in chasing that, they end up making really dumb decisions.

Shane Parrish: Yeah, think about it: most of the concepts that we learn to look at the world with are on a risk basis, so the tools we have to evaluate situations are based on risk. It’s like roulette, right? You know how many slots there are. You know the odds it’s gonna land on any particular slot, assuming a random wheel. But life isn’t really about risk. It’s more about uncertainty, and uncertainty, by its very nature, means we might not know all the possible outcomes, and if we don’t know all the possible outcomes, there’s no way we know the probability of each individual outcome. So we have this idea of what we see and what we think is likely to happen, but we don’t really know how accurate that view is, and one of the ways to make that view more accurate is to take the inversion of it: “What are the outcomes that I want to avoid, and what can I be doing now to avoid those outcomes?” And, if I can avoid those outcomes, now I’m more likely to get to the outcome I want, and I think working backwards is really hard for people to do.
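The roulette contrast is worth making exact, since it’s what separates risk from uncertainty. A quick sketch using European-wheel numbers (37 slots; the episode doesn’t specify a wheel, so that’s an assumption):

```python
from fractions import Fraction

SLOTS = 37                   # European wheel: numbers 0-36, equally likely
p_win = Fraction(1, SLOTS)   # probability a single-number bet hits
PAYOUT = 35                  # a single-number bet pays 35 to 1

# Expected value of a 1-unit bet: win PAYOUT with p_win, lose the unit otherwise.
ev = p_win * PAYOUT - (1 - p_win)
# ev == -1/37: under risk, the edge is knowable to the digit.
# Under uncertainty you can't even enumerate the slots, let alone price the bet.
```

That computability is the whole point: roulette is a risk problem because the outcome space is closed; most real decisions aren’t.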

And, if you think about meetings — we had this quote a while ago, which is, “Avoiding stupidity is easier than seeking brilliance.” I came up with that while I worked at the intelligence agency, and it was a really apt quote because I was in meetings all the time where people were trying to one-up each other in their brilliance and insightfulness and complicated view of the situation. But, often, the best decision, when you’re dealing with an uncertain environment, is, “Okay, what are the things that would be really bad? How do we eliminate those from happening?” If we can eliminate all the bad outcomes, well, we’re only left with good outcomes, and you can think about the problem forwards and backwards, and I think that gives you a much more holistic view of the situation, which leads to a better understanding, which leads to a better initial decision.

Brett McKay: I think one of the examples Munger gives of good heuristics — rules that will prevent you from doing stupid things, and if you just follow those, you’ll have a pretty good life — is the Ten Commandments from the Bible. If you can go-

Shane Parrish: Yeah, yeah.

Brett McKay: Not killing anybody, not having any envy, not committing adultery, not lying … if you avoid those things, the consequences that come with those things, your life is gonna be pretty good, and then everything else is just a cherry on the top after that.

Shane Parrish: Avoid leverage, financial leverage. Avoid alcohol and substance abuse, and if you think about it in a decision-making context, there are other things you can do to prime the environment, like: get a good night’s sleep. Take time to think about the problem. Don’t be rushed. When you look at sources of stupidity, or where we’re likely to be stupid, it’s often when we’re rushed, when we’re switching contexts really quickly, when we haven’t got a lot of sleep, when we have something important to do. I think it helps to just slow down and ask, “What are the basics? Let’s get the basics right. What do I control and what don’t I control?” To a large extent, you control how much sleep you get. To a large extent, you control whether you’re rushed or not, even if you work for an organization. You control a lot of your time, a lot more of your time than you think you do.

And, the higher up in an organization I got, one of the weird things I found is that I controlled less and less of my time, and I thought that was really weird, when I almost needed more and more control over my time, not less, because the decisions had more and more consequences. You’re expected to context-switch eight to ten times over the course of a day and make large decisions that affect a lot of people, and you’re not really given a lot of time to think about that. I think those are things that you wanna start thinking about. What are the variables that we can get right? And — probably a better way to look at it — what things encourage stupidity or encourage bad decisions?

Brett McKay: And avoid those.

Shane Parrish: Yeah, and then you’re better off than a lot of people just by doing that, right? You don’t have to be brilliant.

Brett McKay: Yeah, you don’t have to be brilliant. Just don’t be stupid. So, beyond taking a multidisciplinary approach to decisions, have you come across any tried and true tactics or checklists that you walk yourself through in making a decision?

Shane Parrish: I think Munger came up with this. Most people have never even heard of it, but he came up with a very simple framework, which I call the Munger two-step, which is: look at the situation. Do I understand it? If I don’t understand it, that’s one path, and on that path, you wanna go seek out somebody that does understand it, ideally. If I do understand it, I know what variables matter and I know how those variables interact. Then the second step is, “How might I be fooling myself? What are the ways that I might be tricking myself into thinking that I’m right about this?” I think that’s a very simple heuristic framework that people can start with, and one of the mistakes I see people make is not recognizing, “I don’t know what I’m doing,” so it’s super important that you recognize it. Again, tying your ego to outcomes, and not to you personally being right, enables you to see the world much more clearly than other people.

And so, when you’re able to go to somebody else, the mistake that most of us make — maybe we have to make a decision in an area that we’re not an expert in — is that we ask people what they would do. We go to the auto dealership and we ask them, “What should we fix on our car?” And, of course, they have their own incentives, and we don’t learn anything when they tell us, or we go to somebody and we say, “How should I pick a doctor?” And we go to a doctor friend of ours, and we ask them, “What doctor would you pick?” What we should ask them is, “What variables would you consider relevant, when you pick a doctor?” Because now we’re actually learning. Now, the next time I have to pick a doctor, I have an idea of what those variables are, which is better than somebody just telling me what to do, but we’re so busy and we’re so starving for meaning in our life that we just … sometimes we coast.

We ask people, “Who would you pick as a doctor?” And we’re not actually taking advantage of an opportunity to learn. It might take five extra minutes to learn something, but you’re gonna learn something that applies over the course of your life. That’s a great example of something that might be first order negative, second order positive.

Brett McKay: So, I imagine, on that second part of the Munger two-step — figuring out how you could be fooling yourself — having a list of biases that exist out there that we know of and just walking through it check by check, saying, “Is this bias in effect here? Is this bias in effect here?” And then, once you answer those questions, you get a better idea of whether you’re fooling yourself or not?

Shane Parrish: I have a bit of mixed feelings on that. I think that the more intelligent you are, the better the story you’re gonna tell yourself about why that bias doesn’t apply in this particular situation. I think biases are great at explaining why our minds trick us. I think that we need to structure things more physically in our environment or with a process, structured thinking to sort of account for biases, whether we have reminders about what to do, whether we have this informal process that we adjust based on the type of decision making that we’re doing, and I think that we want to incorporate that, and we also want to incorporate other people’s views that are very diverse and different from us, and I think that that’s gonna allow us to get out of this.

Really, the ultimate one is just: attach your ego to outcomes, not your ego to your opinion or your idea being the one that’s adopted, and that’s gonna enable you to just see clearer what’s happening in the world, and I think, ultimately, that’s what we wanna do. We wanna understand the situation. It was [Wittgenstein] who said, “To understand the problem is to know what to do.”

Brett McKay: And I imagine, too, besides detaching your ego from your decision, also, detaching yourself from results might help, ’cause sometimes you can make a good decision, the right decision, but the results are bad because of factors that you had no control over.

Shane Parrish: Yeah, sometimes. I think a lot of times, you have to play a repeatable game, and that repeatable game is, “How do I calibrate? Is my judgment that I made the right decision correct?” And you have to be self-aware enough to be like, “Oh, I consistently think I’m right, but I’m getting bad outcomes.” Well, there’s something wrong, either with your view of the world or with how the decision’s being implemented. There’s a flag there that you need to look at. It doesn’t mean that you’re wrong. It doesn’t mean that you made a bad decision, but it does mean that there’s something there for you to look at.

Too often, it’s really easy just to convince ourselves that we did the best we could. We made the best decision, or, given the information we had, that was all I would decide. But I used to ask people at the intelligence agency what information they had. What information did you use to make that decision? Show me, and people would just come up with stuff, and they would come up with it post-hoc, and that’s how we started creating decision journals, which is like, “No, you’re gonna record this at the time you make the decision, and we’re gonna see: this is how I can judge your judgment. This is how I can be comfortable trusting you to make decisions. I need to see the way that you think. I need to see the variables that you consider relevant, and, together, we’re gonna own your judgment, and if you’re consistently missing something, it’s my job as your boss or peer to point that out so that we can come to better decisions together, and if we have to structurally process that, maybe your decision journal includes a flag for, ‘Hey, are you considering a large enough sample size? Because you have a bias towards small sample sizes.’”

And just that alone, you have to fill that in. It’s not a checklist. It’s something you have to fill out, you have to explain, and you have to do it in your own handwriting, and we were able to pretty dramatically raise, I think, the quality of the decisions we made.
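A decision journal entry like the one Shane describes can be sketched as a simple record. The fields below are inferred from his description (situation, relevant variables, expected outcome, a later review) — this is not Farnam Street’s actual template, and the sample values are invented:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionEntry:
    """One journal entry, recorded at the moment the decision is made."""
    decided_on: date
    decision: str
    situation: str           # the territory as you see it right now
    variables: list          # the variables you consider relevant
    expected_outcome: str
    confidence: float        # your honest probability (0.0-1.0) of being right
    review_notes: str = ""   # filled in later, once the outcome is known

entry = DecisionEntry(
    decided_on=date(2021, 9, 30),
    decision="Launch the new course in Q1",
    situation="Email list growing, but engagement is flat",
    variables=["open rate", "churn", "team capacity"],
    expected_outcome="500 sign-ups in the first month",
    confidence=0.6,
)
```

Recording the reasoning before the outcome is known is what makes the later review honest: there’s nothing to reconstruct post-hoc.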

Brett McKay: Is there some place people can go to learn about those decision journals, how to do that?

Shane Parrish: Yeah, if you just google decision journal, or go to for decision journal, we have a template online that we use. We’ll be updating that soon. We’re working with the Special Forces to come up with a slightly different version of it right now.

Brett McKay: That’s awesome. Well, Shane, this has been a great conversation, and there’s so much more we could talk about. We could probably devote entire episodes to individual mental models. So, people can go to fs.blog to find out more about what you do?

Shane Parrish: Yeah, or, @farnamstreet on Twitter.

Brett McKay: Fantastic. Well, Shane Parrish, thanks so much for your time. It’s been an absolute pleasure.

Shane Parrish: Thanks, man. I really appreciated the conversation.

Brett McKay: Well, that wraps up another edition of the AOM podcast. Head over to where you can find thousands of thorough, well-researched articles on personal finances, style, life, social skills. You name it, it’s there. If you haven’t done so already, I’d appreciate it if you take one minute to give us a review on iTunes or Stitcher: helps out a lot, and if you’ve done that already, thank you. Please consider sharing the show with a friend or family member you think would get something out of it. Until next time, this is Brett McKay encouraging you to not only listen to the AOM podcast, but to put what you’ve learned into action.
