

Last updated: April 4, 2022

Podcast #649: Thinking for Yourself in an Age of Outsourced Expertise

In an age where endless streams of data, options, and information are available, it can feel like every choice — from what TV show to watch to how to invest our money — ought to be optimized, and yet making any choice, much less an ideal one, can seem completely overwhelming. How do we figure out what to do? Much of the time, we don’t. Instead, we outsource our thinking to technology, experts, and set protocols. This, my guest today says, is where some real problems start.

His name is Dr. Vikram Mansharamani and he’s a Harvard lecturer who studies future trends and risks, as well as the author of Think for Yourself: Restoring Common Sense in an Age of Experts and Artificial Intelligence. Today on the show, Vikram explains how our increasingly complex lives have led us to increasingly rely on algorithms, specialists, and checklists to make decisions, even though experts are best suited to untangling complications rather than complexities. We then discuss the issues that can therefore arise in relying on expert advice, including the siloing of information and the application of misdirected focus. Once we diagnose the problem (and how the problem can, for one thing, muddy medical diagnoses), we turn to the solution, and how we can harness the good that technology and experts can provide, without undermining our ability to still think for ourselves, by doing things like asking experts about their incentives, knowing our own goals, triangulating opinions, and crossing silos. We end our conversation with how the serendipitous discovery of perspectives that can come from flipping through a magazine and browsing a bookstore can be part of restoring self-reliant thinking in the 21st century.

If reading this in an email, click the title of the post to listen to the show.

Show Highlights

  • The problems with relying on experts and technology to make our choices for us 
  • The rise of complexity, and how that changes our world (and the difference between complex and complicated) 
  • Maximizing vs. understanding 
  • Can AI really solve all the problems it promises to?
  • How software and optimization can instill FOMO about our choices 
  • The meaningless data of rankings and reviews 
  • Choice paralysis 
  • When experts should step back and view the world outside their silo 
  • When a helpful checklist can lead you astray 
  • How to manage your focus 
  • How a prostate cancer screening test ended up causing more harm than good 
  • What is the Peter Principle?
  • How to be more self-reliant in the 21st century 
  • The serendipity of flipping through a magazine or browsing a bookstore 

Resources/People/Articles Mentioned in Podcast


Connect With Vikram

Vikram’s website

Vikram on Twitter

Listen to the Podcast! (And don’t forget to leave us a review!)

Listen to the episode on a separate page.

Download this episode.

Subscribe to the podcast in the media player of your choice.

Listen ad-free on Stitcher Premium; get a free month when you use code “manliness” at checkout.

Podcast Sponsors

Click here to see a full list of our podcast sponsors.

Read the Transcript

If you appreciate the full text transcript, please consider donating to AoM. It will help cover the costs of transcription and allow others to enjoy it. Thank you!

Brett McKay: Brett McKay here, and welcome to another edition of The Art of Manliness Podcast. In an age where endless streams of data, options, and information are available, it can feel like every choice, what TV show to watch, how to invest your money, ought to be optimized, and yet making each choice, much less an ideal one, can seem completely overwhelming. So how do we figure out what to do? Well, much of the time, we don’t. Instead we outsource our thinking to technology, experts, and set protocols. This, my guest today says, is where some real problems start. His name is Dr. Vikram Mansharamani, and he’s a Harvard lecturer who studies future trends and risks, as well as the author of Think for Yourself: Restoring Common Sense in an Age of Experts and Artificial Intelligence. Today on the show, Vikram explains how our increasingly complex lives have led us to increasingly rely on algorithms, specialists, and checklists to make decisions, even though experts are best suited to untangling complications rather than complexities; we talk about the difference between the two. We then discuss the issues that can therefore arise in relying on expert advice, including the siloing of information and the application of misdirected focus.

Once we diagnose the problem, we then turn to the solution and how we can harness the good that technology and experts can provide without undermining our ability to still think for ourselves, by doing things like asking experts about their incentives, knowing our own goals, triangulating opinions, and crossing silos. And we end our conversation with how the serendipitous discovery of perspectives that can come from flipping through a paper magazine and browsing a bookstore can be part of restoring self-reliant thinking in the 21st century. After the show’s over, check out our show notes at aom.is/thinkforyourself.

Vikram, welcome to the show.

Vikram Mansharamani: Thanks for having me, Brett. Thrilled to be with you.

Brett McKay: Well, you just came out with a new book called Think for Yourself: Restoring Common Sense in an Age of Experts and Artificial Intelligence, and in this book, you’re making the case that in the past few decades, lay people, just regular folks, have been increasingly outsourcing their thinking to experts and even technology. I’m sure we’ll dig into some examples in depth, but just off the top, to give us a big picture view, what are some examples you’ve seen of people outsourcing their thinking to experts and technology?

Vikram Mansharamani: Sure. Well, Brett, I think it starts off with the idea that we’re drowning in information and data, and the result has been more and more decisions being put in front of us to be made. And so we have too many choices, and we think because there are more choices, that there is a perfect answer, that there is an optimal decision to be made. We also know we can’t make that by ourselves, that we don’t have enough information to do so, or frankly, we have too much information and we need some help, so we turn to expertise. And expertise can be embodied in the form of technology, human beings, i.e., experts, or even checklists and rules. Just think about the GPS device that many of us use when we navigate our way around town. A lot of us have stopped thinking about maps, we don’t actually know sometimes where we are, but we’ll listen to this little voice that tells us make a left up here in 300 yards, stay in the right lane, merge left, etcetera. So that’s a great example where we’ve stopped thinking about the dynamics of geography and our path within it, and we’ve resorted to just outsourcing our thinking to the GPS device.

Brett McKay: Well, another example of outsourcing our thinking to technology is algorithms. You go to Amazon and instead of thinking, “Oh, do I really like this book?” You just rely on the algorithms that Amazon says, “Yeah, you’re gonna like this book so shut up and buy the book.”

Vikram Mansharamani: That’s right. Think about it, you used to go to a bookstore, at least I used to. I don’t know how old the listeners are, but I used to go to bookstores where I would browse the shelves. I went looking for a topic, possibly a specific book, and I would find adjacent titles, other things, and it was this somewhat unorganized search process that often proved fortuitous. Nowadays, you end up in these little echo chambers. You expressed interest in a book on baseball strategy, Amazon’s gonna recommend a lot of baseball strategy books to you over time, and they’re gonna get more and more specific, because that’s more and more likely based on your revealed preference of having purchased that book or that topic. And so yeah, I think those algorithms are conscientiously managing where we pay attention.

Brett McKay: And what sorts of problems can pop up when we over-rely on experts and technology?

Vikram Mansharamani: Well, think about it this way. Experts and technologies live in silos, and we live in the real world where there’s a context outside of those silos, but when you rely only on the siloed information, you’re not seeing the big picture, you’re not seeing the context and so what you get is information that’s optimized in a particular domain, but may not be optimized for you or your overall context. So I think the biggest problem is the silo effect, if you will.

Brett McKay: And when did you start noticing that this outsourcing of thinking was starting to cause problems?

Vikram Mansharamani: Well, it really has to do with my first book. My first book was about financial bubbles, and what I realized was that economists and those who were very narrow and focused, i.e., those who were deep and specialized, sometimes missed what was deemed very obvious to the lay person. And what I realized was that a multi-lens, multidisciplinary view could help you identify dynamics that a single perspective might miss. So what I realized was that every perspective is limited, biased, and incomplete. Combine that with the fact that we often outsource to people who have really deep focus and expertise, and what you realize is you’re outsourcing your thinking to incomplete perspectives. So why do that? Or if you’re gonna do that, maybe consult multiple perspectives so you can triangulate, comparing the insight from one expert with that of another expert, and another, and another, and really triangulate to get some sense of what the problem really is about.

Brett McKay: So you mentioned earlier, one of the reasons we start to rely more on experts is that we’re just flooded, inundated with information, so many choices. And not only that, I think everyone’s experienced information overload, which is why they go to Google and they search for “best whatever for my kid.” We’ll talk about optimization here in a bit, but besides being flooded with information, things have gotten more complex, like the information we have to work with is much more complex. What does that look like? Flesh that out for us a little bit. What does the increasing complexity look like and how is it pushing us towards experts and technology?

Vikram Mansharamani: Yeah. I think there’s a little nuance here, Brett, that I’d love to make sure I clarify for the listeners, and that has to do with the terminology. So let me first describe what I think are a couple of different types of problems and environments that we may be facing. The first is a simple environment or a simple problem, and that’s one where there is a clear cause and effect. This is a problem that could be solved with automation very quickly, with software or a spreadsheet. Think about how to calculate the interest on a credit card balance: there’s a spreadsheet, it says, “Here’s your average balance, here’s the interest rate, there’s your interest payment.” Alternatively, you can get to something that I would call complicated. Complicated environments or problems are ones where there is in fact a clear cause and effect, but it takes an expert to help you identify it because it’s layered in multiple different causes and different effects. So think about the fact that your car didn’t start this morning. Maybe you’re more astute on this matter than I would be, but I would likely seek assistance, especially as these cars have become more technologically sophisticated. Is it the starter, is it the alternator, is it the ignition, is it the battery, is it the… What is the problem? I don’t know.

There is a problem, it didn’t start. It takes an expert mechanic, someone who understands and can disentangle all those causes and effects, to get to it. This is the domain, what I call complicated, that experts really thrive within. The minute you cross the threshold from complicated into complex, what we have are emergent phenomena. This is where causes and effects are not clearly linked or identifiable, and it’s because there are just too many moving parts. This is the domain of social dynamics, when you have lots of individuals thinking, for lots of reasons, different thoughts, and interacting to produce behaviors that emerge, so it’s an emergent phenomenon. It’s in this domain that our instincts lead us towards experts who promise salvation, who can solve these problems, but these are not problems that are solvable. These are problems that are understandable, that we can try to get our arms around, but there are no solutions. And so when you employ an expert who’s skilled at helping us navigate a complicated dynamic in a complex situation, what you find is you’ve brought a man with a hammer to a situation where there may or may not be a nail, but he’s gonna find that nail. So that’s the domain of complexity, where I would suggest it really does make sense to use multiple experts or multiple perspectives to really get your arms around the type of problem you’re facing.

Brett McKay: Well, one example of how an expert, even though they’re an expert and very knowledgeable about their area, still can’t solve the problem of complexity, and this is your domain, is financial advisors. You talk about that in the book, and the track record for financial advisors isn’t great.

Vikram Mansharamani: Yeah. Look, I think with the financial advice community, it’s hard to really gauge whether it’s great or not, because even the assumption that financial advice has usually been suboptimal is measured against some optimization logic of, “Oh, we wanna have the maximum blank.” Well, what if instead, a true financial advisor understood what their client’s needs were and increased the probability of achieving those, rather than trying to maximize some theoretical objective such as, “Oh, we just wanna produce the max return”? Well, the max return comes with some risk. So what if instead the person said, “I wanna make sure I have enough money for my kid’s tuition when he goes to college in three years’ time, and it’s this much money.” Great, then we’re gonna increase the probability of achieving that number, rather than just objectively trying to maximize in this ambiguous way. So it’s not clear to me that financial advisors are unproductive or useless, I think they’re probably very productive and very useful. The key is really taking the time for financial advisors to step out of their own little silo of “maximize returns, maximize returns” and understand client needs.

Brett McKay: And you’re also seeing, I’d say in the past 10 years, companies pop up promising that they can use artificial intelligence to solve these complex problems. The idea is that you can get these supercomputers thinking about things and they can see all the different possibilities and these emergent properties. What do you think? Is that actually gonna do anything? Can that help us solve complex problems using technology?

Vikram Mansharamani: Yeah. Look, technology has forever, as far as I can tell, promised us salvation, a utopia where everything is knowable and everything is optimizable. The problem is that technology, at least so far, has been designed by humans, who have limitations and biases and other issues, and those get embodied in the very technologies that are produced by humans. So when you get to the domain of artificial intelligence or machine learning, where the systems are trying to learn on their own, the possibilities really could be endless, but we’re not anywhere near the concept of artificial general intelligence. A computer or a piece of software that can actually think and understand common sense dynamics doesn’t seem anywhere imminent, to me at least. So that’s one dynamic, I would say. But think about even something that’s becoming increasingly popular as a topic, like autonomous driving.

If a car is driving, let’s say the car is about to hit… This is a common problem in the discussion of fairness in some of the decision-making literature, but say the car is going to hit either a person on the right side of the road who has a baby in a carriage and is pushing it down the sidewalk, or two old people on the left side of the road who are walking with their canes down the street, and that’s it, it has to choose one of those two. Which one should it hit? Those types of ethical problems that emerge in the software are things that humans have grappled with, but the software doesn’t know how to grapple with them; the software is gonna deal with it however it’s been programmed to. And so you have these ethical considerations that emerge. The truth is, software embodies values, algorithms embody the values of the people who design them, and so there you go. That’s the problem.

Brett McKay: That’s the problem. Alright, so besides increasing information, besides increasing complexity, you said another reason that we’re starting to turn more towards experts to help us solve our problems is this desire to optimize everything. What does that look like, and why do you think we’re trying to find the best of everything?

Vikram Mansharamani: Yeah. Well, look, think about… I’ll give you a great example that I did mention in the book, which is my wife and I would sit down after a long week and we’ll plop ourselves down on the couch, and we’ll say, “Alright, let’s just watch something,” and maybe she’s had a week where she’s in the mood at this point for an action movie. Maybe I was thinking more, “You know what? It was really a heavy week, I want a comedy.” We’re convinced because there’s… I don’t know, a million movies on demand available between XFINITY, Hulu, Netflix. You go down Apple TV, what have you, Amazon Prime. There’s gotta be a movie that can thread that needle perfectly, right? There’s gotta be.

Brett McKay: I think so.

Vikram Mansharamani: There’s so many movies, why wouldn’t there be? Of course there is, and our mood is perfectly suited to that exact movie, but finding it is a nontrivial task and the truth is, we probably won’t. And so what ends up happening is, because there are so many choices out there in the world, we get effectively paralyzed, because we know that an optimal, perfect decision probably exists, but we can’t find it. How do we find that movie? How do I know? Oh my God, you would think I’d have to consult Rotten Tomatoes, I’d have to consult different movie critics, I’d have to find this, I’d have to find the genres. Look, the stakes are not high enough to do all that, but we’re left with this low-grade anxiety, and the result is, we’re probably gonna be unsatisfied because of all that choice. And so rather than all this choice empowering us to find whatever we want, we end up with this low-grade regret: “Ugh, God, that movie wasn’t perfect; there was probably something better.”

It’s this fear of missing out, fear of missing out on the perfect choice. So we often hear about this FOMO in a lot of walks of life, but the fear of missing out on the perfect choice exists in many domains and so the result is, well, let me go with the algorithm suggestion. Netflix thinks based on my prior watching that I’ll like this, let me try it. Or based on these decisions I’ve made in the past, the expert believes, my financial advisor thinks that I’m very risk-averse, they’re not gonna put me in Zoom stock because it’s volatile even though it went up, oh but it was volatile, what have you. And so the fear of missing out on that perfect decision that is elusively promised constantly by the explosion in choice and opportunities really leaves us with this tendency to run headlong into the arms of experts and technologies.

Brett McKay: Well, yeah, it sounds like what you’re saying is that technology actually encourages us to think that way, because there’s data with everything, so you can see popularity, but that data could be meaningless. I’ve had instances where I’ve tried to get whatever some company said was the best, then I get it and I’m like, “This wasn’t that great.”

Vikram Mansharamani: This is the problem, this is FOMO invading all walks of life, effectively. Think about this, Brett. I went and got a drink at Starbucks, and I was kind of feeling like a coconut milk latte, but they had this picture up there that says, “Real popular.” The app suggested dollars off if I got this other drink, this mocha something. I’m trying to manage my calories, I’m worried about fat, I’m trying to optimize the caffeine to, I don’t know, a caffeine-to-carb ratio, some weird thing that someone somewhere said is important, and so I get it and yeah, it’s okay. But there was a perfection promised at one point. And by the way, this runs headlong into conflict with economic thinking. Economic thinking has often said more choice is always better. It can’t be worse.

I can ask you, Brett, do you like an apple or an orange? You said, “I like an apple.” Great. Do you like an apple, an orange, or a pear? Well, now you either like the pear or you can still like the apple. The orange is never gonna be better. What we find with humans is, you say apple, or orange, or pear, and then you introduce a banana and they say, “I like the orange.” Wait, hold on a second. Why do you like the orange now? You liked the apple more than the orange. The apple’s still there. Now I introduce a banana and now you like the orange? What happened? Well, it turns out choice is confusing and we drown in these sorts of decisions. We get paralyzed. You hear about analysis paralysis? It’s choice paralysis. There’s been wonderful research that shows after a certain point, more options actually paralyze rather than empower, and that’s what we’re finding.

Brett McKay: Right, and so that’s why we decide to just go to Google: Google, tell me what the best thing is to buy, or Netflix, tell me the best show to watch, ’cause I don’t wanna make the choice.

Vikram Mansharamani: Yeah, it’s easier. And by the way, in some walks of life, I would tell you, you shouldn’t think for yourself, you should just blindly follow, and that’s all right. When the stakes are low, why do I need to try to optimize the movie I’m gonna watch with my wife on a Friday evening? Why should I do that? Why not just… It’s an hour and a half to two hour commitment, and in fact, if I didn’t think of it as something to optimize, but instead as something to try to satisfy us, I’d probably enjoy it more.

Brett McKay: Right, okay. But then when the stakes are high, you don’t wanna just rely on the expert or the technology.

Vikram Mansharamani: That’s right.

Brett McKay: Alright.

Vikram Mansharamani: No, that’s exactly right. You don’t wanna rely blindly when the stakes are high. Think of it this way: if you had to make a medical decision about a procedure that was somewhat risky, that maybe had some side effects, that balancing act is gonna be a little more difficult, and that’s one where I’d encourage you to think for yourself.

Brett McKay: Well, something I thought was interesting too, you argue in the book that this information overload, this information complexity, this excess of choices, is not only affecting lay people, but it’s also causing problems for the experts and the technology we rely on. How so? What’s going on there?

Vikram Mansharamani: Well, think about the experts, they live in silos, and they may, in fact, because they live in silos, not have an appreciation for where their work is useful or not useful. And so, I encourage the experts to also take a step back and see the big picture. One example I’ve used in the book, and I often talk about is, imagine Brett, if you went to your cardiologist and she says to you, “Look, you’re doing great, your health is fabulous. However, I’m noticing your cholesterol levels rise a little bit, it’s a little concerning. What I really wanna do is put you on a statin to lower your cholesterol levels. By the way, Brett, don’t worry about it. Statins are completely safe and proven to work. I myself as a cardiologist, take a statin, most of my medical school peers are on statins. In fact, every doctor in this practice is taking a statin. We really recommend you take a statin, it works.” And so you go ahead and take a statin. Later that year, you come back, you get tested, and lo and behold, your cholesterol levels have fallen. Fabulous, right? She did her job, you can claim victory. And we know with pretty serious good research that high levels of blood cholesterol are associated with higher risk of heart attack and she just lowered your blood level of cholesterol through a statin. Great.

So that’s a good thing. However, now you walk down the hall and you go see an endocrinologist and he tells you, “Brett, you know what, you’re doing great, health is looking good, except I’m seeing signs of pre-diabetes. It looks like you’re developing insulin resistance, and in fact, I think we’re gonna have to address this because something’s not right, there’s a warning here, and I’m worried because diabetes comes with an elevated risk of heart attack.” And so now we’ve crossed the silo away from the cardiologist to the endocrinologist, and we’re seeing the exact opposite impact. Why is that? Because the way a statin works is it interferes with enzymes that impact insulin production, etcetera, and so it interferes with the system. The fact that lower cholesterol is good for you is true, all else equal, but all else wasn’t equal. You took a statin, a foreign object that interfered with other things. And so there’s an example where crossing silos may result in a different insight than living within a silo, so I think it’s useful for experts to look beyond their own silos as well.

Brett McKay: Well, besides experts and technology, you mentioned something else that can cause us to not really think for ourselves, and that’s rules or procedures, or sort of the bureaucracy. Any examples of that causing us to be blind to different options?

Vikram Mansharamani: Yeah, look, sometimes checklists are useful. They’ve proven extraordinarily useful in reducing surgical error, and they’ve proven extraordinarily useful in aviation, where pilots will go down a checklist to double check everything, etcetera. It’s a means to minimize the complacency that comes in with regular repeated actions, and that complacency sometimes increases the error rate. So use the checklist and you reduce the error rate. However, what happens when we blindly rely on checklists or protocols is we stop thinking. And that’s a problem. There’s a story in the book where I talk about a checklist that was used to determine whether a patient should be taken off of a blood thinner, and the checklist said, yes, he should be taken off the blood thinner, and so this patient stopped taking the blood thinner and later had a stroke. Well, it turns out one thing that wasn’t part of the checklist was family history of strokes, and so this doctor took him off the blood thinner, saying, “Oh, the checklist found no reason to stay on the blood thinner.” However, this person’s father had had a stroke at the very age that he was. And so, if you use a little common sense and don’t rely blindly on the checklist, you might have a different recommended course of action.

So there’s an example, again, from the medical world. We’re sticking with medical examples, but it’s also true even within aviation. Captain Sully Sullenberger, the famous US Airways pilot who landed on the Hudson: there was a checklist in the plane for what to do when you lose thrust in both engines. There was a checklist for that. However, that checklist was designed to be followed if you were at 35,000 feet cruising at 600 miles an hour. He was at 3,000 or 4,000 feet, and hadn’t yet reached an altitude that allowed him to glide very far. And so he put the checklist aside and he thought for himself, and the result was a good outcome. Well, better than it would have been otherwise, I guess, is one way to think about it. So, yeah, there’s a couple of examples from checklists.

Brett McKay: Well, so it sounds like what all these things, experts, technology, checklists, procedures, what they all do, one of the things they do, is direct our focus to a specific area, causing us not to focus on other stuff.

Vikram Mansharamani: That’s right. Yeah, no, that’s exactly right. It’s about focus management. In fact, one of the things I often suggest is that we need to be mindful about where we’re focusing because the experts and technologies are spotlights and they’re shining a spotlight for us in terms of where to look or what to pay attention to, when in reality, the insight may exist in the shadows. We talked about the cardiologist and cholesterol, but I have to ask, why would you care about cholesterol? Would you care about cholesterol, Brett?

Brett McKay: No.

Vikram Mansharamani: I don’t think you should care about cholesterol. In fact, I don’t know why I should care about cholesterol. You care about cholesterol because it might impact your heart attack risk. Well, shouldn’t you just care about heart attack risk rather than cholesterol? Why are we focused on cholesterol?

Brett McKay: Well, another example from the medical world, ’cause I guess that’s an area that’s complex, with a lot of information, one that affects men, is the prostate-specific antigen test. It’s this idea that you could detect prostate cancer really early by taking this test. You’d think that’s a good thing, but it actually ended up causing a bunch of problems.

Vikram Mansharamani: Yeah, it’s actually a tragic story, I think on many levels, for lots of men. So Dr. Ablin designed this test, he was a University of Arizona professor, and effectively what happened was, this was a test to manage… Originally, I think the intent was to manage people that had been, because of symptoms, identified as having prostate cancer. And so you use the PSA test to see in their blood levels, perhaps how that cancer was progressing and what to do, but the way you found out if a person had prostate cancer was they showed up with symptoms or there was some identifiable physical means to say, “Okay, there’s a problem here.”

Well, what ends up happening is big business, big pharma, or not pharma but big medicine, if you will, takes over and they start using this test as an identifier of prostate cancer. Well, it turns out most men will die with prostate cancer, but very few men will die because of prostate cancer. And it turns out that with age, there’s a greater preponderance of prostate cancer. So the PSA test gets hijacked, and more people start relying on the PSA test as a screening tool rather than a management tool, and suddenly this becomes the focus. Urologists around the country, around the world, start saying, “Let’s get a PSA test score to see whether there’s a tendency or an issue of potential prostate cancer.” And then they end up looking more, and when they look more, they find more, and when they find more, they treat more. And the result was, at one point, Dr. Ablin, who designed the test, the scientist who came up with it, ended up writing a New York Times op-ed. It was the most read New York Times op-ed that year, called “The Great Prostate Mistake” or something like that, where he said, “Listen, I’m sorry, this test was not designed for use in this way. The result is, millions of men have undergone treatments for an issue that may never have bothered them, an issue that may never have actually produced any identifiable impact on their life.”

In fact, he then had a book-length treatment called The Great Prostate Hoax, where he talked about how… And he starts it off with a demand for an apology, saying, “There’s been millions of men who are probably incontinent or impotent because of procedures that might have been unnecessary, because of overreliance on this one indicator.”

Brett McKay: Right. And so that’s an example. Again, a misdirected focus. Yeah, it just made you blind to the bigger picture and just made you hyperfocused on one thing.

Vikram Mansharamani: That’s right.

Brett McKay: Well, another area you talk about, and this is your domain of business and finance, is where misdirected focus, where you’re siloed and just paying attention to specific things, can actually hurt businesses in the way they promote people, and here you talk about the Peter Principle.

Vikram Mansharamani: Oh! Thanks for asking, Brett. This is one of, I think it’s a genuinely comical manifestation of this problem of misdirected focus.

Brett McKay: Well, so for those who aren’t familiar, what is the Peter Principle and how does misdirected focus lead to the Peter Principle?

Vikram Mansharamani: Yeah, so the Peter Principle… There was this book in the 1960s, I think, written by Laurence Peter, and there was somebody else as well, but anyway, it’s called The Peter Principle. And what he found was, he went around and he was just frustrated by large organizations and bureaucracies, and what he did was he said, “Well, why are people getting promoted? Why is this person in the job? Why is this person staying in the job?” And he did some research and looked into it, and what he found, really, at some level, caused me to chuckle when I first read it, and then when you think about it, it’s quite profound. He said, “Well, it turns out people get promoted by doing well in their current job, and the result is, you keep getting promoted if you do well.” Seemingly logical, right? The next question he asks is, when do people stop getting promoted? They stop getting promoted when they’re doing poorly in their job, and he calls that reaching their “level of incompetence.” And so the result is, eventually, over time, an organization is filled with people who have reached their level of incompetence, and therefore nothing gets done.

And so, you can laugh about that ’cause you’re like, “Oh, my god, obviously, if this person is really great at customer service, they’re gonna get promoted to run the customer service team.” Well, that’s a different skill, managing people rather than managing customers. And if that person’s really good at managing that group of people, they may get promoted to managing a bigger, different operation, etcetera, and they’ll get migrated up. And so, the misdirected focus that I highlight is that the Peter Principle suggests people get promoted based on how they’re doing in their current job, when in fact, you should really look to promote them based on how they might do in their next job, not the job they’re in. You may find an underperformer in your business who’s at a particular level, who, once promoted, may excel. Likewise, you can find someone who’s doing really well in their current position who, if you promote them, will really struggle.

And so, the focus on how someone’s doing in their current job to determine whether they will do well in their next job really doesn’t actually make a lot of sense. I mean, it’s a rewarding mechanism, but it’s not a mechanism that actually lines up people with the skills they need for the job they’re being asked to do.

Brett McKay: Gotcha. Okay, so in that case, if you’re working in a corporation where you determine promotions, don’t just look at how well someone is performing now; look at the bigger picture of that person and see whether they would do better where they’re at right now or in a higher position.

Vikram Mansharamani: Yeah! Look, yeah. And Brett, Wayne Gretzky, I don’t actually know if it’s Wayne Gretzky or Wayne Gretzky’s father, there’s been debates on this, but there was a quote that came out of the Gretzky family, which was, “One should skate to where the puck is going, not to where the puck is.” So, evaluating people by how they’re doing in their current role is looking at where the puck is, we wanna know where the puck is going. If I promote Brett to this other job, will he do well? That is independent of how he’s doing in his current job.

Brett McKay: So we’ve talked about the problem: we have all these experts, technology, rules, and procedures that direct our attention, sometimes to our detriment. Let’s talk about how we can overcome that and be a little bit more self-reliant in the 21st century, and let’s talk about that managing of focus. It seems like the first step is just wresting control of your focus from experts and technology, not completely outsourcing that thinking to them. But how do you do that when you have all these things, these algorithms, these experts and books and TV, telling you, “Here’s what you need to do”? How do you wrest control and start managing your attention for yourself?

Vikram Mansharamani: Yeah, it’s hard to do, Brett. Let’s be honest. So first of all, it takes effort, but let me actually clarify one thing that I wanna make sure it comes across here. I am by no means suggesting we shouldn’t listen to experts. I am not bashing experts, what I’m suggesting is, we have, for far too long, bounced like a ping pong ball between complete deferral to experts, which I think is problematic, that’s why we don’t think for ourselves when we blindly outsource. But we’ve also bounced to the other extreme, which is complete dismissal of experts, which I also think is wrong. What we need to do is keep experts in their spot, in their place. We are the main actors. Experts are supporting actors. So, we can take their insight… In fact, in the book, I say, “Keep experts on tap, not on top.” For that reason, I think there’s a role for experts and we wanna rely on them, and we wanna tap into them, and we wanna get insight from them, and extract value from them without completely blindly outsourcing to them.

Now, one exception. In fact, again, in the book, I mention an example of a Stanford University professor, Baba Shiv, who, when he and his wife received a cancer diagnosis, decided they were gonna take the back seat, that they were gonna do what the experts told them to do, blindly. Now, they realized that it was emotional, and so they mindfully decided to give up control. So, one, they were being mindful, and number two, they then spent more time figuring out who they would give up their control to, so they were mindful of who they outsourced to, and they were mindful of the very act of outsourcing. So, I’m okay with people outsourcing their thinking, I just want you to do it mindfully and intentionally rather than subconsciously or just reflexively.

Brett McKay: Gotcha. So it sounds like, okay, if you make that decision to outsource your thinking and rely on experts, which you should sometimes, I guess one of the things you do is maybe start asking: what is this person missing because of their expertise, what’s not even in their silo?

Vikram Mansharamani: Bingo. That’s exactly right. Ask questions of where the information is relevant and where it might not be relevant, or what the insight is based on. I know it’s hard to ask experts, you feel like there’s a status dynamic, etcetera, but when you’re interfacing with an expert, I think it’s eminently reasonable to ask questions about how that expert came to the conclusion that they’ve come to, why they’re recommending it to you and how it applies to your specific context and your specific problem or objective. So I think that’s a very reasonable conversation to have, and an expert should be willing to guide you to understand why they’re coming to this conclusion.

Brett McKay: Right, so that even involves asking about their incentives. That could be uncomfortable. It’s like, “Well, do you make money if you tell me to do this thing?” You gotta know that.

Vikram Mansharamani: Absolutely! It’s uncomfortable, but worth doing. There’s a quote in the book, I think it’s from Warren Buffett: “Don’t ever ask a barber if you need a haircut.”

Brett McKay: Right.

Vikram Mansharamani: This sort of logic, I think that captures it, right? It sort of gets at it.

Brett McKay: Well, besides, okay, being more mindful of when you’re ceding control, and also being mindful of the experts’ limitations or the technology’s limitations, you also say another important part of being self-reliant in the 21st century is just actually knowing what you’re trying to do, what your goals are.

Vikram Mansharamani: Yeah.

Brett McKay: And I guess that seems very basic, but do people just not think about that? Why do you think people don’t think about what their actual goals are when they decide to cede control over to an expert or technology?

Vikram Mansharamani: Well, it’s not that you don’t understand your goal, it’s just that you let your goal be subservient to the expert’s objectives, right? So ultimately, think of the cardiologist example. The cardiologist is an expert in heart health; you are worried about your wellness, your longevity, and your risk of heart attack is part of that. Ultimately, we can even ask, why do you care about a heart attack? If it doesn’t kill you, yeah, you don’t wanna have it because there’s risk of complications, etcetera. But ultimately, you care about living a life healthily, you wanna stay well. And so your objective really should be driving your interactions with experts. Again, think of yourself as an artist putting together a mosaic, and the experts have the tiles. There are different tiles, different shapes, different colors, different textures; you put them together based on your objectives. So, again, I think of it as the experts are pieces. You know where you wanna go with this whole thing, so take what you need from them.

Brett McKay: It also sounds like, as you’re working with an expert, you have to look at results, and if you’re not getting the results you wanted or desired, well, then you gotta change course and maybe find another expert to do something else.

Vikram Mansharamani: Yeah, or re-interface with them. I’m not suggesting experts are bad; maybe they don’t understand your objectives, maybe there’s some lack of clarity in the communication.

Brett McKay: Yeah, that’s a good point. The expert might be working on different assumptions than you are. And so, another tactic you recommend, you mentioned this earlier, is to triangulate: don’t just rely on one opinion, get a second or third, sometimes a fourth opinion.

Vikram Mansharamani: Yeah. And don’t hesitate to cross silos in those opinions that you seek, Brett. I’ll give you an example. Well, in fact, we talked about a cardiologist. So when you go to your cardiologist, she tells you to take a statin, why not ask your endocrinologist what they think about you taking a statin? The obvious assumption is, “Well, that’s not their domain, that’s not their silo, that’s not their area of expertise. Why would I ask them?” Well, because it might interact with them in some way, they may have a unique insight or perspective on this. “Oh, actually, Brett, we’ve seen people who take statins, it turns out they have an elevated risk of diabetes.” “Whoa, really? I don’t want that. Okay, let me re-engage with the cardiologist with this insight.”

So part of the triangulation logic is an acceptance and admission that every perspective is biased, limited, and incomplete, so don’t just rely on one. And that’s really what I mean when I say triangulate. In the domain of financial bubbles, where I’ve spent some time thinking and writing, an economic perspective leads you to one insight, but a psychological perspective may lead you to another, and you add into it a political perspective, a credit perspective, herd behavior, or even a cultural perspective, and you get a different view than you would through any one particular lens. And it reminds me of this, I don’t know if it’s a parable, but there’s an often quoted story of the six blind men who stumble upon an elephant. One man grabs the leg and he says, “Oh, this is a tree trunk, it’s definitely a tree that we’ve stumbled upon.” Another one grabs the tail and says, “Whoa, hold on a second, what we have here is a snake.”

And it’s only through the integration of multiple perspectives that the group would be able to determine that they are in fact encountering an elephant. It’s the same way with a large portion of the uncertainty we face in our lives, whether it’s in medical, financial, or other domains: it really requires integration of multiple perspectives to get our arms around what we’re facing, the problems, and even the potential solutions.

Brett McKay: Well, it sounds like, besides getting a breadth of opinions from different experts, being self-reliant in the 21st century and knowing how to handle expert knowledge requires you yourself to develop a breadth of knowledge: read widely, have multiple perspectives.

Vikram Mansharamani: Yeah. Look, I think breadth is really important at the individual level so that you can understand the limitations and boundaries of the silos in which experts and specialists live. And so reading widely, yes, is important, but it’s also just developing an awareness, and these are simple things that can be done to give you that awareness. Let’s just talk about reading information and the news, for instance, Brett. A lot of people will, now, because of technology, tunnel in based on existing searches or filters or alerts, they’ll be told, “Oh, there’s something you’re in the… I don’t know. You’re in the aerospace industry.” Great. “Here’s the 737 MAX problem.” And so you get news on that, your alerts come in every day, and you get your industry newsletter, and it comes to you and you read it. Whereas, if you took a physical newspaper or magazine and you flip through it, you will be exposing yourself to different ideas in different domains that may be adjacent, etcetera, and you’ll just be more aware. Right?

If you think about what’s happening in business, reading the Wall Street Journal’s physical edition rather than the algorithmically influenced, alert-driven online version, I think there’s some value in that in exposing you to breadth. I consistently will read, for instance, the Economist magazine cover to cover in physical form, and I do that because just flipping pages, even if it’s not a topic I have real depth of interest in, I may get some value out of seeing the headline and even reading the first paragraph. It’s very quick, but it gives me an awareness and a breadth of exposure that I wouldn’t get if I just said, “Alright, I wanna know about US-China relations. Let me just read about that.”

Brett McKay: Well, Vikram, this has been a great conversation. Where can people go to learn more about the book and your work?

Vikram Mansharamani: Sure. I think my website’s probably the best spot, which is just www.mansharamani.com. That’s M-A-N-S-H-A-R-A-M-A-N-I. Or I’m on LinkedIn and Twitter as well, and you can find me there.

Brett McKay: Fantastic. Well, Vikram, thanks so much for your time. It’s been a pleasure.

Vikram Mansharamani: Thanks, Brett. I’ve enjoyed the conversation.

Brett McKay: My guest today was Dr. Vikram Mansharamani. He’s the author of the book, Think for Yourself. It’s available on Amazon.com and bookstores everywhere. Check out our show notes at aom.is/thinkforyourself where you can find links to resources where you can delve deeper into this topic.

Well, that wraps up another edition of The AOM Podcast. Check out our website at artofmanliness.com, where you can find our podcast archives as well as thousands of articles we’ve written over the years about pretty much anything you can think of. And if you’d like to enjoy ad-free episodes of The AOM Podcast, you can do so on Stitcher Premium. Head over to stitcherpremium.com to sign up and use code MANLINESS at checkout for a free month trial. Once you’re signed up, download the Stitcher app on Android or iOS and you can start enjoying ad-free episodes of The AOM Podcast. And if you haven’t done so already, I’d appreciate it if you’d take one minute to give us a review on Apple Podcasts or Stitcher; it helps out a lot. And if you’ve done that already, thank you. Please consider sharing the show with a friend or family member who you think will get something out of it. As always, thank you for the continued support. Until next time, this is Brett McKay reminding you to not only listen to the AOM Podcast, but to put what you’ve heard into action.
