Rigged Rankings Punish Boutique Consulting Firms

Consulting firms rankings are a business. The very first thing to do in understanding a ranking is to ignore the research methodology and the glowing background of the company publishing it. The footnotes are hokum, there to create a veneer of credibility. To understand any ranking, you need to follow the money.

The flow of revenue around the business model of the publisher explains everything. By understanding the business model, you can predict the ranking order without knowing much about the firms in the ranking.

A lot of time, money, resources and hot air are fed into the reporting furnace to produce this ranking amalgam of supposedly ironclad authority.

Logic would dictate that in socially driven media, a consulting firms ranking is merely an investment that needs to get a lot of eyeballs, clicks, shares, tweets, pokes, prods, snaps, slaps and tickles. At its core, that is what a ranking is: an investment by a media company to be shifted around in social media.

What do you think will happen if small and relatively unknown firms were to dominate the global consulting firms ranking? What would happen if a boutique firm in Australia, Port Jackson Partners for example, ranked higher than Bain? For the purposes of this discussion, let us assume PJP was ranked higher because it produces higher quality work than Bain, even though it is unknown to the US market that generates most of the global traffic for these rankings.

This is what happens.

The ranking will not get many eyeballs, clicks, shares, tweets, pokes, prods, snaps, slaps and tickles. The ROI on the investment will drop because as traffic to the site drops, advertisers pay less for ads. The ranking will be called a failure not because it was wrong, but because its very accuracy led to its lack of profitability. Failure is measured in ROI terms, not in the accuracy of the ranking.

Therefore, most consulting firms rankings are geared to the US market, since it is the world’s largest market. And this is the stark irony. For a consulting firms ranking to be successful, it must have the large established US firms dominating the top.

Sadly, most global consulting firms rankings are really regional rankings of the US market masquerading as a global ranking.

If that tiny Australian firm, Port Jackson Partners, did feature strongly, the business fallout would be pretty grim for the publisher. The armies of consultants at larger firms like Deloitte S&O would be less inclined to share the story since their firm did not do well. The US media, the dominant media, would be less inclined to write up the story since the firm mentioned is not well known.

Which American MBA student will click on a link about Port Jackson Partners?

Such a ranking with the Australian firm at the top is, in economic terms, wasted ink.

The wasted ink implies weak traffic for the consulting firms ranking, leading to budget cuts for the ranking project and retrenchments. So the consulting firms ranking editor invariably tailors the ranking to their main audience.

The net effect is that the brand power of the larger consulting firms creates a bias to build ranking news around them. It takes guts, and a sufficiently large cash cushion, for a publisher to rank lesser-known consulting firms highly when they merit such a score, knowing their unfamiliar brands will generate weaker traffic in the core US market.

There is another way to follow the money and this one is more insidious. Think of all the ads that run on the consulting firms ranking pages of numerous media companies. You must have seen them. Bain, BCG, Deloitte S&O etc., all run ads on the ranking sites or promote the rankings where they feature well.

So you have two sides of the same financial relationship here.

First, advertisements are obviously not free, so consulting rankings publishers earn money from the advertisements consulting firms run on their websites.

Second, allowing larger consulting firms to consistently do well leads to those firms promoting the ranking on their own websites. This promotion, in the form of badges, leads visitors on the consulting firm’s website to see the badge and visit the publisher, which drives up the publisher’s advertising rates.

The net effect is that the editor of the consulting firms ranking report looks at his budget and sees two things. First, maybe 20% to 30% of his costs are covered by the adverts the large firms run on the ranking website. Second, his rates for other advertisers are driven up by the traffic arriving from the consulting firms’ websites.
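To make the incentive concrete, here is a toy model of the editor’s budget, sketched in Python. Every figure below is invented for illustration; only the 20% to 30% cost-coverage assumption comes from the paragraph above.

```python
# A toy model of the ranking editor's incentives. All numbers are
# hypothetical; the point is only to make the revenue dependence explicit.

budget_cost = 100_000         # annual cost of producing the ranking
direct_ads_share = 0.25       # ~20-30% of costs covered by large firms' ads
direct_ad_revenue = budget_cost * direct_ads_share

# Traffic-driven revenue: badge links on featured firms' sites drive
# visitors to the publisher, which drives up rates for other advertisers.
revenue_per_visitor = 0.30    # blended ad rate per visitor, hypothetical
scenarios = {
    "large firms on top": 500_000,  # familiar brands get shared widely
    "boutiques on top": 50_000,     # unknown brands get ignored
}

for label, visitors in scenarios.items():
    total = direct_ad_revenue + visitors * revenue_per_visitor
    print(f"{label}: revenue ${total:,.0f} vs cost ${budget_cost:,.0f}")

# large firms on top: revenue $175,000 vs cost $100,000 -> profitable
# boutiques on top: revenue $40,000 vs cost $100,000 -> a "failed" ranking
```

Under these assumed numbers, an accurate boutique-led ranking destroys the economics, which is exactly why the editor never publishes one.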

Most boutique consulting firms do not have the budget to run such ads so the majority get sidelined from rankings.

It is plausible to imagine this rankings editor being extra careful to rank the larger consulting firms more highly, and to write more flattering pieces, simply to protect his revenue streams. In fact, if a boutique consulting firm can afford to run an advert, the odds are it will garner a higher slot on the overall ranking, or the publisher will create a separate niche ranking category, like defense consulting, to collect those ad dollars from smaller firms.

We see this in just about every consulting firms ranking.

Does anyone honestly believe that consultants from 25 different consulting firms make up the Top 25 Management Consultants in the Consulting Magazine ranking? In the entire history of data sets, natural or man-made, is there any data set where one group does not dominate at least a little?

Tennis wins, Nobel prizes, GDP growth, export percentages, age and education distributions, etc., all have some group dominating the list. This is how real data should look: a few dominate the group, followed by a tail.

One would expect a tail. That is the secret to testing if data has been cooked.

So what is a tail?

The best consulting firm would have maybe 8% of the top 25 consultants.

The 2nd best firm would have around 6%.

The 3rd best firm would have up to 5%.

The 4th best firm around 3%.

If you stack these frequencies per consulting firm in descending order from left to right, you will see a tail starting at 8% and going down to 6%, 5%, 3% before finally tapering off. Depending on the dataset size, the tail can taper off very slowly or end like a stub – but there is always a tail.
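Here is a minimal sketch of that sanity check, written in Python. The firm labels and counts are hypothetical; the point is that stacking per-firm frequencies in descending order should always expose a tail, and a perfectly flat list is the fingerprint of cooked data.

```python
from collections import Counter

# Hypothetical firm affiliations behind a "Top 25 Consultants" list.
# A plausible, un-cooked list: a few firms dominate, then a tail.
plausible = (["Firm A"] * 5 + ["Firm B"] * 4 + ["Firm C"] * 3 +
             ["Firm D"] * 2 + [f"Firm {c}" for c in "EFGHIJKLMNO"])

# A suspicious list: 25 winners drawn from 25 different firms.
suspicious = [f"Firm {i}" for i in range(1, 26)]

def tail_check(affiliations):
    """Stack per-firm frequencies in descending order and flag flat data."""
    counts = sorted(Counter(affiliations).values(), reverse=True)
    freqs = [round(100 * c / len(affiliations), 1) for c in counts]
    has_tail = counts[0] > counts[-1]  # a few dominate, then it tapers off
    return freqs, has_tail

print(tail_check(plausible))   # ([20.0, 16.0, 12.0, 8.0, 4.0, ...], True)
print(tail_check(suspicious))  # ([4.0, 4.0, ..., 4.0], False) -- no tail
```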

When I was a young scientist and saw this lack of a tail in any data, I merely assumed the data was tainted. When I became a management consultant and learned about business I realized there was usually money involved and someone was being bought off.

The lack of a tail is a sure sign that money/greed is somehow skewing the results.

Sure, rankings editors will be up in arms over this observation and protest that they are the most pious characters on the planet. Yet, how can they argue with the logic above? Isn’t this what we call a conflict of interest? Is it not completely wrong to have a financial relationship, of any kind, with a consulting firm you claim to rate and rank?

To claim this is right is to say the US Attorney General should be appointed by, reviewed by and paid by the very same banks he needs to investigate. If that sounds ridiculous, then the current consulting rankings business model must be equally ridiculous.

Are consulting rankings just thinly veiled lobbying efforts by large consulting firms who have the marketing budget to sideline boutique consulting firms?

They probably are. The editors select metrics that are easy to measure and make the larger firms look good.

You can easily imagine a procurement manager somewhere in the world reading a consulting firms ranking and concluding that firm X, given its flowery write-up, must be so great that it should be invited to the bidding process and possibly scored highly due to the ratings. This happens more often than you think, yet the manager does not know the rankings are biased and likely inaccurate.

This arrangement benefits everyone but prospective consulting clients and aspiring applicants.

The result is that the publisher makes money and the consulting firm improves its standing and makes money, while clients, job applicants and other readers lose out due to the flawed intent behind the rankings.

A consulting firms ranking publisher should disclose all direct and indirect financial relationships with consulting firms, otherwise the ranking means next to nothing.

Doctors are required by law to do it.

Pharmacists must do it.

Financial advisers are mandated to do so.

Advertisers must do it.

Lobbying firms are also required by law to disclose financial arrangements.

Why should rankings of consulting firms be any different?

Yet, while most rankings are flawed due to the financial relationships described above, the very behavior of readers forces publishers into this business model. Surprisingly, we, the demand market, are all liable for this, since our three peculiar biases force publishers to act this way.

Only the input metric bias is a publisher bias.

Input Metric Bias

Let us conduct an experiment. Below we will describe one of the most elegant high-performance sports cars in the world. Once we are done, you need to guess the brand. Try not to skip ahead and peek at the answer!

Ready?

This car is regarded as the pinnacle of its field. Only a few are produced and it is virtually impossible to own one on short notice. It is Italian. It is a limited-edition high-performance vehicle that always headlines major international auto shows. Every part is handcrafted to create a custom finish. The car generally retails upwards of US$1,000,000. There is probably no way to jump the waiting list given the limited production numbers. Despite this, the company wants to limit production even further.

Guess the brand.

Most of you would guess this was a Ferrari special edition car.

Maybe a Lamborghini or McLaren limited edition vehicle.

Some may even guess a Bugatti.

Those who know little about cars will guess Mercedes.

You would be wrong.

It is a Pagani Huayra and you have probably never heard of it.

What is the point of discussing the Pagani Huayra and how is it linked to boutique consulting firms? The same thought process that made us think the answer was Ferrari or Lamborghini also drives our decision to underestimate strong boutique consulting firms.

Notice that in describing the car we focused mainly on the outputs and very little on the inputs like the premium pistons, wheels, navigation, steering or brakes used to produce the car. This is an important observation.

We focus on the net impact versus all the tiny inputs that create the impact. We focus on what the car is trying to achieve and not how it achieves it. We really could not care less if Pagani hired Miuccia Prada herself to design the leather interior. We worry about the quality of the interior irrespective of the brand. We do not talk much about the elegant braking system or the engine’s cooling. All we care about is how the car feels when we drive it.

Inputs should not matter at all. We look at the final product and judge the car. We look at the speed, the durability and the contours of the car. We try to convey the emotional feeling the finished car imparts to the driver. We are interested in how it all comes together.

We would not buy the car if it had the world’s most elegant inputs that never really worked together to generate a stunning output, the car itself.

This is the mistake we make when assessing consulting firms. Due to an inability to judge the quality of the outputs, we tend to rely on judging the quality of the inputs. Think about that for a second.

The input bias is a bias that readers apply, and it penalizes boutique consulting firms.

Let’s run another thought experiment to demonstrate this point.

First, assume we asked ten large firms, like McKinsey and BCG, and ten boutique consulting firms, like Marakon & Associates, to complete the same study for the same client. You do not need actual studies from clients, since you can easily run this experiment by picking published research reports on cost reduction, or another common theme, from the twenty firms.

Second, we remove the logos and standardize identifying features: font, color scheme and exhibit styles. Now you have no reasonable way of determining which consulting firm completed the report.

Third, we asked you to rate and rank the reports.

The majority would struggle to do this. We would struggle because we have no real understanding of what a great consulting report looks like. Sobering, isn’t it?

We would struggle because we actually need to know which consulting firm completed the report to assign a quality level to it. And this is the problem with most consulting firms rankings. They penalize a lot of firms not for the quality of their work, but merely because the firm is not McKinsey, Bain, BCG, Deloitte S&O or some other large firm.

In effect, the rankings are already stacked against the boutique consulting firms before they even enter the competition. The people behind the rankings have already decided to reward Bain et al and are simply looking for plausible reasons to move them up the rankings.

In other words, we have no way of judging quality and rely on what others tell us is quality. Take the Pagani Huayra: many would not judge this car highly because they had never heard of it, and because they had never heard of it, they have no signals from friends or the press to influence their decision. And when you have to judge a car independent of any signals, you have to actually think about what determines the quality of a car.

That is harder to do than it looks.

This is the problem facing boutique consulting firms. In a brand and media signal vacuum, applicants tend to discount them, since the majority of applicants rely on signals from the press and peers rather than judging the actual quality of the consulting firm’s work.

In the absence of signals or an ability to judge the output quality, we tend to look at the size of the consulting firms, their scale, the capital they can deploy, their hiring ranks or their salaries. These are all inputs. They give us a clue to the outputs, but they are never a direct indicator of the output quality.

Why do we look at revenue size? It is not as if we assume Starbucks is superior to an elegant boutique coffee shop serving couture coffee. Do we assume a member of the multi-thousand-strong McDonald’s restaurant franchise network beats a Michelin-starred single-location restaurant? If anything, size usually leads to control and quality problems. Why is it that we never link size to poor performance when ranking a consulting firm?

In just about everything else in life we measure quality, except, it seems, when it comes to consulting firms rankings. For consulting, we look at inputs that we think are important and then deduce what the output of the work should be.

If the output is there, why not simply evaluate that? Why do we go through all the trouble of analyzing input data that is usually provided by the consulting firm being ranked, and is therefore ripe for manipulation?

If we extended the input-based method of most consulting firms rankings into our daily lives, the absurdity would be amplified to deafening levels.

A food critic does not visit the kitchen of a restaurant.

The critic does not read the resumes of the chefs.

The critic does not ask for supplier lists and delivery times to test produce freshness.

The critic does not judge the restaurant based on the excellent research piece about food economics that the head chef penned in the New York Times.

Hell no! It comes down to the food served and only the food served.

You do not measure an investment firm’s abilities based on its employees’ resumes, suits, cars, offices, testimonials, yachts or villas. That would be ridiculous. You look at its audited filings with the financial regulator, which state its risk-adjusted returns and investment strategy.

I think we can all agree that food and money are important, and in these two areas we look at outputs.

Yet, the people behind consulting firms rankings seem to do the opposite. They measure revenue. They look at publications. They look at hiring locations. They examine salaries. Is anyone actually going to measure the consulting work?

That is not to say revenue and publications should not be measured. They should be, but only in relation to how they help the firm be great at consulting. You can see how this works when we penalize Bain for its relatively higher salaries.

Analyzing inputs gives large firms an unfair advantage and boutique consulting firms a corresponding, unfair disadvantage. Let’s demonstrate this using the case of the Pagani Huayra again.

Assume you had never heard of this car and therefore had no signal about its quality level. All you would know about the car is what I am going to present below. Furthermore, I am only going to describe the design inputs. No outputs will be discussed.

All of the information below is true.

The designer after whom the car is named entered the industry very recently. He started off by designing road caravans, then caravans for agricultural harvesters. A caravan here is the enclosed cab where drivers sit as they steer the equipment. Sexy, isn’t it?

He followed this up by designing several road deformation meters for the department of transport, eventually ended up at design school, and fashioned his first car in 1987. Keep in mind that by then, the likes of Ferrari had already become global icons.

Based solely on this description, 99% of readers would dismiss this car as crude and cheap, since the inputs are crude, new and lacking in pedigree. For crying out loud, the man was designing seating cabins for agricultural equipment just a few years before he built his first car! That is the problem when assessing tiny boutique consulting firms: their inputs may lack pedigree, so you need to measure their actual outputs.

Many will argue this cannot happen in the brutally efficient management consulting market. Yet, it can and it happens often.

So there you are, scrolling through numerous sites, paging through one consulting firms ranking after another and speaking to your friends. And what do they all tell you? They all look mainly at inputs and a few meaningless output measures. When this happens, we invariably succumb to the bias of averages. In reading so many sources and speaking to so many friends, you are essentially running an un-weighted and statistically insignificant poll. At the end of this exercise you will think, “Everyone is telling me AT Kearney is better than this tiny boutique consulting firm. Therefore, I will join AT Kearney.”

This is the bias of averages. We take a straight average and rarely worry about the credibility of each source, since we have no way to assess that credibility ourselves. We simply go with the most popular and well-known source or, at the very least, the most common feedback.
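As a minimal sketch of the arithmetic behind this bias, consider the Python below, with hypothetical scores and credibility weights. A straight average of many low-credibility signals drowns out the one informed source; a credibility-weighted average does not.

```python
# Hypothetical ratings of a boutique firm from different sources:
# (source, score out of 10, credibility weight between 0 and 1).
ratings = [
    ("former client who saw the work", 9.0, 0.9),
    ("popular ranking site",           4.0, 0.2),
    ("friend repeating the ranking",   4.0, 0.1),
    ("forum thread",                   3.0, 0.1),
    ("another ranking roundup",        4.0, 0.2),
]

# The straight average treats every source as equally credible.
straight = sum(score for _, score, _ in ratings) / len(ratings)

# The weighted average lets the informed source count for more.
weighted = (sum(score * w for _, score, w in ratings) /
            sum(w for _, _, w in ratings))

print(f"straight average: {straight:.1f}")  # 4.8 -- "everyone says it's weak"
print(f"weighted average: {weighted:.1f}")  # 6.9 -- the informed source matters
```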

The Miss Universe Bias

In the Miss Universe pageant, all these beautiful contestants roll out and one lucky lady is bestowed the title of the most beautiful woman in the entire universe. Not just the galaxy, world or country, but the entire universe. Imagine that!

The event organizers are implying, through the title, that the winner must be special, since they looked at every face, tooth, hand, finger, foot, toe, leg, torso, brain, hair strand, pore and blemish of every female in the universe, and picked this one lady.

Yet, they don’t do that, do they?

The Miss Universe pageant relies on feeder pageants. The Miss Universe contest simply takes the winners of the feeder pageants.

These smaller, less glamorous feeder pageants are regional Miss Universe events or even national beauty contests. These national or regional events rely on their own, even smaller, state or provincial feeder events. And those state or provincial events rely on much smaller town or city feeder events, which in turn rely on the most minuscule suburb-level feeder events in some tiny high school, all pushing the winner upwards.

It is highly unlikely the screening is that thorough at the lowest-level suburb events. So even if the judging in the grand finale on national television is as excruciatingly thorough as claimed, which it is not, the winner is selected from an initial sample of poorly screened applicants.

This is the equivalent of hiring the foremost Nobel laureate physicists to analyze some data, but messing up the data they are analyzing. No matter how thorough they may be, the tainted data renders the conclusion meaningless.

Now, there are additional problems with the sampling.

First, you needed to apply to win. This is not a distinction without a difference. You only apply if you see the application and choose to apply.

This is a major reason why boutique consulting firms get punished. The direct analogy is to think of the feeder pageants as feeder newspapers. A tiny but exceptional boutique consulting firm in Charlotte, North Carolina may be doing a superb job but may not crack the press in its hometown, due to budget issues or possibly a culture of staying out of the press.

Since the story does not run in Charlotte, a larger regional paper will not see the piece and write about it, and since the larger paper does not write about it, the odds are very slim that an international paper like the Wall Street Journal, Financial Times or New York Times will write about it.

Just as the Miss Universe contestant pool consists only of the “best” of the women who applied, and this is a tiny number, consulting stories in the major international papers only cover firms with the resources, focus and relationships to manage a great PR campaign.

Having a great PR campaign does not make established consulting firms great, just as not having one, and not appearing in a major national publication, does not make boutique consulting firms weak. In other words, the pool of consulting firms that floats up to the New York Times and Wall Street Journal editors is already tainted, since it likely excludes great smaller firms that choose to skip PR expenditure and invest their funds in serving clients.

Second, boutique consulting firms are really at the mercy of the smaller papers which, let’s be honest, do not care about having beefed-up business bureaus. With budgets under pressure, they usually kill the business reporting desk and source business content by signing up for an Associated Press feed.

And if they did invest in a single business journalist, do you think it is likely that a journalist from a tiny Charlotte local paper will have the insight, the required knowledge or even the motivation to write a great piece on an up-and-coming boutique consulting firm that does superb work? I would say no!

He is likely to be more interested in writing about community businesses like Pa Hickam’s Pork Pie chain that just opened a 3rd store downtown and will soon expand to that great metropolis called Atlanta, Georgia.

The incentives are misaligned. The boutique consulting firm in Charlotte needs a fairly sophisticated local piece to be taken seriously and get attention at the international level, while the regional reporter needs a community piece to placate local advertisers and keep the paper in business.

The Aged Obesity Bias

Another major reason boutique consulting firms are punished is because we tend to maintain an industrialist view of prestige in professional services. We assume that consulting firms must be big and old to have any merit.

So, let’s imagine McKinsey way back when Marvin Bower stepped in and was firing profitable partners simply because they sold the wrong type of work. Imagine going all the way back to that time in the 1950s and trying to rank McKinsey. It would fare pretty badly by today’s measures of revenue, size and so on. Yet, the DNA of McKinsey was alive at that stage, was it not?

Consulting firms rankings are biased toward large firms.

In a time when boutique restaurants, markets, designers and companies are all conferred greater status and reap higher fees than established giants, we still punish boutique consulting firms. We have this weird industrial view about economies of scale and distributing fixed costs.

Scale only works in industrial businesses since the products are homogenous. Consultants and their ideas are not homogenous. One brilliant partner surrounded by a few capable and values-driven consultants can produce an outstanding recommendation and greater impact. In fact, that is how McKinsey, BCG, Bain, Booz, Roland Berger and AT Kearney all began.

Every consulting study is fought one study at a time, and doing 500 of the same studies at the same time around the world does not necessarily lead to higher value for individual clients. It may, and it may not. Likewise, a boutique consulting firm doing just 2 studies in one city is not necessarily weaker. Those two studies could be the start of a multi-decade, illustrious firm.

There is no rational basis for thinking a larger consulting firm is superior. That is why the outputs must be measured. Yes, you could argue that a larger firm like McKinsey understands its business and has a repeatable model. However, that does not mean a smaller firm with fewer offices produces lower quality work.

Confirmation Bias

Time to sum up with another bias that impacts both editors and readers of consulting rankings, and punishes boutique consulting firms.

How many of you know Bethany McLean? I am guessing no hands went up.

How many of you know Enron? I am guessing all hands went up.

This is the lady who wrote the seemingly damaging Fortune Magazine piece that poked holes in Enron’s story. Way back in 2001, Enron was perfection personified. Everyone was singing its praises. McKinsey extracted best practices and collected $10 million a year for advice, HBR was churning out Enron case studies like Starbucks churns out lattes at rush hour, Fortune was giving Enron consecutive prizes and almost everyone loved the company.

Then, McLean dropped this bombshell.

Years later we see this March 2001 story as the turning point but at the time everyone thought she was a slightly crazy person who had the audacity to challenge conventional wisdom. After all, conventional wisdom said that Enron was a great company.

At the time, her story sank without a trace until everything she pointed out turned out to be true. In fact, Enron’s share price continued rising after the story was published.

This is the bias we refer to. Readers are going to ignore things that do not meet their expectations.

When it was first published, Bethany McLean’s piece was largely ignored and quietly moved aside. Her piece did not get much airplay, so while many now claim this article started to chip away at Enron and launched her career, the reality is that the article was ignored.

McLean was only vindicated when Enron collapsed and everyone then asked, “Why did no one warn us?”

Well, there were plenty of warnings but it is damn tough to listen to some soft-spoken rookie reporter with cherub cheeks manning the part-time contributors desk at Fortune when every other credible source is saying otherwise.

This is the confirmation bias, or what we like to call the Shoot-Yourself-in-the-Foot bias. One of these boutique consulting firms may be the firm that displaces McKinsey and BCG in our lifetime. Yet, we have a bias to filter out things that do not support our deeply ingrained ideas, and this will keep us from noticing that the landscape is shifting.

For these reasons, boutique consulting firms tend to get sidelined in rankings.

The worst travesty is rankings that separate boutique consulting firms from larger firms to create multiple lists of winners and, naturally, more ratings, goodwill and ad revenue. Multiple lists bring more traffic to the publisher and crown more winners: a winner for every list. The more winners the publisher has, the more those winners share the rankings to drum up awareness of their newfound status, and the more social traffic and ad revenue the publisher collects.

Yet, it is an insult to boutique consulting firms.

Having multiple lists is the terrible equivalent of having separate best actor and best actress Oscar awards. Just as the Academy of Motion Picture Arts and Sciences told women, and still tells women, they are not good enough to be assessed alongside men, we are telling boutique consulting firms they are not good enough to be assessed alongside larger firms.

Who are we to make that decision?

Large and boutique consulting firms should be combined and ranked together. The majority of weak boutique consulting firms will rank lower than the eye line of the world’s shortest dog, but the few stellar boutique firms will earn the respect they deserve by going head to head against the major firms. Is that not the point of a proper consulting firms ranking?
