The Strategy Consulting Real Time Rankings™ measure the two reasons why strategy firms exist: the ability to provide effective strategy advice and the ability to influence the client to act on the recommendations.

Thirty-four metrics.

Updated weekly.

Rewarding firms that place their clients’ needs first.

Ranking and comparing the 19 most prominent strategy consulting firms using a live, continuous assessment that updates weekly.

Footnotes

Hover over the ‘?’ and ‘#’ symbols alongside each metric to see the detailed explanations of each.

Click/unclick a circle or title to lock/unlock the results.

Any firm may participate in this ranking provided we have a means of independently verifying the data.

The weighting is heavily skewed toward structural and long-term metrics.

This ranking examines only a firm’s ability to produce effective strategy recommendations and to influence a client to act on them. Metrics such as salaries, advertising, revenue and the firm’s legal structure are analyzed only in relation to helping a firm achieve those two goals. For example, a firm that pays high salaries which do not translate into superior recommendations is penalized.

Firms are ranked and scored using a proprietary list of 34 factors that measure the health and performance of a consulting firm. The ranking is updated weekly as new data for factors such as Harvard Business Review publications, press articles and partner feedback become available.

Structural metrics should be seen as barriers. No matter how well a firm may do in other areas, if it remains a legally fragmented partnership, for example, it cannot be ranked highly since this legal set-up impacts the quality of the consulting work done. Firms need to fix these structural barriers to move up in the rankings.

The rankings do not consider past performance in any form or manner whatsoever. The scores should be read like share prices in that they reflect the likely future performance of the company; all other rankings measure past performance. For example, Strategy& is ranked on its current abilities and its likely trajectory given its decision to be bought out by PwC. Irrespective of how strong its brand and work may have been, given its loss of independence and PwC’s fragmented legal structure, Strategy& is heavily penalized.

Short-term metrics are things firms can immediately fix to improve their rankings.

Long-term metrics are the outcome of fixing the structural and short-term metrics.

The KPMG ranking only covers the Management Consulting Group.

PwC Strategy& covers all of PwC Consulting & Deals, not just the former Booz unit, although we use the new unit’s name. The improved performance of PwC is due to the increased integration/absorption, which is benefiting the broader PwC Consulting & Deals team. An increased ranking should not be read as superior performance by the ex-Booz unit, Strategy&, but as broader performance across the entire PwC Consulting & Deals. In many ways, what is good for PwC Consulting & Deals is probably bad for a standalone, ring-fenced strategy unit within PwC. We believe that trying to recreate a mini-McKinsey within PwC is a destruction of PwC partner capital, and this ranking penalizes efforts to keep the ex-Booz teams separate. Our editorial on Cesare Mainardi explains this argument and why we reward absorption into PwC Consulting & Deals.

The E&Y ranking only covers the IT & Performance Improvement Group.

The Accenture ranking only covers the Strategy Group.

The Capgemini ranking only covers the Consulting Group.

The IBM ranking only covers Business Strategy, Tech. & Data, Marketing Sales & Service, Operations and Supply Chain & Talent & Change.

Comments

23 responses to “McKinsey, BCG, Bain, Deloitte S&O, Strategy& Rankings”

  1. Namit,

    One can only review firms on a case by case basis. So I have no comments on categories of firms. Even McKinsey and Bain are light years apart in quality and culture.

    Michael

  2. Hi Michael,
    Thanks for this list. Apart from the strategy firms mentioned above, how would you rate, in general, boutique consulting firms with core expertise in project leadership? Their definition of project leadership seems quite broad, including strategy, operations, M&A, etc.

    Thanks.

  3. Hi Nicolas,

    This is a deliberate choice. Health is more important than performance.

    Michael

  4. I love these well thought-out ratings; however, they focus on the “health” side of things. In other words, they seem to answer the question: “Who is best positioned for long-term success?” It would be phenomenal to see actual performance ratings – and here I’m thinking especially of client satisfaction ratings on the actual performance of consulting firms.

    I developed criteria for such ratings a little while back, but I don’t have client data. Happy to discuss should you wish to explore.

    Nicolas

  5. Hi everyone,

    I am sure you have all heard about Accenture dropping annual performance reviews.
    Do you think it is a good idea for employees and the firm? And ultimately for clients?
    How could it influence the above ranking?

    Kevin.

  6. Thanks Parag,

    That is an interesting idea and we will certainly think about it.

    Michael

  7. Perhaps you could add the functionality where a user can “design their own firm” and input their own values for the variables listed in the table above. If the site could recalculate the final score of this fictitious firm in real time, this would provide an easy way for users to get a feel for the different effects of various variable combinations without giving away the inner workings of your model.

    Parag

  8. Thanks Parag,

    We are always tweaking and updating the rankings section, and one area we are looking at now is how to present the inner workings of a model that is not 2D. We will certainly release more as we think about better ways to present something like this.

    The honest answer is that while we believe the model is sound, we do not believe we have the best approach to presenting it. That is a constant work in progress.

    Michael

  9. Thank you for the explanation. It is obvious the model does not just average the scores, but it was not at all clear from your earlier posts that there were conditional statements involved. Thank you for clarifying.

    I understand not wanting to release the model since you consider it proprietary, but it is a shame the ranking is not a bit more transparent. I feel that part of the beauty of your site is the member feedback and input, which you seem to take seriously. This is unfortunately not possible for a ranking that looks like a black box to readers.

    Parag

  10. Hi Parag,

    We have already explained the reasons below based on Zander’s question. So either you are discounting it, missed it or do not believe it is sufficient to justify the change. Either way, those are the reasons.

    The mechanics of the change in the algorithm are worth briefly touching upon.

    Bain has a higher numerical score in the rankings, but when we interpret that finding, it does not always indicate good performance for Bain – a higher score is not always better, though that is the assumption you are making. The same applies to all firms. There are many, many examples like this, and we cannot go into all the tiny details every time we update the algorithm. Moreover, there is another reason we never explain all the details: our scoring, ranking and algorithm are proprietary and we do not share them in detail.

    That said, the algorithm does not conduct a straight count of the scores and average them out. It works on an “if/then” model: if x is low and y is high, then score x is weighted much higher, and vice versa. These background relationships cannot be seen in a 2D setting.

    One example of a change in our thinking is high scoring in HBR. That used to be positive in our ranking but can now be negative for some firms; this is a recent change in the algorithm. If a firm has no route to the reader besides HBR, then we rate such a dependence on HBR as negative. However, McKinsey appearing in HBR scores highly since McKinsey also owns MQ. The same goes for BCG, since it has BCG Perspectives. Bain only has a route to the market via HBR, and it scores lower since it is dependent on HBR. So, two firms with the same appearances in HBR score differently. In other words, two firms with the same score are ranked differently in the algorithm since the score means different things.
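    As a rough illustration of this conditional logic, here is a minimal sketch; the weights, the channel-dependence penalty and the sample values are assumptions for illustration, not the actual proprietary model:

```python
# A minimal sketch of the "if/then" weighting described above.
# The weights and the channel-dependence penalty are illustrative assumptions.

def hbr_contribution(hbr_score: float, has_own_channel: bool) -> float:
    """Convert a raw HBR score into a ranking contribution."""
    if has_own_channel:
        # The firm reaches readers without HBR (e.g. an in-house journal),
        # so HBR exposure counts in its favor.
        return hbr_score
    # The firm depends on HBR as its only route to readers,
    # so the same exposure is treated as a negative.
    return -0.5 * hbr_score

# Two firms with identical HBR appearances contribute differently:
print(hbr_contribution(8, has_own_channel=True))   #  8.0
print(hbr_contribution(8, has_own_channel=False))  # -4.0
```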

    We explain this particular phenomenon in this article: https://www.firmsconsulting.com/consulting-rankings/mckinsey-netflixed-bain-bcg-hbr-monopoly/

    Michael

  11. Can you please give more detail as to which factors were given more weight in the recent rankings update and caused such a large jump? It seems like very few of the individual numerical values changed, yet only Deloitte’s and McKinsey’s overall scores changed dramatically.

    I understand that your ranking seeks to measure future performance, but I am interested in how Bain can trail Deloitte in the overall ranking despite scoring higher in nearly every single individual category you list. All I can come up with is that you are either including something that isn’t in the above chart, weighting one or two single variables very heavily, or applying some other sort of fudge factor. Could you please comment on what is causing this specific case?

    Parag

  12. Michael,

    Thank you for sharing the philosophy; that is the most important part because it drives the details of the model and the decision-making process. However, I do look forward to reading more about this.

    For me, it’s a funny coincidence that these changes were just announced. This past weekend I was dragged to the beach by friends. I enjoyed the chance to relax, but after 90 minutes in the sun I was itching to do something productive. Fortunately, before I was kidnapped I grabbed a copy of Michael Raynor’s “The Strategy Paradox”. Perhaps Deloitte’s rise has been influenced by this research, as it probably drew clients, and may have also been used internally.

    Zander

  13. Hi Zander,

    Explaining how/why we changed the algorithm will take a very long post since we have a huge model crunching the numbers for us! So, this is just a summary of the philosophy behind the changes. A longer post will come and I will answer any questions from readers as they come up.

    Many readers misunderstand our rankings. They think it tracks performance. It does not. It tracks the health of a business and forecasts how firms will do in the long term.

    So people reading this will say, “Wow! How can Deloitte top Bain?” However, Bain is ranked highly now for things it did in the past. We do not care about this past performance unless it helps us predict the future. We are looking at how Bain will do in the future, and that picture is not so great for them.

    They will likely be acquired.

    Our ranking is like a share price indicator in that it only looks at the future. Share prices are indicative of future free cash flow discounted back. Our rankings are indicative of future positioning.
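    For readers who want the textbook version of that analogy (standard finance, not part of our model), a share price equals the sum of expected future free cash flows discounted to today:

    $$P_0 = \sum_{t=1}^{\infty} \frac{\mathrm{FCF}_t}{(1+r)^t}$$

    where $\mathrm{FCF}_t$ is the free cash flow expected in year $t$ and $r$ is the discount rate.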

    Deloitte has a lot of problems and we have no issues pointing them out, and we will continue to do so. However, this is not so much Deloitte becoming more like McKinsey or Bain, or improving their strategy practice. Those things are all happening but are not the main reason for the change.

    The big difference is a greater need/appreciation in the market for the multi-disciplinary approach Deloitte takes, and our belief that Deloitte can one day merge its global consulting business into one P&L and fix its culture. Our ratings have shifted because we think those two critical changes will happen.

    Finally, like all research, we constantly tweak our model and update the way we view facts as new information comes to light. We think that is good science. Frankly, right now most consulting firms are trying to be more like Deloitte in both their skills mix and even their legal structure. Even McKinsey is no longer a global partnership – despite what they claim in the press and public.

    Obviously that is not very detailed, but it should sketch out the broad context!

    Michael

  14. Michael,

    I would love to hear about how or why the ranking was changed. Deloitte seems to be impacted more than the other firms, and I am curious how they are able to compensate for apparent weaknesses such as their legal structure and staffing model. For readers planning a transition into management consulting, this is an interesting development.

    Zander

  15. Hi Scott,

    Provided you want to work for a firm that places its clients’ needs first, then you are correct, because that is how we define future performance.

    Michael

  16. Hi Michael,

    Since you describe these rankings as an estimate of the future health and performance of a firm, does this mean that, all else being equal, you would currently advise a graduating student to choose the highest-ranked firm? Or is that an incorrect characterization of the rankings?

  17. All,

    The shift in rankings is mainly driven by a rising US economy. The audit firms tend to do much better in these conditions since they have greater exposure to the US market.

    The better the US economy does, the better they will do, up to a point. They still have structural issues which will mean they can never really break past a score of about 5 or 5.5.

    Michael.

  18. Thanks Scott. We will think about this and debate it internally.

    All scores look at the most recent week only. That said, HBR is not the highest weighted metric here. There are many that have a higher weighting.

    Quite a few metrics vary between months in a drastic way. Media presence, salaries and interview rigor are all things that change a lot. However, we need to determine whether a drastic swing is a bad thing. Just because a metric causes a drastic swing does not mean it should not.

    We will think about this more and post any changes as we tweak the rankings.

    Michael

  19. Hi Michael,

    Thanks for the quick response and explanation. I am totally in agreement with the spirit of the real-time ranking.

    Does the HBR score only look at the last month’s edition of HBR? If so, I would argue that it should at least consider a slightly longer period of time to prevent dramatic month to month fluctuations in the scoring.

    A firm that is publishing a cover piece every other month is doing a tremendous job of supporting itself in the future and making an impact. However, if the ranking only considers the latest issue of HBR, its score will rise and fall dramatically depending on whether it is an off month or not. You may argue that this is just a feature of a true “real-time” ranking, but such a dramatic scoring swing could mean a 5-10 spot ranking change for some of the middle- or lower-tier firms on a monthly basis. To say a firm is positioning itself that much better for the future one month by publishing an article, and then to have the impact of that article completely disappear from the next month’s ranking, seems incongruent with the impact that article will actually have.

    None of the other metrics seem like they will vary as drastically on a monthly basis, which is why I suggest extending the time period on which the ranking is based. One month is rather arbitrary anyway; it just happens to be the frequency with which HBR is published. How about three or six months, as in the sketch below?
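    A minimal sketch of this trailing-window idea follows; the window length and the sample scores are assumptions for illustration:

```python
# A minimal sketch of trailing-window HBR scoring; the window length
# and the sample scores are illustrative assumptions.
from collections import deque

def rolling_hbr(monthly_scores, window=3):
    """Average the last `window` monthly HBR scores instead of only the latest."""
    recent = deque(maxlen=window)
    smoothed = []
    for score in monthly_scores:
        recent.append(score)
        smoothed.append(round(sum(recent) / len(recent), 2))
    return smoothed

# A firm publishing a cover piece every other month no longer swings 10 -> 0:
print(rolling_hbr([10, 0, 10, 0, 10]))  # [10.0, 5.0, 6.67, 3.33, 6.67]
```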

    Scott

  20. Hi Scott,

    Thanks for this. We are always looking for ways to change the ranking – since nothing is perfect.

    The HBR score is not binary.

    Firms score 0 if they are mentioned in an article but the author is not from the firm.
    Firms score 5 if they pen a smaller piece.
    Firms score 8 if they secure a feature piece.
    Firms score between 8 and 10 if it is the cover piece, written by the CEO, or a piece of great influence.
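    In code form, the rubric above might look like this minimal sketch (the tier cut-offs follow the list; the function and its inputs are illustrative, not the real model):

```python
# A minimal sketch of the tiered HBR rubric above. The cut-offs follow
# the list; the function and its inputs are illustrative assumptions.

def hbr_score(appearance: str, influence: float = 0.0) -> float:
    """Map one HBR appearance to a 0-10 score; `influence` is in [0, 1]."""
    tiers = {
        "mention": 0.0,   # mentioned, but the author is not from the firm
        "short":   5.0,   # a smaller piece penned by the firm
        "feature": 8.0,   # a feature piece
    }
    if appearance in tiers:
        return tiers[appearance]
    if appearance == "cover":
        # Cover piece, CEO-authored, or a piece of great influence: 8-10.
        return 8.0 + 2.0 * min(max(influence, 0.0), 1.0)
    raise ValueError(f"unknown appearance type: {appearance!r}")
```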

    Averaging out the scores defeats the purpose of a real-time ranking. A firm is only as good as the actions it takes today to support itself in the future. That is a deliberate choice in the rankings. We do not want to reward past success.

    Reducing the weighting is something to consider but HBR carries a lot of weight in the business world. To consulting firms, it matters to be in HBR.

    Hope that helps. As always, we are tweaking the rankings, weightings and equations, so feel free to post questions.

    Michael

  21. I am a big fan of these rankings and they seem like a very comprehensive set of categories. Thank you for putting so much work into keeping all of this information up to date.

    One critique that I do have concerns the Harvard Business Review category. It is unclear over what period of time you are sampling the journal, but it seems clear that it is too short. The fact that only one firm, McKinsey, has a score above 0, that it has a score of 10 (up from 0 a couple weeks ago), and presumably achieved this score from only one article (published January 1, 2015) gives this category relatively little meaning. If I am not mistaken though, that one article resulted in a jump of their overall score by 0.4 or 0.5, a huge margin, from a couple weeks ago.

    It seems like this category should either be given less weight, averaged over a longer time, or not scored in such a binary manner (i.e. one article gives you a 10 and zero articles give you a 0).

    Thank you for making such a great site and keep up the good work.

  22. Hi James,

    Thanks for your comments.

    The simplest way to answer this is that our ranking is like a share price: we are estimating the future health and performance of the business. So we measure today’s inputs to a consulting firm, because they determine tomorrow’s output, which we cannot yet see and therefore cannot measure in the ranking.

    If we were just measuring the consulting firms on their performance/health right now like most rankings, ignoring the future, then we would only look at output. Yet, we are not doing that.

    All the other questions you ask are based on the assumption that measuring input in our ranking may be a mistake, which it is not, because of the attribute we are measuring.

    Michael

  23. Michael,
    In light of your previous editorial that discussed input vs. output metrics, would you mind commenting on how you would classify the metrics listed above? I may be wrong, but isn’t the most important output metric, average quality of the work, missing from the metrics list? The “Consistency” metric touches on this somewhat, but seems more focused on the variation in work quality instead of the average level of work quality. Would you say some of these metrics are input variables (such as recruitment source)? Would you say that many of the other metrics are output proxies that attempt to give us some idea of the average quality of the work without directly assessing the quality of the work itself?

    James
