SageCircle received an email from a reader asking whether we had seen the newsletter from a boutique analyst firm, which included a comment that Gartner has been increasing the number of Leaders on Magic Quadrants. The clear implication was that this analyst was accusing Gartner of corruption for inflating the number of Leaders in order to extract revenue from vendors in the form of analyst consulting days, research reprints, and so on. Of course, this analyst competes with Gartner for contracts and access to vendor briefings.
SageCircle has not noticed any “Leaders inflation,” but then we have not been doing the systematic, in-depth research that would be required to support such an observation. Gartner gets criticized whether there are too few Leaders or too many. The joys of being the dominant market player: everybody takes potshots at you.
The boutique analyst firm offered no proof, nor did it describe the research methodology behind the claim, so we cannot evaluate its validity. Here are some general observations:
- The boutique firm’s analysts could be looking at only a few MQs relevant to their coverage, and these may have been around for a number of years. Maturing markets naturally see vendors migrate up and to the right as the market consolidates through acquisitions or failures, vendors become better at execution, and so on
- The boutique firm’s analysts may not have noticed that Leaders are not the only vendors who purchase reprints. Vendors in all boxes – even vendors in the Niche box – acquire reprint rights and promote the MQs they are on. As a consequence, Gartner would not necessarily gain incremental revenue from adding Leaders, because the Challengers and Visionaries might already be purchasing reprints of the Magic Quadrant
- We don’t believe there is any “Leaders inflation”
To see whether the distribution of vendors on an MQ is skewed in one direction or another, we looked up a random set of MQs and tallied the breakdown between the various boxes. Our example set consisted of hardware, software and services MQs:
| Magic Quadrant | Ldr | Cha | Vis | Nic |
| --- | --- | --- | --- | --- |
| HW – Blade Servers | 3 | 0 | 2 | 7 |
| HW – Global Enterprise Desktop PCs | 3 | 1 | 1 | 2 |
| HW – Midrange Enterprise Disk Arrays | 6 | 2 | 6 | 5 |
| SVC – Help Desk Outsourcing, North America | 12 | 10 | 0 | 1 |
| SVC – ERP Service Providers, North America | 5 | 9 | 1 | 8 |
| SW – Enterprise Application Servers | 4 | 2 | 13 | 9 |
| SW – Enterprise Content Management Systems | 4 | 1 | 1 | 14 |
| SW – E-Services Suites | 2 | 2 | 4 | 5 |
| SW – Managed File Transfer | 7 | 3 | 22 | 3 |
| SW – Mobile Data Protection | 4 | 0 | 3 | 5 |
| SW – Social Software | 3 | 2 | 7 | 23 |
| Total | 53 | 32 | 60 | 82 |
| % of total | 23% | 14% | 26% | 36% |

Columns: Ldr=Leader, Cha=Challenger, Vis=Visionary, Nic=Niche.
In this unscientific selection of MQs, we detect no particular “Leaders inflation,” with Leaders making up 23% of the total. However, different types of MQ have different distributions: the hardware Leaders capture 32% of their total, while the software Leaders get only 17%. At the individual research note level, the Help Desk Outsourcing MQ has 52% Leaders, while the Social Software MQ has only 9%. So clearly there are significant differences between different slices of the data in this small sample.
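For anyone who wants to check the arithmetic, here is a minimal Python sketch that reproduces the percentages above. The only inputs are the counts from the table; the MQ names simply mirror the table rows.

```python
# Minimal sketch: recompute the distribution from the sample MQs above.
# Each entry is (Leaders, Challengers, Visionaries, Niche) for that Magic Quadrant.
mqs = {
    "HW - Blade Servers":                         (3, 0, 2, 7),
    "HW - Global Enterprise Desktop PCs":         (3, 1, 1, 2),
    "HW - Midrange Enterprise Disk Arrays":       (6, 2, 6, 5),
    "SVC - Help Desk Outsourcing, North America": (12, 10, 0, 1),
    "SVC - ERP Service Providers, North America": (5, 9, 1, 8),
    "SW - Enterprise Application Servers":        (4, 2, 13, 9),
    "SW - Enterprise Content Management Systems": (4, 1, 1, 14),
    "SW - E-Services Suites":                     (2, 2, 4, 5),
    "SW - Managed File Transfer":                 (7, 3, 22, 3),
    "SW - Mobile Data Protection":                (4, 0, 3, 5),
    "SW - Social Software":                       (3, 2, 7, 23),
}

labels = ("Leaders", "Challengers", "Visionaries", "Niche")

# Overall distribution across the whole sample
totals = [sum(counts[i] for counts in mqs.values()) for i in range(4)]
grand_total = sum(totals)
for label, n in zip(labels, totals):
    print(f"{label}: {n} ({n / grand_total:.0%})")   # Leaders: 53 (23%), etc.

# Leader share within a slice of the sample, e.g. hardware-only vs. software-only MQs
def leader_share(prefix):
    rows = [c for name, c in mqs.items() if name.startswith(prefix)]
    return sum(r[0] for r in rows) / sum(sum(r) for r in rows)

print(f"HW Leaders: {leader_share('HW'):.0%}")   # roughly 32%
print(f"SW Leaders: {leader_share('SW'):.0%}")   # roughly 17%
```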
What if we look at the same MQ, but from different years? Again using a small and unscientific example, we see that the number of Leaders on the Managed File Transfer MQ did indeed increase from one year to the next. However, the overall number of vendors added was even greater, so Leaders accounted for only 21% of the new additions while Visionaries accounted for 57%.
| Managed File Transfer MQ | Year 1 | Year 2 | Added | % of Increase |
| --- | --- | --- | --- | --- |
| Leaders | 4 | 7 | 3 | 21% |
| Challengers | 1 | 3 | 2 | 14% |
| Visionaries | 14 | 22 | 8 | 57% |
| Niche | 2 | 3 | 1 | 7% |
| Total | 21 | 35 | 14 | |
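The same quick check works here. A short sketch, assuming nothing beyond the Year 1 and Year 2 counts in the table, computes each box’s share of the added vendors:

```python
# Minimal sketch: each box's share of the vendors added between the two years,
# using only the Year 1 / Year 2 counts from the table above.
year1 = {"Leaders": 4, "Challengers": 1, "Visionaries": 14, "Niche": 2}
year2 = {"Leaders": 7, "Challengers": 3, "Visionaries": 22, "Niche": 3}

added = {box: year2[box] - year1[box] for box in year1}
total_added = sum(added.values())   # 14 additional vendor slots overall
for box, n in added.items():
    print(f"{box}: +{n} ({n / total_added:.0%} of the increase)")   # Leaders: +3 (21%), etc.
```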
Truly determining whether there is some sort of skewing of vendor positioning would require an in-depth research project involving significant effort and data collected over a number of years. Many factors would need to be corrected for or accounted for. Steps for an ongoing research project would include (a rough sketch of a possible data layout follows the list):
- Capturing data from all MQs over an extended period of time
- Comparing distributions by:
  - Market
  - Initial release of the MQ and updates over time
- Tracking changes in the vendors listed to account for new entrants and market consolidation
- Tracking the discontinuation of MQs and for what reason
- Tracking changes in the lead analysts and contributing analysts
- Tracking evolving criteria to account for changes in position due to changes in the market
- Tracking evolving investment in and execution of analyst relations (AR) by the vendors – better AR execution can improve placement even if nothing else changes
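To make the data-capture step a bit more concrete, here is a rough sketch of what a captured record for such a longitudinal study might look like. The structure and field names are our own illustrative assumptions, not a description of any actual Gartner or SageCircle data model.

```python
# Hypothetical record layout for a longitudinal MQ study; field names are
# illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class MQSnapshot:
    market: str                                     # e.g. "Managed File Transfer"
    publication_date: str                           # date of this MQ edition
    lead_analysts: list = field(default_factory=list)
    criteria_version: str = ""                      # tracks evolving inclusion/weighting criteria
    placements: dict = field(default_factory=dict)  # vendor name -> box (Leader, Challenger, ...)
    notes: str = ""                                 # e.g. MQ discontinued, markets merged

def box_changes(old: MQSnapshot, new: MQSnapshot) -> dict:
    """Vendors present in both editions whose box changed: vendor -> (old box, new box)."""
    return {v: (old.placements[v], new.placements[v])
            for v in old.placements.keys() & new.placements.keys()
            if old.placements[v] != new.placements[v]}
```

Comparing snapshots of the same market over time would surface new entrants, vendors dropped through consolidation, and vendors that changed boxes, which is exactly the data needed to test a “Leaders inflation” claim.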
There are unlikely to be any buyers – vendors, enterprise end users, media and so on – for this sort of research. So it is doubtful that any AR services firm or PR agency would actually invest the effort to see whether the claim about “Leaders inflation” is anything more than hot air. The lack of paying clients is also one of the reasons why nobody tries to audit the accuracy of analyst predictions and recommendations.
Bottom Line: Complaining about Gartner is a perennial topic for blogs and competing analyst firms. Rarely, if ever, do the analysts or bloggers expend the effort needed to provide solid facts and analysis to back up their claims. As a consequence, enterprise end-user clients and vendors should be skeptical about any claims about Gartner, Forrester, IDC and other large firms that are the targets of these attacks.
Question: How much would you pay to have someone audit analyst research such as the Magic Quadrant?
I reviewed six of the MQs you point to above that fall within either my direct coverage area or my area of general interest/knowledge. While it’s yet another ‘unscientific’ view, I found the MQs pretty consistent with my own understanding of the markets and the players.
I might quibble here or there, but that’s the nature of the beast. MQs are judgment calls, pure and simple. Yes, there are factors, and weightings, and a smidgen of methodology. They try to reduce a complex world into just two variables. They’re not the only measure, nor any measure of how good vendor X is for customer Y’s particular situation and requirements.
But from where I sit, these examples show sound, reasonable judgment of the players and their relative positions in the industry.
From Rob Curran via Twitter, http://www.twitter.com/robcur
RT: @carterlusher MQ Leaders inflation http://bit.ly/4kDU5G RC: More accurate title Can Competitor Prove Accusation? Not Gartner’s burden.
Thanks for the thoughtful rebuttal, Carter. I’ll take a stab at your closing question about how much someone might be willing to pay for an audit of Magic Quadrants with a question of my own: How much time do AR pros devote to influencing signature research placements? The two are intricately linked in my mind.