Occasionally we get asked – almost always from someone in the vendor community – why nobody audits the analyst research. For instance, here is an email from when I was the Director of AR for a major vendor: “Does anyone in the AR community ever take analysts to task for the accuracy of their predictions? There would seem to be a huge ‘debunking’ potential here – auditing the accuracy or otherwise of their claims.”
When the response by one of my AR colleagues was “Answer to your question is Naaah, we don’t take ‘em to task, though it would be enjoyable,” the original correspondent asked:
“I’m surprised that no-one in the industry takes them to task. Given the large fees charged for reports and prognostications it would be a real service for an independent agency to review the material 12, 24 and 36 months down the road and measure accuracy. This seems like serious low-hanging watermelon for someone with knowledge of the AR field to do as an independent and charge $$$ for their reports (hint: could even be done evenings and weekends and bring in some nice fees in addition to the day job :-)”
Because I just had a similar conversation with someone from Europe a couple of weeks ago, I thought I would go ahead and put my email answer to my then colleague into this post.
“If it was that easy and somebody (e.g., vendors or end users) would be willing to pay good dollars, don’t you think that there would be many firms offering such a service?
The task is actually incredibly difficult, expensive to do, fraught with copyright issues, and nobody is willing to pay for such a service.
Sure, the vendors would all love to have someone ‘expose’ the analysts, but they don’t want to make any effort or pay any money, and, most importantly, they don’t want to be identified as paying to discredit the analysts. Many vendors fear analyst retaliation, so they would not want their fingerprints on such a service.
Reporters are not interested in whether the analysts are accurate or not, because they like the arrangement where they easily get fodder for their articles. If they couldn’t just pick up the phone and call their friendly industry analyst for a quote to pad their stories, they would have to work harder.
Enterprise analyst clients, usually IT managers, are not interested in whether the analysts are accurate because they are looking for ‘good enough’ insights that help them do their jobs. They know the analysts are often only offering ‘educated guesses,’ and that is fine because an educated guess is accurate enough for the task at hand.
Very significantly, it is not knowledge of AR that would permit someone to audit the analyst predictions, but domain expertise in the market covered by each prediction. For example, Gartner’s Predictions for 2006 and beyond series comprised 41 research notes published in November and December 2005. Auditing it would require at least 41 market domain experts. With each research note containing on average five predictions, there are likely over 200 predictions to audit, and that assumes one could get one’s hands on the research notes without getting sued for copyright violations.
There is also the issue of how to grade the analysts’ research and predictions. In November 2004, as part of Gartner’s year-end Predicts series, one research note stated: ‘By 2007, three of the top 10 PC vendors will exit the market (0.7 probability).’ Since that prediction was made, two of the top 10 (IBM and Gateway) did exit the PC market. While not 100% accurate, one could argue that the outcome was well within the ‘0.7 probability.’ So how would this prediction be graded?”
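For what it’s worth, forecasting research does offer a standard tool for grading probabilistic predictions like that one: the Brier score, the squared error between the stated probability and the 0/1 outcome (0.0 is perfect, 1.0 is worst). This minimal sketch is my own illustration, not anything from Gartner or the AR world, and it shows that the grade depends heavily on how the auditor chooses to frame the event:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

# Strict framing: the event "three of the top 10 exit" did not occur
# (only two vendors exited), so the outcome is 0.
strict = brier_score(0.7, 0)  # 0.49 -- a poor grade

# Lenient framing: treat each of the three predicted exits as its own
# 0.7 forecast; two of the three hypothetical exits occurred.
lenient = sum(brier_score(0.7, o) for o in (1, 1, 0)) / 3  # ~0.22 -- much better
```

The same prediction grades as a clear miss under one framing and a respectable call under the other, which is exactly why “how would this prediction be graded?” has no single answer.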
Since I wrote that response to a colleague, blogging has exploded onto the scene. In my most recent conversation, the person I was talking to countered that the blogosphere changes everything when it comes to auditing the analysts. While blogs provide an easy and cheap publishing platform, and crowdsourcing the auditing work is now a possibility, that does not eliminate the hard work involved, the copyright issues, or the question of individual motivation. What I have seen to date is mainly people criticizing the analysts and saying how bad the analyst research is; what I have not seen is people doing the hard work of analyzing specific analyst output, grading it, and backing up their analysis with proof. I would be delighted to see the blogosphere systematically audit analyst research, but I am not going to hold my breath.
Bottom Line: Complaining about the poor quality of industry analyst predictions is a perennial pastime for blogs and for AR professionals congregating in bars at conferences. Fun as the complaining is, there does not appear to be a business model that would support a commercial auditing service, nor do we see the motivation that would spark an unpaid community effort to accuracy-check the analysts.
Question: How much would you pay to have someone audit the analyst research?