QUESTION: It disturbs me that analysts don’t have hands-on experience with the products they advise their clients about. How can they be credible?
ANSWER: Analysts, especially at the larger end-user-centric firms like Gartner and Forrester, have access to what they would claim is the world’s largest testing lab – their clients. Analysts covering hot product areas, both software and hardware, hear from scores of clients every month who are evaluating products or using them in production. As a consequence, the analyst firms claim that they can develop a series of data points that quite accurately portray the quality of a vendor’s products.
Note: This is a classic SageCircle article from our pre-blogging newsletter. This particular article appeared in March 2001. It is being brought into the blogging era because ZL Technologies in its lawsuit against Gartner criticizes Gartner because it does not do “a single minute of independent testing of the products it purports to evaluate” (page 9 of the original court filing).
SageCircle Technique:
- Vendors can help themselves by ensuring that their product evaluation programs for prospects are top notch. This prevents prospects from getting frustrated during evaluations and pilots and passing this frustration on to their favorite analysts.
- Vendors should arrange for reference calls between analysts and their happy customers. The analysts are already hearing from disgruntled users of products, so vendors need to proactively balance this view with satisfied customers.
Bottom Line: Hands-on testing, especially of large enterprise-grade hardware and software, has never been part of the advisory analyst business model and research methodology. Knowing that advisory analysts rely on hearing about the experiences their end-user clients have with products and services should spur vendors into making sure that analysts are provided with a high volume of customer case studies, informal customer stories, customer references, and calls from happy customers.
Question: AR – Do you have a formal customer reference program?
Carter-
It depends on the technology and the marketplace. For some technologies, such as search, hands-on testing is plausible and often very revealing. Same for broad but shallow platforms like SharePoint (though that is changing). The analyst has to be careful, though, to balance their personal experience with that of customers who have used the tool in sustained combat conditions.
For most other “enterprisey” software segments, my experience is that it is next to impossible to simulate in a fair and accurate way how all the different types of potential customers may use the tool.
If there is a caveat, it is that SaaS-based delivery models, with their emphasis (for better or worse) on configuration over customization, are giving us new opportunities to get hands-on with the tools. I think that’s healthy.
Tweet by Gordon Haff of Illuminata:
@ghaff Think it’s the case we see less systematic hands-on testing by pubs etc. also. Biz models don’t support.
Tweet by Ron Shevlin of Aite Group:
@rshevlin “hands-on testing” is hardly a panacea to the challenges with analyst evaluations. it’s a weak argument on part of vendors.
Tweet by Jonathan Yarmis of Ovum/Datamonitor:
@jyarmis it may be ok for enterprise software but can you justify it for end-user solutions? you know i love to play 🙂
Adding to Tony’s points, I’d also say that very often analysts are briefed before products are commercially available (pre-briefed) – so ‘testing’ a product that’s unfinished and unavailable isn’t really valid.
In addition, most analysts aren’t particularly well-qualified to truly ‘test’ products anyhow – most of them are writers and not necessarily (or at all) technical – there is a big difference. Plus no PR/AR person is EVER going to allow an uncontrolled/unsupervised ‘test’.
Having said that, there is tremendous value in analysts interacting with customers who are using products – looking over their shoulders if they are able to do so. I’ve done that many times and often found that the reality of using a product is far different than the analyst briefing suggested.
As an amusing supporting anecdote, there was a certain CRM vendor back in the ’90s who took great pains to always stress not just how wonderful their products were, but also how they went to GREAT lengths to ensure their customers were all ‘100% satisfied’.
It was almost believable – but that same vendor made the key strategic mistake of selling their product to many analyst FIRMS – and by doing so they made it crystal clear that their claims were far from reality – and were mostly marketing fluff.
You can’t evaluate products without customer input. You can, however, evaluate them without hands-on testing.
Having said that, I always try to get an unsupervised hands-on opportunity to test, and usually get it. If a company doesn’t allow me to, that’s a bit suspicious. But like Tony said, the value you get out of testing really varies among technologies and products. I can test a search product on the staple corpora (Enron documents, a SharePoint VM, etc.), but that will only give me a feel for its efficacy. And I’m under no illusion I could simulate a 1,500-editor, high-bandwidth, customized WCM implementation. I talk to customers for that.
I do prefer the mix though. Demos are all smoke and mirrors and an elephant disappearing on stage. They usually have little bearing on customers’ reality.
For my research firm covering IP video surveillance, we do continuous testing of new products. The insights we get from our own testing are far superior to the fractured feedback that integrators and end users can provide to us. We continuously find important technological limitations that are never revealed through interviews and calls.
I do not have a position on whether analysts should be required to test products themselves. However, for our practice, we find it essential.
This post was mentioned on Twitter by carterlusher: Good comment by CMS Watch analyst @TonyByrne about the usefulness of hands-on testing for industry analysts http://bit.ly/26EetZ…
There are at least two firms doing hands-on testing, in the form of in-lab demos. Both are French (sorry); one is Le CXP ( http://www.cxp.fr )