
Lack of hands-on testing for products by industry analysts [AR Practitioner Question]

QUESTION: It disturbs me that analysts don’t have hands-on experience with the products they advise their clients about. How can they be credible?

ANSWER: Analysts, especially at the larger end-user-centric firms like Gartner and Forrester, have access to what they would claim is the world’s largest testing lab: their clients. Analysts covering hot product areas, both software and hardware, hear from scores of clients every month who are evaluating products or using them in production. As a consequence, the analyst firms claim that they can develop a series of data points that quite accurately portrays the quality of a vendor’s products.

Note: This is a classic SageCircle article from our pre-blogging newsletter; this particular article appeared in March 2001. It is being brought into the blogging era because ZL Technologies, in its lawsuit against Gartner, criticizes Gartner for not doing “a single minute of independent testing of the products it purports to evaluate” (page 9 of the original court filing).

SageCircle Technique:

  • Vendors can help themselves by ensuring that their product evaluation programs for prospects are top-notch. This prevents prospects from getting frustrated during evaluations and pilots and passing that frustration on to their favorite analysts.
  • Vendors should arrange reference calls between analysts and their happy customers. The analysts are already hearing from disgruntled users, so vendors need to proactively balance that view with input from satisfied customers.

Bottom Line: Hands-on testing, especially of large enterprise-grade hardware and software, has never been part of the advisory analyst business model and research methodology. Because advisory analysts rely on hearing about their end-user clients’ experiences with products and services, vendors should make sure analysts are provided with a high volume of customer case studies, informal customer stories, customer references, and calls from happy customers.

Question: AR – Do you have a formal customer reference program?

9 Responses

  1. Carter-

    It depends on the technology and the marketplace. For some technologies, such as search, hands-on testing is plausible and often very revealing. Same for broad but shallow platforms like SharePoint (though that is changing). The analyst has to be careful, though, to balance their personal experience with that of customers who have used the tool in sustained combat conditions.

    For most other “enterprisey” software segments, my experience is that it is next to impossible to simulate in a fair and accurate way how all the different types of potential customers may use the tool.

    If there is a caveat, it is that SaaS-based delivery models, with their emphasis (for better or worse) on configuration over customization, are giving us new opportunities to get hands-on with the tools. I think that’s healthy.

  2. Tweet by Gordon Haff of Illuminata:

    @ghaff Think it’s [the] case [we] see less systematic hands-on testing by pubs etc. also. Biz models don’t support [it].

  3. Tweet by Ron Shevlin of Aite Group:

    @rshevlin “hands-on testing” is hardly a panacea to the challenges with analyst evaluations. it’s a weak argument on part of vendors.

  4. Tweet by Jonathan Yarmis of Ovum/Datamonitor:

    @jyarmis it may be ok for enterprise software but can you justify it for end-user solutions? you know i love to play🙂

  5. Adding to Tony’s points, I’d also say that analysts are very often briefed before products are commercially available (pre-briefed) – so ‘testing’ a product that’s unfinished and unavailable isn’t really valid.

    In addition, most analysts aren’t particularly well qualified to truly ‘test’ products anyhow – most of them are writers and not necessarily (or at all) technical – there is a big difference. Plus no PR/AR person is EVER going to allow an uncontrolled/unsupervised ‘test’.

    Having said that, there is tremendous value in analysts interacting with customers who are using products – looking over their shoulders if they are able to do so. I’ve done that many times and often found that the reality of using a product is far different from what the analyst briefing suggested.

    As an amusing supporting anecdote, there was a certain CRM vendor back in the ’90s who took great pains to stress not just how wonderful their products were, but also how they went to GREAT lengths to ensure their customers were all ‘100% satisfied’.

    It was almost believable – but that same vendor made the key strategic mistake of selling their product to many analyst FIRMS – and by doing so they made it crystal clear that their claims were far from reality and were mostly marketing fluff.

    • You can’t evaluate products without customer input. You could without hands-on testing.

      Having said that, I always try to get an unsupervised hands-on opportunity to test, and I usually get it. If a company doesn’t allow me to, that’s a bit suspicious. But like Tony said, the value you get out of testing really varies among technologies and products. I can test a search product on the staple corpuses (Enron documents, a SharePoint VM, etcetera) but that’ll only give me a feel for its efficacy. And I’m under no illusion I could simulate a 1,500-editor, high-bandwidth, customized WCM implementation. I talk to customers for that.

      I do prefer the mix though. Demos are all smoke and mirrors and an elephant disappearing on stage. They usually have little bearing on customers’ reality.
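
      As a rough illustration of what that kind of quick efficacy check can look like, here is a minimal precision-at-k sketch in Python. Everything in it is hypothetical: the search client stand-in, the document IDs, and the relevance judgments are made up for illustration and don’t reflect any particular product’s API.

      # Minimal precision@k sketch for a hands-on search efficacy check.
      # All names are hypothetical: `fake_search` stands in for whatever
      # ranked-results call the product under test actually exposes.
      from typing import Callable

      # Hand-labeled judgments: query -> set of document IDs judged relevant.
      JUDGMENTS: dict[str, set[str]] = {
          "energy trading contracts": {"enron/001.txt", "enron/017.txt"},
          "quarterly revenue forecast": {"enron/042.txt"},
      }

      def precision_at_k(search: Callable[[str], list[str]], k: int = 10) -> float:
          """Average precision@k of `search` over the labeled queries."""
          scores = []
          for query, relevant in JUDGMENTS.items():
              top_k = search(query)[:k]
              hits = sum(1 for doc_id in top_k if doc_id in relevant)
              scores.append(hits / k)
          return sum(scores) / len(scores)

      if __name__ == "__main__":
          # Stand-in for a real product's query API (hypothetical).
          def fake_search(query: str) -> list[str]:
              return ["enron/001.txt", "enron/099.txt", "enron/017.txt"]

          print(f"precision@10: {precision_at_k(fake_search):.2f}")

      Even a toy harness like this only gives a relevance signal on a small corpus; it says nothing about how a product behaves at the scale of that 1,500-editor WCM deployment, which is exactly why the customer conversations still matter.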

  6. For my research firm covering IP video surveillance, we do continuous testing of new products. The insights we get from our own testing are far superior to the fractured feedback that integrators and end users can provide to us. We continuously find important technological limitations that are never revealed through interviews and calls.

    I do not have a position on whether analysts should be required to test products themselves. However, for our practice, we find it essential.

  7. Social comments and analytics for this post…

    This post was mentioned on Twitter by carterlusher: Good comment by CMS Watch analyst @TonyByrne about the usefulness of hands-on testing for industry analysts http://bit.ly/26EetZ

  8. There are at least two firms doing hands-on testing, in the shape of in-lab demos. Both are French (sorry); one is Le CXP ( http://www.cxp.fr )
