
Lack of hands-on testing for products by industry analysts [AR Practitioner Question]

QUESTION: It disturbs me that analysts don’t have hands-on experience with the products they advise their clients about. How can they be credible?

ANSWER: Analysts, especially at the larger end-user-centric firms like Gartner and Forrester, have access to what they would claim is the world’s largest testing lab: their clients. Analysts covering hot product areas, both software and hardware, hear from scores of clients every month who are evaluating products or using them in production. As a consequence, the analyst firms claim they can develop a series of data points that quite accurately portray the quality of a vendor’s products.

Note: This is a classic SageCircle article from our pre-blogging newsletter; this particular article appeared in March 2001. It is being brought into the blogging era because ZL Technologies, in its lawsuit against Gartner, criticizes Gartner for not doing “a single minute of independent testing of the products it purports to evaluate” (page 9 of the original court filing).

SageCircle Technique:

  • Vendors can help themselves by ensuring that their product evaluation programs for prospects are top notch. This prevents prospects from getting frustrated during evaluations and pilots and passing this frustration on to their favorite analysts.
  • Vendors should arrange for reference calls between analysts and their happy customers. The analysts are already hearing from disgruntled users of products, so vendors need to proactively balance this view with satisfied customers.

Bottom Line: Hands-on testing, especially of large enterprise-grade hardware and software, has never been part of the advisory analyst business model and research methodology. Knowing that advisory analysts rely on hearing about their end-user clients’ experiences with products and services should spur vendors to make sure analysts are provided with a high volume of customer case studies, informal customer stories, customer references, and calls from happy customers.

Question: AR – Do you have a formal customer reference program?

9 Responses

  1. Carter-

    It depends on the technology and the marketplace. For some technologies, such as search, hands-on testing is plausible and often very revealing. Same for broad but shallow platforms like SharePoint (though that is changing). The analyst has to be careful, though, to balance their personal experience with that of customers who have used the tool in sustained combat conditions.

    For most other “enterprisey” software segments, my experience is that it is next to impossible to simulate in a fair and accurate way how all the different types of potential customers may use the tool.

    If there is a caveat, it is that SaaS-based delivery models, and their emphasis (for better or worse) on configuration over customization, is giving us new opportunities to get hands-on with the tools. I think that’s healthy.

  2. Tweet by Gordon Haff of Illuminata:

    @ghaff Think it’s case see less systematic hands-on testing by pubs etc. also. Biz models don’t support.

  3. Tweet by Ron Shevlin of Aite Group:

    @rshevlin “hands-on testing” is hardly a panacea to the challenges with analyst evaluations. it’s a weak argument on part of vendors.

  4. Tweet by Jonathan Yarmis of Ovum/Datamonitor:

    @jyarmis it may be ok for enterprise software but can you justify it for end-user solutions? you know i love to play🙂

  5. Adding to Tony’s points, I’d also say that very often analysts are briefed before products are commercially available (pre-briefed) – so ‘testing’ a product that’s unfinished and unavailable isn’t really valid.

    In addition, most analysts aren’t particularly well-qualified to truly ‘test’ products anyhow – most of them are writers and not necessarily (or at all) technical – there is a big difference. Plus no PR/AR person is EVER going to allow an uncontrolled/unsupervised ‘test’.

    Having said that, there is tremendous value in analysts interacting with customers who are using products – looking over their shoulders if they are able to do so. I’ve done that many times and often found that the reality of using a product is far different than the analyst briefing suggested.

    As an amusing supporting anecdote, there was a certain CRM vendor back in the ’90s who took great pains to always stress not just how wonderful their products were, but also how they went to GREAT lengths to ensure their customers were all ‘100% satisfied’.

    It was almost believable – but that same vendor made the key strategic mistake of selling its product to many analyst FIRMS, and by doing so it made crystal clear that its claims were far from reality, and mostly marketing fluff.

    • You can’t evaluate products without customer input. You can, however, evaluate them without hands-on testing.

      Having said that, I always try to get an unsupervised hands-on opportunity to test, and usually get it. If a company doesn’t allow me to, that’s a bit suspicious. But like Tony said, the value you get out of testing really varies among technologies and products. I can test a search product on the standard corpora (Enron documents, a SharePoint VM, etcetera), but that will only give me a feel for its efficacy. And I’m under no illusion that I could simulate a 1,500-editor, high-bandwidth, customized WCM implementation. I talk to customers for that.

      I do prefer the mix, though. Demos are all smoke and mirrors and an elephant disappearing on stage. They usually have little bearing on customers’ reality.

  6. For my research firm covering IP video surveillance, we do continuous testing of new products. The insights we get from our own testing are far superior to the fractured feedback that integrators and end users can provide to us. We continuously find important technological limitations that are never revealed through interviews and calls.

    I do not have a position on whether analysts should be required to test products themselves. However, for our practice, we find it essential.

  7. Social comments and analytics for this post…

    This post was mentioned on Twitter by carterlusher: Good comment by CMS Watch analyst @TonyByrne about the usefulness of hands-on testing for industry analysts http://bit.ly/26EetZ

  8. There are at least two firms doing hands-on testing, in the shape of in-lab demos. Both are French (sorry); one is Le CXP ( http://www.cxp.fr )
