
Making Data Collection for Measurement Practical [AR Practitioner Question]

AR Metrics & Measurement

Question: How do you make data collection easier for AR measurement programs?

This question gets to the heart of the measurement challenge: if measurement is too difficult to do, it will not get done. Making data collection practical involves selecting the right mix of metrics, leveraging outside resources, and automating many tasks.

[This post focuses on the data collection aspects of an effective measurement program. The following related topics are therefore not addressed here: 1) picking and prioritizing the right metrics, 2) distinguishing between operational and performance metrics, and 3) using metrics to track performance against pre-defined goals. For discussion of these topics, please see the online SageContent™ Library series “Metrics and Program Measurement.”]

SageCircle Technique:

  • Selecting the Right Mix of Metrics. First, to make data collection practical, pick metrics that meet the measurement program’s goals (i.e. they track what you actually want to measure) and that are easy and cost-effective to generate (i.e. data collection requires minimal to moderate effort). Be clear about what you want to measure and collect only that data; keeping the burden low encourages AR team participation. However, do not reject metrics that initially appear difficult to collect, because new options for out-tasking and automation may make collection easier than you think.
  • Leveraging Outside Resources (Out-Tasking). Out-tasking is a variation of outsourcing, but instead of contracting out a significant part of your AR program (which SageCircle rarely recommends), this technique refers to contracting out a single activity or task. Out-tasking is especially appropriate for activities that are non-strategic, repetitive, and routine. Economies of scale permit third-party AR providers to invest in the tools, processes, and databases required to accomplish these tasks more efficiently. Data collection activities that are good candidates for out-tasking include: 1) surveys of analysts on AR effectiveness, 2) press quote collection, 3) written research analysis, and 4) basic analyst information tracking and storage.
  • Automating Some Tasks. Automation can make data collection, especially for operational metrics, as simple as creating a query or running a standard report. For example, a common operational metric is the AR team’s number of analyst interactions. Traditionally, collecting this data meant searching calendars, finding Post-it notes, and combing through spreadsheets. AR teams can automate the task by using an integrated program management application to log analyst interactions as they occur; the interaction-count metric can then be generated in a minute with a couple of mouse clicks (a minimal scripted sketch follows below).
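For teams whose program management application can export the interaction log, even a short script can turn raw entries into the interaction-count metric. The sketch below is illustrative only: it assumes a hypothetical CSV export with date, analyst, firm, and interaction_type columns, and is not tied to any particular AR tool.

```python
# Minimal sketch: counting analyst interactions for a given quarter.
# Assumes a hypothetical CSV export ("interactions.csv") from the AR team's
# program management tool with columns: date, analyst, firm, interaction_type.
import csv
from collections import Counter
from datetime import datetime

def interaction_counts(csv_path: str, year: int, quarter: int) -> Counter:
    """Return the number of logged interactions per analyst firm for one quarter."""
    start_month = 3 * (quarter - 1) + 1
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            when = datetime.strptime(row["date"], "%Y-%m-%d")
            if when.year == year and start_month <= when.month < start_month + 3:
                counts[row["firm"]] += 1
    return counts

if __name__ == "__main__":
    by_firm = interaction_counts("interactions.csv", year=2016, quarter=2)
    print("Total analyst interactions:", sum(by_firm.values()))
    for firm, n in by_firm.most_common():
        print(f"  {firm}: {n}")
```

Once a report like this is wired to the tool’s export, the same data can feed other operational metrics (interactions by analyst, by firm tier, or by interaction type) without extra collection effort.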

Bottom Line: To collect and analyze the data for an effective measurement program, AR needs to be practical about the effort required to gather and report on metrics. You can increase the success of your measurement program by carefully selecting your metrics and by out-tasking or automating their collection and reporting.

Question: Which of these data collection techniques do you use?
