Guest Blog: Rob Bamforth - Principal Analyst, Business Communications, Quocirca Ltd
In the last of our series of five blogs, we look at how organisations should measure their video adoption. "Measure what is important, don't make important what you measure" is a simple mantra, but one that is only sporadically followed, especially when it comes to measuring the return on IT or communications investment. Many technical indicators might be easy to measure, but do they really demonstrate true value to the business? It is also too easy to fall into the trap of thinking that 'more' must always be better, when 'enough' might actually be the optimum point.
When it comes to determining the value of investment in video conferencing, a key indicator is often 'minutes used', but given that video is used to cut down on travel, is usage time sufficient? Wouldn't a better indicator of value be 'miles saved'? It might, but anecdotal evidence suggests that this is hardly ever measured, even when it has been a primary driver for purchase.
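To make the idea concrete, 'miles saved' could be estimated directly from call records if each record noted the travel that remote participants avoided. The sketch below is purely illustrative: the record structure and distance figures are assumptions, not output from any real video conferencing system.

```python
# Hypothetical sketch: estimating 'miles saved' from video call records.
# Each record lists the round-trip miles each remote participant would
# otherwise have travelled to attend in person (assumed data, for illustration).
calls = [
    {"endpoint": "London-Boardroom", "participant_round_trips_miles": [120, 340]},
    {"endpoint": "Leeds-Huddle", "participant_round_trips_miles": [45]},
]

miles_saved = sum(
    miles
    for call in calls
    for miles in call["participant_round_trips_miles"]
)
print(miles_saved)  # total miles of travel avoided across all recorded calls
```

Capturing even a rough figure like this at booking time (e.g. asking participants where they would have travelled from) would turn 'miles saved' from anecdote into a measurable indicator.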
Quocirca research suggests that while the majority of those managing their company’s video conferencing were at least ‘satisfied’ with the return on investment (ROI), a quarter were only ‘somewhat satisfied’ and around 1 in 6 were not satisfied or simply ‘didn’t know’. Although this seems like damning with faint praise, the same research shows that the business value of video conferencing is rated highly - so why the ROI shortfall?
The problem is mainly a lack of measurement: most organisations could do with a better understanding of how much their video systems and facilities are being used, what they are being used for, and by whom.
The first stage should be to benchmark the existing installation down to the level of each individual video endpoint, checking how many calls it handles and how long each call lasts, over a period of at least a month. This could be correlated with further information, such as any errors recorded by the system, or user comments if logs of these are kept.
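The per-endpoint benchmarking described above amounts to simple aggregation over a call log. The sketch below assumes a hypothetical log of (endpoint, start time, duration) records; real systems will export different fields, so treat the names here as placeholders.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical call-log records for a one-month benchmarking window:
# (endpoint name, call start, duration in minutes). Field layout is assumed.
call_log = [
    ("Boardroom-1", datetime(2014, 5, 6, 9, 30), 45),
    ("Boardroom-1", datetime(2014, 5, 13, 14, 0), 30),
    ("Desk-Pod-3", datetime(2014, 5, 20, 11, 15), 20),
]

# Aggregate call count and total minutes per endpoint.
usage = defaultdict(lambda: {"calls": 0, "minutes": 0})
for endpoint, start, minutes in call_log:
    usage[endpoint]["calls"] += 1
    usage[endpoint]["minutes"] += minutes

# Report each endpoint's usage with an average call length.
for endpoint, stats in sorted(usage.items()):
    avg = stats["minutes"] / stats["calls"]
    print(f"{endpoint}: {stats['calls']} calls, "
          f"{stats['minutes']} min total, {avg:.0f} min avg")
```

Even this level of summary makes under-used rooms and over-subscribed systems immediately visible, and gives a baseline against which later adoption efforts can be judged.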
While error reporting (network glitches, packet loss or technical faults) is useful, it does not capture user perceptions, which may be positive as well as negative. Typically, only negative issues - i.e. when problems occur - will have been noticed by those managing the system, and this will not provide the complete picture.
At this stage it might be useful to survey users for their perceptions: what types of meetings are they using video for, how many participants are involved, and does it work well for some purposes but not others? From the recent research it appears that the main inhibitors to increased use are users' discomfort at being on camera (many prefer audio only) and systems being too complicated. However, all companies, office facilities and employees are different, so it pays to find out the specifics.
If usage levels appear low, this might highlight difficult-to-use systems or a lack of user training or awareness, but there might be other, very legitimate reasons for it. Differences between vertical markets' needs or regional geography can greatly affect usage levels, as can how the organisation is structured and how its teams are distributed.
Organisations might find they need help to grow adoption, or that they are actually already very progressive for their sector. Either way, a little external advice from someone who can provide comparative guidance on how other users are faring would be very useful.
Hopefully then, things of real business importance can be measured.
In case you missed the earlier blogs in this series, they are available here:
Positive behaviour and technology adoption
Conversations, not conferences - informal, not scheduled
Line of business reasons, not just meetings