
Thursday, May 23, 2013

Analytics - Measuring data relating to user information - Part 5

This is a series of posts on measuring user information and analyzing the resulting data. In a previous series of posts, we looked at some examples of data measurement and how to use such analysis for decision making. In the previous post (Measuring data relating to user information - Part 4), I looked at the other side of gathering data: one can go overboard and start collecting information that would land the product and the organization in legal trouble and draw protests from consumers. One always needs to ensure that the data being collected has been cleared by the legal team, or by others authorized to confirm that the collection does not go beyond what is legally permitted.
This post covers a common experience for people in the business of collecting user information and then trying to make sense of it. Even large companies face the same problem: not doing anything with the data. This sounds strange, but it happens a lot. Recently I had a discussion with an analytics consultant who has been in the field for a decade and a half, working both for organizations and independently. The biggest problem he faced was getting people to commit, whether to providing enough resources to do the data collection, or, even more, to analyzing the data to make informed decisions.
I have seen this problem myself. The software applications we worked on had a large amount of instrumentation built in to collect data on many different parameters. This was done after careful design by a team comprising the product manager, developers, and testers, including a design for what to do with the data once it was collected after the product's release. However, it should be well understood as part of the design for analytics that effort is needed to analyze the data and make sense of it. And this is where the biggest problem was: resourcing was always a constraint, since analytics was never a priority over product features, and unless that attitude changes, things will never change.
So, what used to happen? All the data would be collected into a huge database of information for each version (one that would grow over time as we added data collection for more features in the application), but at most we would take one set of data and try to make sense of it.
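To make the gap concrete, even a first pass over collected usage events is not much work. Here is a minimal sketch (the event format, feature names, and version string are all hypothetical, not from any actual product mentioned in this post) of the kind of basic report that often never got produced:

```python
import json
from collections import Counter

# Hypothetical instrumentation events, one JSON object per line,
# as a product might emit them after each feature use.
events = [
    '{"feature": "export", "version": "5.0"}',
    '{"feature": "export", "version": "5.0"}',
    '{"feature": "print", "version": "5.0"}',
]

# The analysis step that teams tend to skip: count how often each
# feature is used, so the report can inform the next version's plan.
usage = Counter(json.loads(line)["feature"] for line in events)

for feature, count in usage.most_common():
    print(f"{feature}: {count}")
```

Even a crude summary like this gives the team something to discuss when prioritizing the next version, which is exactly the feedback loop that goes missing when no one is resourced to look at the data.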
The team could see this too, and hence, the next time a discussion was called to figure out what more data needed to be collected and which reports would be useful, there would be less interest from the team in being involved in the analytics for the ongoing version. This was becoming a big problem, but there was no clear solution. The pressure of resourcing vs. feature work was not going to go away, and unless analytics was given the same priority as feature work, things would not really change. This is where teams, and especially their managers, need to be more immersed in what can be done through analytics, primarily its advantages for business decision making.

Read more about this in the next post. (Analytics - Measuring data related to user information - Part 6)

