Our approach has been working especially well lately. What I mean is, some non-analytics people are getting it, which is earning us credibility and patience. Patience --- that's a very valuable commodity around here when the source is the project managers and the clients. We're exposing the process a lot more, and they are understanding why measuring visit quality isn't done overnight.
Here is a brief outline.
Think of it as a spreadsheet. In fact, a spreadsheet is currently our best way of keeping it organized and grokkable.
First, we try to break down the web site in a meaningful way. The most common breakdown possibilities are by topic, by audience, by function, or by business unit within the owning organization. This is the first column of the spreadsheet.
Then we spend time with the consumers of the reports to find out what they want from their web site and, to an extent, from the business. The underlying question is "define success," but we learned early that "define success" is a terrible way to ask for what we want. So we ask them indirectly: why do they have the site/feature/site section/audience/function/topic/business unit? What do they want to happen? And so forth. It's often a messy conversation that flips from one type of breakdown to another, and it's usually up to us to either keep things on track or just give up on that and restructure it later.
These high-level success measure "ideas" are the second column. We now have a lot more rows than we started out with, of course.
Next, we take that discussion and the high-level success measures and operationalize them in terms of web site behavior. Usually, every vague measure from the discussion yields anywhere from one to maybe half a dozen operationalized pseudo-measures. Do I need to say that this is the third column of the spreadsheet?
At this point we circle back to the consumers to check on everything. The spreadsheet format is nice for that discussion.
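To make the shape of the worksheet concrete, here is a minimal sketch of its first three columns as plain Python data. Every name and measure in it is a hypothetical illustration, not the actual spreadsheet contents:

```python
# Columns 1-3 of the spreadsheet as a list of rows. All values are
# made-up examples; the real entries come out of the client discussions.
rows = [
    {
        "breakdown": "Support section",                       # column 1
        "success_idea": "Customers solve problems themselves",  # column 2
        "operationalized": "Visit reaches an FAQ answer page "
                           "without touching the contact form",  # column 3
    },
    {
        "breakdown": "Support section",
        "success_idea": "Customers solve problems themselves",
        "operationalized": "Session ends within two pages of the answer",
    },
]

# One high-level success "idea" fans out into several operationalized
# rows, which is why the row count grows at each step.
by_idea = {}
for row in rows:
    by_idea.setdefault(row["success_idea"], []).append(row["operationalized"])

for idea, measures in by_idea.items():
    print(idea, "->", len(measures), "operationalized measures")
```

The point of keeping it this flat is exactly what makes the spreadsheet work in the review meetings: every row reads on its own, top to bottom.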
Next comes a partial site inventory, and the fourth column. We get really familiar with the site at this point and attach specific pages or events to each of the operationalized definitions, to the extent possible. Here is where we start to find holes in the site, usability problems, things that aren't tracked, and so on. We often end up with a wall-size site diagram made of screen shots and bits of string showing links and paths, plus sticky notes and markups. We have a cache of very large foamcore sheets for this purpose. The result is primitive-looking but exceptionally useful. However, these spectacular art objects are mainly a method to get to the content that goes into that fourth column.
There's also a fifth column of notes about the holes, problems, and untracked things.
The sixth column contains comments on how the heck we are going to measure these, using whatever analysis tool we have. If it's WebTrends, this column talks about content groups, path analyses, filters, and so on. More rows.
The seventh column starts to build specifications or configurations for the WebTrends reports we need. If it's a content group, will it be defined with Regular Expressions, and what's the common alphanumeric string? What should be the exact name of the content group? The URLs we put in column four are consulted over and over. Sometimes we realize we might need to get the engineers to change a URL ... that goes into the comments.
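That "common alphanumeric string" step is easy to get wrong, so we test candidate patterns against the column-four URLs before anything goes into a spec. Here is a sketch of that check in Python; the content group name, the pattern, and the URLs are all made-up examples, not a real WebTrends configuration:

```python
import re

# Hypothetical content-group spec: does this Regular Expression carve
# out the right URLs from column four, and nothing else?
CONTENT_GROUP = "Product Pages"   # the exact name we'd settle on
PATTERN = re.compile(r"^/products/[a-z0-9-]+\.html$")

urls = [
    "/products/widget-2000.html",
    "/products/index.html",
    "/about/products-history.html",   # should NOT match
]

matched = [u for u in urls if PATTERN.match(u)]
print(CONTENT_GROUP, "matches:", matched)
```

A near-miss like `/about/products-history.html` is the kind of thing that turns into a comment about asking the engineers to change a URL.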
The eighth column sorts all those specs into WebTrends profiles as efficiently as possible. There's a lot of overhead for each profile so we try to keep the number down, but there are rarely fewer than four and often more than twenty profiles by the time we're done.
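The packing problem in column eight can be sketched very simply: fill each profile until it hits some practical limit, then start a new one. The cap below is purely illustrative, not a WebTrends constant, and the spec names are placeholders:

```python
# Hypothetical sketch of sorting specs into as few profiles as possible,
# assuming a made-up cap on how much one profile can reasonably carry.
MAX_SPECS_PER_PROFILE = 5           # illustrative limit only

specs = [f"content-group-{i}" for i in range(1, 13)]   # 12 made-up specs

profiles = []
for spec in specs:
    if not profiles or len(profiles[-1]) >= MAX_SPECS_PER_PROFILE:
        profiles.append([])         # per-profile overhead, so open sparingly
    profiles[-1].append(spec)

print(len(profiles), "profiles for", len(specs), "specs")
```

In practice the grouping is less mechanical than this, because some specs have to live together and some can't, but the goal is the same: keep the profile count down.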
Usually, columns seven and eight are done concurrently with actual WebTrends setup work and trial runs because, well, you learn a lot that way.
Right at this moment I'm doing a column 7-8 thing and I really have to get back to it.
I should add that column 7-8 stuff is best done on a WEEKEND, because it's quiet and free of distractions.