Sunday, January 30, 2011
What, exactly, do you consider a SEARCH ENGINE?
Consider this list:
Amazon
EBay
ShopZilla
Info.com
YellowPages.com
Bizrate
Target.com
Comcast.net
Do you want them in your search-engine reports, or kept out?
Do you want to be paying for text ads on them as part of your paid search program, or not?
All of them have a search function. They may display results from their own site, or paid search ads, or organic results from the internet at large. Or all three.
In short, the search-engine reports of our web analytics tools are messy because "search engine" just isn't easily definable these days. A few of the flavors:
- Really-truly search engines, where people go to search the whole internet. I think we'd all agree that Google is one.
- Shopping search engines, which also have value as price/feature comparison engines. ShopZilla and Nextag qualify for this.
- Primarily within-site search engines that happen to show related paid content. Target.com is definitely in this category ... the paid results may lead you to buy something at another store, but Target seems willing to take that risk. Same for Amazon. They've probably done the cost-benefit math.
- Same thing as the above, but you have to submit an XML file to get listed. Froogle comes to mind, may it rest in peace.
- Sites with a whole other purpose, of which search is a tiny part. (Have you ever noticed that if you do a search on Facebook, below all the FB-related hits is a set of organic results powered by Bing?)
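Whichever buckets you settle on, somebody has to make the call explicit, domain by domain. Here's a minimal sketch, in Python, of what encoding that decision might look like - the domains and category labels are my own illustration, not any analytics tool's actual configuration:

```python
# A sketch, not any tool's real configuration: making the "is this a search
# engine?" call explicit per referrer domain. The category assignments are
# one person's opinion, which is exactly the point.

REFERRER_CATEGORIES = {
    "google.com":    "search engine",                     # the easy case
    "shopzilla.com": "shopping search engine",
    "nextag.com":    "shopping search engine",
    "amazon.com":    "retail site with search",           # in or out of the report?
    "target.com":    "retail site with search",
    "facebook.com":  "other purpose, search on the side",
}

def classify_referrer(domain: str) -> str:
    """Return a category for a referring domain, or 'unclassified'."""
    return REFERRER_CATEGORIES.get(domain.lower().removeprefix("www."), "unclassified")

if __name__ == "__main__":
    for d in ("www.Google.com", "amazon.com", "info.com"):
        print(d, "->", classify_referrer(d))
```

The point isn't the code; it's that a table like this has to exist somewhere, and every row in it is a judgment call.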
Monday, January 12, 2009
The tough task of operationalizing
Yes, that’s THE question, and as Deborah (the author) said, the answer is usually pretty broad, just a starting point. More traffic, more sales, more customer engagement (ugh) are typical answers.
The FOLLOW-UP question is, for me, a lot more difficult. The conversation usually branches after the first question, into 1) "Okay, but what does that mean?" or 2) "Well, aside from that, what kinds of things indicate partial success, or probable future success?" And the one that seems to be the killer: "Which (of these two measures that we've worked out in the past hour) is more important?"
Getting people to operationalize, i.e. turn their generalities into specifics that are fairly finite and objective, is challenging and requires a really experienced person in the room (hopefully it's the metrics person). For a metrics person, the task is to turn a big generality into one or more littler, more objective ones, and continue that process until something pops out that can somehow be turned into reliable data and quantified. Sometimes the "data" version is right at the top level. Sometimes it's several levels removed. Sometimes it's nowhere. And often (another topic) it leads to surveys (because it's really just opinions) or crossing to other channels.
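To make the exercise concrete, here's a toy sketch of where "more customer engagement" might land after a couple of rounds of decomposition. Every component metric and weight below is hypothetical; agreeing on them is exactly the hard part of the session:

```python
# A toy sketch of the end state of operationalizing "more customer engagement."
# The component metrics and weights are hypothetical placeholders; arguing
# them out is the hard, valuable part of the working session.

def engagement_score(visits: int, repeat_visits: int,
                     pages_viewed: int, comments_posted: int) -> float:
    """One fuzzy goal, collapsed into a number we can track month over month."""
    if visits == 0:
        return 0.0
    repeat_rate = repeat_visits / visits      # loyalty
    depth = pages_viewed / visits / 10        # browsing depth, scaled toward 0-1
    participation = comments_posted / visits  # user-generated content
    return 0.5 * repeat_rate + 0.3 * depth + 0.2 * participation

if __name__ == "__main__":
    # e.g., 1000 visits, 350 from returning visitors, 4200 page views, 90 comments
    print(round(engagement_score(1000, 350, 4200, 90), 3))
```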
Surprisingly often, these “define success” working sessions for web analytics are the first time that the people in the room have actually had to work through, in their own minds, what they are after and how they’ll know it when they see it. Seriously. It’s amazing to me how people can do their jobs without operationalizing their goals, but they do it every day, all day, and seem successful.
The flip side of operationalizing is living with ambiguity, which is a valuable skill in itself. The curse of a mentality that likes to operationalize is the danger of being compelled to operationalize. "What do you mean by 'I love you'?"
One reason I like being in analytics is because the metrics person is often the one who pushes others to do this kind of mental exercise. In fact, I’ve seen metrics people labeled as “pests” about it (by those who are comfortable with ambiguity) … eventually accompanied by “that session was hard but it was incredibly helpful and productive.”
It’s pretty neat when, in one of these sessions, we reach a point where the operationalized objectives start to easily turn into actions … “we need to do more of such and such because it’s clearly related to an important objective” … “we have to think about whether we should be doing such and such because its relationship to the objectives is really pretty thin; it's cool but coolness isn’t enough.” Or, simply, "we won't be able to measure that unless we do such and such." It is NOT easy to get to this point in the discussion.
I dislike, I mean REALLY dislike, multi-syllabic jargon but “operationalize” is one to keep.
Friday, October 24, 2008
The few blogs I like
http://www.mymotech.com/
the MymoTech blog - I have no idea what "mymo" means but this blogger, Michael Helbling, is awfully good at telling it like it is ... when he actually posts, that is. His description of his experience with the TCO (Total Cost of Ownership) of Google Analytics is one of the few done anywhere and it lays it out beautifully.
http://thebigintegration.com/
The Big Integration - one of two blogs by Jacques Warren, who consistently seems to add something to the general conversation. I give his blogs points for a fresh perspective almost every time. In other words, his writing tends to yank my leash a little bit.
http://webtrendsoutsider.com/
The WebTrends Outsider - written by a group of non-WT employees who act like product insiders. Clearly there is a ton of WT experience here, an ability to think outside the WT box, and some kind of crazy urge to help people. And a little bit of poking at WebTrends. Somebody's gotta do that.
http://semphonic.blogs.com/semangel/
Gary Angel's blog. The postings tend to be really long and multi-part. But he's the most original thinker in analytics, IMHO. Not because he's coming up with original ideas (he does that), but because he pulls existing concepts and thoughts from other fields and gets them to work with web analytics. His fault, if any, is in not referring to those other fields, thereby maintaining our delusion that we are doing something special and unique. I think a lot of web analytics people are bottom-up --- they migrated into analytics and flourished there. I have a feeling that Gary Angel is one of those people who was already flourishing in other, bigger areas and migrated to analytics, where he overlays intelligent adaptations of what he has absorbed elsewhere. It takes a special kind of mind to be able to do that. I save his blog for when I have the time to read it more than once. It makes me walk around to other people's offices at work saying "hey, you know, we oughta ..."
http://www.bruceclay.com/blog/archives/analytics/
The Bruce Clay Inc. analytics blog. When I grow up I want to work there, because they probably know everything. If I can keep up with their blogs (analytics, search stuff ...) then I feel like I have a chance of keeping up, period.
http://blog.webanalyticsdemystified.com/weblog/
Eric Peterson's blog. Eric has two things going for him that put his blog on this list. 1) He's very, very smart, which means he thinks and writes really well. 2) He's a true insider in the industry, possibly even "an elder statesman," and deservedly so. #1 is good enough all by itself. #2 adds the spice - early knowledge, the confidence to stir up some controversy (in a good way), and a great ability to see things with seasoned perspective. His recent take on Google Analytics is a perfect example.
http://www.minethatdata.blogspot.com/
Kevin Hillstrom's blog. He comes out of the catalog world and, please believe me on this, the catalog world is incredibly important, and different. And old. He knows data and selling, writes wonderfully, and doesn't pull punches. My new go-here-first blog as of February 2009.
Sunday, August 24, 2008
Google Trends for web sites? No thanks
Thursday, February 21, 2008
One True Thing
There's another piece - the clients and the fact that, as an agency, we have a variety of them.
This means we also have a variety of types of relationships with those clients in the analytics corner of the business. Our "deliverables" run the gamut:
- setting up their reporting tool and letting them look at the tool's output (with ongoing help as needed),
- delivering Excel scorecards of varying degrees of detail that allow them to skip looking at the tool's output,
- delivering the above plus written interpretation or summary or recommendations,
- deep-dives into a single, one-time analysis,
and my favorite:
- the free-ranging relationship that has come to be known here as "One True Thing" reporting.
I want to talk about the last one because it's just too enjoyable to keep to myself.
The name "One True Thing" comes from a smart and creative client person with a limited budget who wants it all. He appreciates that the detailed scorecard is important but wants to skip the other time-consuming deliverables while still getting at important stuff that he can actually use, whether to influence decisions or impress his organization. (Hey, both of these are really important!)
He said, "If I can get just One True Thing every month out of analytics, something that's interesting and helpful, then the whole effort is worth the money we put into it."
What resulted was very simple. We have a four-person phone call every month, a few days after the detailed scorecard is delivered. By the time of the call, he's looked at the data we send and may have questions or head-scratching puzzlers. He and his site manager also talk about conundrums they're dealing with in his business (where analytics might help), or recent events and changes for which they want to measure effectiveness. On our end, we might talk about new metrics or methods we've been playing with.
He and his site manager know his business needs, I know how to express his issues as analytics problems (I hope), and in addition our engagement manager and I each have tons of web experience so we throw that into the discussion as well. There's a lot of "what the heck," "I wonder why," and "wouldn't it be cool if" going on, and by the end of the phone call we've scoped a "One True Thing" topic that our analytics team will investigate before the next phone call. And, of course, we go over the outcome from the last One True Thing question, usually with just an informal email and some graphs instead of a formal report (saves billable time). The best part is if the analysis gets them to say "That's great! Just what we needed!"
A lot of mutual education goes on. We've all gotten better at these calls over time. The two of them have become more analytics-savvy and for our part we understand their business better than ever before. They are getting a lot for their money and we are getting to poke into analytics questions that we wouldn't have thought of. Along the way we get to invent new analyses and maybe add them to our overall analytics practice. We're also proposing this kind of relationship to other clients and the One True Thing Phone Call has become a bit of a standard. Or at least we'd like it to be.
The web analytics world tends to be a little too formal, IMHO, emphasizing quantity of data and dashboards as our output when what might help most of all would be these kinds of conversations that stress questions, answers, and insights without running up a big bill. Obviously, the point of this blog entry is to suggest that more people get back to basics like this. Partly because it's a good thing in itself, and partly because it's so much fun.
Saturday, January 19, 2008
Web 2.0 and usability
I just discovered a provocative piece on Web 2.0 by Jakob Nielsen, the sometimes controversial usability pundit.
His main point: “If you focus on over-hyped technology developments, you risk diverting resources from the high-ROI design issues that really matter to your users …”
Early in the article, he provides a working definition of Web 2.0 as a structure for the rest of his article:
- "Rich" Internet Applications (RIA)
- Community features, social networks, and user-generated content
- Mashups (using other sites' services as a development platform)
- Advertising as the main or only business model
The rest of the article is about the usability risks for each of these facets of Web 2.0, in the context of five kinds of web sites – marketing, e-comm, media, intranets, web applications.
Whether you agree or disagree, it's worth reading. My main reaction was a big hurray, simply because of the refreshing clarity with which he defined his terms and deconstructed the issue. He's a hero to me just for that reason (another writer who jazzes me in the same way is Gary Angel over at Semphonic). If I could find more people who work within a logical structure when they write, I would be a lot less confused in general.
It's almost a footnote that I agreed with most of his conclusions.
http://www.useit.com/alertbox/web-2.html
Monday, December 31, 2007
The DoubleClick bind
The bind becomes an issue with people who are invested in spending money on interactive advertising. They justifiably want to take the DoubleClick numbers at face value, because those numbers make the clickthrough rate look a lot better.
We're trying to look into it, with a lot of help from WebTrends custom reports, server logs, SDC logs, and Google Analytics as a sort of backup.
We know this:
- DFA's numbers are way higher than the tag-based data, as much as 100% higher.
- DFA's numbers are very similar to the numbers in server logs.
- The two tag-based reports, WebTrends SDC and Google Analytics, agree with each other but are usually only half the size of the DFA stats and server logs.
When we carefully compare individual page view events between server logs and SDC (tag-based) logs, we can separate out the "extra" events that don't show up in the tagged logs. Close examination does NOT show a pattern of repeated user agent or IP information. There's no obvious evidence of one or two entities doing a lot of clicking. No easy answer about bots or cottage-industry clickfraud.
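For the curious, the matching itself is conceptually simple. A rough sketch of the comparison, assuming the logs have already been parsed into records with page, IP, user agent, and timestamp fields (the field names and the five-second tolerance are illustrative, not our actual setup):

```python
# A rough sketch of the server-log vs. tag-log comparison. Parsing is elided;
# assume each log is a list of dicts with keys "page", "ip", "user_agent",
# and "timestamp" (a datetime). Field names and tolerance are illustrative.
from collections import Counter, defaultdict

def extra_server_events(server_log, tag_log, tolerance_secs=5):
    """Server-log page views with no tag-log event for the same page/IP nearby in time."""
    tagged = defaultdict(list)
    for e in tag_log:
        tagged[(e["page"], e["ip"])].append(e["timestamp"])
    extras = []
    for e in server_log:
        nearby = tagged.get((e["page"], e["ip"]), [])
        if not any(abs((e["timestamp"] - t).total_seconds()) <= tolerance_secs
                   for t in nearby):
            extras.append(e)
    return extras

def diversity_report(extras, top_n=5):
    """Check for the repeated-UA/IP pattern that would suggest a few busy bots."""
    for field in ("user_agent", "ip"):
        print(field, Counter(e[field] for e in extras).most_common(top_n))
```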
However, we are finding one interesting thing that points to a possible explanation, or at least a corroboration of the notion that DoubleClick's numbers contain a lot of non-humans. DoubleClick hits are, as you know, marked with extra parameters. So, a given banner destination page will have, in the logs, some hits with DC marker parameters (coming from banner clicks) and other hits without those marker parameters (coming from non-banner sources). We did separate analyses of these two groups - only analyzing hits that were also first hits in a visit.
We found that if the hits had DC marker parameters, the discrepancy between server and SDC/Google logs was a huge one - in the up-to-100%-more range. But if the hits did NOT have DC marker parameters, the discrepancy was really quite small --- server and SDC/Google more or less had the same numbers. In other words, the hits that appeared in server logs but did not appear in tag-based logs were mostly banner hits.
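The split itself is just a partition on the landing URL's query string, something like the sketch below. The parameter name "dc_marker" is a placeholder, not DoubleClick's actual parameter:

```python
# A sketch of the split described above: partition first-hits-of-visit by
# whether the landing URL carries DoubleClick marker parameters, then compare
# server-log vs. tag-log counts in each group. "dc_marker" is a placeholder
# name, not DFA's actual parameter.
from urllib.parse import urlparse, parse_qs

def has_dc_marker(url: str, marker: str = "dc_marker") -> bool:
    return marker in parse_qs(urlparse(url).query)

def group_discrepancy(server_first_hits, tag_first_hits):
    """Excess of server-log hits over tag-log hits, for marked vs. unmarked."""
    for marked in (True, False):
        s = sum(1 for h in server_first_hits if has_dc_marker(h["url"]) == marked)
        t = sum(1 for h in tag_first_hits if has_dc_marker(h["url"]) == marked)
        excess = (s - t) / t * 100 if t else float("nan")
        print(f"DC-marked={marked}: server={s}, tag={t}, excess={excess:+.0f}%")
```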
We all know that there are several reasons for an event to appear in server logs but not in tag logs. Two big ones are: 1) an entity that does not execute JavaScript will not appear in tag logs (or rather, the identity of the page being requested won't, although the hit will still register as a site hit), and 2) an entity that does not request images will not appear in tag logs at all, since the tag itself is an image request.
It looks to us like a big proportion of clicks on banners are by entities that don't request images or don't execute javascript - basically, bots.
Someday, I'll talk to an insider who understands what's going on and who runs these bots and why. Could be legitimate, or not.
In the meantime, I'm continuing to say that the WebTrends SDC/Google Analytics numbers for banner traffic are the ones to trust, because they are probably the human traffic. DoubleClick numbers contain vast amounts of non-human traffic.