Archive for the 'recommendations' Category

Selling recommendation data as a business model

Pelago/Whrrl CEO Jeff Holden made some interesting comments in a discussion with Greg Sterling.  First, he talked about the opportunity for recommendations in local search:

He discussed how this might apply to the emerging arena of “footstream” data and recommendations for people and places in the real world on mobile devices. Tracking mobile user behavior yields lots of data about the types of places users go and their real-world behavior. This hypothetically could deliver local recommendations based on user profiles and corresponding “footstreams.”

Given Jeff’s background at Amazon and the nature of the Whrrl application (which has now been somewhat re-positioned), I always thought this was the ‘end-game’.

But perhaps even more interesting is the possibility of making money from the acquired data in a more direct way:

Footstream tracking of individuals would have to be personal by necessity (with all the potential privacy questions), but the local recommendations Holden spoke about could be provided anonymously to users who are grouped into certain profiles based on their favorite places and activities in the real world.

He hinted that there might be an emerging business model here for Pelago as a repository and provider of this type of data for other publishers and sites.

I can imagine this raising some interesting privacy discussions, but it seems entirely logical – perhaps even inevitable.  And it’s a tangible indication of the value of collecting such usage data!

RELATED:

Publishers: Your Usage Data is More Valuable than Your Content


Thinking holistically about local search

Emerging mobile and social applications are changing the way we find local information from a search paradigm to a recommendation paradigm.  Just this week we saw the announcement of several new products promoting this shift – which Greg Sterling reflects on in this post.   And I agree with Greg that in some ways we have almost come full circle:

The underlying consumer behavior is simply asking for word of mouth recommendations and is as “old as the hills.” But the ability to efficiently ask many people for advice or a local business referral at once online is new. Reviews were step one; the combination of quasi-real time answers and social networks is an evolution of that phenomenon.

We’re seeing many different approaches to capturing and sharing opinions — and people vigorously debating the merits of these approaches.  Is it better to have lengthy, insightful reviews or should you just have a simple rating or voting system so you get more participation?  Can you just ask your friends?  Is an answer format better than a review format?

It’s going to be great to see how it all evolves – exciting times!

I believe a holistic and inclusive approach will be needed.  Perhaps the greatest challenge in local information is to achieve sufficient depth and breadth to provide truly meaningful recommendations at the local level.  A modest-sized city has tens of thousands of businesses.  This means you need millions of points of view in order to fairly represent the different needs and preferences of consumers.  Simply put: you need active participation from a large population of local users.

This has several practical implications:

  1. You need to accommodate the different ways users want to interact with local information, but still be able to aggregate this information in meaningful ways.
  2. We can’t afford to ignore the implicit signals provided by all users.  These signals include the things they search for, the maps they request and the businesses they call.  Research on movie recommendations published by participants in the Netflix Prize clearly shows that this kind of implicit data is critical to creating high-quality recommendations (see the sketch after this list).
  3. A small percentage of participants will create the majority of the explicit opinions – the silent majority still needs a way to find and evaluate opinions that are consistent with their preferences.  We won’t all have 1000+ friends to ask.
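To make the idea of implicit signals concrete, here is a minimal sketch of how searches, map requests and calls might be rolled up into per-user affinity scores.  The event log, signal types and weights are all illustrative assumptions, not a description of any particular system.

```python
# A minimal sketch (hypothetical data and weights): rolling up implicit
# local-search signals into per-user, per-business affinity scores.
from collections import defaultdict

# Hypothetical event log: (user_id, business_id, signal_type)
events = [
    ("u1", "marios_pizza", "search_click"),
    ("u1", "marios_pizza", "map_request"),
    ("u1", "marios_pizza", "call"),
    ("u2", "thai_garden", "search_click"),
    ("u2", "marios_pizza", "call"),
]

# A call is a stronger statement of intent than a click or a map request.
SIGNAL_WEIGHTS = {"search_click": 1.0, "map_request": 2.0, "call": 5.0}

def build_affinity(event_log):
    """Aggregate weighted implicit signals into (user, business) scores."""
    affinity = defaultdict(float)
    for user, business, signal in event_log:
        affinity[(user, business)] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return affinity

for (user, business), score in sorted(build_affinity(events).items()):
    print(f"{user} -> {business}: {score}")
```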

RELATED:

Netflix Prize has lessons for local search.
Forget search: local is a recommendation problem.

Netflix Prize has lessons for local search

The Netflix Prize seeks to substantially improve the accuracy of predictions about how much someone is going to love a movie based on their movie preferences.

So what does this have to do with local search? Researchers working on this problem have found that you should ignore everything you know about the movies (the genres, the actors, etc.) and base your predictions on how people have rated them.  For local search this means we should base recommendations on people’s preferences – the businesses they like and have used – rather than categorical data about businesses.

Lessons from the Netflix Prize

From a New York Times article on the Netflix Prize:

“You can find things like ‘People who like action movies, but only if there’s a lot of explosions, and not if there’s a lot of blood. And maybe they don’t like profanity,’ ” Volinsky told me when we spoke recently. “Or it’s like ‘I like action movies, but not if they have Keanu Reeves and not if there’s a bus involved.’ ”

So you can’t base movie recommendations on a simple categorization like ‘action movie’.  It’s difficult for most people to articulate what they like or dislike about a movie, so it’s better to base predictions on what you know about which movies the person likes or dislikes. In fact, researchers have consistently found that adding categorical information about the movies doesn’t help with making predictions at all. (And there have been numerous attempts.)
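As a concrete illustration of predicting purely from ratings, here is a minimal collaborative-filtering sketch: it scores an unseen movie for a user from the ratings of similar users, and never looks at genre, actors or any other movie metadata.  The data and the similarity-weighted average are illustrative, not the actual methods used by Netflix Prize contestants.

```python
# Minimal sketch: predict a rating from other people's ratings alone.
# No genres, no actors - just who rated what. Data is made up.
from math import sqrt

ratings = {
    "alice": {"Die Hard": 5, "Speed": 4, "The Matrix": 2},
    "bob":   {"Die Hard": 4, "Speed": 5, "Notting Hill": 1},
    "carol": {"The Matrix": 5, "Notting Hill": 4, "Speed": 2},
}

def similarity(a, b):
    """Cosine similarity between two users, over the movies they share."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    norm = sqrt(sum(a[m] ** 2 for m in common)) * sqrt(sum(b[m] ** 2 for m in common))
    return dot / norm if norm else 0.0

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for the movie."""
    num = den = 0.0
    for other, their_ratings in ratings.items():
        if other == user or movie not in their_ratings:
            continue
        sim = similarity(ratings[user], their_ratings)
        num += sim * their_ratings[movie]
        den += abs(sim)
    return num / den if den else None

print(predict("alice", "Notting Hill"))  # prediction based only on taste overlap
```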

Applying it to Local

The research from the Netflix Prize shows us that you shouldn’t recommend a pizza place to someone just because it’s a pizzeria (i.e., its category) and it’s ‘close enough’ to you.  That’s because the best choice can be influenced by many factors:

  • Am I a person who likes my pizza ‘straight-up’ or exotic?
  • Am I in a rush and looking for the quickest option?
  • Am I bored and looking for something new?
  • Have I just come back from an exotic location and am I now looking for an old favorite?

Today, local search engines continue to rank results based primarily on factors like category and location.  Even sites such as Yelp with extensive reviews rank and present results to users without considering the preferences of the searcher.  This is akin to simply presenting movies ranked by popularity alone.

Research coming out of the Netflix Prize points the way towards a different way of thinking about local.

RELATED:

Forget search: local is a recommendation problem

Forget search: local is a recommendation problem

If I’m looking for local facts – an address, directions, a phone number – then it’s appropriate to frame the problem as a search or information retrieval one.

But otherwise, I need recommendations based on my preferences, situation and current need.  And here, the search paradigm fails spectacularly.

The Random Ranker: at Best Arbitrary…

The search paradigm starts by trying to match the words in my query with the words in the listings — and maybe with the words in reviews for the listings and maybe even the website of the listings.  It then retrieves the ones where there is some kind of match and ranks the results.  Often this default ranking is called something like ‘relevance’.  But there is no explanation of how this so-called ‘relevance’ is determined — and it certainly isn’t obvious by looking at the results.
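To make the point concrete, here is a minimal sketch of that word-matching pipeline.  The listings and the scoring function are invented for illustration; real engines layer many more (undisclosed) factors on top, which is exactly why the final ordering looks arbitrary to users.

```python
# Minimal sketch of word-matching "relevance": count query words that
# appear in each listing's text, then sort. Listings are hypothetical.
listings = {
    "Mario's Pizzeria": "pizza pasta italian takeout delivery",
    "Slice of Heaven":  "pizza by the slice late night",
    "Tony's Trattoria": "italian restaurant wood fired pizza oven",
}

def relevance(query, text):
    """Number of distinct query words found in the listing text."""
    words = set(text.lower().split())
    return sum(1 for term in set(query.lower().split()) if term in words)

query = "late night pizza"
ranked = sorted(listings, key=lambda name: relevance(query, listings[name]), reverse=True)
for name in ranked:
    print(name, relevance(query, listings[name]))
# Ties and near-ties abound, so the final order conveys little to the user.
```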

Do you really have any idea how Google decides which 10 listings to include in the ‘featured box’ on a search results page?  From the point of view of the user, such an ordering is at best arbitrary and at worst suspicious.  This undermines user trust and confidence.

In web search, the path to relevancy usually involves a number of incremental refinements to my query.  After each query, I quickly scan the page and if the results are clearly not what I’m looking for, I re-formulate it and try again.

This approach isn’t applicable in local.  Quite often I get to a more or less reasonable set of results in short order.  In fact, the basic search problem – getting some reasonable listings – just isn’t that hard.

But my real problem is finding a way to differentiate between the options.

Manual Filtering: Too Complicated, Doesn’t Work…

Now, it’s true that most services provide me with various tools to filter and re-order the results.  But the outcome is frequently less than satisfying:

  1. The tools are complex and time-consuming.  You’re making me – the user – do all the work.  So adoption of evaluation tools tends to be low.
  2. The information is often incomplete and inaccurate – so the filtering doesn’t work properly.
  3. There is too much information — too many reviews, too many conflicting opinions — and I have no basis on which to evaluate the alternatives – because I don’t know the reviewers or I don’t know the area.
  4. And often, I can’t actually do the evaluation on the criteria that matter to me — and sometimes I can’t even articulate those criteria.  I just want someone I’m going to like to cut my hair — is that too much to ask?

Recommendations

I need help in the form of meaningful and transparent recommendations.  Meaning: you need to explain to me how you arrived at the recommendations.  If they’re based on other people’s opinions – how do I know I should trust those people?  If they’re based on expert assessment – what are the credentials of those experts?

And you need to learn my preferences and understand my situation in a painless way.  Don’t make me fill out a bunch of questionnaires – they don’t work, I don’t know what I want and I’m not going to fill them out anyway.  Instead, learn what I need from what I do, what I’ve done and what I’ve told you I like.

5 ways to re-frame Yellow Pages

At the recent YPA (Yellow Pages Association) conference, Malcolm Gladwell set the stage for some productive industry discussion by urging participants to re-frame the Yellow Pages.  Neg Norton has a great summary on the YPA blog.

So, in the spirit of continuing the conversation, I humbly submit the following five suggestions:

1. Proof is even better than research

Yellow Page advertising has always been (rightly) sold on the basis of a proven ROI.  Why not build on this position by making EVERY print, online and mobile ad trackable using tracking numbers?  Then you can definitively prove the ROI to EVERY one of your advertisers.  Do it for all your advertisers – even, and perhaps especially, for subscription products.
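As a back-of-the-envelope illustration, here is a sketch of the ROI calculation a tracking number makes possible.  The call count comes from tracking; the close rate and average sale are hypothetical figures an advertiser would supply.

```python
# Minimal sketch (hypothetical numbers): ROI for one advertiser, one month.
def roi(tracked_calls, close_rate, avg_sale, ad_cost):
    """(attributed revenue - ad spend) / ad spend"""
    revenue = tracked_calls * close_rate * avg_sale
    return (revenue - ad_cost) / ad_cost

# 120 tracked calls, 30% close rate, $90 average ticket, $800/month ad
print(f"ROI: {roi(120, 0.30, 90, 800):.0%}")  # ROI: 305%
```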

2. Be the mobile maven

Mobile audiences are exploding, but mobile advertising is slow to catch up.  Mobile publishers really need local advertisers but don’t have access to them.  You do – why not get together?  (And of course, continue to develop your own branded mobile experiences, but also look at how you can reach the mobile audience in other ways.)

3. Be the social connector

People are talking about your advertisers on Twitter and Facebook.  What are you doing to help them join the conversation?

4. Recommendations rather than results

Be the matchmaker by helping consumers figure out which business is the right one for them.  Utilize tools like ratings and recommendations but also leverage your reputation.  Make it really easy to use.

5. Yellow pages connect

Unleash innovation by providing software developers with access to your data — and a share of the revenue from the leads they generate.  Wouldn’t you rather share some revenue with an innovator using your data than buy your leads from Google?  You’ll make more money and be further ahead strategically.

What would you add to the list?  What would you delete or change?

Tracked calls are user generated content

How we think about things matters.  I enjoy finding and creating new ways of looking at things because they generate new insights which can eventually lead to disruptive change.  In fact, this blog is about exploring perspectives on mobile, local search and advertising.

And the seed for new perspectives comes from conversation.  Last week I got a new perspective on call tracking as the result of a Twitter conversation (with @sebprovencher).

You can think of tracked calls (or clicks or any user action really) as a form of user generated content. In fact, especially for calls resulting from local searches, it’s an extremely valuable form of user generated content:

  1. EVERY user provides this feedback EVERY time they make a call (assuming you are tracking the calls of course); and
  2. It’s a highly structured signal – the user has taken a very explicit action to contact a business.

And, in the case of calls, you can further strengthen the signal by also keeping track of how long the caller stayed on the phone.
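Here is a minimal sketch of that idea: each tracked call contributes to an implicit (caller, business) signal, with longer calls weighted more heavily and very short calls mostly discounted.  The thresholds and weights are illustrative assumptions, not a description of any production system.

```python
# Minimal sketch: tracked calls as implicit feedback, weighted by duration.
from collections import defaultdict

# Hypothetical call log: (caller_id, business_id, duration_seconds)
calls = [
    ("u1", "petes_plumbing", 240),
    ("u1", "petes_plumbing", 15),   # likely a misdial or quick hang-up
    ("u2", "petes_plumbing", 600),
    ("u2", "ace_plumbing", 45),
]

def call_weight(duration_seconds):
    """Very short calls barely count; longer calls count more, capped at 3."""
    if duration_seconds < 30:
        return 0.2
    return min(3.0, duration_seconds / 120.0)

signal = defaultdict(float)
for caller, business, duration in calls:
    signal[(caller, business)] += call_weight(duration)

for key, value in sorted(signal.items()):
    print(key, round(value, 2))
```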

Utilizing User Generated Content

To be sure, this implicit user generated content is different from content like ratings and reviews.  You probably don’t want to display something like “300 people have called this person’s ad” (though, thinking about it, that might actually be kind of interesting).

However, through appropriate analysis this data can be used to provide users with recommendations based on their situation and preferences.  For example, by looking at this data you can learn what people are looking for on Friday night versus Wednesday morning.  This is the sort of analysis we’re doing at Predictabuy.

And here’s another interesting thing: combining this implicit data with conventional ratings results in better recommendations than you can get by looking at the ratings alone.
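For illustration, here is a minimal sketch of one way to blend the two signals: a weighted combination of a business’s average star rating and its normalized implicit call signal.  The 50/50 split and the normalizations are assumptions for the example, not a description of Predictabuy’s actual method.

```python
# Minimal sketch (illustrative weights): blend star ratings with call signals.
def hybrid_score(avg_rating, call_signal, max_signal, alpha=0.5):
    """Weighted blend of a 1-5 star rating and a normalized implicit signal."""
    rating_part = (avg_rating - 1) / 4                      # scale stars to 0..1
    implicit_part = call_signal / max_signal if max_signal else 0.0
    return alpha * rating_part + (1 - alpha) * implicit_part

# A business with middling reviews but heavy repeat calling can outrank
# one with glowing reviews that nobody actually contacts.
print(hybrid_score(avg_rating=3.5, call_signal=80, max_signal=100))  # ~0.71
print(hybrid_score(avg_rating=4.8, call_signal=5,  max_signal=100))  # 0.50
```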