KDnuggets : News : 2008 : n06 : item24 < PREVIOUS | NEXT >

Publications


Subject: This Psychologist Might Outsmart the Math Brains Competing for the Netflix Prize

Wired, By Jordan Ellenberg, 02.25.08

At first, it seemed some geeked-out supercoder was going to make an easy million.

In October 2006, Netflix announced it would give a cool seven figures to whoever created a movie-recommending algorithm 10 percent better than its own. Within two weeks, the DVD rental company had received 169 submissions, including three that were slightly superior to Cinematch, Netflix's recommendation software. After a month, more than a thousand programs had been entered, and the top scorers were almost halfway to the goal.

But what started out looking simple suddenly got hard. The rate of improvement began to slow. The same three or four teams clogged the top of the leaderboard, inching forward decimal by agonizing decimal. There was BellKor, a research group from AT&T. There was Dinosaur Planet, a team of Princeton alums. And there were others from the usual math powerhouses -- like the University of Toronto. After a year, AT&T's team was in first place, but its engine was only 8.43 percent better than Cinematch. Progress was almost imperceptible, and people began to say a 10 percent improvement might not be possible.

Then, in November 2007, a new entrant suddenly appeared in the top 10: a mystery competitor who went by the name "Just a guy in a garage." His first entry was 7.15 percent better than Cinematch; BellKor had taken seven months to achieve the same score. On December 20, he passed the team from the University of Toronto. On January 9, with a score 8.00 percent higher than Cinematch, he passed Dinosaur Planet.

...

One such phenomenon is the anchoring effect, a problem endemic to any numerical rating scheme. If a customer watches three movies in a row that merit four stars - say, the Star Wars trilogy - and then sees one that's a bit better - say, Blade Runner - they'll likely give the last movie five stars. But if they started the week with one-star stinkers like the Star Wars prequels, Blade Runner might get only four stars, or even three. Anchoring suggests that rating systems need to take account of inertia: a user who has recently given a lot of above-average ratings is likely to continue doing so. Potter finds precisely this phenomenon in the Netflix data, and by being aware of it, he's able to correct for its biasing effects and thus more accurately pin down users' true tastes.

Read more.




Copyright © 2008 KDnuggets.