Forum for discussion about the Netflix Prize and dataset.
The leaderboard displays the top teams ranked according to their best valid score on the quiz subset for the currently active prizes. The leaderboard also indicates, in the '% Improvement' column, the percentage of improvement over the performance of Cinematch, also on the quiz set. By default only the top 40 teams are shown; you may specify an integer number of leaders to display, or 'all' to show every team.
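As a minimal sketch of how the '% Improvement' column could be computed, assuming it is the relative reduction in quiz-subset RMSE versus Cinematch (the baseline value and team score below are illustrative):

```python
# Cinematch's quiz-subset RMSE, used here as an assumed baseline value.
CINEMATCH_QUIZ_RMSE = 0.9514

def percent_improvement(team_rmse: float) -> float:
    """Percentage improvement of a team's quiz RMSE over Cinematch."""
    return (CINEMATCH_QUIZ_RMSE - team_rmse) / CINEMATCH_QUIZ_RMSE * 100.0

# Hypothetical team scoring 0.8900 on the quiz subset:
print(f"{percent_improvement(0.8900):.2f}%")
```

Note that a lower RMSE yields a higher (better) percentage, and a team scoring worse than Cinematch would show a negative improvement.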
The red and blue lines are not headers. Rather, they indicate "lines in the sand" marking certain RMSE performance improvement levels for the Netflix Prize. Team entries sort above or below the red and blue lines according to their best RMSE scores on the quiz subset.
Red lines indicate a required level of RMSE performance improvement on the quiz subset over some verified system corresponding to an available Prize. There will always be exactly two red lines. The red line labeled "Grand Prize" indicates a 10% improvement over Cinematch on the quiz subset, the first system verified by the judges. The red line labeled "Progress Prize" for a given year indicates a 1% improvement over the verified performance of the previous Progress Prize winner, also on the quiz subset. At the start of the Contest, the Progress Prize entry indicates a 1% improvement over Cinematch on the quiz subset.
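A minimal sketch of where the two red lines fall, assuming each threshold is a simple relative reduction of a verified quiz-subset RMSE (the Cinematch baseline below is an assumed value):

```python
# Cinematch's quiz-subset RMSE, the first verified system (assumed value).
CINEMATCH_QUIZ_RMSE = 0.9514

def required_rmse(baseline_rmse: float, pct_improvement: float) -> float:
    """Quiz RMSE a team must beat to clear a given % improvement over a baseline."""
    return baseline_rmse * (1.0 - pct_improvement / 100.0)

# Grand Prize line: 10% improvement over Cinematch.
grand_prize_line = required_rmse(CINEMATCH_QUIZ_RMSE, 10.0)

# Progress Prize line at the start of the Contest: 1% over Cinematch.
# In later years the baseline becomes the previous Progress Prize winner's
# verified RMSE, passed in place of CINEMATCH_QUIZ_RMSE.
progress_prize_line = required_rmse(CINEMATCH_QUIZ_RMSE, 1.0)

print(f"Grand Prize line:    {grand_prize_line:.4f}")
print(f"Progress Prize line: {progress_prize_line:.4f}")
```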
Blue lines indicate a specific level of RMSE performance reached by verified systems, again on the quiz subset. The initial blue line shows the performance of the Cinematch system on the quiz set, which also serves to demarcate the line of no performance improvement for the contest. As Progress Prizes are awarded, additional blue lines will be added to the display showing the level of RMSE performance achieved by the winning teams. And, of course, the red Progress Prize line will then reflect the new performance level required for the next Progress Prize.
Additional data about each level, including information about winning entries when available, may be found by following the title link in the red and blue lines.
Entries that sort above a red line are likely candidates to qualify for the corresponding Prize; entries that sort below a red line are less likely candidates to qualify for that Prize. As prizes come and go, so will the qualifying candidates. Since qualification and verification are based on test subset scores, which are not revealed publicly, there is some chance that the actual order, and even membership in the qualifying group at the end of a 30-day last call period, might differ from the leaderboard. This is the price of ensuring robust results and minimal verification times.
The leaderboard suggests the performance of different systems but is not a final arbiter for winning the Prize. In fact, entries on the leaderboard control only one aspect of the Prize mechanics: After January 2007, once an entry sorts above the "Grand Prize" red line, having exceeded the 10% improvement in RMSE on the quiz set, an announcement will be sent to all team leaders indicating the start of a 30-day "last call" period for the Grand Prize. See the Rules and this discussion in the Forum FAQ on the procedures followed to determine prize winners.
For convenience, a list of each team's individual submissions, with quiz subset scores if valid, is provided to each team via the Update page on the Netflix Prize site. The team password is required to access this information.