I love Netflix. For better or for worse, I’ll patiently wait for the next season of Daredevil to be released on the service, and then devour the new episodes in one glorious binge weekend. However, I’m often faced with the dreaded task of finding something to watch. If you have ever sat on your couch with a significant other or a friend, trying to decide what to watch on Netflix, you can relate. On almost every occasion, I spend so much time weighing potential options that I run out of time. I’ve had evenings with family where we spent the entire night watching trailers on Netflix and never actually settled on a choice. The sheer number of viewing options is paralyzing. Talk about first-world problems!
To combat this, Netflix is constantly innovating and improving how it suggests movies and television series to us. If it does a better job at recommendations, then we are less likely to spend entire nights debating the options and more likely to happily watch something while bathed in the warm, reassuring blue glow of our screens. Netflix has invested heavily in improving its recommendation system. It even held an open competition, running from 2006 to 2009, with a grand prize of one million dollars. The goal of the competition was to create an algorithm that outperformed Netflix’s existing algorithm at predicting user ratings of Netflix content. In the years since, Netflix has continued to roll out many more changes and improvements to its algorithm.
When trying to figure out what we’d like to watch, Netflix considers a staggering number of factors, collected from customers’ usage throughout the entire world. To know which factors to consider, they have had to be laser-focused on proving which metrics matter and measuring their actual impact on the recommendation algorithm’s performance. Netflix’s recommendation system is a multi-million-dollar investment, and its software engineers and data scientists cannot propose changes without hard data to back up their pitch. Proposals based on intuition or “gut feelings” are simply not acceptable when shaping a project of that scale.
This mentality of demanding hard data over “intuitive hand-waving” came up in my own work, and it relates to Vehicle Detail Page (VDP) views. At a dealership I work closely with, we always treated VDP views as a decent barometer of how well a vehicle is performing: the more “hits” a car gets, the quicker that bad boy should sell. On an intuitive level, it made perfect sense to us that VDP views would be a decent sales predictor. If nobody is checking out a car online, people aren’t interested and the car won’t sell. Conversely, if a lot of people are viewing a car online, then that car is clearly receiving attention that will translate into a sale. One day, however, we asked ourselves a simple question: “how do we KNOW that VDP views are a good sales predictor?” In other words, we were applying the Netflix mentality – instead of assuming VDP views predict sales because it feels intuitive, let’s actually prove it to ourselves with statistical rigor. Let’s produce some hard data.
To properly evaluate VDP views as a sales predictor, we first had to frame the question we wanted answered. In our case, the question was: “Do cars with more VDP views sell faster than cars with fewer VDP views?” In mathematical terms, this translates into determining whether VDP views correlate positively with sales. To calculate that correlation, we reviewed the last two years of web traffic data from our website, along with the corresponding sales data over the same period. When we computed the correlation, we were shocked: VDP views barely correlated with sales. We double-checked, triple-checked, and quadruple-checked our calculations because we still believed our intuition was right and the hard data was wrong. Yet each time we redid the calculations, we obtained the same result. The implication was a little jarring: if we had one car with 300 VDP views and another with 25, we didn’t really know which would sell faster. It felt strange at first, but we slowly came to accept it and used it to improve our decision making. Even today, knowing what we know, we occasionally catch ourselves slipping back into thinking that more VDP views means a car will sell faster.
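For readers who want to run this check on their own data, the core calculation is straightforward. Below is a minimal sketch that computes the Pearson correlation coefficient between per-vehicle VDP views and days-to-sell; the figures are made up for illustration, and framing “sells faster” as days-to-sell is just one reasonable choice (the exact formulation we used isn’t spelled out here).

```python
# Sketch: Pearson correlation between VDP views and days-to-sell.
# All data below is hypothetical, for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

vdp_views    = [300, 25, 120, 80, 210, 45]   # hypothetical views per vehicle
days_to_sell = [28, 31, 45, 22, 39, 30]      # hypothetical days on the lot

r = pearson(vdp_views, days_to_sell)
print(f"Pearson r = {r:.2f}")
```

If views really did predict faster sales, you would expect a clearly negative r (more views, fewer days on the lot); an r near zero means views tell you little about how quickly a car will sell.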
After studying our dealership, I spent six months working with web traffic and sales data from other dealerships throughout North America and Europe to evaluate VDP views as a sales predictor. The goal was to see how VDP views correlated with sales across different dealership sizes and traffic volumes. At a basic level, the results showed that the correlation of VDP views to sales varied significantly from dealer to dealer. In some cases, VDP views did not correlate with sales at all; in others, they correlated mildly. In other words, when making decisions, some dealers can safely factor in VDP views while others cannot. One area where this typically arises is vehicle repricing.
I love to ask dealers about their pre-owned repricing strategy, because one topic that always seems to surface is VDP views. Almost all dealerships review VDP views when considering a price adjustment on a particular vehicle. If the vehicle is receiving few to no VDP views, most will interpret that as a problem and weigh it heavily in the decision to lower the price. We did the same thing at our dealership before we learned that VDP views barely correlated with our sales. Now, knowing the true correlation of VDP views to sales at our dealership, we can make better decisions when it comes to repricing and marketing spend.
Tons of factors influence how strongly VDP views correlate with sales, including inventory, marketing campaigns, the types of web traffic, and even the local shopping culture. Each dealership is unique. The lesson of this story is not that VDP views are useless and should be ignored. Instead, the lesson we learned was that the significance of VDP views at our dealership was unknown until we adopted a “Netflix mentality” and pushed for hard data. What’s true for another dealer might not be true for you, and vice versa. Never assume – always verify and validate for yourself.
Join Noah John for his session “Mythbusting VDP Views as a Car Sales Predictor” at the Digital Dealer 23 Conference & Expo, Sept. 18–20 in Las Vegas. Register now and start building your agenda by choosing from more than 100 educational sessions!
Author: Noah John
Noah John is the co-founder of Autoscores, a predictive analytics company that helps dealership managers predict how their inventory will sell. Before Autoscores, Noah was a software engineer at EA Sports. He holds BAs in computer science and mathematics from Rollins College and an MS in computer science from Colorado State University. His graduate work included machine learning, neural networks, and computer science education.