Taking a Closer Look at the J.D. Power NFL Survey

Yesterday you may have seen an article by Darren Rovell on ESPN making the rounds with the headline: “Survey: Protests top reason NFL ratings dipped”. Funny enough, since yesterday the headline for the story has been changed to read: “Anthem protests led poll of reasons viewers tuned out” – a subtle change, but an important one, because that first headline drew a lot of attention, yet:

  1. It wasn’t entirely accurate (the question asked about viewership and attendance, so you can’t tie the anthem protests specifically to ratings), and
  2. It hyper-focused on only one specific element of the survey, skewing perception of what the full survey results showed

The headlines that had been used by J.D. Power were more objective. For example, the report itself had the title “How Did Off-Field Factors Affect NFL Viewership and Attendance Among Sports Fans?” And their own tweet promoting the survey used a similar title (although they were also happy to retweet the ESPN headline).

After the ESPN story came out, you may have seen my initial response on Twitter, where I expressed some “doubts.”

And later in the afternoon, Deadspin was happy to get into the mix with their own article about ESPN’s article, titled “How to Mangle a Survey, by Darren Rovell” where they shared some of the same numbers I had in my tweets (Side note: Deadspin should probably be the last place throwing stones about how content is framed to create interest).

Before we move forward, I encourage you to download and read the full report by J.D. Power in the PDF below. Big thanks to Christina Settimi (@csettimi) for sending this over.

Hopefully you’re still with me – if so, let’s take a look at a couple more items I think are very important – Positioning and Survey Structure:

Positioning: Yes, the ESPN story took arguably the most “buzzy” part of the survey and focused on it, although nothing in the article itself was incorrect. However, this shows how easy it is to take the same set of statistics and create any number of “technically correct” headlines. For example, all of these could have been used:

  • More than twice as many fans increased their NFL viewership last season
    • This is true: 27% of fans watched more vs. 12% watching less (2.25x).
  • Reasons for decreased NFL viewership vary dramatically by market
    • The anthem protest percentage alone varied from as low as 13% to as high as 38%.
  • Over 40% of sports fans are watching less NFL due to slower games
    • If you combine the 24% that chose game delays and the 20% that chose excessive commercials, you get 44% (the precise total would need to back out how many respondents selected both). Yes, the commercial complaints may not be solely about how they slow the game, but the survey’s wording is vague enough to allow it.
  • Survey: Only 6% of NFL fans cite cord cutting as a reason for lower viewership
  • Survey: Six percent of NFL fans are already watching less due to cord cutting
    • This is the same statistic, but using the word “only” vs. “already” paints two completely different pictures.
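Because the survey allowed multiple responses, simply adding two answer percentages double-counts anyone who selected both. A minimal sketch of the adjustment (inclusion–exclusion), where the 8% overlap is a made-up number purely for illustration, not a figure from the report:

```python
def combined_share(pct_a, pct_b, pct_both):
    """Percent of respondents selecting A or B (or both),
    backing out the overlap so no one is counted twice."""
    return pct_a + pct_b - pct_both

# 24% chose game delays, 20% chose excessive commercials.
naive = 24 + 20                           # simple sum: 44
adjusted = combined_share(24, 20, 8)      # hypothetical 8% picked both: 36
print(naive, adjusted)
```

This is also why the multiple-response table carries the “will not add to 100%” footnote: each respondent can appear under several reasons at once.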

I’m a statistically oriented guy, but I always say that communication is just as important as analysis, especially when you look at how easily something like simple survey percentages can be manipulated.

Survey Structure: Unfortunately, I can’t see the actual survey format, but just from the J.D. Power report, there are a few things that I think should be looked at more closely:

  • Did you notice the little asterisk next to the “why” question in table 3? Just below that table, you’ll see “*Multiple response, will not add to 100%.” Multiple response means that people were allowed to select more than one reason. So just because someone selected “National anthem protests” doesn’t mean it was the most important factor – it just means it had some level of impact. There is nothing in the survey to identify the primary reason, just any reason.
  • Additionally, the fact that the reason was presented as a simple multiple choice with predefined options allows for priming to take place vs. asking a more open-ended question that would actually generate an unaided response. I won’t go into a lot of detail on priming since I can’t see the full survey, but it could easily be a factor.
  • The survey only took place in major markets, specifically 11 of the top 17 by total size, including the top 8 overall. Funny enough, they didn’t include the 9th-largest market, Atlanta, where I feel pretty confident you wouldn’t see much of a decline in NFL consumption last year. While these markets have a diverse set of residents, I wonder if this introduces a bias into the responses vs. if the survey had been executed nationwide in markets large and small.
  • Finally, the pool of respondents only includes people who have attended a professional sporting event. If you are asking questions that also involve TV viewership, you are again missing a very important part of the audience you’d logically want to survey.

We live in an era of immense competition for attention, both in professional sports and in the media. I can definitely understand why certain elements of this report would draw more attention than others. However, if we are going to use survey research like this to draw conclusions that can impact our business, we must dig deeper than the headline to identify all the relevant insights. Only then can we make a clear, accurate assessment of the situation. I hope this was helpful!
