J.D. Power hasn’t released the underlying data for their latest tablet owner satisfaction survey, so the headline “Samsung Ranks Highest in Owner Satisfaction with Tablet Devices” left many confused about how Apple’s iPad could have scored more circles and still lost in the end. Here’s how:
A) J.D. Power does not rate or compare tablets, and it never has. What they do is rate owner satisfaction with a particular device, in isolation from other devices and satisfaction ratings, and then compare the results. Even if one tablet were capable of teleportation and shooting phasers, as long as owners of another tablet report that they are more satisfied with their tablets (maybe they are not aware of tablets that can teleport people, or maybe they just like taking the bus), that other tablet will rank higher in owner satisfaction. The words “in owner satisfaction” are crucial.
B) The scale is only absolute within the domain of each manufacturer. We can assume that most owners surveyed owned only one of the tablets. Hence, their satisfaction is relative. Just as one person can be very happy with their $3,000 Corolla while another is dissatisfied with their $40,000 Lexus IS 250 (perhaps because their friend just got the more powerful IS 350, or because they’re a spoiled little brat who never worked for the car to begin with, and instead got it as a gift from their rich grandparents who are too sweet and naive to understand that they are inflicting a deep, permanent psychological disability on their grandchild through these expensive gifts that the kid has no ability to appreciate, since such appreciation would imply an understanding of hardship and work, the very two things the grandparents have made unnecessary for said child), in the same fashion you can have some owners be happy with a crappier tablet while others are less happy with a better tablet. J.D. Power doesn’t measure whether one tablet is worse than the other; it measures how satisfied their owners are with them.
C) The circles do not represent absolute ratings, such as when critics rate movies or when CNET rates products; instead, they show manufacturers’ relative placement within the group. So if 3 tablets get performance ratings (on a scale of 1 to 200) of 149, 150, and 151, then they will respectively get 1 circle, 3 circles, and 5 circles, despite the fact that their performance is rated as virtually identical. (Note: I said “rated,” not “is.”) From this we can conclude that Apple barely won the 4 categories, and lost the Cost category by a wider margin. Again, “barely” here does not mean that Samsung tablets’ performance or features were near identical to those of Apple tablets. It means that the survey participants’ perception of those metrics was similar, presumably without their having used the other company’s tablets, and hence with little to no frame of reference.
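The rank-to-circles idea is easy to see in a few lines of code. This is a hypothetical sketch of relative placement, not J.D. Power’s actual methodology; the tablet names and the evenly-spread mapping from rank to circles are my own illustration:

```python
def circles_by_rank(scores):
    """Map each manufacturer's score to circles by relative rank.

    Circles reflect standing within the group, not the absolute score,
    so near-identical raw scores can still land 1 vs. 5 circles.
    """
    ordered = sorted(scores, key=scores.get)   # names, worst to best
    step = 4 / (len(ordered) - 1)              # spread ranks across 1..5
    return {name: round(1 + step * rank) for rank, name in enumerate(ordered)}

# Three tablets rated 149, 150, and 151 on a 1-to-200 scale:
ratings = {"Tablet A": 149, "Tablet B": 150, "Tablet C": 151}
print(circles_by_rank(ratings))
# {'Tablet A': 1, 'Tablet B': 3, 'Tablet C': 5}
```

A one-point gap in either direction flips a tablet from 1 circle to 5, which is exactly why the circle chart alone says nothing about how close the underlying scores were.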
Of course, we can’t conclude an exploration of a conundrum without pointing fingers – that’s like being Santa and doing all the work of delivering presents without taking advantage of the free cookies and milk. :) So I blame J.D. Power for not disclosing the underlying scores with every report and thus making their chosen method of presentation extremely confusing; I blame the media for spinning this as if J.D. Power said that one tablet is better or worse than another; and – let’s be candid with ourselves – I blame those of us who quietly accepted the results of this survey for years while iPad was on top, but suddenly started demanding full disclosure of data and an immediate explanation from J.D. Power when iPad didn’t come in as #1.
Let’s all hope iPhone’s rating never slips, or none of us will feel safe coming online and playing on Twitter for weeks thereafter. Now excuse me as I use my crappy, 2nd-place tablet to teleport myself to Hawaii.