We shouldn't take NHTSA safety ratings too lightly. It's our government spending our tax money to give consumers an easier way to compare safety across many vehicles. We shouldn't question the NHTSA star safety system or how they reached the score. NHTSA has its own way to run the tests, analyze the results, and assign a score.
Why shouldn't we question how NHTSA rates cars? Anybody can give a car a score. What makes their ratings meaningful?
Sorry, you spurred a rant. NHTSA is not transparent about their rating system, has caught a lot of flak for shielding manufacturers from FOIA requests through confidentiality agreements, and ignores the vast majority of consumer complaints. I've tried looking for information about how they determine ratings based on crash test data but have come up short. Trying to do a deep dive on their web site leads to a lot of broken links, empty query results, errors like "The NHTSA Crash Test database is currently unavailable due to daily maintenance" that persist 24/7, and bits of information that may be interesting but don't answer any of my questions. In their press releases and the information they put out for the public, this is the kind of explanation they give:
5 stars = Injury risk for this vehicle is much less than average
4 stars = Injury risk for this vehicle is less than average to average
3 stars = Injury risk for this vehicle is average to greater than average
2 stars = Injury risk for this vehicle is greater than average
1 star = Injury risk for this vehicle is much greater than average
Well, that means nothing to me. How do they define average? What does it mean to be less than or greater than the average, and what does it mean to be 'much' less than or 'much' greater than the average? These differences should be objective and quantifiable. Are they? I've Googled for answers and haven't found any.
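For what it's worth, pinning this down would be trivial if they wanted to. Here's a purely hypothetical Python sketch (every cutoff below is something I made up, not anything NHTSA publishes) of what a quantified "relative risk to stars" mapping could look like:

```python
# Hypothetical star assignment based on injury risk relative to a fleet
# average. All cutoff values are invented for illustration; NHTSA does
# not publish thresholds in this form -- that's the whole complaint.

def stars_from_relative_risk(vehicle_risk: float, fleet_average_risk: float) -> int:
    """Map a vehicle's injury risk, relative to the fleet average, to 1-5 stars."""
    rr = vehicle_risk / fleet_average_risk
    if rr <= 0.67:      # "much less than average" (made-up cutoff)
        return 5
    elif rr <= 1.00:    # "less than average to average"
        return 4
    elif rr <= 1.33:    # "average to greater than average"
        return 3
    elif rr <= 2.00:    # "greater than average"
        return 2
    else:               # "much greater than average"
        return 1

# Example: a 12% injury risk against a 15% fleet average gives a relative
# risk of 0.80, which lands in the 4-star band under these made-up cutoffs.
print(stars_from_relative_risk(0.12, 0.15))  # -> 4
```

If NHTSA published even something this simple, "much less than average" would actually mean something.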
The vast majority of ratings these days are 4-5 stars. How can that be the case if 4- and 5-star ratings are supposed to be better than average? Does the average change from year to year? If so, how can you compare used vehicles to new vehicles? They also say you can't compare front crash ratings from vehicles in different classes. Why?
Also, how do they assess injury risk based on the hundreds of variables they collect in a crash test? Is there some kind of formula and is it based on research and studies that I can go look at? Again, I've Googled and come up short.
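The closest I've gotten: NCAP-style programs apparently run each dummy reading through an injury risk curve that converts the measurement into a probability of injury, then combine those probabilities across body regions and test modes. As a sketch, here's a lognormal risk curve for AIS 3+ head injury as a function of HIC15. The mu/sigma constants are my recollection of values from the biomechanics literature around the 2011 NCAP update, so treat them as illustrative rather than official:

```python
from math import erf, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def head_injury_risk(hic15: float, mu: float = 7.45231, sigma: float = 0.73998) -> float:
    """Probability of an AIS 3+ head injury, modeled as a lognormal
    function of HIC15. The mu/sigma defaults are my best recollection
    from the biomechanics literature, not something I can point to on
    NHTSA's consumer pages.
    """
    return norm_cdf((log(hic15) - mu) / sigma)

# Two HIC15 readings that both look unremarkable in a test report
# translate to noticeably different injury probabilities.
for hic in (300, 700):
    print(f"HIC15 = {hic}: AIS 3+ head injury risk ~ {head_injury_risk(hic):.1%}")
```

But whether NHTSA actually uses curves like this, and how they weight head vs. chest vs. femur or frontal vs. side, is exactly what I can't find documented.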
More importantly, is there a correlation between NHTSA ratings and real-world accident/crash statistics? I found just one research paper online that attempted to correlate real-world data with the 2011-onward NHTSA NCAP ratings, and it wasn't conclusive.
One nice thing that NHTSA does do is publish crash test reports, so if you're inclined you can see what the real differences are, IF you know how to interpret the data. But when you read these reports, it can be pretty hard to correlate the results to the ratings.
So just because our 2017 CX-5 has a 4-star overall rating, we should disregard the NHTSA safety rating? Then what about two years ago, when the 2015 CX-5 had a perfect score and everybody here was happy and trashed other cars that got 4 stars? No matter how we try to ignore the inferior 4-star overall rating on the 2016~2017 CX-5, the fact of the matter is that a 5-star overall rated 2015 CX-5 IS SAFER than a 2016~2017 CX-5 in NHTSA's crash test.
That requires a leap of faith. First, the 2017 is a new model and NHTSA hasn't published any crash test reports for it. Second, the 2016.5 is structurally the same car as the 2013-2015, so why should it be rated differently? The previous generation CX-5 was crash tested three times, all at MGA facilities, first in 2012, then in 2013, and then again in 2016. Go to NHTSA's site, download the test reports for all three tests, and compare the detailed data, particularly the accelerometer data in Appendix B. Compared to the typical differences you see from one car to another, these three crash test results are essentially equivalent. Even the pictures look pretty much the same. Which shouldn't be a surprise since it's the same frigging car.
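If you want to compare the Appendix B traces yourself, the standard number to derive from a head accelerometer channel is HIC15, and at least that formula is well defined. Here's a rough Python sketch, assuming you've already extracted the resultant head acceleration in g's at a uniform sample rate (the file parsing is on you):

```python
import numpy as np

def hic15(accel_g: np.ndarray, dt: float) -> float:
    """Head Injury Criterion over a window of at most 15 ms:

        HIC = max over (t1, t2), t2 - t1 <= 15 ms, of
              (t2 - t1) * [ (1 / (t2 - t1)) * integral of a(t) dt ] ** 2.5

    accel_g: resultant head acceleration in g's, uniformly sampled
    dt:      sample interval in seconds (e.g., 1e-4 for a 10 kHz channel)
    """
    n = len(accel_g)
    max_window = min(int(round(0.015 / dt)), n - 1)  # samples in <= 15 ms
    # Cumulative trapezoidal integral of a(t) dt, so any window's integral
    # is just a difference of two entries.
    cum = np.concatenate(([0.0], np.cumsum((accel_g[:-1] + accel_g[1:]) * 0.5 * dt)))
    best = 0.0
    for w in range(1, max_window + 1):               # window length in samples
        duration = w * dt
        avg = (cum[w:] - cum[:-w]) / duration        # mean accel over each window
        best = max(best, duration * np.max(avg) ** 2.5)
    return best

# Toy example: a half-sine pulse peaking at 60 g over about 20 ms.
dt = 1e-4
t = np.arange(0.0, 0.02, dt)
pulse = 60.0 * np.sin(np.pi * t / 0.02)
print(f"HIC15 ~ {hic15(pulse, dt):.0f}")
```

Run something like that on the driver head channel from each of the three CX-5 tests and you can see for yourself how close the numbers are.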
So how do they get different ratings? The 2013 is rated 4/5 front and 4/5 overall, but the 2014 & 2015 are rated 5/5 even though they're the same car and the test results are equivalent. Then in 2016, the ratings drop again even though it's the same car being tested again and the results look equivalent. How does that happen?
The same thing happened with the CR-V. The previous-generation CR-V was tested three times, in 2012, 2014, and 2015, at Calspan and MGA facilities. Go to NHTSA's site and download the test reports. It's the same car, and the test results were basically equivalent in each test, as you would expect. But the NHTSA ratings changed from 5/5 overall in 2012-2014 (with the only 4-star rating being the driver's side frontal impact), to 4/5 overall in 2015 (with the only 4-star rating being the passenger's side frontal impact, driver's side 5/5), and then back to 5/5 overall in 2016. Yet it's the same car.
BTW, I've mentioned before that the reason the 2017 CX-5, although it improved in every frontal crash category, got a 4-star overall safety rating is that it did worse than the 1st-gen CX-5 on the front passenger and combined rear seat ratings in the side crash. So your statement "why do different cars get different overall ratings even though their constituent ratings (front, side, rollover) are exactly the same?" doesn't stand, at least for the CX-5.
No, I'm not talking about different vehicles. I'm talking about when the scores change for the SAME vehicle based on the SAME test, when there have been no design changes between model years and not even a retest.
We should question what Mazda did to make the rating suddenly get worse on the 2016 CX-5, not NHTSA, because NHTSA didn't change their test procedures in those years. And NHTSA doesn't play favorites with any car manufacturer either. Besides, it was NHTSA that caught the safety problem with the fuel filler pipe on the CX-5 and forced Mazda to stop sales immediately until an acceptable resolution was provided and a recall was initiated.
Like I said, Mazda didn't change anything, and the detailed test reports indicate they didn't change anything, but the ratings are different. This happens with other manufacturers too. I don't think they play favorites, but their ratings don't seem to be consistent, and I wonder whether they're based on objective criteria that are written down and accessible to the public. Keeping tabs on NHTSA isn't my hobby and I haven't filed any FOIA requests, so I know I'm not all-knowing, but I try to be an informed consumer, and I expect that if government agencies are going to influence buying decisions in a free market, they had better be consistent, objective, and transparent.
For the purposes of making an argument, let's say NHTSA rated vehicles on braking performance, but you had no idea whether the rating was based on 30-0, 60-0, and/or 70-0 mph results, whether the pavement was wet or dry, or whether ABS was on or defeated. And suppose they didn't say what 1-, 2-, 3-, 4-, or 5-star ratings meant in terms of braking distance, the standard could vary by class of vehicle, and it wasn't a consistent standard from year to year anyway. How much attention would you pay to it then?
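To make that concrete, here's a toy Python example (every number invented) where the same car, with the same measured 60-0 stopping distance, earns three different star ratings depending on which undisclosed standard applies:

```python
# Toy illustration -- all numbers invented. The same 60-0 mph stopping
# distance maps to different star ratings under three plausible but
# undisclosed standards. Without knowing which standard applies, the
# stars alone tell you very little.

# Each standard: (max_distance_ft, stars) thresholds, best rating first.
STANDARDS = {
    "dry, ABS on":       [(120, 5), (130, 4), (140, 3), (155, 2)],
    "wet, ABS on":       [(140, 5), (155, 4), (170, 3), (190, 2)],
    "dry, ABS defeated": [(135, 5), (150, 4), (165, 3), (185, 2)],
}

def stars(distance_ft: float, standard: str) -> int:
    for max_dist, rating in STANDARDS[standard]:
        if distance_ft <= max_dist:
            return rating
    return 1

measured = 138.0  # one car, one measured 60-0 stopping distance in feet
for name in STANDARDS:
    print(f"{name}: {stars(measured, name)} stars")
# Prints 3, 5, and 4 stars -- same car, same number, three different ratings.
```

That's essentially the position NHTSA's star ratings put us in.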