So, let’s talk about benchmarks, shall we? Benchmarks are tests you run on a phone to measure its performance, culminating in a score. That score gives you a sort of baseline for how a phone performs against competitors running those same tests. Recently, some companies were caught cheating on those tests and reporting false data to make themselves look better in the eyes of…whoever.
These latest perpetrators are only the most recent in a long line of companies that have “massaged” those results to make themselves look better. Essentially, the OEM detects that a benchmark is running and opens up the processor, lifting the limits that apply under normal conditions so the phone runs faster than it otherwise would, resulting in a higher score. The thing is, most companies that do this get caught, and it just makes them look silly.
Data is good
I understand the desire for benchmarks. As phone enthusiasts, we want quantifiable data, so that phone A is measurably better than phone B. It gives us order in a world where even $300 and $400 phones are, by some accounts, as good as $600 to $700 phones. The problem is that “pretty darn good” is an entirely subjective opinion, not a fact we can point to. One man’s “pretty darn good” is another man’s “no good, very bad thing”.
But benchmarks that rate a phone’s overall performance allow us to take away the question marks and the bias of a phone reviewer and get down to cold hard facts. This phone is a (to be clear – I’m making up numbers here) 10,932 and this other phone is an 11,243. This phone is 311 points better. So that’s a good thing. We like data. Data is our friend. As long as other phones are compared on the same benchmark, we can get a good idea of where phones stack up against one another.
But the thing about benchmarks is that different benchmarks measure different aspects of a phone’s performance. The tests themselves are somewhat subjective, because benchmark A will prize one metric over another. So what you end up with is a litany of benchmarks that the same phone scores differently on. You end up with muddier waters than when you started, and reviewers end up having to run a dozen tests and then report back on dozens of statistics. It’s not realistic.
Some might argue that if you use the same benchmark for every phone you can get a solid baseline, but the sheer number of benchmarks out there just confuses the issue. Even if a reviewer chooses one consistent benchmark to run, others will contradict its results. So a reviewer will report that an LG G6 got a higher score than a Samsung GS8. But what does that really mean?
Clearly making things harder
Once you start factoring in all the different benchmark scores – which may not even be accurate if the OEM is cheating – you’re left with the same subjective arguments: this phone is better on this benchmark, but that phone is better on that one. This is why most reviewers I watch don’t use benchmarks and, in fact, actively avoid them. They are subjective results based on data, which doesn’t really help anyone.
Benchmarks are annoying because they are fundamentally flawed. The same phone can run five different tests on five different benchmarks and get five different scores. Beyond that, five different phones can end up ranked in five different orders depending on the benchmark. Does that help the cause? I don’t think so, but what about you?
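To make that rank-reversal point concrete, here’s a toy sketch. The phone names and scores below are entirely made up (just like my numbers earlier), but they show how the exact same set of phones can come out in a different order depending on which benchmark you sort by:

```python
# Hypothetical scores: two made-up benchmarks rating three made-up phones.
scores = {
    "Benchmark A": {"Phone X": 10932, "Phone Y": 11243, "Phone Z": 10500},
    "Benchmark B": {"Phone X": 7200,  "Phone Y": 6900,  "Phone Z": 7400},
}

def rank(benchmark):
    """Return the phones ordered best-to-worst by score on one benchmark."""
    return sorted(scores[benchmark], key=scores[benchmark].get, reverse=True)

for name in scores:
    print(name, "->", rank(name))
# Benchmark A -> ['Phone Y', 'Phone X', 'Phone Z']
# Benchmark B -> ['Phone Z', 'Phone X', 'Phone Y']
```

Benchmark A crowns Phone Y the winner; Benchmark B puts it dead last. Neither ranking is “wrong” – each benchmark is faithfully reporting what it measures – which is exactly why a single score tells you so little on its own.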
Where do you stand in the world of benchmarks? Are these valuable tools that should be run on every phone? Is it a reviewer’s job to report back benchmark scores and how they relate to the competition? Or are you more of the opinion that benchmark tests don’t really tell you anything, except how a phone relates in its own little world? Sound off below with your thoughts. Spoiler alert: we probably won’t be adopting a benchmark test here at Pocketnow, but if you make a compelling enough argument, who knows?