"You'd be silly to make the assumption that greater click through = greater revenue."
Performance comes down to how well the conversion funnel converts. Revenue of course matters, but the way you get that revenue is to move people through the conversion funnel. What this A/B test shows, though crudely, is one optimization of one portion of the funnel. When you get more people to click through to, say, a product page, you're "widening the funnel" at that particular point.
It's entirely possible to widen the funnel higher up and still see the same amount of revenue; however, that doesn't mean it was a useless optimization. Whenever you widen the funnel, you create the potential for higher revenue, especially with further optimizations. That makes for an increase in performance.
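To make the "widening" idea concrete, here's a toy sketch (function name, steps, and rates are all invented) where throughput is entry volume times the product of each step's conversion rate:

```python
from math import prod

# Toy funnel model: throughput is entry volume times the product of each
# step's conversion rate. The steps and rates below are invented.
def funnel_throughput(entrants: int, step_rates: list[float]) -> float:
    """Expected number of people emerging from the end of the funnel."""
    return entrants * prod(step_rates)

# view product -> add to basket -> give details -> pay
baseline = funnel_throughput(10_000, [0.30, 0.20, 0.50, 0.80])
# "widen" the first step by 20%, downstream rates held fixed
wider = funnel_throughput(10_000, [0.36, 0.20, 0.50, 0.80])
print(baseline, wider)
```

Of course, the caveat raised elsewhere in this thread is that downstream rates may *not* hold fixed when you widen a step; this model only shows the best case.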
Lastly, widening the funnel at any point also means you are getting increased page views, increased product views, and increased stickiness. That is _always_ a good thing, regardless of whether you increase your revenue or not. It means that people are more likely to interact with, and buy from, your site in the future-- users are naturally more likely to return to a site that they know. So greater click-throughs to a product page, though you may not see an immediate effect, will almost always lead to greater revenue.
A wider funnel isn't always better. The thing that should be optimised is volume multiplied by some measure of "lead quality" (and total money spent is probably a good approximation for a business to use).
For example, would you rather have 1000 people through the funnel with a 1% average likelihood of purchasing (for an expected 10 sales), or 100 people through the funnel with a 50% average likelihood of purchasing (for an expected 50 sales)?
It may well be that you can get more people through the funnel, but that doesn't make it a good optimisation if the people you are enticing are unlikely to buy anything from you.
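The arithmetic in that example is just volume times average purchase likelihood; a minimal sketch (function name is mine, figures mirror the example above):

```python
# Expected sales = funnel volume x average purchase likelihood.
def expected_sales(visitors: int, purchase_rate: float) -> float:
    return visitors * purchase_rate

wide_funnel = expected_sales(1000, 0.01)   # low-intent traffic
narrow_funnel = expected_sales(100, 0.50)  # high-intent traffic
print(wide_funnel, narrow_funnel)
```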
Real-world example: Apple products. Apple specifically target the top end of consumers (narrow funnel), who are happy to pay a little bit more, whereas other producers go for mass-market (wider funnel) sales at a lower margin.
Although in this case, my justification is: the 'distance' between this product listing and an actual purchase is maybe 4 steps total (i.e. view product, add to basket, give details, pay), and there are decreasing response percentages at each step. If I tried to measure actual sales with my current tool, the test would take much, much longer to conclude. Rest assured, I've got A/B tests running elsewhere in the process too :)
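One way to see why measuring actual sales takes so much longer: the rarer the event, the more samples a two-proportion test needs to detect the same relative lift. A rough sketch using the standard normal-approximation sample-size formula (all rates here are invented, not from the parent comment):

```python
import math

# Rough per-arm sample size for a two-proportion test at alpha=0.05
# (two-sided) and 80% power, using the usual normal approximation.
def samples_per_arm(p1: float, p2: float, z_a: float = 1.96, z_b: float = 0.84) -> int:
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a 10% relative lift on a 20% click-through rate...
clicks = samples_per_arm(0.20, 0.22)
# ...versus the same relative lift on a 1% purchase rate:
purchases = samples_per_arm(0.010, 0.011)
print(clicks, purchases)  # the purchase test needs vastly more traffic
```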
But this makes your entire article nonsense, do you not get that?
People could have been clicking through one version more because the other was better at conveying information, so on it they didn't need to click through to make a purchasing decision. Maybe the list meant that they ignored the text?
What if the end result is that you still only sold 20 boxes of tea with each? Your assumption would then be wrong; the grid would be better because it's serving fewer page views.
What if because the consumer had to click around more you actually had fewer sales? They took too long to find what they wanted?
You're sort of touching on two of the important metrics: product views per session, and page views per product view. You want to maximize the former and minimize the latter. Why would you want to maximize the former? Stickiness. Your customer will be more likely to stick around and purchase, whether now or later, if they see a lot of different products-- even if they aren't finding what they're looking for (which, btw, is impossible to divine from these metrics alone).
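A toy sketch of computing those two ratios from a flat event log (the event names and log shape are invented for illustration):

```python
from collections import Counter

# Invented flat event log: (session_id, event_type).
events = [
    ("s1", "page_view"), ("s1", "product_view"), ("s1", "product_view"),
    ("s2", "page_view"), ("s2", "page_view"), ("s2", "product_view"),
]

counts = Counter(event for _, event in events)
sessions = len({sid for sid, _ in events})

product_views_per_session = counts["product_view"] / sessions               # maximize
page_views_per_product_view = counts["page_view"] / counts["product_view"]  # minimize
print(product_views_per_session, page_views_per_product_view)
```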
"What if because the consumer had to click around more you actually had less sales?"
That's very difficult to measure-- you can't tell from that number alone whether a customer at the browse portion of the funnel is finding what they're looking for. You'd want to look at the number of add-to-cart actions per product view for a better answer to that question.
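A minimal sketch of that add-to-cart ratio (the counts are invented):

```python
# Invented counts: add-to-cart actions per product view as a proxy for
# "are browsers finding what they want".
product_views = 500
add_to_cart_actions = 35

atc_per_product_view = add_to_cart_actions / product_views
print(f"{atc_per_product_view:.1%}")  # prints "7.0%"
```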
So which format resulted in a higher revenue? You'd be silly to make the assumption that greater click through = greater revenue.