Most UX researchers today can back up their claims with empirical data.
HCI work in 1992 was very heavily based on user research, famously so at Apple. They definitely had the data.
I find myself questioning that today (like, have these horrible Tahoe icons really been tested properly?), although maybe unfairly, as I'm not an HCI expert. It does feel like there are more bad UIs around today, but that doesn't necessarily mean techniques have regressed. Computers just do a hell of a lot more stuff these days, so maybe it's just impossible to avoid additional complexity.
One thing that has definitely changed is the use of automated A/B testing -- is that the "empirical data" you're thinking of? I do wonder if that mostly provides short-term gains while gradually messing up the overall coherency of the UI.
Also, micro-optimizing via A/B testing can lead to frequent UI churn, which is something that I and many others find very annoying and confusing.
Tognazzini and Norman already criticized Apple about this a decade ago. While they have many good points, I cannot shake the feeling that they were simply used to brand Apple as user-friendly in the 90s, and that Apple never actually adopted their principles but just used them as it fit the company's marketing.
Hmmm, I don't quite see where that supports "Apple didn't do empirical validation"? Is it just that it doesn't mention empirical validation at all, instead focusing on designer-imposed UI consistency?
ISTR hearing a lot about how the Mac team did user research back in the 1980s, though I don't have a citation handy. Specific aspects like the one-button mouse and the menu bar at the top of the screen were derived by watching users try out different variations.
I take that to be "empirical validation", but maybe you have a different / stricter meaning in mind?
Admittedly the Apple designers tried to extract general principles from the user studies (like "UI elements should look and behave consistently across different contexts") and then imposed those as top-down design rules. But it's hard to see how you could realistically test those principles. What's the optimal level of consistency vs inconsistency across an entire OS? And is anyone actually testing that sort of thing today?
> I cannot shake the feeling that they were simply used to brand Apple as user-friendly in the 90s, and that Apple never actually adopted their principles but just used them as it fit the company's marketing.
I personally think Apple did follow their own guidelines pretty closely in the 90s, but in the OS X era they've been gradually eroded. iOS 7 in particular was probably a big inflexion point -- I think that's when many formerly-crucial principles like borders around buttons were dropped.