> Comparing smartphones and cameras is really apples to oranges at this point, as smartphones aren't even capturing photos; they're entirely repainting scenes.
Calm down, it's not that bad. Take night sight or astrophotography: it uses ML to intelligently stitch together light across time, because the available light in any single moment isn't enough to capture anything intelligible. The end result is an accurate representation of what your eyes see (e.g. my own face in a nighttime selfie) and what is actually sitting there in the sky (the stars). You can call that repainting, but I disagree: it's information aggregation over the temporal dimension.
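To make the "aggregation, not repainting" point concrete, here's the core of frame stacking in a few lines of Python. This is a toy sketch: real night modes also align frames and use learned merge weights, but the underlying SNR math is the same.

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned, noisy low-light frames.

    Sensor noise is roughly independent per frame, so averaging N
    frames improves SNR by ~sqrt(N) without inventing scene content.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate a burst: one faint scene, each exposure drowned in sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.1, size=(64, 64))           # dim true signal
burst = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(30)]

single_err = np.abs(burst[0] - scene).mean()
stacked_err = np.abs(stack_frames(burst) - scene).mean()
print(f"one frame error: {single_err:.4f}, 30 stacked: {stacked_err:.4f}")
```

No pixel in the output came from anywhere but the sensor; the "magic" is just that time substitutes for aperture.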
Super resolution is similar: it uses the shake of your hand to gather more detail than a single frame from your low-res sensor grid can provide. 2-3x digital zoom with super-res technology is actually capturing more information, much closer to optical zoom than to simple cropping plus interpolating.
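Here's a deliberately idealized Python sketch of why that works: four low-res frames, each offset by half a pixel (standing in for hand shake), reconstruct the full-res image exactly. A real pipeline has to estimate the shifts from alignment/gyro data and contend with lens blur and the Bayer mosaic, but the extra information is genuinely there in the shifted samples.

```python
import numpy as np

rng = np.random.default_rng(1)
hi = rng.uniform(size=(64, 64))            # the "true" high-res scene
scale = 2

# Four low-res frames, each sampled at a different half-pixel offset,
# the way a handheld sensor drifts between shots in a burst.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # offsets in high-res pixels
frames = [hi[dy::scale, dx::scale] for dy, dx in shifts]

# Drop each low-res sample back onto the high-res grid cell it came from.
acc = np.zeros_like(hi)
cnt = np.zeros_like(hi)
for frame, (dy, dx) in zip(frames, shifts):
    acc[dy::scale, dx::scale] += frame
    cnt[dy::scale, dx::scale] += 1
recon = acc / cnt

print(np.allclose(recon, hi))  # True: shifted frames carry real 2x detail
```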
Now...portrait mode. That's clearly just post-processing. But also...does blurring the background using lens focus have any additional merit vs doing it in post (besides your "purity"-driven feelings about it)?
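For what it's worth, the portrait-mode trick is equally unmysterious. Here's a hedged sketch of the compositing step, assuming the subject mask is simply handed to us (on a phone, an ML depth/segmentation model estimates it):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, subject_mask, sigma=8.0):
    """Synthetic background blur: blur everything, then composite the
    sharp subject back in.

    image: float array, (H, W) grayscale or (H, W, 3) color.
    subject_mask: float array in [0, 1], 1.0 on the subject. Assumed
    given here; real phones estimate it with ML.
    """
    per_axis = sigma if image.ndim == 2 else (sigma, sigma, 0)
    blurred = gaussian_filter(image, sigma=per_axis)
    m = subject_mask if image.ndim == 2 else subject_mask[..., None]
    return m * image + (1 - m) * blurred
```

One honest concession to the purists: optical bokeh scales continuously with distance, while a single mask blurs everything behind the subject by the same amount. That's why newer phones estimate per-pixel depth and vary the blur radius with it, which narrows even that gap.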
At the end of the day, I want my mirrorless to do more than be a dumb light-capture machine. I spent $X thousand+ on a great lens and sensor, so I want to get the most out of them. It should do more to compensate automatically for bad lighting, motion blur, etc. It should try harder to understand what I want to focus on. As a photographer, I should get to think more about what photo I want taken and less about what steps I need to take to accomplish that. My iPhone typically does a better job of this than my $X000 mirrorless, so I use my iPhone more.
> Take for example night sight or astrophotography
Oh, speaking of astrophotography: it occurred to me that all those pretty images of remote planets and nebulas have been doctored to hell and back.
What I don't know is where I can find space images that show the visible spectrum, i.e. what I'd actually see if I managed to travel there and look out the window.