I honestly believe that hiding complexity behind a closed door does not eliminate it. However, a lot of software and service vendors have a vested interest in convincing people otherwise. And, historically, they've had all sorts of great platforms for doing so. Who doesn't enjoy a free day out of the office, with lunch provided?
It's also much easier to hide complexity than it is to remove it. One can be accomplished with (relatively) turnkey solutions, generally without ever having to leave the comfort of your computer, whereas the other usually requires long hours standing in front of a chalkboard and scratching your head.
On the other hand, hiding complexity behind closed doors can be a very valuable thing, if it lets you keep track of who knows about the complexity behind each door. I can't count the number of issues I've encountered that would have taken minutes instead of hours if only I'd known which specific experts I needed to talk to.
> It's also much easier to hide complexity than it is to remove it.
There are multiple ways to hide complexity. Some of them make it easier to remove (eg, refactoring), others make it nearly impossible to remove. In a service market there’s a perverse incentive to move toward the latter.
There is an element of _how_ as well. You could create simple monoliths or overengineered microservices. Or, complex monoliths with heavy coupling vs cleanly designed microservices with clear separations of concern.
> Are microservices meant to separate data too? As in, each service has its own database.
Yes.
> Wouldn't that lead to non-normalisation of the data
Yes. But it's not as bad as it sounds. That is how data on paper used to work, after all.
Business rules (at least ones that have been around for more than 5-10 years) are written with intensely non-normalised data in mind.
Business people tend to be fine with eventual consistency on the scale of hours or even days.
Non-normalised data also makes total data corruption harder, and forensics in the case of bugs easier, in some ways: you find an unexpected value somewhere? Check the other versions that ought to exist and you can probably retrace at what point it got weird.
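The forensic idea above can be sketched in a few lines: if several services each keep their own copy of a record, comparing the copies points you at the field, and the service, where things went weird. Everything here (service names, fields, the corrupted value) is invented for illustration.

```python
# Hypothetical sketch: with denormalised data, each service holds its own
# copy of a record, so a bad value can be traced by diffing the copies.

def find_divergence(copies):
    """Given {service_name: record_dict}, return fields whose values
    disagree across services, mapped to each service's version."""
    diverged = {}
    all_fields = set().union(*(record.keys() for record in copies.values()))
    for field in all_fields:
        values = {svc: rec[field] for svc, rec in copies.items() if field in rec}
        if len(set(values.values())) > 1:
            diverged[field] = values
    return diverged

copies = {
    "orders":   {"customer_id": 42, "address": "12 High St"},
    "billing":  {"customer_id": 42, "address": "12 Hgih St"},  # the weird copy
    "shipping": {"customer_id": 42, "address": "12 High St"},
}
print(find_divergence(copies))
```

Here the billing service's copy disagrees with the other two on `address`, which narrows the investigation to one integration point instead of the whole system.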
The whole idea of consistent and fully normalised data is, historically speaking, a very recent innovation, and I'm not convinced it will last long in the real world. I think this is a brief moment in history when our software is primitive enough, yet optimistic enough, to even consider that type of data storage.
And come on, it's not like the complete consistency of the data is worth that many dollars in most cases, if we actually bother to compute the cost.
There's a progression every developer who grew up on SQL Server/relational data needs to go through...
1. The models (plural intended) of a business are not necessarily relational. There could be an upstream/downstream relationship. There could be an event-based relationship. There may be no relations at all (documents are handy in these scenarios).
Stop assuming you start with an entity relation diagram. That's an immediate limiting factor when listening to the business describe their processes.
2. There is no such thing as immediate updates to any database. There is _always_ latency. Build software understanding this.
3. Operational data and Analytical data are TWO DIFFERENT THINGS. (sorry for the shouting)
Operationally, I only need concern myself with the immediate needs of the user or process. If I'm doing "something" to the customer domain, I don't need to know or do anything else. If I'm doing something to the order domain, I may need to notify some other domains of what I'm doing or have done, but that's secondary and not _immediately important_. Inventory systems should have built-in mechanisms for levels and never need to know the exact up-to-date figures.
My operational domains can notify whole other systems on changes in data. So your analytical system can subscribe to these changes and normalize that data all it wants. I can even build user interfaces that display both operational and analytical data.
Micro-services are brilliant at operational domain boundary adherence. Events are brilliant at notifying external boundaries of change.
The caveat I point out to my clients is that thinking in this way is very different from what we're used to and often comfortable with. It takes time to identify the best boundaries and events for the models of a business. But if you put in that time, the result will be software that your business personnel can actually understand.
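The operational/analytical split described above can be sketched as a toy pub/sub setup: an operational "orders" domain does its immediate job, publishes a change event, and an analytical subscriber consumes those events to maintain its own, separately modelled view. All the names (topics, fields, the `EventBus` class) are invented for illustration; a real system would use a broker like Kafka or RabbitMQ.

```python
import collections

class EventBus:
    """Toy in-process stand-in for a message broker."""
    def __init__(self):
        self._subscribers = collections.defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

# Analytical side: subscribes and aggregates; it never queries the
# operational store directly.
revenue_by_customer = collections.Counter()
bus.subscribe("order.placed",
              lambda e: revenue_by_customer.update({e["customer_id"]: e["total"]}))

# Operational side: handles the immediate need, then notifies.
def place_order(customer_id, total):
    # ... write to the orders domain's own database here ...
    bus.publish("order.placed", {"customer_id": customer_id, "total": total})

place_order(1, 30)
place_order(1, 20)
place_order(2, 5)
print(dict(revenue_by_customer))  # {1: 50, 2: 5}
```

Note the direction of dependency: the operational domain knows nothing about who subscribes, which is exactly the boundary adherence described above.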
> Are microservices meant to separate data too? As in, each service has its own database.
Ideally yes, to scale.
Sometimes you have a service with obvious and easy-to-split boundaries, and microservices are a breeze.
Some things that are easy to turn into microservices: "API Wrapper" to a complex and messy third-party API. Logging and data collection. Sending emails/messages. User authentication. Search. Anything in your app that could become another app.
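The "API wrapper" case is worth sketching, since it's usually the easiest win: one small service owns all the mess of the third party (odd field names, nested envelopes, unit quirks) and exposes a clean shape to everyone else. The third-party response format below is invented; a real wrapper would make an HTTP call where the stub is.

```python
def _call_messy_third_party(customer_ref):
    # Stand-in for the real HTTP call; note the awkward, nested response
    # shape and the cryptic field names.
    return {"d": {"results": [{"CustRef": customer_ref,
                               "Nm": "Ada",
                               "Bal_Cents": 1250}]}}

def get_customer(customer_ref):
    """The clean interface the rest of the system sees."""
    raw = _call_messy_third_party(customer_ref)
    record = raw["d"]["results"][0]
    return {
        "id": record["CustRef"],
        "name": record["Nm"],
        "balance": record["Bal_Cents"] / 100,  # normalise units once, here
    }

print(get_customer("c-42"))  # {'id': 'c-42', 'name': 'Ada', 'balance': 12.5}
```

If the third party changes its format, only this one service changes; every consumer keeps the clean contract.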
However, when your data model is tightly coupled, you need to choose between tradeoffs: duplicating data, having bigger services, or even keeping it as a monolith.
Btw, if you don't care about scalability, sharing a database is still not the best idea. But you can have a microservice that wraps the database in a service, for example. Tools like Hasura can be used for that.
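A minimal sketch of that database-wrapping idea: consumers call the service's methods and never see the schema, so the tables can change without breaking anyone. (Hasura does this generically over GraphQL; this is just the shape of the pattern, with invented table and method names, using an in-memory SQLite database for the demo.)

```python
import sqlite3

class CustomerService:
    """Thin service boundary around the shared database."""
    def __init__(self, conn):
        self._conn = conn

    def get_email(self, customer_id):
        # Schema knowledge lives only inside the service.
        row = self._conn.execute(
            "SELECT email FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'ada@example.com')")

svc = CustomerService(conn)
print(svc.get_email(1))  # ada@example.com
```

Even without scaling concerns, this keeps one owner for the schema, which is most of what "each service has its own database" buys you.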
It should be about constructing software in partnership with the business and reducing complexity with modeled boundaries.
You can leverage the cloud to do some interesting things, but the true benefit is in _what_ you construct, not _how_.