I would never, ever renounce my citizenship voluntarily. It gives me access to what I call home: my friends, my family, a massive job market. Politics are a bit rough right now, but imagine if that clears up in ten years' time and you can't go back. Keeping my passport ensures that I can always get back to the Netherlands.
Your Dutch citizenship doesn’t come with an obligation to laboriously file taxes with your home country even if you live abroad, and you are at no risk of being denied a bank account in any EU country. This is something US passport holders uniquely have to deal with, hence the phenomenon of dual citizens renouncing their US citizenship.
Really? Like actual internal floppy drives, and not just USB floppy drives (which even Windows still supports)?
I actually wouldn't expect macOS to support actual floppy drives since the OS's list of supported devices doesn't include any that shipped with floppy drives. The fact that I cannot install the latest macOS on any devices older than 2019 is a related, but separate problem.
In this case, what would internal floppy drive mean? The last Macs with floppy drives (I think Old World G3s?) used a custom Apple controller, integrated into the chipset, with a bespoke 20-pin cable.
A USB floppy drive behaves almost identically to a USB hard drive: yet another SCSI block device. The cost of keeping support for them is minimal.
This is very different from legacy PC floppy controllers, which spoke a completely different protocol that was complex and full of footguns.
Legacy floppy controllers also had various legacy features almost nobody used, like soft deletion of sectors (IBM added this in the 70s for use with primitive database systems), or attaching tape drives via the floppy interface (nowadays, if you buy a brand-new tape drive, the interface options are SAS or Fibre Channel).
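The block-device point above is visible directly from software: the same read path works whether the device is a hard disk, a USB floppy, or a raw image file. A minimal sketch in Python (the device paths in the comment are hypothetical examples):

```python
SECTOR_SIZE = 512  # floppies and classic hard drives both expose 512-byte sectors

def read_first_sector(device_path):
    """Read the first sector of a block device (or a raw disk image)."""
    with open(device_path, "rb") as dev:
        return dev.read(SECTOR_SIZE)

# The same call works for /dev/sda, a USB floppy at /dev/sdb, or a plain
# floppy image file: the kernel presents them all as ordinary block devices.
```

This is exactly why USB floppy support is nearly free for an OS to keep, while a legacy floppy controller needs its own dedicated driver stack.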
Years ago I was making the case that instead of digging ourselves into the Amazon eco-system with S3 storage, EC2 instances, DynamoDB and various other Amazon specific cloud products... we should just host virtual machines and have everything in there using open source products.
People looked at me like they saw water burning, but that would have made the dependency on the US a lot easier to sever. Just move the VMs.
I've operated at companies using both models, and have observed similar reactions to suggestions of using the cloud.
To me it's like anything else in engineering: are the costs, risks, and benefits fully understood, and are they worth the tradeoff in the particular context?
I worked for a startup doing Internet of Things; the consumer would buy a device and get lifetime service baked in. And that company went a step further: just renting space in a colo was incredibly cost-efficient, which supported the sales model and competitive landscape of that product. But it was also very costly in attention, one of the most valuable resources. And it can get costly in non-intuitive ways; an example that comes to mind is that we started to get interviews where a generation of candidates no longer had experience with bare metal. It was a foreign world to them.
With more experience, I find it's really the costs that get severely underestimated, both for and against the suggestion.
Especially in larger organizations, it's easy to lose track of all the distributed soft costs that DIY can bring (and all the bus factors that may be involved). There are lots of people who kinda want to get paid and get benefits and who require some level of management structure.
At some point, you have people (on here and elsewhere) questioning what all these people in an organization do. PART of the answer is that they're doing internal work that could have been outsourced in various ways.
I’ve struggled to convince colleagues to host on something other than AWS. I’m not sure they understand the costs and aren’t simply doubling down on the evil they already know.
In fact, I had no idea our static website at a scale-up in 2019 was costing us 90€/month; it came up when we were told to cut costs. Developers don’t always have a say in these things.
Heck, I then went and got a series of certifications in GCP. Even then, I’m not sure I’d understand the full complexity and pricing options of GCP. Smaller clouds and simple VPS solutions really are the overlooked option.
I am running my startup out of a self-built GPU server from our office, with a backup to the cloud.
I only pay for the IP address as electricity is included in the rent.
If the startup fails, I'll have a thousand other potential use cases for it, and in the worst case it will make for an awesome gaming machine.
The machine is a beast and I can serve a lot of users with it. In fact, and quite funnily, I already serve many more users with it than a lot of my older clients do with their software running on expensive k8s setups because „scale“ :-)
And last, but not least, I had a lot of fun building it. It's just nice to hear that thing humming away in the corner.
> The machine is a beast and I can serve a lot of users with it. In fact, and quite funnily, I already serve many more users with it than a lot of my older clients do with their software running on expensive k8s setups because „scale“ :-)
Honestly even if you have a single server, running k8s (or maybe Docker Compose for really simple cases) on it is still the simplest way to manage it (assuming you have more than 1 service, anyway). One configuration file format, one CLI tool, zero special paths to memorize, no filesystem permissions to configure, pretty good security out of the box, access to a whole bunch of helm charts and operators (for example, cert-manager, external-dns, prometheus, alert-manager, some logging operator for centralized logging with a decent UI and search, and a postgres operator for backups / replication / failover), etc.
I don’t disagree. What you know matters, but I think it’s a lot easier to learn Kubernetes than it is to learn all of the disparate tools that you need to know to cobble together something similar. Moreover, because Kubernetes is somewhat standardized, you are much more likely to be able to find quality sources on the Internet (or LLMs, nowadays), and similarly you’re much more likely to be able to find personnel who are familiar with it compared to some bespoke alternative.
It’s also worth noting that Kubernetes is conceptually quite simple—once you realize that it’s just a database of resources that are being watched by controllers, things start to click into place and it feels much simpler.
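That "database of resources watched by controllers" model can be sketched in a few lines. This is a toy illustration, not real Kubernetes code; the resource name and replica counts are invented:

```python
# Toy model of the Kubernetes control loop: a "database" of desired resources,
# watched by a controller that nudges observed state toward the declared state.

desired = {"web": {"replicas": 3}}   # what the user declared
observed = {"web": {"replicas": 1}}  # what is actually running

def reconcile(name):
    """One reconciliation step: compare desired vs. observed, close the gap."""
    want = desired[name]["replicas"]
    have = observed[name]["replicas"]
    if have < want:
        observed[name]["replicas"] += 1  # "start a pod"
    elif have > want:
        observed[name]["replicas"] -= 1  # "stop a pod"
    return observed[name]["replicas"] == want

def control_loop(name, max_steps=10):
    """Repeat reconcile until converged; real controllers block on watch events."""
    for _ in range(max_steps):
        if reconcile(name):
            return True
    return False
```

Every Kubernetes feature (Deployments, Services, cert-manager, external-dns, ...) is some variation of this loop over a different resource type, which is why the system feels simpler once the pattern clicks.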
In some sense Kubernetes is a bit like democracy or capitalism—it’s the worst in its class except for everything else that has been tried. :)
> Capitalism winning does not show it is better as there are a lot more factors. ... Google could have built something else and they still would have succeeded at doing what they did.
Maybe Google could have built something better than Kubernetes, but my point was that this doesn't do me any good. I can't _use_ the hypothetical better-than-Kubernetes product because it's hypothetical. So in the world of things that actually exist, Kubernetes is best in class despite the many valid criticisms of it.
> In my opinion, everything you wrote are opinions
Yes, my comment was my opinion.
> Installing and managing rke on bare metal was more difficult than doing the same with nomad for me.
Maybe Nomad is better. I haven't used it. I'm skeptical that it has the ecosystem breadth that Kubernetes has, but I'm happy to be wrong.
> Or another example, installing clickhouse using apt was easier and worked better than doing it with docker.
That's not really a useful comparison because (1) a system typically involves a lot more than just a singular database and (2) running a system involves a lot more than getting the software onto the machine. If you want to make a meaningful comparison, you need something like Ansible or Cloud Init to invoke apt and to wire everything together and at that point Kubernetes is _likely_ already easier. Especially when you consider logs, metrics, certificates, DNS, etc.
The whole business model is around “Optimization through custom tools”.
We can go with your idea, sure: a few months in, an Account Manager from the cloud provider shows up and says your bill could be reduced by 50% if you just adopt some changes, using their custom, super optimized tools (“minor changes” will be the mantra).
And now you have your own company looking to you for how they can get those savings: people who don’t understand what a VM is and cannot tell Salesforce from an elastic container, because everything is “cloud”, but they heard “50% off”.
Preventing this from happening requires a clued-in CTO and equivalent senior leadership who can defend against such 'attack' methods and know the difference between, for instance, paying a monthly recurring cost to host a Linux/KVM virtual machine and paying for some totally 'cloud' SaaS.
Further, it needs people in decision making roles who understand and value the strategic differences between having an infrastructure concept that is trapped in one provider's proprietary software tooling ecosystem (aws, azure, etc), vs things built on open standards that are portable.
> Preventing this from happening requires a clued-in CTO and equivalent senior level leadership
Most CTOs (and increasingly M2s and M3s) I've met are what I call "box architects". You know the type: they love drawing boxes, moving one box inside another box, drawing a line between two boxes, or changing a unidirectional arrow into a bidirectional one, then declaring the hard part done and that any random engineer can now implement it. Or: "Is there an AWS service that does that? I just don't see the value in us doing it in house."
A "super optimized tools" is just a box that you swap for another box and the "minor changes" will be just a couple of arrows than need to change or another box to swap for another box. You get them to feel good about doing architect stuff plus the 10x reduction in the bill. They can always replace that box with another box later after all.
We call those people PowerPoint architects. They haven't coded or built anything for so long, if ever, that they wouldn't even know how to by this point. The only tool they know is PowerPoint, and their slides have more boxes with words than any big-box store.
> Preventing this from happening requires a clued-in CTO and equivalent senior level leadership who can defend against such 'attack' methods and knows the difference between, for instance, paying a monthly recurring cost to host a Linux/KVM virtual machine and paying for some totally 'cloud' SaAs.
And the reality is eventually you'll get a clueless one, and everything will revert to the mean.
And the mean is heavily influenced by marketing propaganda.
This is true, but making money in any business means constantly fighting against entropy and regression to the mean. Also, maybe, just maybe, it's an example of relative competitive advantage, and paying more for AWS is the right call.
2018 - I see you are hosting your own PostgreSQL on EC2; you could use our managed solution.
2020 - You are already using 18 of our services (note: at this point you might still be using non-vendor products, like VMs, a managed DB, and so on); why not use our IAM instead of rolling your own auth?
2024 - You are now deeply locked in, so let's add more lock-in: why don't you use this tool to optimize your costs (welcome, DynamoDB)?
At this point, no one would ever question the next tool from the salesman. Engineers see that the company doesn't have a strategy to move to another cloud, so why should they reject this new tool?
Also consider the people involved: a lot of times, after two years you have totally new people on your team. They won't have the context and constraints you had in the past when deciding to buy "just VMs"; they see it as "we already use AWS".
That. Also add that engineers are terrified to their bones of moving elsewhere, because they don't know how to use anything else, and will act as an extension of the salesman to make sure they don't need to learn anything.
I had many conversations with a former boss about the Azure sales team. They would come in, say they can do it cheaper, simpler and better — he was immediately convinced.
I would do a calculation based on their public price plan and come up with a 5-10x price compared to the bare metal OVH solution that perfectly fit our use case. I would then ask the sales team where I made a mistake in my calculation and hear nothing back.
A few months later, they would come back with the same pitch and the whole process would repeat...
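The kind of back-of-the-envelope calculation described above is just summing line items from the public price lists. A minimal sketch; every figure here is made up purely for illustration, not a real OVH or Azure price:

```python
# Hypothetical monthly line items, chosen only to show how such a comparison
# is assembled from public pricing pages.
cloud = {"compute": 1400.0, "block_storage": 350.0, "egress": 350.0}
bare_metal = {"server_rental": 280.0, "extra_ips": 20.0}  # egress often included

cloud_total = sum(cloud.values())
metal_total = sum(bare_metal.values())
print(f"cloud: {cloud_total:.0f}/mo, bare metal: {metal_total:.0f}/mo, "
      f"ratio {cloud_total / metal_total:.1f}x")
```

The useful part of doing it this way is that each line item is individually traceable back to a price list, which is exactly what makes "where did I make a mistake?" an answerable question for the sales team.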
AWS has been (blatantly) using Microsoft's method of making their way in. Redis, Elasticsearch, whatnot, all follow the same procedure: 1. Here is a managed service. 2. Here is a fork of the managed service where we manage the server (you don't see it), with 15% off in price/credits, easier backups with clicks, etc. 3. We are dropping support for managed X; move to our fork. 4. Due to market conditions, our forked service is now 50% more expensive. 5. Ah, also, you cannot export/download your backups because they are in a proprietary format. 6. Locked in.
You'd be wrong to laugh at them, because different cars of the same general size can indeed vary 50% or more in fuel efficiency. It's fair to be skeptical of promises of huge savings, and question why your counterparty would benefit from giving you those savings, but sometimes there's a good reason.
The 2026 Toyota GR Corolla gets 22 mpg combined, while the normal 2026 Corolla gets 35 mpg with gas or 50 mpg with a hybrid powertrain.
It is true that fuel-economy regulations make it much less practical to ship gas guzzlers out of pure laziness. (As you may know, the GR Corolla is by far the most expensive of these options, because it's designed for horsepower over mileage.)
> Do people actually take claims like that from glorified salesmen seriously?
People who know the tech, no
Non-technical middle management types, yes. It produces revenue when done aggressively enough, google "solarwinds sales people" for many anecdotal examples of extreme persistence. Not that I agree with it.
I prefer not using managed services, but I kind of understand the appeal. Instead of paying several engineers, whom you have to vet first, to configure and maintain the services adjacent to your product, you can just pay AWS or Azure or someone else to maintain the service. Then you can concentrate your whole manpower on your product. In case the service goes down, you can blame someone else and maybe even recover some money. On the other hand, it of course makes you dependent on the provider.
Yep, been in a job like this. Use AWS because the team is three people and they don't want to waste time on patching, database administration, networking etc. I agree you pay more but in that team we were just able to get on with building the products.
> Instead of paying several engineers, that you have to vet first, to configure and maintain the services adjacent to your product you can just pay AWS or Azure or someone else to maintain the service.
Your engineers, who all have to possess AWS or similar certs before you hire them, work for free?
A move off VPS to managed services doesn't reduce your headcount or labour costs.
You are correct. Someone has to manage and plan the infra, but that is the same for on-prem or other non-cloud setups. What you don't necessarily need is several database admins, several network admins, several Kubernetes admins, etc. I don't necessarily agree, but that is the calculation: Azure hires the 24/7 admins for the service, and you pay a bit more to get a share of them. I have heard this argument in person.
I think there is a very narrow space where you need the resources this provides and it's not yet more cost-effective to have your own team of admins. At a certain headcount, the number of admins doesn't matter that much anymore.
You are correct, but I don't know about the cost structure. Also, you have to somehow verify that they do a good job; you sometimes only see bad work when something goes wrong. And you have to first find a company that provides the service.
The cloud makes it simple. They offer you managed service X. They hire the experts for service X, and you pay a share of that cost on top of your infra cost. No searching. No vetting. You just use the service.
I see why this might be attractive. It isn't to me. But the pencil pushers like it.
In my experience, it doesn’t take long until you are using such complex offerings from the cloud vendors that you need those ops engineers anyway, just with slightly different skill sets.
I'd say you need people with certain skill sets anyway, but at a certain scale you have to get specialized people for some services: database admins, Kubernetes admins, network admins. At a small scale that can be one or two people, but if you want 24/7 coverage at a bigger scale, you need multiple people for each role. You have to find them, pay them, and schedule their absences.
To some management types it looks like a good deal to not deal with that and just let Amazon/Microsoft/Google/etc. find the people to support the service, paying a bit extra on top of the infra cost. Then you can hire only cloud infra admins. I don't think it works that way, but that is what I have observed.
Calculations from me and others have shown that cloud providers apply 5-10x multipliers when selling you things. The less you use them, the better your bottom line. At the beginning it may make sense to use cloud credits to get moving, but when the credits expire or your organization grows, it is wise to invest in people who can set things up on their own. The biggest lie the cloud providers managed to sell to the world is that you don't need knowledgeable people to run things in the cloud.
There was a period when developers and system administrators were really concerned about vendor lock-in and would choose on the basis of the ease of moving to a different platform; Java and J2EE were clearly based on this mindset. I have always found it odd that people have been willing to adopt AWS, with no apparent easy route off, given its price.
Projects like Ceph and Minio have existed for years, though?
Beyond that, I just don't understand your point of view at all. Do people unironically think there is some super special dark magic being done in the bowels of Amazon, as opposed to just...code that runs on (virtual and physical) machines? The open source community yielded Linux but it's just sooo impossible for it to yield an object storage service? What a strangely shackled view of the world.
> No it’s using an army of extremely well paid engineers, something I guarantee the parent comment has no access to
That's a different argument to the one I replied to, and the reply to "they have expensive infra people" is "you have to have expensive product-trained people to use them anyway".
The suggestion was to replace DDB and S3 with some VMs. Presumably those VMs would be managed by the engineers in the parent commenter’s organization. They do not have access to as many engineers as AWS, nor do they pay them as well.
Not arguing about cost-effectiveness here. Just pointing out how silly it is to suggest that you can replace DDB/S3 with some VMs run by a midsize organization.
Agreed - I use AWS at work and try to keep the services we use to a minimum. S3 and DynamoDB are ones that somewhat lock us in, but the way we use them, they are replaceable (we don't rely on any niche features). The different queue services would definitely be harder to swap out, though.
That's genuinely my baseline, then I ask 'why do we want to manage this dependency?'
I can appreciate the desire to close expertise gaps and make a vendor responsible, but the whole schtick of 'outsource everything and focus on your business for advantage' always struck me as just an excuse to give our money to vendors.
It's almost as if the whole case for vertical integration is just taken as a wash.
Most cloud VMs have network-attached storage running through a billing layer, and its IOPS numbers are pathetic. This makes running your own DB in a cloud VM much less reasonable. You can use local NVMe instead, but then you have to set up your own failover.
The original promise of the cloud is "you pay us less than you pay your sysadmins", which is not entirely unreasonable, especially at early stages.
Of course running on bare metal from Europe's own Hetzner is even more cost-efficient, if you already have a lot of sysadmin chops.
I'm not sure why you would compare states to cities?
And while The Netherlands as a country is dense, the cities are not, partially due to the massive amount of urban sprawl that The Netherlands has (compared to other European countries).
Amsterdam has a density of just under 5,000 people per square kilometer. That is far less than New York City, and less than any of the boroughs except Staten Island. Manhattan comes in at 28,000, over five times more. The Amsterdam metropolitan area has only 950 people per square kilometer.
NYC, San Francisco and Boston are massively more dense than Amsterdam. Chicago, Philadelphia and Miami are about the same. Washington D.C. and LA are only slightly less dense.
To add, the Netherlands in the 1970s was going full-on towards suburbanization and urban sprawl. Even today it has one of the lowest shares of apartments in Europe and the most urban sprawl. So if they hadn't gone for bicycles, it would have been America 2.0. Just look at Ireland.
In other countries bicycles aren't really needed because you can just walk everywhere.
There is Copenhagen. And Dutch cities. I don't know if there are any other European cities with extensive separated bike lanes. Valencia has some bike lanes but I wouldn't call them extensive. Only 143 kilometers versus 515 kilometers in Amsterdam, which has a similar population.
It likely depends how you're defining separated. Some cities go for completely separate routes, some curb separation, some bollard separation, some on-footpath separation. Some use a chaotic mix (Dublin, say, is building a 300km segregated system, but because this is being delivered by seven local authorities plus the National Transport Authority, it is a mix of seemingly every possible solution, including weird stuff like contraflow bike lanes, and bike lanes between on-street parking and the footpath).
Berlin certainly qualifies. If not by metric, then by vibe. Lightyears ahead of Chicago which I rate as a good city to bike in the US (and getting better under the current batch of aldercreatures).
143 km sounds like quite a bit, though, especially since separated bike lanes are usually for main thoroughfares, whereas on many low-traffic side streets you simply bike down the middle.
A lot of Berlin's is footpath-based, right? They seem to be talking about segregated bike lanes, so that arguably wouldn't qualify (though it _is_ likely much safer than on-road).
Oh, I guess I didn’t know the distinction. As a user of both, I actually prefer the footpath-based ones to the segregated bike lanes, although it seems to work best on the widest of Berlin’s streets, where all of the following can coexist laterally:
- Storefront
- Outdoor seating
- Footpath (room for both ways)
- Bike lane (one way)
- Greenery (trees or shrubs)
- Car door buffer
- Parking lane
- One or two lanes one way traffic
- Green median
- All the above mirrored for the other side
Inclined to agree; the only real problem with the on-street ones (presuming there's enough room for them) is that visitors unfamiliar with them tend to walk in them and cause a nuisance. The first time I was in Berlin I nearly got hit by a couple of bikes before I realised that the slightly-differently-coloured footpath meant something...
Malmö has some of the best bike lanes in the world, IMO even better than Amsterdam and Copenhagen, which I'd say are top tier. Literally 2 meters wide at some points, and separated by a divider even from walking traffic.
Forgot that one. Yes, with its low density and flat ground, Malmö is a good candidate for cycling, and they did it well. It will not take long before they are up to Dutch standards.
Comparison with Amsterdam is maybe not fair as Amsterdam has the worst cycling infrastructure of The Netherlands.