They're not exclusive. Quite often developers use a filesystem with a database.
Store the file on the filesystem under a unique name. Store the original name, the unique name, the owner, tags, and a description in the database, and use the database to handle locking, auth, uniqueness enforcement, and access tracking.
Then try to keep things performant and handle concurrency!
Try doing all of the above just using a filesystem and you'll either:
1. Waste years making a rubbish database.
2. Do a bad job trying to do everything with flat files.
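A minimal sketch of the hybrid approach described above, in Python with SQLite standing in for the database (all names here are illustrative, not from the original comment): the blob goes on disk under a collision-free name, while the metadata row, and the uniqueness constraint a bare filesystem can't enforce atomically, live in the database.

```python
import sqlite3
import uuid
from pathlib import Path

# Hypothetical storage root and metadata DB; names are illustrative.
STORAGE = Path("./blobstore")
STORAGE.mkdir(exist_ok=True)

db = sqlite3.connect("files.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS files (
        id          TEXT PRIMARY KEY,   -- the unique on-disk name
        orig_name   TEXT NOT NULL,
        owner       TEXT NOT NULL,
        description TEXT,
        -- enforce "one file per (owner, original name)" in the DB,
        -- something flat files can't do atomically
        UNIQUE (owner, orig_name)
    )
""")

def store_file(owner: str, orig_name: str, data: bytes, description: str = "") -> str:
    """Write bytes under a collision-free name; record metadata transactionally."""
    unique_name = uuid.uuid4().hex
    (STORAGE / unique_name).write_bytes(data)
    try:
        with db:  # transaction: the metadata row commits or rolls back as a unit
            db.execute(
                "INSERT INTO files (id, orig_name, owner, description) VALUES (?, ?, ?, ?)",
                (unique_name, orig_name, owner, description),
            )
    except sqlite3.IntegrityError:
        (STORAGE / unique_name).unlink()  # undo the blob write too
        raise
    return unique_name
```

The point is the division of labor: the filesystem only ever sees opaque unique names, and every query-shaped question (who owns this, is that name taken) is answered by the database.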
It's definitely not just for JS web apps anymore: you can run Rust, Python, and even standard Docker containers now. Plus, things like D1 (SQL) and R2 (storage) give you the entire backend stack ready-made.
But you're completely right that it doesn't replace a raw VM. Cloudflare's goal is to abstract away the infrastructure so you don't have to manage a Linux server just to host an API or SaaS. If you actually need OS-level access, background daemons, or to run legacy code, you absolutely still need EC2 or a traditional VPS.
Using vim to do this seems silly. Nano is also nearly always present, and doing those “basic” things is 10x more straightforward in an editor that isn’t modal and just gets out of your way.
I’ve often witnessed engineers in my career who’ve cargo-culted the need for vim, but they only know how to hit ESC :wq or whatever, and one errant keystroke puts them in modal hell of some sort, often requiring them to just close the terminal and try again.
I don’t begrudge those who want to become power-VIM-users, though it seems wildly awkward to me, to each their own. But if you just want to use it to do the “basics” on ssh sessions, using nano makes more sense. PGUP and PGDN and Home and End and arrows work just fine to navigate, and the bindings for most things are printed right on the screen (except Ctrl-S to save… for some reason, but it works).
Can you cite a source for this? There's no question that they're vastly more complex, but I would think that modern car manufacturing is far more exacting (and efficient) than in the past.
If you're saying that older cars are more repairable, I'm happy to agree with you, even without a source to back up that claim.
An easily visible one is air intakes. Many manufacturers have shifted to plastic. Petrochemical engineering has advanced a lot, but plastics will still get brittle and break.
Interior-wise, you can look at things like fabric durability: lower deniers can be cheaper, but will wear sooner. Springs/foam in seats are another example, though this will vary across manufacturers, models, and trims.
This isn't exclusive to financial-engineering manufacturers like Stellantis or Nissan, either. Toyota has had issues with simple things like rustproofing (whether intentional or not) on 1st-generation Tacomas leading to massive recalls, and things like plastic timing guides prone to wearing out. Ford with its wet-belt engines, where the timing belt runs submerged in oil.
German cars needing body off access for rear timing chain maintenance at 80k miles. Water cooled alternators (really, VW?). All types of "why?" if you follow cars once they are 3+ years old.
It seems like there are a lot of regressions that probably result from cost cutting, while others may exist to simply drive service revenue.
In the United States, the Environmental Protection Agency assumes the typical car is driven 15,000 miles (24,000 km) per year. According to the New York Times, in the 1960s and 1970s, the typical car reached its end of life around 100,000 miles (160,000 km). Due in part to manufacturing improvements, such as tighter tolerances and better anti-corrosion coatings, in 2012 the typical car was estimated to last for 200,000 miles (320,000 km) with the average car in 2024 lasting 160,545 miles according to the website Junk Car Reaper.
I think you're talking about apples and oranges, as parent appeared to be cataloguing recent design defects. Which are pretty common too.
That'll influence the average reliability minimally, unless you were unlucky enough to buy one of those models.
Personally, that's why I'd rather get something at 120k miles with 250k+ mileage examples already on the road by that calendar date. You'll know whether they designed a lemon.
Add: undersized Tacoma rear leaf springs, multiple manufacturers' head gaskets, a few early aluminum engines (? from memory)
There are many other considerations, too. Years ago I scraped Craigslist and Autotrader, grouping cars by generation/make/model/drivetrain to predict longevity from the quantity for sale versus original sales figures. If a model sold 100k per year for 10 years and only 3 were for sale in year 13, that isn't a great sign.

Cheap cars tend to have cheap owners who are more likely to skimp on maintenance, typically leading to more accrued issues and a shorter lifespan for the vehicle. Some cars are just poorly engineered, and the markets are relatively efficient at pricing resale value.

The definition of "high mileage" will vary by who you ask: domestics 150k, German 80k, Japanese 200k, Korean 100k. These are subjective averages (some cars, like Theta engines, Darts, even late-model GM 6.2s, have engine failures under 40k), based on when models start disappearing because repairs cost more than the vehicle is worth, but it matches what I saw then and still kind of observe.
Leaning on those product mixes mentioned earlier, keep in mind that Japanese manufacturers weren't in the American market 60 years ago, so the market mix would be wildly different. (Multiple 400k+ mile Toyotas in my family, along with 60-year-old GMs, but with aftermarket or rebuilt engines.) The cost of vehicles (and repairs) relative to prevailing wages will impact the repair-vs-replace balance. Trade publications like Cox/NADA/Adesa/etc. are always cited by financial blogs when gauging consumer spending and the state of the economy by the average age of cars on the road. Why cars get junked or totaled has shifted drastically, too: steel bumpers were easy to replace; modern bumper covers with styrofoam backing and aluminum crumple zones, not so much.

"Tolerances" is a vague term in that veiled PR piece on the wiki article. Machining has improved. Tech like direct injection and improved lubrication (synthetics) has done much more in terms of efficiency and longevity. In a lot of cases, manufacturers try to get more and more horsepower from the same displacement by pushing tighter engine tolerances (crank/main bearings, pistons/rings, valvetrain) and things like higher compression ratios and revs, leading to more heat and earlier failure. So while you have better initial engineering, you are closer to the point of failure. For another example, interference engines will grenade themselves if you ignore timing belt maintenance, but in the meantime you get more horsepower by getting more air into the cylinders.
A V6 Camry or Accord is going to have more hp, be faster, be more reliable at the same age, be quieter, and get 3x the mpg of nearly any muscle car of the past.
Unfortunately it seems that many Americans prefer giant vehicles that place more emphasis on their size (and status) than materially important factors like reliability engineering or fuel economy.
Obviously these are anecdotal examples, but they can be confirmed by wasting hours reading about cars and watching mechanic review videos from people who work on them daily (I am partial to The Car Care Nut on YT).
Efficient manufacturing means exactly building stuff as cheaply as you can get away with.
There's a reason why Roman architecture is still standing: it is massively overbuilt, the very opposite of efficient. (They also used to make the architect stand under his own arches as the temporary supports were removed, which could have contributed to the overbuilding.)
Is it? Every city in the Roman empire had temples and a forum. Where are they still standing? Maybe half a dozen survive, like the Pantheon in Rome or the temple in Nîmes, but it's extremely rare. Maybe they weren't overbuilt at all?
It seems like you both are looking at different definitions of built well. One pertaining to how well the car will perform over its lifetime. The other describing the build process. Not necessarily exclusionary, but different.
I agree there are a lot of things outside the computer that are a lot more difficult to reverse, but I think that we are maybe conflating things a bit. Most of us just need the code and data magic. We aren't all trying to automate doing the dishes or vacuuming the floors just yet.
I know it's cliche to say it, but most of the tech debt I've seen is on the frontend.
Most backends are relatively simple. Just a DB with lots of code wrapping it. But even the worst backends are relatively simple beasts. Just lots of cronjobs and lots of procedural code. While the code is garbage, it can be understood eventually. The backend is mature... even the tech debt on the backend is a known quantity!
But the frontend... damn the complexity and the over engineering are something unique. I think there is a fetish among frontend developers to make things as complicated as possible. Packages galore and SO MANY COMPONENTS.
As soon as people start inventing their own design system, UI framework, and sub packages I think the frontend is doomed for that project.
Async really turns FE into a nightmare. Simple concept: user logs in, get userID, get feed associated with ID, get posts on feed, get reacts on post.
Sometimes the tech debt is that BE can't pass this data all at once yet. Fine. Let's fetch it.
But then FE gets creative. We can reduce nesting. We can chain it. We can preload stuff before the data loads. Instead of polling, let's do observers. Actually these aren't thread safe. And you know what, nothing should be in the UI thread because they'll cause delays. And this code isn't clean, one function should do only one thing.
Actually why are these screens even connected? We should use global variables and go to any screen from anywhere. Actually everything can be global. But global is an anti-pattern. So let's call it DI and single page application and have everything shared but everything must also be a singleton because shared mutability is bad too.
> Maybe I'm not the target market for this, but how hard is it REALLY to manage a RDBMS?
It depends:
- do you want multi region presence
- do you want snapshot backups
- do you want automated replication
- do you want transparent failover
- do you want load balancing of queries
- do you want online schema migrations with millisecond lock time
- do you want easy reverts in time
- do you want minor versions automatically managed
- do you want the auth integrated with a different existing system
- do you want...
There's a lot that hosted services with extra features can give you. You can do everything on the list yourself of course, but it will take time and unless you already have experience, every point can introduce some failure you're not aware of.
> There's a lot that hosted services with extra features can give you.
I totally agree with that, but in my experience 99% of "application developers" don't need all these features. Of those you listed, I only see "backups" as a requirement. Everything else is, as I said, features for when your application is successful and you want something streamlined.
I would have no concerns around reliability uptime running my own database.
I would have concerns around backups (ensuring that your backups are actually working, secure, and reliable seems like potentially time intensive ongoing work).
I also don't think I fully understand what is required in terms of security. Do I now have to keep track of CVEs, and work out what actions I need to take in response to each one? You talk about firewall rules. I don't know what is required here either.
I'm sure it's not too hard to hire someone who does know how to do these things, but probably not for anything close to the $50/month or whatever it costs to run a hosted database.
The vast majority of products with paying customers need better availability than “database went down on Friday and I was AFK until Monday, sorry for the 3 day downtime everyone”
It's not about it being hard, it's about delegating. Many companies are a bit less sensitive to pricing and would rather pay monthly for someone else to keep their database up, rather than spending engineering hours on setting up a database, tuning it, updating it, checking its backups, monitoring it and making it scale if needed.
Sure, any regular SME can just install Postgres or MySQL without setting much up beyond `mysql_secure_installation`, a user with a password, and an 'app' database. But you may end up with 10-20 database installs you need to back up, patch, and so on every once in a while. And companies value that.
On the pricing bit, I have to say edge-driven SQLite/libsql solutions (and this is a lot of them) can be a mixed bag.
Cloudflare, Fly.io's Litestream offering, and Turso are pretty reasonably priced, given the global coverage.
AWS with Aurora is more expensive for sure, and isn't edge-located if I recall correctly, so you don't get near-instant propagation of changes at the edge.
The bigger thing for me is how much control you have. So far with these edge database providers you don’t have a ton of say in how things are structured. To use them optimally, I have found it works best if you are doing database-per-tenant (or customer) scenarios or using it as a read / write cache that gets exfiltrated asynchronously.
And that, I believe, is where the real cost factor comes into play: flexibility.
Or at least they should. I’ve worked many places where thousands of dollars in engineering hours were wasted on something after they refused to use a service for a fraction of the cost. Some companies understand this but others don’t.
Backups are a PITA. I wanted to go exactly this route, but even though I had VMs and compute, I couldn't let any production data hit them without bulletproof backups.
I set up a cron job to store my backups in object storage, but everything felt very fragile, because if any detail in the chain was misconfigured I'd basically have a broken production database. I'd have to watch the database constantly or set up alerts and notifications.
If there were a ready-to-go OSS Postgres with backups configured that you could just deploy, I'd happily pay for that.
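The fragility described above mostly comes from backups that are written but never verified. A sketch of the verify-after-write idea in Python, using SQLite as a stand-in so the example is self-contained (for Postgres the snapshot step would be `pg_dump`/`pg_basebackup` instead, and the upload step is stubbed out since it's provider-specific):

```python
import sqlite3
from pathlib import Path

def backup_and_verify(db_path: Path, dest_dir: Path) -> Path:
    """Snapshot the DB, then prove the copy is actually readable.

    The verification step is the point: a cron job that only writes
    the file can silently produce garbage for months.
    """
    dest_dir.mkdir(parents=True, exist_ok=True)
    backup_path = dest_dir / (db_path.name + ".bak")

    # 1. Take a consistent snapshot via SQLite's online backup API
    #    (the analogue of pg_dump for a live Postgres instance).
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    src.backup(dst)
    src.close()
    dst.close()

    # 2. Verify: reopen the copy read-only and run an integrity check,
    #    rather than trusting that the bytes landed intact.
    check = sqlite3.connect(f"file:{backup_path}?mode=ro", uri=True)
    (status,) = check.execute("PRAGMA integrity_check").fetchone()
    check.close()
    if status != "ok":
        raise RuntimeError(f"backup failed verification: {status}")

    # 3. Here you would upload backup_path to object storage and alert
    #    on any failure -- omitted, as that part is provider-specific.
    return backup_path
```

Making the cron job fail loudly when verification fails (rather than succeed silently on a bad copy) is what removes the "watch it constantly" burden.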