
I haven't heard a compelling argument that anything needs to be fixed with email-based auth patterns. It is imperfect but not bad, and every proposed alternative seems to be worse.

The article seems to lean into security and usability concerns.

On the security front: the weak-point is still the human. If you hand over your credentials to someone nefarious, well.. you handed over your credentials to someone nefarious.

Usability isn't convincing me either. One of the great things about email is that it really is the lowest common denominator, as another commenter mentioned above. (Almost) everyone, from kids to the most tech-inept luddites, has some sort of email address.


One flaw: I'm pretty sure a lot of Gmail accounts are lost forever. Contacting Google to retrieve access would not go well. Relatedly, if you try to self-host email, your messages are unlikely to reach anyone.


Self-hosting outbound email is hard.

Self-hosting inbound email is trivial. Anybody will send email to any random domain, they're just not willing to accept it from random sources.

And the latter is what is relevant for password recovery.

I self-host inbound but use established servers for outbound through my ISP, and have had no trouble with that setup for a while. Forwarding to people through my domain has gotten a bit more challenging lately, but I've got it working well enough to satisfy Gmail so far. (The advantage with forwarding is that you only have to convince one server to accept it, not everyone in the world. There's also some crypto involved now that relies on trusting keys, not just a domain or IP, which helps a lot.)
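
The "crypto stuff" here is presumably SPF/DKIM/DMARC. As a rough illustration only (hypothetical domain, selector, and policy; the public key is elided), the DNS TXT records look something like:

```
example.com.                 TXT  "v=spf1 a mx ~all"
mail._domainkey.example.com. TXT  "v=DKIM1; k=rsa; p=<public key>"
_dmarc.example.com.          TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

DKIM is the key-based part: the receiving server verifies a signature against the published public key, which is why a forwarder that signs correctly can get past Gmail.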


>Self-hosting inbound email is trivial. Anybody will send email to any random domain, they're just not willing to accept it from random sources.

That is simply not true. I have a self-hosted email service, and starting about 1.5 years ago some big email services stopped delivering email to my server. And there are many similar cases reported...

So even if an independent email service is willing to accept traffic from any sender, that doesn't guarantee that customers of every other service can get their email delivered to addresses at that service.


That seems a great compromise. I hadn't registered the distinction in direction. Even without organising the forwarding part there are plenty of organisations that email me password resets that I don't need to send email out to.


> Self-hosting inbound email is trivial. Anybody will send email to any random domain, they're just not willing to accept it from random sources.

In terms of authentication, this is not entirely true. It's less common these days, but I used to have a lot of trouble with sites rejecting my attempts to create accounts with e-mail addresses from my disposable-e-mail-generator of choice.


Just yesterday I tried to register for a service using one of my own domain names with self-hosted email. The confirmation mail arrived, but as soon as I clicked the link I was told that my email address wasn't allowed...

Not sure what kind of crap some folks are smoking, really.


> from my disposable-e-mail-generator

Well, I suspect those are more specifically blacklisted.


I'm not saying there aren't flaws, I'm saying none of them happen at a rate significant enough to be worth switching to another system (with an entirely new set of flaws).


GPT-4 (and Claude) are definitely the top models out there, but: Llama, even the 8b, is more than capable of handling extraction like this. I've pumped absurd batches through it via vLLM.

With serverless GPUs, the cost has been basically nothing.
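
For illustration, a sketch of what batch extraction through vLLM might look like. The model name, prompt, and field names are assumptions, not details from this comment; the inference call itself is shown commented out since it needs a GPU.

```python
# Hypothetical sketch of batch extraction with an 8B Llama via vLLM.
def build_prompt(text: str) -> str:
    # Simple instruction-style extraction prompt (made-up fields).
    return (
        "Extract the sender, date, and subject from the email below "
        "and answer as JSON.\n\n" + text + "\n\nJSON:"
    )

docs = ["From: alice@example.com\nDate: 2024-01-01\nSubject: hi\n..."]
prompts = [build_prompt(d) for d in docs]

# The actual inference requires a GPU and `pip install vllm`:
# from vllm import LLM, SamplingParams
# llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
# outputs = llm.generate(prompts, SamplingParams(temperature=0, max_tokens=256))
# results = [o.outputs[0].text for o in outputs]
```

Because vLLM batches continuously, throwing thousands of prompts at `generate` in one call is how you get the "absurd batch" throughput mentioned above.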


Can you explain a bit more about what "serverless GPUs" are exactly? Is there a specific cloud provider you're thinking of, e.g. is there a GPU product with AWS? Google gives me SageMaker, which is perhaps what you are referring to?


There are a few companies out there that provide it, Runpod and Replicate being the two that I've used. If you've ever used AWS Lambda (or any other FaaS) it's essentially the same thing.

You ship your code as a container within a library they provide that allows them to execute it, and then you're billed per-second for execution time.

Like most FaaS, if your load is steady-state it's more expensive than just spinning up a GPU instance.

If your use-case is more on-demand, with a lot of peaks and troughs, it's dramatically cheaper. Particularly if your trough frequently goes to zero. Think small-scale chatbots and the like.

Runpod, for example, would cost $3.29/hr or ~$2400/mo for a single H100. I can use their serverless offering instead for $0.00155/second. I get the same H100 performance, but it's not sitting around idle (read: costing me money) all the time.
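
Using the numbers quoted above, the break-even point is easy to sketch: serverless costs more per busy second, so it wins only when the GPU would otherwise sit idle much of the time.

```python
# Break-even between a dedicated H100 at $3.29/hr and serverless at
# $0.00155/s, using the prices quoted above.
DEDICATED_PER_HR = 3.29
SERVERLESS_PER_S = 0.00155

serverless_per_hr_busy = SERVERLESS_PER_S * 3600  # cost of one fully busy hour
break_even_utilization = DEDICATED_PER_HR / serverless_per_hr_busy

print(f"serverless, fully busy: ${serverless_per_hr_busy:.2f}/hr")
print(f"break-even utilization: {break_even_utilization:.0%}")
# Below roughly 59% average utilization, serverless is cheaper;
# steady-state load favors the dedicated instance.
```

That matches the comment's rule of thumb: peaky, frequently-zero traffic is the serverless sweet spot.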


You can check out this technical deep dive on serverless GPU offerings and the pay-as-you-go model.

It includes benchmarks around cold starts, performance consistency, scalability, and cost-effectiveness for models like Llama 2 7B and Stable Diffusion across different providers: https://www.inferless.com/learn/the-state-of-serverless-gpus... It can save months of your time. Do give it a read.

P.S: I am from Inferless.


There are plenty of third-party video hosts out there that let you do this. Wistia, Vimeo, etc.


I know, and none of them is the second-biggest search engine in the world with _all_ the traffic and viewers.


I'm bullish on AI, but I'm not convinced this is an example of what you're describing.

The challenge of understanding minified code for a human comes from opaque variable names, awkward loops, minimal whitespacing, etc. These aren't things that a computer has trouble with: it's why we minify in the first place. Attention, as a scheme, should do great with it.

I'd also say there is tons of minified/non-minified code out there. That's the goal of a map file. Given that OpenAI has specifically invested in web browsing and software development, I wouldn't be surprised if part of their training involved minified/unminified data.
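
A toy illustration of the point, made up for this comment: the two functions below compute the same thing, and minification-style renaming only strips the cues humans lean on, not anything the model strictly needs.

```python
# "Minified" style: opaque names, collapsed structure.
def f(a, b):
    return sum(x * y for x, y in zip(a, b))

# Readable version of the exact same computation.
def dot_product(vector_a, vector_b):
    total = 0
    for component_a, component_b in zip(vector_a, vector_b):
        total += component_a * component_b
    return total

# Behaviorally identical: 1*4 + 2*5 + 3*6 == 32.
assert f([1, 2, 3], [4, 5, 6]) == dot_product([1, 2, 3], [4, 5, 6]) == 32
```

An unminifier (human or LLM) has to recover `dot_product` from `f`, which is an inference problem, not an execution problem.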


> These aren't things that a computer has trouble with

They are irrelevant for executing the code, but they're probably pretty relevant for an LLM that is ingesting the code and text and inferring its function based on other examples it has seen. It's definitely more impressive that an LLM can succeed at this without the context of (correct) variable names than with them.


Minification and unminification are heuristic processes, not algorithmic ones. Unminification is akin to decompiling code or reverse engineering. It's a step beyond the typical AI you see in a calculator.


Fwiw, I've never paid for Copilot. I was automatically given free access for open source contributions. My largest public repo had maybe 100 stars. I've made minor commits to larger repos.

I don't know what the threshold is, but I'm fine with the trade-off I received.


Presumably some feature/jailbreak of JPay (and the like) tablets.

https://offers.jpay.com/jp5-tablets/


Sounds awful.


> do people doing their own server setup like this use containerization at all?

Depends on what you're deploying, really.

If it's one Go service per host, there's no real need. Just a unit file and the binary. Your deployment scheme is scp and a restart.
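
For illustration, a minimal unit file for a hypothetical Go binary (all names here are made up) might look like:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=myapp Go service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Then deployment really is just `scp` of the new binary followed by `systemctl restart myapp`.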

For more complicated setups, I've used docker compose.

> Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.

Another 'it depends'.

If you're running a small SaaS application, you probably don't need multiple servers in the first place.

If you want some for redundancy, most providers offer a 'private network', where bandwidth is unmetered. Each compute provider is slightly different: you'll want to review their docs to see how to do it correctly.

Tailscale is another option for networking, which is super easy to setup.


There’s rarely, if ever, a _need_ for containerisation. Even for a single static binary though, there are benefits like network and filesystem segregation, resource allocation, …


It rarely makes sense to hire for a specific need. I want people that are smart and high agency. Seeing how they approach problems like this is generally enough to tell.

I've done similar interviews in the past and they are remarkably high signal.


Don't take it too personally. Downvoting/flagging it makes it clear to people who come across it in the future that it's wrong.

