Hacker News | stephen123's comments

I like Leopard Links for that. Cmd+T, type "go/", and I get all of the shortcuts I've configured.

https://leopardlinks.com/


How were they doing millions of events per minute with Postgres?

I'm struggling with Postgres write performance at the moment and could use some tips.


If you're not already doing this: remove unnecessary indices, partition the table, batch your inserts/updates, or try COPY instead of INSERT.
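To make the batching suggestion concrete, here's a minimal Python sketch (table and column names are made up) of turning N single-row INSERTs into one multi-row parameterized statement, so Postgres pays per-statement overhead once per batch instead of once per row:

```python
# Hypothetical helper: build one multi-row parameterized INSERT
# instead of issuing a round trip per row. For very large loads,
# COPY is faster still, but this alone helps a lot.

def batched_insert_sql(table, columns, rows):
    """Return (sql, params) for a single multi-row INSERT."""
    placeholders = ", ".join(
        "(" + ", ".join(["%s"] * len(columns)) + ")" for _ in rows
    )
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES {placeholders}"
    params = [value for row in rows for value in row]
    return sql, params

sql, params = batched_insert_sql(
    "events", ["id", "payload"], [(1, "a"), (2, "b"), (3, "c")]
)
print(sql)
# One statement with six parameters instead of three round trips.
```

The same idea applies through most drivers (psycopg's `executemany`/`execute_values`, JDBC batch mode, etc.); the win is amortizing parse/plan/network cost across the batch.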


Turn off indexing and other table-level optimizations during heavy writes.


What do you do when you then need to query the data? I usually need indexes so queries aren't slow. Perhaps I could insert into a staging table and then bulk-copy the data over to an indexed table, but that seems silly.


If your application language/framework allows, you can do the batching there: have your request handler put work into an in-memory queue, then have another thread/async worker pull batches off the queue, do the DB work in one batch, and trigger the response to the original handler. In an HTTP context this is all synchronous from the client's perspective, and you can get 2-10x the throughput at a cost of a couple of milliseconds of latency under load.

I gave more detail with a toy example here: https://news.ycombinator.com/item?id=39245416

I've since played around with this a little more, and you can make it pretty generic (at least make the worker generic, where you give it a function `Chunk[A] => Task[Chunk[Result[B]]]` to do the database logic). I don't have that handy to post right now, but you're probably not using Scala anyway, so the details aren't that relevant.

I've tried a similar thing in Rust; it's a lot more finicky but still doable there. It should be similar in Go, I'd think.
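For anyone who wants the shape of the pattern without the linked Scala example, here is a toy Python version (not the author's code): handlers enqueue work with a per-item Future, a single worker drains the queue in batches, does one "batch write" (a list append stands in for the DB call), and completes each Future so every caller still gets a synchronous result.

```python
# Toy batching worker: many handlers enqueue, one worker writes in batches.
import queue
import threading
from concurrent.futures import Future

work_q: "queue.Queue[tuple[int, Future]]" = queue.Queue()
written: list[list[int]] = []  # stand-in for batched DB writes

def worker():
    while True:
        item, fut = work_q.get()           # block for the first item
        batch = [(item, fut)]
        while len(batch) < 100:            # greedily drain what's queued
            try:
                batch.append(work_q.get_nowait())
            except queue.Empty:
                break
        written.append([i for i, _ in batch])  # one "batch write"
        for i, f in batch:
            f.set_result(i * 2)            # wake each waiting handler

threading.Thread(target=worker, daemon=True).start()

def handle_request(n: int) -> int:
    fut: Future = Future()
    work_q.put((n, fut))
    return fut.result(timeout=5)  # synchronous from the caller's view

results = [handle_request(n) for n in range(5)]
print(results)  # [0, 2, 4, 6, 8]
```

Under real concurrent load the queue fills between flushes, so batches grow and per-row overhead drops; the 2 ms latency cost mentioned above is the time an item waits for its batch.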


Isn't that basically the idea behind the "lambda architecture"? Of course, you typically don't use the same product for both the real-time and the batch aspects.


You said you were struggling with writes, so I offered some advice on how to speed writes up. The internet knows a lot more about this than I do, though.


Could replicating to a DB with indexing (purely for queries) work?


If one can't keep up, the other one can't either.

You could use partitions though.


What’s your hardware? RDS? NVMe storage?


It's Google Cloud SQL.


I think it's because of B-trees. B-trees and their pages work better if only the last page is getting lots of writes. Random UUIDs cause lots of unordered writes, leading to page bloat.
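A quick illustration of that ordering point (not a Postgres benchmark): random UUIDv4 keys scatter across the whole key space, so B-tree inserts touch pages everywhere, while a time-prefixed "v7-style" key (the format sketched below is hypothetical) is monotonically increasing, so new rows keep landing on the last page.

```python
# Random UUIDs vs. time-ordered keys: does insert order match key order?
import itertools
import time
import uuid

_seq = itertools.count()

def time_prefixed_id() -> str:
    # Hypothetical sortable id: ms timestamp + monotonic counter.
    return f"{int(time.time() * 1000):013d}-{next(_seq):08d}"

random_keys = [str(uuid.uuid4()) for _ in range(1000)]
ordered_keys = [time_prefixed_id() for _ in range(1000)]

print(random_keys == sorted(random_keys))    # False: inserts scatter
print(ordered_keys == sorted(ordered_keys))  # True: inserts append
```

UUIDv7 (timestamp-prefixed) gives you the same append-friendly behavior with a standard format, which is why it's often recommended over v4 for B-tree primary keys.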


That was fun. I like birds, but I'm no twitcher; I spent a good 10 minutes going through it, getting better at recognising the sounds.

It would be fun if you could pick a country the birds are from. Australian birds sound silly.


Looks great.

What are people using to store and send retries?


Plain ol’ SQL


As in, a database queue?


Good luck, the product looks slick.

I see you have traffic coming from indiepa.ge, so I wonder which indie hacker you are, and whether you're on Twitter. I always want to know who's behind something, particularly if I'm going to install their tracking tool. ;)


You can find me at @victoor (Spanish) or @falcon_maker (English).


Yeah, I find Go's concurrency very error-prone compared to something with futures.

I've even got a library for using futures in Go. https://stephenn.com/2022/05/simplifying-go-concurrency-with...
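This isn't the linked Go library, but for anyone unfamiliar with the futures style being contrasted here, a short Python sketch shows the appeal: composition and error propagation are handled by the future itself rather than by hand-wired channels and WaitGroups.

```python
# Futures style: submit work, collect results; errors surface at .result().
from concurrent.futures import ThreadPoolExecutor

def fetch(n: int) -> int:
    if n < 0:
        raise ValueError("negative input")
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch, n) for n in range(5)]
    results = [f.result() for f in futures]  # any exception re-raises here

print(results)  # [0, 1, 4, 9, 16]
```

The error-proneness being contrasted is that with raw goroutines and channels you must remember to signal completion and forward errors yourself; a future does both by construction.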


"completely static" has a gotcha though. I was getting segfaults yesterday because libc was removed from my docker image.

I had to add `CGO_ENABLED=0`


It seems like a waste of time to me. Wrapping errors adds context, but you can usually get enough context from stack traces.


But you don’t have stack traces.


Nice, thank you. Only the other day I was copying and pasting from the old site. Looking forward to installing the lib and deleting some code.


That old site was rough. Hopefully this one is a much better experience for you.

