Hacker News | csiegert's comments

How does that work? Is there a VSCode extension that works with all of them? I’ve only used the Claude Code extension for VSCode and would prefer something like that.


Now give us a 17-inch laptop, please!


Why don’t you change the order to “do the work; if successful, grab a connection from the Postgres connection pool, start a transaction, commit, and release the connection back to the pool”?


That’s what we should do, yes. The problem is that we were just sorta careless about interleaving database calls with the “work” we were doing. So the function that calls the slow external service also takes a &PgConnection as an argument, because it wants to bump a timestamp in a table after the call completes. Which means you need to already have a connection checked out just to call that function, and so on.

If the codebase is large, and full of that kind of pattern (interleaving db writes with other work), the compiler plugin is nice for (a) giving you a TODO list of all the places you’re doing it wrong, and (b) preventing any new code from doing this while you’re fixing all the existing cases.

One idea was to bulk-replace everything so that we pass a reference to the pool itself around, instead of a checked-out connection/transaction, and only check out a connection for each query on demand. But that’s dangerous: some of these functions do writes, and callers may be relying on transaction rollback behavior if something fails. If you were doing 3 pieces of “work” within a single db transaction before, and the third one failed, the transaction was rolled back for all 3. Split that into 3 short-lived connections, and now only the third db operation is rolled back; the first two have already committed. So you can’t just find/replace. You need to go through and consider how to reorder the code so that the database calls happen “logically last” but are still grouped into a single transaction as before, to avoid subtle consistency bugs.
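To make the reordering concrete, here is a minimal sketch with toy `Pool`/`Transaction` stand-ins (hypothetical types, not a real driver like diesel or sqlx): the slow call returns the write it *wants* to make instead of performing it, so no connection is held during the slow part, and the writes still commit together in one short transaction at the end.

```rust
// Toy stand-ins for a real pool/transaction type (illustrative only).
struct Pool;
struct Transaction {
    statements: Vec<String>,
}

impl Pool {
    // Checking out a connection and opening a transaction is cheap; the point
    // is to do it *after* the slow work, so it is held only briefly.
    fn begin(&self) -> Transaction {
        Transaction { statements: Vec::new() }
    }
}

impl Transaction {
    fn execute(&mut self, sql: &str) {
        self.statements.push(sql.to_string());
    }
    // All writes commit together; if any earlier step had failed, the
    // transaction would never have been opened at all.
    fn commit(self) -> Vec<String> {
        self.statements
    }
}

// Before: call_slow_service(conn: &mut PgConnection) held a connection for the
// whole multi-second call. After: it returns the write it wants as data.
fn call_slow_service() -> Result<String, ()> {
    // imagine a slow HTTP call here; no db connection is checked out
    Ok("UPDATE jobs SET finished_at = now() WHERE id = 1".to_string())
}

fn main() {
    let pool = Pool;

    // 1. Do all the slow work first, collecting the writes it implies.
    let pending_write = call_slow_service().expect("service failed");

    // 2. Only now open a transaction, grouping the writes so rollback
    //    semantics stay the same as before.
    let mut tx = pool.begin();
    tx.execute(&pending_write);
    tx.execute("UPDATE stats SET runs = runs + 1");
    let committed = tx.commit();

    println!("committed {} statements", committed.len());
}
```

The key design point: functions that used to take `&PgConnection` instead return a description of their writes, so the caller decides when the transaction spans them.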


"Work" might require a transaction which reads with lock, computes, (launch missiles), and then updates.


This is really tough in a large organization with features that cut across product domains.


WS is already taken by the WebSocket ws://… protocol.


Please add punctuation. I had to read it twice to understand.


It’s definitely nice to know that nothing gets executed, read, written or sent without permission from the user when running a program/script with Deno.

You complain that the flags always have to be set to get anything working, so they are supposedly useless. But you don’t have to set them in a grant-all fashion. Every flag supports fine-grained permissions, e.g. --allow-env=API_KEY,PORT grants access only to the env vars API_KEY and PORT instead of all env vars. The same principle applies to --allow-net, --allow-run, --allow-read, --allow-write, etc. See `deno run --help` or https://docs.deno.com/runtime/fundamentals/security/ for more.
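For example, for a hypothetical `server.ts` that reads two env vars and talks to a single API host, the grants can be scoped to exactly that:

```shell
# Only API_KEY and PORT are readable; only api.example.com is reachable.
# Any other env read or network call is denied (or prompts interactively).
deno run \
  --allow-env=API_KEY,PORT \
  --allow-net=api.example.com \
  server.ts
```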


I’ve got two questions:

1. What does it look like for a page to be indexed when googlebot is not allowed to crawl it? What is shown in search results (since googlebot has not seen its content)?

2. The linked page says to avoid Disallow in robots.txt and to rely on the noindex tag. But how can I prevent googlebot from crawling all user profiles to avoid database hits, bandwidth, etc. without an entry in robots.txt? With noindex, googlebot must visit each user profile page to see that it is not supposed to be indexed.


https://developers.google.com/search/docs/crawling-indexing/...

   "Important: For the noindex rule to be effective, the page or resource must not be blocked by a robots.txt file, and it has to be otherwise accessible to the crawler. If the page is blocked by a robots.txt file or the crawler can't access the page, the crawler will never see the noindex rule, and the page can still appear in search results, for example if other pages link to it."

It's counterintuitive, but if you want a page to never appear in Google search, you need to flag it as noindex and not block it via robots.txt.

> 1. What does it look like for a page to be indexed when googlebot is not allowed to crawl it? What is shown in search results (since googlebot has not seen its content)?

It'll usually list the URL with a description like "No information is available for this page". This can happen for example if the page has a lot of backlinks, it's blocked via robots.txt, and it's missing the noindex flag.
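Concretely, the page must stay crawlable and carry the noindex rule itself, either in its markup or as a response header (illustrative snippet):

```html
<!-- In the page's <head>; do NOT also Disallow the URL in robots.txt,
     or Googlebot will never fetch the page and never see this rule. -->
<meta name="robots" content="noindex">
```

For non-HTML resources (PDFs, images), the equivalent is the `X-Robots-Tag: noindex` HTTP response header.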


> But how can I prevent googlebot from crawling all user profiles to avoid database hits..

If user profiles are noindexed, why should you care whether Google crawls them, when almost every other crawler out there doesn't obey robots.txt anyway?

It's not in Google's interest to waste resources on non-indexable content; you are worrying far too much about it.


The developer said it’s not adult content.

https://www.reddit.com/r/SaaS/comments/1dfwg1i/comment/l8mlm...


Your unit is wrong. Giga means billion. The universe is ~14 Gyr old.


You're entirely correct, my mistake; unfortunately I can't edit the comment any more.


Mail the device to Spotify’s headquarters. Better yet, a Swedish artist should build a memorial out of these devices in front of Spotify’s headquarters.

