Hacker News | DoubleFree's comments

The postgres query optimizer will try to minimize the number of pages read from disk (and the number of intermediate pages written to disk). Benchmarking the query optimizer by making the shared buffers large enough to hold all the data therefore seems wrong, as you're then measuring the speed of the query optimizer and the join processor, instead of the quality of the generated query plans. It would not surprise me if the generated plans for these versions are actually all the same and this is only measuring execution speed.


No, it optimizes cost, which includes pages read from disk _and_ things like CPU use. Cost is an arbitrary unit meant to correlate with time spent, not with disk loads alone, so it's completely reasonable to compare plans with everything loaded in RAM. (Cost is by convention scaled such that one page read from disk is 1.0, but that's a different thing from "the optimizer will try to minimize the number of pages read from disk". It could just as well have been scaled so that 1.0 was 1 ms on some arbitrary machine.)


It is certainly possible that the plans are similar, and that improvements to the execution engine are being measured. The join order benchmark was designed to test optimizer quality. It is worth noting that in addition to trying to measure the number of pages read from disk, the PG optimizer also tries to reduce the number of tuples examined by the CPU, the number of predicate evaluations, etc. All these numbers are rolled up into a "cost," which is the function that the optimizer minimizes.
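To make that "rolled up" idea concrete, here is a rough sketch in Rust of how a sequential-scan cost estimate combines those terms. The constants are PostgreSQL's documented defaults (seq_page_cost = 1.0, cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025); everything else is a simplification, and the real model is far more elaborate (startup vs. total cost, random vs. sequential pages, selectivity, etc.):

```rust
// PostgreSQL's documented default cost constants:
const SEQ_PAGE_COST: f64 = 1.0; // one sequential page read = 1.0 by convention
const CPU_TUPLE_COST: f64 = 0.01; // examining one tuple
const CPU_OPERATOR_COST: f64 = 0.0025; // one predicate/operator evaluation

// Toy estimate for a sequential scan: pages read, tuples examined,
// and predicate evaluations, all folded into one number.
fn seq_scan_cost(pages: u64, tuples: u64, quals_per_tuple: u64) -> f64 {
    SEQ_PAGE_COST * pages as f64
        + CPU_TUPLE_COST * tuples as f64
        + CPU_OPERATOR_COST * (tuples * quals_per_tuple) as f64
}

fn main() {
    // A 1000-page table with 100k tuples and one qual per tuple:
    let cost = seq_scan_cost(1000, 100_000, 1);
    println!("estimated cost: {cost}"); // 2250: half of it is CPU terms
}
```

The point of the sketch: even for a plain scan, half the cost here comes from CPU terms, so the planner is minimizing a blend, not the page count alone.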

It is also true that measuring cold cache and warm cache performance can produce different results, and this experiment is certainly in the warm cache scenario. But, the cold cache scenario suffers from the problem you mention as well: an improvement to PG's B-tree that saves a few IOs will dominate any kind of CPU-based improvement (at least at the data size of the join order benchmark).

FWIW, the plan for the query with the P90 latency changes from a plan that uses nested-loop and merge joins in PG 8.4 to a plan that uses a hash join in PG 16 (where it is no longer the P90 query), which is at least some evidence of optimizer improvements.


> It is certainly possible that the plans are similar, and that improvements to the execution engine are being measured. The join order benchmark was designed to test optimizer quality.

I don't think that it's possible to test optimizer quality in isolation -- not really, not if it's to be in any way practical. Many (most?) individual improvements to the optimizer are hopelessly intertwined with executor enhancements. This is usually fairly obvious, because the same commit changes both the optimizer and the executor together. Sometimes it's much less obvious, though, because it's more a case of the optimizer and executor coevolving.

It's probably still the case that a lot of the improvements seen here are pure executor enhancements, but I can't say that I'm very confident about that. (As the main person behind those B-Tree IO savings you might expect me to have more confidence than that, but I find it horribly difficult to tease these issues apart while keeping the discussion high level/relevant.)


A government's primary purpose is to act on behalf of its constituents, and if these daytrippers are substantially detrimental to Amsterdam's livability, they should be discouraged from going there.

Or, to put it in more financial terms, the cost of compensating for the negative effects of these tourists might be much higher than the tax revenue they bring in.


I think the counter is that in almost every town locals hate tourists, but an average citizen hasn't thought through the consequences of not having the tax base support them.

As someone who currently lives in NYC, it would be great for me if tourists stopped coming to the city and magically the level of restaurants, sanitation, and public transit didn't change at all.


Amsterdam's old town is tiny compared to NYC. If you have limited capacity, it's also rational to prioritize tourists who are likely to spend more per capita (I'm not sure if this will necessarily be the long-term outcome of the ban, though).


I live in NYC too, but imagine if all of Manhattan was midtown. That's basically Amsterdam these days. The suburbs are fine, but it's a nightmare in the city center. Imagine Times Square but like 6x the size.


You can't compare Amsterdam tourists with "every town" tourists. Sorry that you have never visited but it is quite apparent if you have.


Sure, but cruise ship tourists bring in vastly less money, so it's not a constant benefit for a given level of harm.


The Dutch are quite pragmatic and they had years to think about this.


Fly postgres is not managed postgres, it's CLI sugar over a normal fly app, which the [docs](https://fly.io/docs/postgres/) make quite clear. Their docs also make clear that if you run postgres in a single-instance configuration and the hardware it's running on has problems, your database will go down.

I believe the underlying reason that precludes failing over to a different host machine is that fly volumes are slices of host-attached NVMe drives. If the host goes down, these can't be migrated. I _think_ instances without attached volumes will fail over to a different host.

Of course, that's not ideal, and maybe their CLI should also warn about this loudly when creating the cluster.


I would substantially nuance most of what you say. I really like vim and use it almost exclusively, but it is not necessarily the "only" way, or even necessarily the "best" way to do things. If the vim grammar clicks for you, great, by all means use it. There are people for whom it doesn't click, and that's fine too. I do encourage everyone to try it for a bit.

I also would not say learning vim makes you a better programmer or a better writer. It makes inputting and changing text easier and faster, but that's not what programming or writing is about.

I agree that the vim grammar is nice, but the bigger thing that differentiates editing text in vim versus other editors is its modality. And it is the modality that allows it to have a grammar in the first place. And that grammar does break down in places, just look at every keybind starting with g.

So yes, try vim, because it is pretty great. But if it doesn't work for you, move on to something that does.


Space is not quantized, so yes, the electron has infinitely many possible locations. There are, however, places the electron cannot be, depending on its energy. See, the electron's energy is quantized, meaning it can only take one of a predetermined set of values (see atoms' electron shells). The energy determines the wave function, and the wave function squared is the probability distribution for the electron's location. This wave function has roots (nodes), indicating where the electron cannot be found.
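As a toy illustration of such a root: the (unnormalized) hydrogen 2s radial wave function, with the radius r measured in Bohr radii, is proportional to (2 - r)·e^(-r/2), which changes sign at r = 2. A quick numerical sketch:

```rust
// Unnormalized hydrogen 2s radial wave function, r in units of the
// Bohr radius: R_2s(r) ∝ (2 - r) * exp(-r / 2).
// Its root at r = 2 is a node: |R|^2 vanishes there, so the electron
// is never found at exactly that radius.
fn r_2s(r: f64) -> f64 {
    (2.0 - r) * (-r / 2.0).exp()
}

fn main() {
    println!("R_2s(1.0) = {:+.4}", r_2s(1.0)); // positive inside the node
    println!("R_2s(2.0) = {:+.4}", r_2s(2.0)); // exactly zero: the node
    println!("R_2s(3.0) = {:+.4}", r_2s(3.0)); // negative outside it
}
```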


I personally really like using Rust, but I don't think it's necessarily better than go. There are some areas where Rust will definitely be better or more ergonomic, such as bare-metal or low level systems programming tasks, but many other areas where Go's design works better and Rust will only hold you back (looking at you, borrow checker).

That said, I would still recommend learning Rust, because it will teach you about lifetimes and aliasing, even if you don't want to learn about them. When building for the web, I think Go's easier dynamic dispatch and GC will save you some headaches though.


Let's do a little calculation. For an upper bound, let's say you press your full weight onto it and the coffee grounds' resistance is not the limiting factor. That's maybe 90 kg, times the acceleration of gravity, for about 900 newtons. That force is applied over an area of pi*(30mm)^2, or ~3e-3 m^2. That gives us a pressure of 900/3e-3 = 3e5 Pa, or about 3 bar.
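The same back-of-the-envelope numbers, spelled out (assumptions as above: 90 kg pressing straight down, a 30 mm piston radius, no losses to friction or the coffee bed):

```rust
// Upper-bound AeroPress pressure estimate: full body weight on a
// 30 mm radius piston, ignoring friction and bed resistance.
fn main() {
    let force_n = 90.0 * 9.81; // weight in newtons
    let area_m2 = std::f64::consts::PI * 0.030_f64.powi(2); // pi * r^2
    let pressure_pa = force_n / area_m2;
    let pressure_bar = pressure_pa / 100_000.0; // 1 bar = 1e5 Pa
    println!("{pressure_bar:.1} bar"); // roughly 3 bar
}
```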


Hoffmann measured this with a pressure-sensor-modded AeroPress; most typical brews would not exceed half a bar.


Getting over 1 bar gets you coffee spraying everywhere: https://youtu.be/Qz_GZpzpst4?t=500


why does he have it so far away from the cup?


Elderly people also use bikes and often transition to electric bikes or sometimes tricycles if needed. As other people have commented, public transportation in The Netherlands is also quite decent, so that also sees much use by the elderly.


std::thread::scope was reintroduced in 1.63


And all it really does is force a join of every spawned thread when the scope ends. Maybe it's just me, but that seems only marginally useful.


In Rust if you have a &Thing right here, you can't give that to some new thread unless you can promise the thread has a shorter lifespan than the &Thing does - otherwise the thread could access it after it ceases to exist, and you've introduced Undefined Behaviour.

Without scoped threads this means all references you want to give to a thread must be allowed to live forever, the lifetime 'static, since your threads might live forever (the mechanism to kill them could be discarded unused somehow). Since scoped threads always cease to exist when they go out of scope, we can give them references that live any time longer than that scope.
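A minimal sketch of that, using std::thread::scope from the standard library (values and names are just illustrative):

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4];
    let message = String::from("hello");

    // Both closures borrow locals. With plain thread::spawn this would
    // be rejected (spawn demands 'static captures), but scope guarantees
    // every spawned thread is joined before the scope returns, so the
    // borrows provably end before `data` and `message` are dropped.
    thread::scope(|s| {
        s.spawn(|| println!("sum: {}", data.iter().sum::<i32>()));
        s.spawn(|| println!("len: {}", message.len()));
    });

    // All scoped threads have been joined here, so the locals are
    // usable (even mutably) again.
    println!("still usable: {} items", data.len());
}
```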


That's exactly the same as the crossbeam implementation. It's quite useful especially for relatively simple parallelism tasks.


The important part is it safely allows you to pass references or other objects with lifetimes to those threads. (Or at least lets the compiler know so you can avoid having to write unsafe things yourself.)

This happened to be useful for something I was doing at work, where the wrapper for a native C library expressed the required ordering of the library's constructs using lifetimes. (E.g. you must set up Foo before Bar, and Foo must outlive Bar. It was OK to pass Bar around and use it from multiple threads, but it had a lifetime attached. Using scoped threads we can satisfy the compiler's lifetime checker, since it knows that even though we gave Bar to many threads, none of them outlive Foo.)


It's a building block that allows more threading abstractions to be built on top of it without using unsafe code.

Most importantly, it works with references and borrowed types like `&`, `&mut`, or `Cow`.

E.g. in the example below it is used to implement, ad hoc, a typical simple parallel fold pattern: a collection of data points is chunked and split across num_cpu threads, each chunk is processed in parallel, and the results are then combined.

https://play.rust-lang.org/?version=stable&mode=debug&editio...

Now in practice, for many use cases you want to use more convenient/optimized libraries like rayon for this kind of thing instead of implementing it ad hoc by hand.

But having the necessary primitives to implement it ad hoc yourself without unsafe code is great anyway.
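For reference, a minimal version of that chunk-and-combine pattern (not the linked playground code, just an illustrative sketch with made-up names):

```rust
use std::thread;

// Ad-hoc parallel fold: split the input into one chunk per worker,
// sum each chunk on its own scoped thread, then combine the results.
// Scoped threads let each worker borrow its &[i64] chunk directly:
// no Arc, no cloning, and no unsafe code.
fn parallel_sum(data: &[i64], workers: usize) -> i64 {
    // Ceiling division so all elements land in some chunk;
    // max(1) guards against chunks(0), which would panic.
    let chunk_len = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_len)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<i64>()))
            .collect();
        // Joining inside the scope collects each partial sum.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<i64> = (1..=1000).collect();
    println!("{}", parallel_sum(&data, 4)); // 500500
}
```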


Doing this using a destructor, or worse in straight-line code, turns out to be unsound. So it's more than marginally useful for those familiar with Chesterton's fence.
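The core of the problem (the pre-1.0 "leakpocalypse"): safe Rust is allowed to leak any value, so a destructor is never guaranteed to run. A sketch using a stand-in guard type (the thread-spawning parts are omitted; `JoinGuard` here is purely illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

// Stand-in for a guard that would join a thread in its destructor.
struct JoinGuard;

impl Drop for JoinGuard {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst); // the "join" would happen here
    }
}

fn main() {
    // mem::forget is safe code, yet Drop never runs, so a join that
    // lives only in a destructor can be skipped entirely. A leaked
    // guard's thread could then outlive the data it borrows.
    std::mem::forget(JoinGuard);
    assert!(!DROPPED.load(Ordering::SeqCst));
    println!("guard leaked; the destructor (and the join) never ran");
    // std::thread::scope avoids this: the join happens inside a
    // closure-based API that cannot be skipped, not in a destructor.
}
```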


I didn't say it was useless, only that its use is highly limited to a very specific case, namely "do a bunch of things at the same time and block until completion".


If the scope is the lifetime of the program, then it hardly makes a difference. Note that Rust programs exit upon return from main anyway.


This is the point of lifetimes. Everything has a lifetime whether or not it is annotated. Sometimes the compiler doesn't know what the lifetime of an object is, particularly in relation to other objects.

If something lives for the entire program (a `static` item, or a deliberately leaked allocation), then it has the `'static` lifetime and there is no need for a scoped thread for it. (A local in main is not `'static`, though, even if it happens to live until the program ends.)
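A small sketch of that distinction (illustrative names):

```rust
use std::thread;

// Data with a genuine 'static lifetime, such as a `static` item or a
// string literal, can be handed to plain thread::spawn; no scope needed.
static TABLE: [i32; 4] = [1, 2, 3, 4];

fn main() {
    let handle = thread::spawn(|| TABLE.iter().sum::<i32>());
    println!("sum: {}", handle.join().unwrap()); // 10

    // A local in main, by contrast, is *not* 'static: spawning a thread
    // that borrows it will not compile, which is where scoped threads
    // (or Box::leak) come in.
}
```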


Rocket is moving towards async as well


Rocket is not moving anywhere right now. 0.5 is supposed to be async, but it's been delayed more than a year because the maintainer doesn't have time.


You don't have to use async in Rocket v0.5.

