Hacker News | new | past | comments | ask | show | jobs | submit | staplung's comments | login

In total, a little over one dozen astronauts died on shuttle flights (14). No astronauts died during Gemini or Mercury. Three died in a test on Apollo 1. The shuttle failure rate was nowhere close to 1/10. In fact, it was 1/67 (2 failures out of 134 flights).

The Tower of London arguably qualifies as a fort built to protect its inhabitants from the city. In its original form, its most impressive and formidable defenses faced London.

Cool article, but I think the write-up no longer matches the actual code. Snippets in the article use `*p->p` a lot. Here `p` is a pointer to a parser struct, defined above as

  struct parser {
    const char* s;
    int pos;
    struct grid* g;
  };

Notice there is no `p` member within. I assume the author meant `*p->pos`? And indeed, if you look at the code on GitHub, the parser struct is defined as

  struct parser {
    const char *s, *p;
    struct grid* g;
  };
So there's the missing `p`, though it's now a pointer rather than an int. I presume the member was once named `pos` but got renamed at some point, and some of the snippets never got updated to match.


The numbers in the headline seem odd. They imply that each (fake|fraudulent) worker only nets $5000 per year for Kim. I know the system has some inefficiencies where people behind the scenes are helping the "employee" with the work and there are cost of living expenses, taxes etc. but that seems like a pretty low take.


This might include people working in lumber camps in places like Siberia, "mercenaries" in Ukraine, people in NK-managed restaurants in China, Laos etc, or similar efforts that have been reported on, where the average revenue per worker is likely a lot lower.


I had the same thought - I guess there's additional overhead in paying the in-country proxy and probably also a lot of churn (being found out and fired, and then taking a long time to find another position).


5k a year could be 2 weeks of onboarding or waiting out a bureaucratic PIP process.

It's also possible that it's a numbers game and only 2/3 succeed at getting hired.


Maybe some of them don't remain employed for very long.


It would be ironic if the DPRK just passes on more of the money than most contract software companies.


And the reason they were modeled after the dollar bill size is because there were already many types of systems for storing and organizing them. That came in handy for the census.

The old BBC Connections series has a segment with James Burke using the old census tabulators.

https://www.youtube.com/watch?v=z6yL0_sDnX0&t=2640s


Of course since the old syntax is merely deprecated and not removed, going forward you now have to know the old, bad form and the new, good form in order to read code. Backwards compatibility is a strength but also a one-way complexity ratchet.

At least they managed to kill `auto_ptr`.


I doubt it will be a problem in practice.

Regular variadic arguments aren't used very often in C++ in general, with the exception of printf-like functions. Not rare enough for the majority of C++ programmers not to know about them, but definitely much rarer than their use in Python. The main reason people know about them at all is printf. The "new" C-compatible form has been supported since the first ISO-standardized version of C++, if not longer. There hasn't been a good reason to use the "old" form for a very long time, which means the amount of C++ code using the deprecated form is very low.

Being deprecated means that most compilers and linters will likely add a warning or a code-fix suggestion. So any maintained project that was accidentally using the C-incompatible form will quickly fix it. There's no good reason not to.

As for the projects that for some reason are targeting an ancient pre-ISO-standard C++ version, they wouldn't have upgraded to a newer standard anyway. So even if the new standard removed the old form completely, it wouldn't have helped with those projects.

So no, you don't need to know the old form to read C++ code. And in the very unlikely case you encounter it, the way of accessing variadic arguments is the same for both forms, through the special va_list/va_arg calls. So if you only know the "new" form you should have a pretty good idea of what's going on. You might look up in a reference what the deal is with the missing comma, but other than that it shouldn't be a major problem for reading the code. This is hardly going to be the biggest obstacle when dealing with code bases that old.


The “new” form has been valid since the original 1998 C++ standard, where it was added for compatibility with C. “You now have to know” has therefore already been the case for the past 27 years. Back then the old pre-standard form was kept for backwards compatibility, and is only now being deprecated.


The old-style variadics are rarely seen in C++ these days, never mind this particular edge case. If you're working in a vaguely modern version of C++ this largely won't impact you. You can almost certainly ignore this and you'll be fine.

Unless you have a massive legacy code base that is never updated, C++ has become much simpler over time. At a lot of companies we made a point of slowly refactoring old code to a more recent C++ standard (often a couple of versions behind the bleeding edge), and it always made the code base smaller, safer, and more maintainable. It wasn't much work to do, either.

To some extent with C++, complexity is a choice.


PyCuda 2024, used fairly often in certain industries, still contains `auto_ptr` ;-;


I think Rust has shown a way to remove deprecated interfaces while retaining backwards compatibility: automated tooling to migrate to the next version, plus a few versions' grace for deprecated interfaces to stick around at the source level.


If you're talking about editions, this isn't how they work at all; every edition continues to be supported forever. (The part about automated migration tooling is true, and nice.)

There've been a few cases where code was unsound and should never have compiled, but did due to compiler bugs, and then they fixed the bugs and the code stopped compiling. These were handled through deprecation warnings with timelines at least several months long (Rust releases a new version every six weeks), but usually didn't have automated migration tooling, and didn't fracture the language mostly because they were rare edge cases that most programmers didn't encounter.


Editions are still allowed to remove old syntax or even remove APIs; they only can't break ABIs. So code removed from one edition is still there in previous editions, but such symbols don't even get linked if they're unused, which supports progressive removal. And similarly, I could see editions themselves getting completely removed at some point. E.g., rather than maintaining editions indefinitely, in 20 years have a checkpoint version of the compiler that supports the previous 20 years of editions, and going forward editions older than 10 aren't in the build (for example; this assumes a meaningful maintenance burden, which makes it hard to predict when that would happen and what a specific policy would look like).


Editions never remove APIs.


They haven't yet. There's nothing stopping them, though, and from talking with the std folks it seems like they will likely experiment with crossing that bridge at some point.


C++ almost never removes features because of the ABI compatibility guarantees. Programs compiled with older versions of the standard can be linked against newer versions.

This is allegedly because in the 80s companies would write software, fire the programmers, and throw the source code away once it compiled.


Fixing syntax by definition does not affect the ABI. And Rust has shown that both ABI and API compatibility can be achieved in the presence of several "versions" (editions) of the language in the same build.


Rust has shown that it’s yet another language that kind of sort of addresses 3% of the issues c/c++ has, tops.


Probably because like 95% of C++'s issues are self-inflicted and don't need to be addressed if you use a different language in the first place, and 1% of them are fundamentally unsolvable by any language.


I really don't like C++ but it's hard to come up with thirty-odd times as many other terrible problems as the ones Rust addresses.


Do you actually know Rust, or were you just talking out of your ass? I'd like you to enumerate even thirty problems of C or C++ that Rust doesn't fix, never mind hundreds (because Rust fixes a metric shit ton of C/C++ problems!)


lol. A functioning module system that's easy to use and adopted? A package manager? A well-implemented hash table? Fast compile times? Effectively no segfaults? Effectively no memory leaks? Comparatively no race condition bugs? A benchmark and unit test framework baked into the language? Auto optimization of the layout of structs? No UB?

I don’t know what you’re counting as “3% of the issues,” but if those are the 3%, they sound like massive productivity and safety wins that haven’t existed in a language with a similar performance profile to C/C++.


Is Rust faster to compile than C++?


Different (though related) things make compiling Rust slow. In both cases the compiler can spend a lot of time working on types which you, as programmer, weren't really thinking about. Rust cares about types which could exist based on what you wrote but which you never made, whereas C++ doesn't care about that, but it does need to do a lot of "from scratch" work for parametrised types that Rust doesn't have to because C++ basically does a lot of textual substitution in template expansion rather than "really" having parametrised typing.

If you're comparing against Clang, the backend optimiser work is identical in both cases: it's LLVM.

People who've never measured often believe Rust's borrowck needs a lot of compiler effort but actual measurements don't agree - it's not free but it's very cheap (in terms of proportion of compiler runtime).


For most day to day cases, rust will actually compile faster because the build system will do good incremental builds - not perfect, but better than c++. Also clean builds are still “perfectly” parallelized by default.

And yes, while rust has a reputation for being slow, in my experience it’s faster for most projects you encounter in practice because the c++ ecosystem is generally not parallelized and even if it is many projects have poor header hygiene that makes things slow.


Rust is a single vendor. It's not really the same situation.


Having multiple compiler vendors is a problem IMO not a feature. It fragments the ecosystem - the code compiles fine with this compiler but not this other one. The maintenance of portable Rust code is significantly easier.

I think the way forward is multiple backends (LLVM + GCC) to improve platform support, but a single unified frontend that works correctly on all platforms is a good thing.


There is a single standards committee, though. There is really nothing stopping them from shipping tooling that can do the conversions for people. The number of vendors isn't really the problem here. The problem is that the committee shifts that responsibility onto the compiler vendors rather than owning it themselves.


[flagged]


Several times now C++ enthusiasts and indeed the committee have been told the way forward is the "Subset of a superset" that is, Step 1. Add a few new things to C++ and then Step 2. Remove old things to make the smaller, better language they want.

Once they've obtained permission to do Step 1 they can add whatever they want, and in a few years for them it's time to repeat "Subset of a superset" again and get permission for Step 1 again. There is no Step 2, it's embarrassing that this keeps working.


Can someone explain why helium is used for these purposes, as opposed to some other noble gas? I think there's more argon (it's about 1% of the atmosphere) than helium so is helium somehow special, or is it just cheaper, despite being rarer and non-renewable?


Helium has the second highest [1] specific heat capacity (after hydrogen); it's significantly higher than that of even water. It's damn efficient at cooling or heating. With that, it's chemically inert, unlike hydrogen or ammonia. There's no reasonable substitute.

[1]: https://en.wikipedia.org/wiki/Table_of_specific_heat_capacit... (Sort by the third column.)


Heat capacity is irrelevant -- argon and helium have exactly the same heat capacity per liter of gas, which would be the figure of merit in this context.

Heat conductivity, on the other hand, is an order of magnitude higher for helium, compared to argon, because its atoms are moving faster due to their lower mass.

When the gas is used for cooling, heat conductivity is important because it determines the heat transfer through the boundary layer near the surface, where the velocity of the flow drops to zero at the surface itself and all the heat transport is through conduction rather than advection.


It's about the thermal conductivity.

Helium has ~150 mW/(m·K) vs argon's ~18 mW/(m·K), so you can't replace it.

The only alternatives to Helium are Neon, which is 3x worse and much more expensive, and hydrogen. However, hydrogen is flammable so it's a very bad idea to use it in a fab which has extremely poisonous gases and needs a cleanroom environment. A fire would ruin your whole factory and kill your engineers.


Cool, but I don't see how it's sorting anything. It just seems to play a randomized arrangement of the slices. You can re-randomize as much as you like but there's no sort option as far as I can see.


It randomizes slices of the sample and begins to play the slices in the random order. Meanwhile it begins the bubble sort algorithm at a pace that matches the tempo, sorting the slices into their chronological order. Throughout, it only plays the unsorted slices. (I was kinda hoping it would play the sorted sample at the end.)


I actually wanted it to play them as it went, so that it would be <unsorted><sorted> each time through, with the former shrinking and the latter growing.


The idea is that it slices the Amen Break into however many slices you specify, and the list being sorted is the indices for those slices. At each step, it plays the slice the pivot is being compared to.

Because it only plays the samples being compared, it never plays the sorted chunks, so it's missing a "punchline" of sorts.


I was surprised at how frustrating it was to not hear the sorted result at the end.


You're right. It doesn't play the sorted parts, which is strange. I expected to have a series of random-then-controlled slices with the random part getting shorter and the controlled part getting longer, but it really is just a shortening loop of random beats.


Would have been cool if it played the sorted ones at the end as a final run through victory lap


Did you play it to the end? It's absolutely sorting from smallest to largest. Unless you have a confused understanding of a bubble sort, it's doing a bubble sort


Not the OP but I stopped listening pretty quickly because I was confused about how it was sorted.

It wasn’t until I read your comment that I realised the sorting happens while you’re listening rather than beforehand.


Same! thanks for saving the experience for me :)


So it's sorting from earliest to latest, really?


The value that is being sorted isn't obvious to me. It's obvious that it is sorting it. I'm guessing maybe some dB level of each of the hits/notes. If that was the case, I'd expect the initial unsorted view to line up with the pattern of the waveforms which is not the case. Maybe it's just an unsorted list of values sorted in sync to the rhythm. It's weird though that the segment corresponds to a segment of the audio. I just don't see how they are linked.


It's sorting by index of the slice. Pressing "shuffle" jumbles the slices up. So it puts the slices of the break back in the correct order. You never hear the result.

Set it to 8 slices and it becomes easy to see what it's doing: look at the waveform and the now-playing highlight jumping around.


I was confused at first at what the different "levels" mean. But they're not levels, they're just indices.

I would suggest the author changes the UI to just show a number instead of a bar, to make this clearer.


Give it a minute or two.


It’s sorting by time


It uses the built-in one. But as discussed in the article they ran into the problem where even when you try to force using the internal mic, iOS will silently switch to the mic on a pair of AirPods if there's a pair connected.


I don't think it would work, because the accelerometer updates come at too low a frequency. Apple's developer info says:

  Before you start the delivery of accelerometer updates, specify an
  update frequency by assigning a value to the accelerometerUpdateInterval
  property. The maximum frequency at which you can request updates is
  hardware-dependent but is usually at least 100 Hz.

100 Hz is way too slow. Presumably some devices go higher, but according to the article the peak signal is in the 3 kHz to 15 kHz range.

