explodingwaffle's comments | Hacker News

Just because it is digital doesn't mean it has to be a microcontroller though, right? I see no reason this wouldn't just be a state machine or whatever out of plain old logic.


Well, I've been working with a number of these devices (different brands, but the same or slightly more functionality) and they all have little controllers in them. Some are documented and you can talk to them directly (usually I²C); others are "black boxes": you can tell there is something living on the other side of the nominally "NC" pin, but not what, and you don't have control over it.

I also have a couple of very fancy ones that you can compensate and whose NV memory you can write to directly. Those are pretty expensive, $100 or thereabouts, but the precision is unreal for a non-governed device.


This is great, thanks for pointing to it! I've been looking for this ideal sort of RSS reader.


This is awesome! Just so you know, you are legally obligated to do Bad Apple when interest dies down.


By my estimation, it would only take 1/3 of a year to render


Y'know, I've been both excited and afraid that Bad Apple would show up. The good news is that a lot of frames would probably only need a few pixels changed from the previous frame, so some might draw really quickly.

Basically you want to avoid keyframes on this thing; they'll kill you.


Some of the ports of Bad Apple have had to deal with this, and they narrowed each frame down to the few changes it needed. When there were too many pixels to change at once, they made fewer changes in exchange for a loss of quality.

https://trixter.oldskool.org/2014/06/19/8088-domination-post...

https://trixter.oldskool.org/2014/06/20/8088-domination-post...
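
That budgeted-delta idea can be sketched roughly like this (hypothetical Rust, not the write-up's actual code; frames are flat bitmaps and the budget is a per-frame pixel cap):

```rust
// Hypothetical sketch: compute which pixels changed between two frames,
// then cap the list at a per-frame budget. Dropped pixels are the
// quality loss the write-up describes.
fn delta_with_budget(prev: &[bool], next: &[bool], budget: usize) -> Vec<usize> {
    // Indices of pixels that differ between the two frames...
    let changed: Vec<usize> = prev
        .iter()
        .zip(next)
        .enumerate()
        .filter(|(_, (a, b))| a != b)
        .map(|(i, _)| i)
        .collect();
    // ...truncated to the budget.
    changed.into_iter().take(budget).collect()
}

fn main() {
    let prev = vec![false, false, true, true];
    let next = vec![true, false, false, true];
    // Pixels 0 and 2 differ, but we can only afford one change this frame.
    assert_eq!(delta_with_budget(&prev, &next, 1), vec![0]);
    println!("ok");
}
```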


Certainly not a "Rust UX _expert_", but I do find GPUI interesting. Some nice examples here of not-"just text/button widget programs": https://github.com/longbridge/gpui-component


I tested with 1.1.1.1 first, didn't get anything, and gave up for the night. Maybe I should set a different provider as my DNS backup? (Any DNS gurus want to tell me why that's a bad idea?)


This is coming in the next release of KiCad: https://forum.kicad.info/t/post-v9-new-features-and-developm...

The rate of development since v6 is crazy fast IMO. Very much an OSS success story.


It's absolutely insane. KiCad v5 was usable, if you wanted to make simple projects and were willing to put up with frequent annoyances. KiCad v6 took forever to release, but it suddenly went from "an option for hobbyists who can't afford EAGLE / Altium" to "viable tool for not-too-complicated professional products". Ever since then, every release has been filled with quality-of-life changes, both huge improvements and fixes for small annoyances.

We saw something similar with Blender. At a certain point it becomes good enough that for some professionals it becomes a viable alternative to its obscenely expensive proprietary competition. If those companies are willing to donate $500 / seat / year to OSS instead of spending $1500 / seat / year on proprietary licensing, they can get some developer to fix the main issues they run into. This in turn means the OSS variant gets even better, which means even more companies are willing to consider switching, which means even more budget for development. Let this continue for a few years, and the OSS alternative has suddenly become best-in-class.


Thoughts on Zephyr? Especially wrt code size. I've heard all kinds of things about Nordic chips requiring it.


Not OP, but I’ve dabbled in nRF91 recently and found that once your application starts doing anything interesting (MCUboot, OTA, softSIM, etc.) the code size explodes. It is particularly difficult to keep TFM down to a manageable size. 1 MB of flash really doesn’t go that far these days.

Years ago I worked on the nRF52 series with the “old” SDK and felt I had much greater control. I understood the build system. Maybe I’m just grumpy and don’t like change…


That's a Zephyr thing. Same on STM32: add an otherwise trivial driver for some peripheral that has a bunch of dependencies, and your code size explodes by 60 KB.


Nordic is one of the largest contributors to Zephyr, though. I get the feeling that they are pushing hard to make it the de facto RTOS for embedded stuff.

I feel like the whole Zephyr ecosystem is geared towards reducing "time to blinky on every devkit you have" at the expense of mid- to late-stage development efforts. Driver maintainers want their stuff to work out of the box, so they enable _everything_ and code size becomes the end customer's problem.

grumble grumble, I don't like where this is heading.


Who says you can’t make a library that does both? Rust makes it pretty easy to conditionally compile code based on architecture.
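
As a sketch of what that gating could look like (the function names and the strategy strings are made up for illustration), Rust's `cfg` attributes let one crate ship both implementations:

```rust
// Select an implementation at compile time based on the target.
// `push_impl` and the strategy names are hypothetical, purely to show
// the cfg mechanism.

#[cfg(target_arch = "x86_64")]
fn push_impl() -> &'static str {
    // On x86_64 we could lean on double-width CAS (CMPXCHG16B).
    "dwcas"
}

#[cfg(not(target_arch = "x86_64"))]
fn push_impl() -> &'static str {
    // Elsewhere, fall back to an LL/SC- or lock-based strategy.
    "fallback"
}

fn main() {
    println!("using {} strategy", push_impl());
}
```

Only one of the two functions exists in any given build, so there is no runtime cost to the dispatch.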

It could even be possible to make some sort of "ABA primitive" and use that for these sorts of data structures. This could well exist; I've not looked. These sorts of things really aren't that common in my experience.

On LR/SC: to any atomics experts listening, isn't it technically "obstruction-free" (as per the Wikipedia definitions, at least) rather than lock-free? (Though in practice this makes basically no difference, and it still counts as lock-free in the C++ (and Rust) sense.) Just something that stuck out the last time I got sucked into this rabbit hole.


Compare-and-swap and LR/SC are not per se "obstruction-free" or "lock-free".

They are the primitives with which you can implement shared data structures that are "lock-free" or "obstruction-free".

Anything that can be implemented with compare-and-swap can be implemented with LL/SC, and vice versa.

The only difference between compare-and-swap and LL/SC is how they detect that the memory word has not been modified since it was last read.

Compare-and-swap just compares the current value with the old value, while LL/SC uses a monitoring circuit in the cache controller, which records whether any store has happened to that memory location.

Therefore LL/SC is free of the ABA problem, while the ABA problem has been recognized since the moment compare-and-swap was invented.

Compare-and-swap was invented by IBM, which introduced the instruction in IBM System/370 in 1973. Alongside it, IBM introduced compare-double-and-swap, which solves the ABA problem by using a version counter.

Intel added compare-and-swap, renamed CMPXCHG, to the 80486 in 1989, and compare-double-and-swap, renamed CMPXCHG8B, to the Pentium in 1993. On x86-64, the double-width version became CMPXCHG16B.

LL/SC was invented in 1987, in the S-1 Advanced Architecture Processor at Lawrence Livermore National Laboratory. It was added to MIPS II in 1989, from where it spread over the following years to most RISC ISAs.

Using either compare-double-and-swap or LL/SC is equivalent, because both are free of the ABA problem.
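
The version-counter trick can be sketched in miniature with stable Rust by packing a 32-bit value and a 32-bit counter into one `AtomicU64` (the helper names are made up; a real compare-double-and-swap would use a full double-width word):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Pack a 32-bit value and a 32-bit version counter into one word,
// mimicking compare-double-and-swap at half scale.
fn pack(value: u32, version: u32) -> u64 {
    ((version as u64) << 32) | value as u64
}

fn unpack(word: u64) -> (u32, u32) {
    ((word & 0xFFFF_FFFF) as u32, (word >> 32) as u32)
}

// Store a new value, bumping the version, so an A->B->A sequence of
// values still changes the word and defeats a stale CAS.
fn store_versioned(cell: &AtomicU64, new_value: u32) {
    let mut cur = cell.load(Ordering::Acquire);
    loop {
        let (_, ver) = unpack(cur);
        let next = pack(new_value, ver.wrapping_add(1));
        match cell.compare_exchange_weak(cur, next, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_) => return,
            Err(actual) => cur = actual,
        }
    }
}

fn main() {
    let cell = AtomicU64::new(pack(5, 0)); // value A, version 0
    let stale = cell.load(Ordering::Acquire);

    store_versioned(&cell, 7); // A -> B
    store_versioned(&cell, 5); // B -> A: same value, different version

    // A CAS against the stale snapshot fails even though the value matches.
    assert!(cell
        .compare_exchange(stale, pack(9, 0), Ordering::AcqRel, Ordering::Acquire)
        .is_err());
    assert_eq!(unpack(cell.load(Ordering::Acquire)), (5, 2));
}
```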

However, there are many cases where the optimistic access to shared data structures that compare-and-swap or LL/SC enables results in lower performance than access based on mutual exclusion or on dynamic partitioning of the shared data structure (both implemented with atomic instructions like atomic exchange or atomic fetch-and-add).

This is why the 64-bit ARM ISA, AArch64, had to correct its initial mistake of providing only LL/SC by adding a set of atomic instructions, including atomic exchange and atomic fetch-and-add, in the first revision of the ISA, Armv8.1-A.
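
The difference is visible even in portable code: a dedicated fetch-and-add is one instruction with no retry, while emulating it with CAS needs a loop that can spin under contention. A minimal Rust sketch (illustrative only; which machine instruction each compiles to depends on the target):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A counter bumped with a single atomic read-modify-write; on Armv8.1-A
// this can compile to one LDADD instruction.
fn bump_fetch_add(counter: &AtomicU64) -> u64 {
    counter.fetch_add(1, Ordering::Relaxed)
}

// The same operation built from CAS: every failed compare_exchange under
// contention is a wasted retry, which is the performance gap described above.
fn bump_cas_loop(counter: &AtomicU64) -> u64 {
    let mut cur = counter.load(Ordering::Relaxed);
    loop {
        match counter.compare_exchange_weak(cur, cur + 1, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(prev) => return prev,
            Err(actual) => cur = actual, // another thread won; retry
        }
    }
}

fn main() {
    let c = AtomicU64::new(0);
    assert_eq!(bump_fetch_add(&c), 0);
    assert_eq!(bump_cas_loop(&c), 1);
    assert_eq!(c.load(Ordering::Relaxed), 2);
}
```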


> Who says you can’t make a library that does both?

Of course you can. I just meant that the linked article didn't.

> On LR/SC: to any atomics experts listening, isn’t it technically “obstruction-free” (as per the Wikipedia definitions at least) rather than lock-free?

The better criterion IMO is loop-free, which makes it a little easier to understand. Consider the following spin-locking code (with overabundant memory barriers):

   // Acquire: CAS the sentinel 0x1 into *a (args: expected, new, address).
   do { p = *a; } while (p == 0x1 || !atomic_compare_and_swap(p, 0x1, a));
   memory_barrier();
   // do stuff that looks at *p
   q->next = p;
   memory_barrier();
   // Release: publish q, replacing the sentinel.
   atomic_store(q, a);
Here's the equivalent LL/SC version:

   do {
     p = ll(a);              // load-linked: start monitoring *a
     memory_barrier();
     // do stuff that looks at *p
     q->next = p;
     memory_barrier();
   } while (!sc(q, a));      // store-conditional: fails if *a was written meanwhile
The pointer-tagging version is also obviously not loop-free. Which is faster, in which cases, and by how much?

The oversimplified answer is that LL/SC is probably slightly faster than spin-locking on most platforms and cases, but pointer-tagging might not be.


My understanding is that all architectures that matter do idiom recognition for LL/SC to guarantee forward progress when LL/SC is used to implement CAS and other common lock-free and wait-free patterns, at least as a fallback.



TIL this GitHub "list" feature. Neat.


> TIL this github "list" feature

Same.


What would you consider the “RISC-V equivalent” of TrustZone? Last time I was curious I didn’t find anything.

(FWIW I agree with the other commenter that these ""security"" features are useless, and feel to me more like check-box compliance than anything else (Why does TrustZone work with function calls? What’s wrong with IPC! Also, what’s wrong with privileged mode?). Just seems like a bit of a waste of silicon really.)



MultiZone, OpenMZ, Keystone, maybe Penglai or ProvenCore; I can't really keep up. That answer goes a long way toward explaining the appeal of TrustZone.

