I guess we only see the ones that don't in the news. Makes sense. I have yet to see one of these where the data is encrypted and they MITM'd it to get it, but I'm sure it's happened.
Exactly this. Burning in a shared secret works; alternatively you could do something with private keys burned into each device, signed with some PKI scheme whose public keys are known to the other entity.
Notably, both of these turn it into a 'microscope' problem. And neither helps if the key leaks somewhere…
At the end of the day, if the system is to process the data, it needs to access it. (Homomorphic encryption notwithstanding.)
I thought security chips put (extra?) metallization over top of the logic to prevent the microscope problem. Do they not, or can that still be defeated? I guess if you're careful enough you can strip off that extra layer.
People are very creative in defeating those mechanisms. It's mostly a question of time. Also doesn't help if there's some side channel or software leak.
The only "truly" 'safe-ish' thing is active battery powered intrusion detection. It's done for high end HSMs… which easily sell for 5 or 6 digit prices.
I didn't know the impetus for the Graffiti writing system was actually hardware limitations; that's fascinating:
> Graffiti power writing software was another design decision
affected by the battery selection. During the design of the
first Palm handhelds, users were clamoring for natural
handwriting recognition. However, natural handwriting
recognition would require a more powerful processor and
more memory, which together required bigger batteries.
Adding all these things to a handheld would have weighed it
down and made it cost too much for the market. Instead, the
Palm designers bet that users would settle for good-enough
handwriting recognition if the result was long battery life.
This is an on-path attacker. In end-user DNS configurations, attackers can simply disable DNSSEC; it's 1 bit in the DNS response header ("yeah, sure, I verified this for you, trust me").
To check the DNSSEC signatures on the client, you have to do a full recursive lookup. You've always been able to run your own DNS cache, if you want your host to operate independently of any upstream DNS server. But at that point, you're simply running your own DNS server.
It's not necessarily equivalent to a recursive lookup, you can ask a cache for all the answers because you already know the root keys a priori. But yes, it does follow the entire chain of trust, that's the entire point of dnssec: if you don't do that the whole exercise is utterly pointless.
It's explicitly not the point of DNSSEC, which has for most of its entire existence been designed to be run as a server-to-server protocol, with stub resolvers trusting their upstream DNS servers.
Not true, RFC4035 says all security aware resolvers SHOULD verify the signatures. It's far from pointless when actually implemented. Don't dismiss a whole protocol just because some historical implementations have been half assed.
I'm guessing I do. Anyways: no question that there are a variety of experimental setups in which you can address the problem of on-path attackers trivially disabling DNSSEC, freeing you up to work on the next, harder set of DNSSEC security and operational problems.
> Essentially everyone with the SSID on multiple access point MAC addresses can get pwned
You still have to be able to authenticate to some network: the spoofing only allows users who can access one network to MITM others, it doesn't allow somebody with no access to do anything.
In practice a lot of businesses have a guest network with a public password, so they're vulnerable. But very few home users do that.
I run a website, video game servers, and Nextcloud. I have Nextcloud set to only allow access from my IP. It has to be open to the world with a domain name so I can use LetsEncrypt certs, so it can't use only private IP addresses, which can't easily be configured and trusted for HTTPS.
I have been relying on EAP-TLS via wifi so my phones could upload their photos and videos to Nextcloud. It was way cheaper than doing it via AWS, which is what I used to do, and used ethernet LAN connections only. If this attack works asynchronously across time to allow authentication to my network, which uses EAP-TLS, it will knock me out of being able to use Nextcloud on my mobile devices, since plugging in ethernet after I take photos is too cumbersome to do very often.
I love Nextcloud, but do not want to pay Amazon for EC2 etc.
My read is this allows them to mimic both client and access point to assemble the handshake and obtain RADIUS authentication. Rather than having to verify a certificate on the client or crack complex passwords, they pretend to be the client, sending the response it would send once the certificate is verified. Then they switch MAC to the SSID's MAC and send the next part to the client. Previous evil twin attacks were one-sided rather than basic frame assemblers.
I read that paper as describing a successful reconstruction of the RADIUS authentication handshakes at layer 2, after the fact, for use later, rather than caring about actual certificate validations. Basically handing a three-letter-agency-quality tool to the Kali Linux fan club.
> I have Nextcloud set to only allow access from my IP. It has to be open to the world with a domain name so I can use LetsEncrypt certs, so it can't use only private IP addresses, which can't easily be configured and trusted for HTTPS.
I would put that nextcloud instance on a private/vpn IP and not expose it. For the letsencrypt you can use DNS based approval. Cloudflare DNS is pretty easy to configure for example, they also support setting DNS records for private IPs which I understand is not standard. (If it's on a private IP you don't strictly need HTTPS anyway). Wireguard is ideal for this kind of thing and it works well on mobile as well.
If the above quoted piece is the entirety of your requirements there are a lot of other ways to solve the same issue. Tunnels, reverse proxies etc.
EDIT: Letsencrypt just recently added a new authentication method which uses a one-time TXT entry in your DNS record.
I admittedly don't have practical experience with RADIUS, but I read it as a more narrow attack:
> We verified that an attacker, having intercepted the first RADIUS packet sent from the enterprise AP, can brute-force the Message Authenticator and learn the AP passphrase.
I thought RADIUS fundamentally negotiates based on a PSK between the AP and the RADIUS box, which the attacker doesn't have? They're saying this gives you the ability to brute force that PSK, but if the PSK isn't weak (e.g. a dictionary word) that's hopeless.
> I thought RADIUS fundamentally negotiates based on a PSK between the AP and the RADIUS box, which the attacker doesn't have?
Are you talking about the secret shared between the NAS and the RADIUS server? It's only used to scramble some attributes (like MS-MPPE-Send-Key), but not all of them. Message-Authenticator is one that's not scrambled. Looking at this FreeRADIUS dictionary file I have, I see 42 out of ~6000 attributes that are scrambled.
Anyway, yeah, if you have a bigass shared secret, it's going to be infeasible to guess. I'm pretty sure the long-standing, very strong suggestion for operators has been something like: "If you don't co-locate your RADIUS server and your NAS, then you really need to have a bigass shared secret, and you probably want to be using something like IPsec to secure the connection between the two." [0][1]
This is a big deal: it means a client on one wifi network can MITM anything on any other wifi network hosted on the same AP, even if the other wifi network has different credentials. Pretty much every enterprise wifi deployment I've ever seen relies on that isolation for security.
These attacks are not new: the shocking thing here is that apparently a lot of enterprise hardware doesn't do anything to mitigate these trivial attacks!
Yes, if they host the guest network on the same hardware, same transmission path etc. Network "hygiene" will obviously differ from one place to the other.
Yes, though do all of these wifi devices actually have a formal assurance (as in written specification) of network L2/L3 isolation between virtual APs?
I have some of those wifi APs that don't even provide any sort of isolation beyond implementing multiple SSIDs on the same wifi radio (the "Guest SSID"). No guarantee, no isolation.
I am flabbergasted you were able to download working source code from the manufacturer's website for a 20yo device... I'd have bet you a thousand dollars you'd never find it!
Isn't "oops we made a mistake" actually a valid defense to libel in most US states? I thought you had to prove it was intentional to some extent? Or reckless/negligent? IANAL.
Google takes no action to review reports that their warnings are false until you sign up for Google products (namely, registering the site in their Search Console).
I reported a falsely flagged site repeatedly for weeks with absolutely no action from them.
Mozilla and Microsoft both did actually remove the warnings after the reports (Edge and Firefox stopped displaying the warning). Google did not. Google strong armed me into registering for google products, like a fucking bastard of a company.
This was the moment I went from "I don't love google anymore" to "Google can get fucked".
I wish them bankruptcy and every damn legal consequence that is possible to enforce.
"I believed it to be true" is a defense. But negligence isn't. In fact, that is usually what you want to prove, that they acted on things that a reasonable person (or a person that is supposed to be skilled in that field) can see is not true.
> What I would really like is the ability to change defaults for all mutexes created in the program, and have everyone use the same std mutexes.
Assuming you're building the whole userspace at once with something like yocto... you can just patch pthread to change the default to PTHREAD_PRIO_INHERIT and silently ignore attempts to set it to PTHREAD_PRIO_NONE. It's a little evil though.
That is a great terrible idea (I really have to think a bit more on that). Won't help for Rust, since the mutexes there use futex directly, so you would have to patch the standard library itself (and for futex it is more complex than just enabling a flag). Seems plausible that other libraries and language runtimes might do similar things.
The Rust std mutex implementation when targeting Fuchsia does implement priority inheritance by default, but the Zircon kernel scheduler and futex implementation are written with priority inheritance in mind as the default approach rather than something tacked on ad hoc. Unfortunately, on Linux there seems to be a large performance tradeoff which may not be worthwhile for the common case. It does seem like it would be nice to set an env variable to change the behavior, though, rather than require a recompile of libstd. A lot of programs also use alternatives to the std library, like parking_lot, which is indeed a pain.
Sometimes I feel like trying to use Linux for realtime is an effort in futility. The ecosystem is optimized for throughput over fairness, predictability, and latency.
> Unfortunately on Linux it seems like there is a large performance tradeoff
Implementing transitive priority inheritance is just inherently algorithmically more expensive: there's no avoiding that.
> Sometimes I feel like trying to use Linux for realtime is an effort in futility.
If you're not actually using an RT kernel, yeah, it's futile. But if you are, the guarantees are pretty strong... on x86 PCs, the hardware gets in the way much more than the software in my experience. There's a lot of active work upstream.
Requiring C23 for a library header is a great way to guarantee nobody will use your code for a long time.
I still write nearly ANSI compliant C for simple embedded things. Because somebody might need to figure out how to rebuild it in twenty years, and making that person's life harder for some syntactic sugar isn't worth it.
Even C99 can be a problem: for example, C99 designated initializers are not supported in C++ before C++20, and even C++20 requires the designators to appear in declaration order. If your header needs to support C++ you can't freely use them (with positional initialization, C++ forces you to initialize the fields in declaration order).
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
> The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
> But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
My guess has been that most people were genuinely not reading emails/messages closely or paying much attention to what they wrote to begin with, bolstered by the number of times I've tried to convey simple instructions or ask simple questions only to have most of it entirely ignored. I tried all kinds of organization, tried making them very short and direct, but it was fruitless. So they don't experience the same loss.
We do this because using AI makes you immediately lazy in a way that is difficult to put in words but that anyone who tried can relate to.
We do this because we were impressed that one time the stars aligned and the output was decent. So we write just one more prompt, bro, in the hope it'll be better than the last 10, which ended up a waste of time.
We do this because $boss has been successfully spitting out 7 PowerPoints a day with it, which nobody reads but makes them feel productive, therefore this must be the future, therefore AI use shall be mandated until team productivity improves.
> Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
It isn't valuable if you generate and toss it over the fence. Where the value comes in is when the team verifies the content. Once that's done and corrections made, the words have the assurance that they match the code.
If you aren't willing to put in the time to verify it works, then it is indeed no more useful than anyone else doing the same task on their own.
Having used AI to write docs before, the value is in the guidance and review.
I started out by telling the AI common issues that people get wrong and giving it the code. Then I read (not skimmed, not sped through, actually read and thought about) the entire thing and asked for changes. Then I repeated the read-everything, think, ask-for-changes loop until it was correct, which took about 10 iterations (most of a day).
I suspect the AI would have provided zero benefit to someone who is good at technical writing, but I am bad at writing long documents for humans so likely would just not have done it without the assistance.
LLM-generated documentation is great for LLMs to read so they can code better and/or more efficiently. You can write it manually, but as I've discovered over the decades, humans rarely read documentation anyway. So you'd be spending a lot of time writing good docs for the bots.
I guess I can understand that, but please at least put a warning on it that says "if you actually take the time to read this you'll have spent more effort on it than its author did" so I know not to waste my time.
This is odd to hear. All the best programmers I know are avid readers of documentation; how else could you be a good programmer without reading the docs? I will say, devs admitting to not reading docs does definitely explain how shit current software from big tech is.
Yesterday my manager sent LLM-generated code that did a thing. Of course I didn't read it, I only read Claude's summary of it. Then I died a little inside.
It was especially unfortunate because to do its thing, the code required a third party's own personal user credentials including MFA, which is a complete non-starter in server-side code, but apparently the manager's LLM wasn't aware enough to know that.