mechanicalpulse's comments | Hacker News

“Life imitates art far more than art imitates life.” — Oscar Wilde, The Decay of Lying (1889)

I do and it does.

    $ ls -al /dev/std*
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ ls -n /dev/fd/[012]
    crw--w----  1 501  4  0x10000000 Feb 27 13:38 /dev/fd/0
    crw--w----  1 501  4  0x10000000 Feb 27 13:38 /dev/fd/1
    crw--w----  1 501  4  0x10000000 Feb 27 13:38 /dev/fd/2
    $ uname -v
    Darwin Kernel Version 24.6.0: Mon Jan 19 22:00:55 PST 2026; root:xnu-11417.140.69.708.3~1/RELEASE_ARM64_T6000
    $ sw_vers
    ProductName:  macOS
    ProductVersion:  15.7.4
    BuildVersion:  24G517
Lest you think it's some bashism that's wrapping ls, they exist regardless of shell:

    $ zsh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ csh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ tcsh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ ksh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
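Those symlinks aren't just curiosities: they give pipe data a pathname, so tools that insist on a filename can read from a pipe. A quick sketch:

```shell
# /dev/stdin turns "read from the pipe" into "read from this file":
printf 'hello\n' | cat /dev/stdin                 # same as plain `cat`
printf 'a b c\n' | awk '{ print $2 }' /dev/stdin  # awk treats the pipe as a file
```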
I tried the install example that you provided and it worked on macOS as well as Linux.

It's possible that he's taking "hope for the best, prepare for the worst" to its logical if unhealthy extreme by interpreting every ambiguous 802.11 frame as one with ill intent. However, just because he's paranoid doesn't mean there aren't misaligned people, devices, and applications out there probing networks.

It's probably a good idea for anyone to check themselves every now and then by playing Angel's Advocate just as much as they might play Devil's Advocate, but I don't think rejecting his premises out of hand with a drive-by diagnosis is all that helpful.


Fair enough, but in this case there are several massive red flags that OP was experiencing a variant of the "targeted individual" delusion: the confidence, without evidence, that their neighbor was a determined hacker group; the supposed use of a complex zero-day to attack them personally at home; the tie-in to a belief that this is a widespread phenomenon.

I had a stretch of a year or so a decade ago where I was going through something very similar, down to the belief a hacker group was targeting my WiFi network despite the great lengths I was going to secure it during the setup process inside an RF shielded area, yet they still kept "getting in" somehow... so I recognize the signs.

If OP can re-read their comment later in a different mindset, they may start to notice that things which felt so certain at the time don't actually add up in retrospect. That's how I eventually broke out of it.


Modern 802.11 implementations are wildly complex. The output of `iw list` on a Linux system with a modern WiFi radio, a trip through the example configuration that ships with `hostapd`, or a perusal of the lengthy list of standards, amendments, and extensions on Wikipedia will make that clear.

Given the complexity of modern 802.11 protocols and the prevalence of WiFi radios in devices of all kinds, I find it well within the realm of possibility for anyone to observe 802.11 traffic ambiguous enough to serve as a mentally workable substitute for evidence of a targeted attack. There may also be plenty of evidence refuting that very same premise, though, if one knows what to look for.


This happened to someone I used to know. Rare side effect of medication.

Oh my... This is how the code could look indeed. Which LLM did you use to generate this?


> As I'm sure you're aware, glyphosate is usually only appropriate as a weed killer on your property if you're looking to kill all vegetation in/around where you spray it.

> It's a non-selective herbicide in this context, it kills everything.

It is a non-selective herbicide, but it has essentially no residual activity in soil. It works by inhibiting EPSP synthase, an enzyme in the shikimate pathway that plants need to make aromatic amino acids, and since it is minimally absorbed via root systems, it must be applied directly to the foliage. You can spray it on the ground around a plant and that plant will happily ignore it. This is why the instructions are explicit about applying it directly to the foliage on sunny days when the wind is light.

As a homeowner, I loved glyphosate. It was cheap, simple, effective, and could be applied in a selective manner. It's not the best choice for getting rid of broadleaf weeds in a lawn, but I used it all the time in my gardens to kill weeds and keep the bermudagrasses out.


Roundup makes a product that looks like roll on deodorant. You literally roll it onto the leaves of the things you want to kill, and everything else remains unharmed.

I'm also a fan of glyphosate. Nothing else works nearly as well. People who are critical of "chemicals" to control weeds have never had to deal with a weedy pavement before.


Yes! I also used glyphosate to kill things growing in and around my sidewalk, driveway, steps, and curb. I've also used a propane torch for the same purposes, but it requires more effort and cannot be applied quite so selectively. It works, though, and is a good choice for anyone who would rather use a petroleum product than an herbicide.

I looked up the product you mentioned and you're right -- it does look like deodorant! It's a gel that contains glyphosate as an isopropylamine salt. Neat!


Carbon Robotics sells a weed burner that works via a laser, if you’re dead set against both petrochemicals and glyphosate.

Sadly: no consumer model yet.


Normal propane weed burners work pretty well against weeds in areas where it's reasonable to use something like that. But they aren't good if there's anything nearby you want to protect.


Hey, I really like the idea! There are various palm trees around here, I keep fighting the unwelcome guests that show up. Unless caught really early they are basically impossible to pull and almost all of them show up in places I don't want to dig them out. A contact-only killer sounds like just the right thing.


Well that's certainly a take. Solid state relays using optoisolated MOSFETs have been around for fifty years. Mechanical relays are overkill for signal switching as in HVAC thermostats, IMHO, but you do you.

Anecdotally, I have a first generation Nest and haven't had a problem. Maybe some of the earlier hardware had fewer protections against misuse (e.g., with non-24VAC systems or otherwise incorrect installation), but that's generally the case with most new things.


Sounds like something Nest engineers would have said.

It's not "signal switching", you see.

HVAC equipment is as old and varied as you can imagine, and there is more current than you think running through those terminals, powering all sorts of nasties: oil burner relays, damper motors, crude AC contactors causing voltage spikes, and so on. HVAC low-voltage power is as dirty as can be.

No one took this into account, they were more concerned with making the thermostat pretty.


Nest is hardly the only thermostat out there using solid-state relays. Have you considered the possibility that they did take it into account and they deliberately chose to use SSRs instead of electromechanical relays? Have you considered the possibility that they were concerned about the impact that mechanical relays may have on the RF, especially if "there is higher current than you think running through those terminals"? Have you considered the possibility that they were worried about making the first one fatter than it already was?

In my heat pump, none of the thermostat wires directly control the contactors. They all run into a logic board that applies logic like time delays, temperature-controlled defrost cycling, and active protection lockouts for the compressor. I mean, there's a seven-segment LCD on the logic board for system troubleshooting. The air handler has a variable speed blower as well.

I understand that HVAC equipment varies wildly, but if you try to solve every possible problem or scenario and target every possible customer, you'll never make it to market.

I also understand that I am the target demographic.


I also went about looking at the difference rather than the order. In the hexadecimal case, the difference is 15 (0xEF vs 0x12). I thought, then, that for any base B with ascending digits A and descending digits D, (D-(B-1))/A=B-2.

For binary, it looks like (1-(b-1))/1=b-10 or (1-(2-1))/1=2-2=0 in decimal.

For trinary, it looks like (21-(b-1))/12=b-2 or (7-(3-1))/5=5/5=1 in decimal.

For quaternary, it looks like (321-(b-1))/123=b-2 or (57-(4-1))/27=54/27=2 in decimal.

Essentially and perhaps unsurprisingly, the size of the slices in the number pie gets smaller the bigger the pie gets. In binary, the slice is the pie, which is why the division comes out to zero there.
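The pattern can be checked mechanically with bash's `base#digits` arithmetic constants. A sketch restricted to bases 2 through 9, so every digit stays a single numeral:

```shell
# For each base b, build the ascending-digit number A (12...(b-1)) and its
# reversal D, then evaluate (D - (b-1)) / A, which should equal b - 2.
for ((b = 2; b <= 9; b++)); do
  a="" ; d=""
  for ((i = 1; i < b; i++)); do
    a+="$i"        # e.g. "123" in base 4
    d="$i$d"       # e.g. "321" in base 4
  done
  A=$(( $b#$a ))   # interpret the digit strings in base b
  D=$(( $b#$d ))
  echo "base $b: (D - (b-1)) / A = $(( (D - (b - 1)) / A ))"   # prints b - 2
done
```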


> Did you live at a time where Internet was not a thing?

You must be relatively young. Software existed before the widespread adoption of the Internet.

> I remember very clearly buying software on physical media and never, ever "receiving" a single patch.

You had to take action to receive them. They weren’t automatic updates like they are today.

> I don't even know how that would have looked... "buy this floppy disk, it's a patch for a bug in the other floppy disk you bought recently"?

That’s exactly what it looked like. That’s still the process today for some systems: avionics updates for Boeing 747s are provided on 3.5” floppies.


> You had to take action to receive them.

What did that look like? Remember, back then, developers and users often had no after-sale communications at all. It was a technical impossibility more than anything. There was paper mail. There were telephone networks. That's about it.

I suppose you could occasionally call the developers of every software product you're using to ask if there is an update. I doubt anyone ever did that.


> Remember, back then, developers and users often had no after-sale communications at all.

They often had no pre-sale communications either, indeed no communication of any kind. It was just like buying a spatula or a pair of shoes. You went to a retail outlet and bought the software; the developer wasn't involved in the transaction at all. It was just the consumer and the retailer.

Sometimes there was a postcard you could send to "register" your purchase with the developer, and they'd send you mail about new versions or the like, but many people never registered.


  > but many people never registered.
Which leads to things not getting patched, more bugs, and more computers getting hacked. A great system...

I'll also add that if it was a big enough bug, it'd end up on the news, and that's how people got informed. Otherwise, like you suggest, good luck. But it was possible.

It is baffling to me that we are having this conversation on Hacker News of all places. Aren't we a community of programmers? How in the world does any programmer think for a hot second that code is bug-free? Last I checked, formally verifying your code was 1) very rare and 2) still impractical if not impossible for anything of sufficient complexity. Unless we're formally verifying our code, I absolutely guarantee it has bugs. I know we have big egos, but are they so big that we think we're omniscient?


> How in the world does any programmer think for a hot second that code is bug free?

If you stop bloating the scope of your product by endlessly adding features no one ever asked for, you'll eventually run out of bugs.

Also, while it does not make you "omniscient", working with a known stack instead of following fashion does help a great deal with preventing bugs.


The problem is that the scope changes based on circumstances outside your control.


I agree that not expanding scope makes things easier but it doesn't solve the problem.

I also agree that knowing the stack goes a long way, but again, doesn't go all the way.

Omniscience is required, by definition. Even if just omniscience about the software you are building. MEANING you know not just all your lines, but all the lines of all the dependencies, the compiler, and the system it is operating on. I have yet to meet anyone that comes anywhere near approaching this knowledge, including many gray beards.

It is utterly foolish to proclaim your code "bug free". Since you don't seem to be aware of sayings like "software rot", allow me to introduce you to another one:

  There's two types of programs:
  1) Those with bugs
  2) Those that nobody uses
In case it isn't obvious, the joke implies that all programs have bugs; it is just that developers are less likely to be aware of them when few people use them. This is, of course, because there are too many variables for any developer to account for, even in simple programs.


I won't deny that all software is going to have bugs. But I think there has been a real shift in mindset over time. When it was harder to patch, there was greater incentive to make each release a well-tested, coherent product that offered clear advantages over the last one. As it's become easier to patch, it's become more tempting to make each release just a snapshot of whatever is more or less ready at a certain time, or alternatively a tiny increment. In other words, users are now the testers.

I'm not saying things were perfect in the era of physical-media software. I'm just saying there were some good practices that were made necessary by the constraints of that era that still can be beneficial today, even though we don't have those same constraints.


  > But I think there has been a real shift in mindset over time.
With this I'm in full agreement. We've moved even further now to where we're selling products that do not yet even exist. It was bad enough we were selling stuff that wasn't fully tested, but worse that we're selling things on a promise.

I'd even go so far as to say that the selling of hype creates an environment where we almost certainly will have worse products. The business people are most interested in the sale, not in retaining the customer. The incentive to fix things or bring them out of alpha or beta release disappears, even if this is harmful to the longevity of the company. But that doesn't matter either if you're only thinking one quarter at a time...

The point was never that patching is easy now and was impossible back then. The point of this conversation was that we can't begin to solve the actual problems if we can't recognize why they happened in the first place. Basing our premise on products having been finished in the past will only lead us to cycle back to where we are: someone just has to come up with the /brilliant idea/ of "what if instead of mailing patches, we send them over the internet!" It is well-intentioned and will result in more users getting patches. We should not throw out the baby with the bathwater!

But the abuse of the environment is an entirely different problem. You're right that the ease of shipping patches lubricates this habit of shipping to prod too early, but it isn't a causal variable. The causality here is the business people not caring about the quality of the product. The causality here is engineers not taking enough pride in their work to push back against the business people. The causality here is that we've structured our work environment to reinforce this behavior and promote those who fall in line instead of those who do quality work (quantity over quality). The causality here is that customers cannot differentiate a well-designed product from a half-baked idea and a promise. The causality here is that we call product vision a product demo (demonstrating what we want the product to be, not what the product is).

There are more causal variables, but these are clearly part of the chain of problems. The situation is complex! But we can't fix complex problems by oversimplifying and denying their complexity. We have to break them down into simpler parts and address those smaller, more manageable problems. We use this same procedure every day to write code and do numerous complex tasks!

But we can't solve complex problems if we deny the existence of their complexity.


I basically agree, but I would say that that "lubricating" effect is still causal. I mean if something is stuck and lubricant is added and then it starts to move, well, the lubricant was at least one of the causes of its moving.

It's true that the major factor is the ideological shift away from caring about doing a good job. I'm not sure how to address that though.


A lubricant doesn't cause something to move, but it makes the movement easier. It doesn't apply the force, even if it reduces the force required to create the movement.

Causality is the force, not the modifiers.

Think about it this way:

  Lubricant exists, force doesn't =/=> movement

  Lubricant doesn't exist, force does => movement
We can make things less likely to move by removing the lubricants but we can add as much lubricant as we want and it won't start making things move.

This is why I say "don't throw the baby out with the bath water" because we actually still want the baby. But we do want to address the root issues.

  > that *the* major factor
I want to be clear, there is not a singular factor.

Normally we solve problems by breaking them down into little ones. But the harder version is finding which little ones result in the big one. It's like working from the top of a graph to find a node and its children vs starting at a leaf.

  > I'm not sure how to address that though.
Start with the little things. Start where you can. If you have the chance to make something better or hold someone to higher standards, try that. If you see someone else trying to, lend them a hand. Often we don't stick our necks out because we're afraid we're alone. But we're not. The "first follower" is one of the most important people in creating a group. They're a lubricant ;) You don't have to be first or take the most risks, but if you can make it easier for the people who do, that's still very helpful.


I think I'm taking a more broad reading of causality than you. A lubricant can cause something to move, if the thing was previously stuck. Causality is not just "the" force, it is the totality of contributing factors to the event. If a dam bursts, then the weight of the water above it, the weakness of the sluice gate (or whatever), and the unseasonably warm weather that induced bolts to expand can all be causes.

> Start with the little things. Start where you can. If you have the chance to make something better or hold someone to higher standards, try that. If you see someone else trying to, lend them a hand.

I'm increasingly convinced that this isn't even close to sufficient. I mean, not to say it shouldn't be done, but I don't think that doing that is going to turn the tide against people doing the wrong thing. There needs to be more deliberate and forceful action to actually stop people doing the wrong things, not just encourage people doing the right things.


> You must be relatively young.

Did you read my comment at all? :-)

> You had to take action to receive them. They weren’t automatic updates like they are today.

Are you saying I was doing it wrong?

> updates for Boeing 747s

Oh I get it. Maybe we just weren't playing with the same toys :D


  > Did you read my comment at all? :-)
Did you read *MY* comment at all?!

Everything @mechanicalpulse said was accurate.

To answer @grishka's question (because it seems you also don't know)

  > What did that look like? 
Well I literally answered that in my comment!

  >>> Back when software came on physical media we still had patches. 
      We had patches that came through the internet AND WE HAD PATCHES THAT CAME THROUGH PHYSICAL MEDIA.
      THE ***LATTER*** MAKING IT ***HARDER TO PATCH.***
I broke it up and emphasized the key parts.

If you are going to accuse someone of not reading your comment you damn well better be reading the comments you're responding to.

  > Oh I get it. Maybe we just weren't playing with the same toys
Considering it was "harder to patch", yes, it does also mean "things often went unpatched." Mind you, this doesn't mean patches didn't exist nor does it mean, as you suggest, patches don't matter.

But again, I already addressed that in my original comment, so I'm not going to repeat myself again...


I didn't say it was impossible to put a patch on a physical media.

I was saying that in my experience as a user, I never, EVER received a patch or had any means to request one.

My point being that the expectation was that what I was buying was "finished". When there was a bug, FOR ME, it was there forever.

With modern software, I encounter so many bugs everyday that I don't even realise anymore. Look at someone using something that depends on software for a while (not very long), see how they work around bugs (by restarting the app, or retrying the button, or going through a different path). When they do one of those things (like retry), if you ask them "wait, what did you just do?", chances are that they won't even know that they had to retry because of a failure. Why? Because modern software fails constantly.

Code is never perfect, that's for sure. But back when it was hard to update, the code had to be a lot more stable than today.


  > I didn't say it was impossible to put a patch on a physical media.
You never said those exact words but you heavily implied it. You cannot tell me that it was an unreasonable interpretation.

  > Did you live at a time where Internet was not a thing?
You came out swinging. You can't throw out punches and expect to not have one thrown back.

  > My point being that
My point was

  > When there was a bug, it was there forever.
I stated this quite clearly

  >>>> Software isn't "ever finished" because we are not omniscient writers who can foresee all problems, fix all bugs, and write software that is unhackable.

  > With modern software, I encounter so many bugs everyday that I 
I encounter so many bugs it drives me crazy.

Look, we don't disagree on this fact. I'm not encouraging the shipping of low-quality or untested software. But patches being delivered online was a good thing. We were finally able to fix those bugs effectively, not leaving tons of users stranded and vulnerable. This feature is not going to go away, because it provides such high utility.

But shipping low-quality software is a completely different issue. The ability to patch easily is not the cause of shipping low-quality work; the cause is the abuse of this high-utility feature, rooted in greed and a lack of pride in the product. There are so many little things that add up and create this larger problem. But pretending that software was ever finished ignores these problems. It oversimplifies the reasons we got to this point. We won't actually solve the problem *that we are both concerned about* if we oversimplify. We need to understand why things happened if we're going to stop them.


> You came out swinging. You can't throw out punches and expect to not have one thrown back.

I was not throwing punches. One can be 25 years old now and never have lived in a world without smartphones or social media.

> But pretending that software was ever finished

I'm not saying it was perfect (or bug-free). I'm saying that when you shipped, in many situations there was no way to patch the bugs. And even when there was a way, it was painful. So when you shipped, it was finished, as in "fully functional". That doesn't mean there wasn't any bad software or that good software did not have bugs. But the teams shipping a product had to finish it first.

Nowadays, the norm is to ship unfinished software, with the expectation that there will be plenty of bugs, and those that are deemed worth fixing will be fixed.

And I do believe that it became like that precisely because it's easy to send patches. It's now economically viable to ship bad software, because people are used to having to wait for bugfixes. I'm guessing that back then, people would not have bought twice from the same company if the first time had ended up with unusable software.

> if we're going to stop it.

There is no stopping it. The quality of software is going down because it's economically viable, and I don't see that changing anytime soon (especially with LLMs).


  > I'm saying that when you shipped, in many situations there was no way to patch the bugs.
This was never in disagreement.

  >>>>>> We had patches that [...] that came through physical media. ***The latter making it harder to patch.***

  > And I do believe that it became like that precisely because it's easy to send patches.
Look, we aren't going to go back to a setting where we don't patch software. That's a WORSE place to be. It leaves people vulnerable for long stretches. Devices carry more valuable information now, and the threat model is much more sophisticated. We do not want to do this.

Besides that, I just don't believe you can blame the ability to patch over the internet as the reason for shoddy work.

Is Linux full of bugs and shipping half built products? I don't think so.

  > There is no stopping it. The quality of software is going down because it's economically viable, and I don't see that changing anytime soon (especially with LLMs).
The great thing about the future is that it is in our hands to control.

The bad thing about the future is we need foresight and to work together to avoid pitfalls.

Luckily humans are quite adept at foresight. I mean, here we are, talking about likely future problems. But we also often feel helpless to address those issues. That's an observation bias, though. Look at the Y2K bug; it's a perfect example. The average person brushes it off as if we made too big a deal about it. But the thing is, it was a big deal. The thing is... we solved it before it created major issues. We had similar success with big problems like fixing the ozone layer. We've done this countless times. We just have a tendency to focus on problems that are still problems and not look back and use our successes as motivation to keep going.

Every big problem can be broken down into many smaller problems that are much more manageable. "They" win by making us believe that the little things don't matter. "They" win because it means we aren't taking the first steps or making progress, killing any momentum. The worst thing that can happen is to make us feel like the problem is too big to be solved. But that's a lie. We've created this mess and frankly I would like to try to fix things before it becomes an even bigger mess. Personally, I'm a big fan of not doing unnecessary and avoidable work.

So the question is: are you with me? Are you going to help try to fix this problem? Or are you going to just sit by and let it grow worse, hoping that it solves itself or someone else solves it? Frankly, we need as many people in on this as we can get. You don't need to do a lot of work. All I ask is that you speak up and question when the teams you work for are trying to push unfinished products. All I ask is that you help encourage others to do quality work, and not let slop just slip by. I'm not asking you to change the world, certainly not overnight. I'm asking if you will make just a modest attempt to address the problem in your own sphere of influence.


> Look, we aren't going to go back to a setting where we don't patch software.

And I never said we should. I was just describing the situation.

> Look at the Y2K bug [...] We also had similar success in big problems like fixing the ozone layer

That's an optimistic point of view :-). I would argue that both of those were infinitely easier to solve than, say, the current mass extinction, the energy problem, and climate change. We've passed, what, 7 of the 9 planetary boundaries? We've pretty much lost the Amazon, we've pretty much lost the coral reefs, we've definitely failed the 1.5C goal and are now on our way to failing the 2C goal. With the inertia in that system, once you fail there is no coming back for the next thousand years (unlike the ozone layer, BTW).

Those are real problems that we are not only not solving: we're making them worse. All of them.

> All I ask is that you speak up and question when the teams you work for are trying to push unfinished products.

Most software is part of the problem. The problem is that we do too much in general. Doing requires energy. The more we do, the more energy we use. The more energy we use, the more we screw up the planet. You want to help? Do less. But at the end of the day, you still need to get paid, right? And for that you need your company to be profitable, right?


At infinity, the shape becomes a sphere and all orientations of it are identical. It is no longer a convex polyhedron and, thus, not subject to consideration.


Podman is daemonless, while Docker is a client/server pair. Podman also shipped with support for rootless containers from the start, though Docker now has that capability as well.

The podman CLI is nearly a drop-in replacement for docker such that `alias docker=podman` works for many of the most common use cases.

If you don't care about the security implications of running containers as root via a client/server protocol, then by all means keep using Docker. I've switched to podman and I'm happy with my decision, but to each their own.
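For anyone curious what the switch looks like, here's an illustrative session (not a script; it assumes podman is installed, and the alpine image is just an example):

```shell
# podman speaks the docker CLI dialect, so the alias covers common workflows:
alias docker=podman

# No daemon, no sudo: the container runs in a user namespace owned by you.
docker run --rm docker.io/library/alpine echo hello

# Inspect the uid mapping podman set up for rootless operation.
podman unshare cat /proc/self/uid_map
```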

