rstuart4133's comments | Hacker News

I got a similar response. It looked wrong on several levels. So I asked it if it knew the current time, and if it had learnt when I retire.

It claimed it didn't know either.


My own version of this is that I wanted bank transactions in CSV format. However, the transactions were more than a year old, and the bank only provides recent transactions in a downloadable form. They do, however, provide statements in PDF format going back indefinitely. But the objects in the PDF are arranged in a way that made pdftotext output near-indecipherable.

I thought I'd give Gemini a go. When I uploaded the 18-page PDF, it complained the output exceeded some limit. So I used pdftk to break it up into 4-page chunks, which seemed to work - the output looked very good and passed a couple of spot checks. But I don't trust these things as far as I can kick them.

There was a transaction column and a running balance column, so I did a quick check to see if every new balance equalled the previous one plus the transaction. And it almost always did. There were a couple of errors I put down to transcription errors. I was wrong. I eventually twigged that these errors only happened where I had split the PDFs. After tracking where the balance first went wrong, it became evident it had dropped chunks of lines, duplicated others, and misaligned the transaction and balance columns. It was complete rubbish, in other words.
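That check is trivial to automate. A minimal sketch in Python (the column names "transaction" and "balance" are assumptions; match them to whatever the model actually emitted):

```python
import csv
from decimal import Decimal

def check_balances(path):
    """Return row indices where the running balance does not equal the
    previous balance plus the transaction amount.

    Column names 'transaction' and 'balance' are illustrative; adjust
    to match the CSV being checked. Decimal avoids float rounding noise.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    bad = []
    for i in range(1, len(rows)):
        prev = Decimal(rows[i - 1]["balance"])
        txn = Decimal(rows[i]["transaction"])
        if prev + txn != Decimal(rows[i]["balance"]):
            bad.append(i)
    return bad
```

Any row this flags is either a transcription error or, as it turned out here, a spot where lines were dropped or duplicated.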

So why did my balance check show so few errors? I put that down to it knowing what a good bank statement looked like. A good bank statement adds up. So it adjusted all the balances so it looked like a real bank statement. I also noticed these errors got more frequent in later pages. I tried splitting the PDF into single pages and loading them into the model one at a time. That didn't help much for the later pages, but the first one was usually good. So then I loaded each page into a fresh context, with a fresh prompt. If that didn't produce something that balanced, the second go always did.

I'm not sure it saved time over doing it manually in the end. It's a tired analogy now, but it's true: at their heart, these things are stochastic parrots. They almost never produce the same output twice when given the same input. Instead, they produce output that has a high probability of following the input tokens supplied. If there is only one correct output but the output is small enough, the odds are decent they will get it right. But once the size grows, the odds of it outputting complete crap become a near certainty.


> “good” API design (highly subjective)

A good API does two things. Firstly it DRYs the code out. Many APIs start life doing only that, as a collection of routines you get tired of writing over and over again. Secondly, the functions are designed in a way that reduces the need to share information. Typically they do that by hiding a whole pile of details in their implementation so knowledge of those details is all in one place rather than scattered across a code base. A term often used for APIs that don't do that well is "leaky", or we say it's a "leaky abstraction".
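A contrived sketch of that second property, with names and the JSON storage format chosen purely for illustration:

```python
import json
from pathlib import Path

# Leaky: every caller must know the settings live in a JSON file, where
# it is, and how to read-modify-write it. That knowledge ends up
# scattered across the code base.
def leaky_set(path, key, value):
    p = Path(path)
    data = json.loads(p.read_text()) if p.exists() else {}
    data[key] = value
    p.write_text(json.dumps(data))

# Non-leaky: callers see only get/set. The storage format is a detail
# hidden in one place, and can change without touching any caller.
class Settings:
    def __init__(self, path):
        self._path = Path(path)

    def _load(self):
        if self._path.exists():
            return json.loads(self._path.read_text())
        return {}

    def set(self, key, value):
        data = self._load()
        data[key] = value
        self._path.write_text(json.dumps(data))

    def get(self, key, default=None):
        return self._load().get(key, default)
```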

Perhaps a good API has other, more subjective attributes, but it must have those two. LLMs suck at both. You can see that in the comments here, when people say they write verbose code. It's verbose because the LLM didn't go looking for duplicated functionality - if it needed something, it just put the code wherever it happened to be focused at the time.

If they are bad at DRY then I need a better superlative to describe how they fare at respecting the isolation boundaries that underpin good module design. As far as I can tell, they have no idea about the concept or how to implement it. Let loose, they are like a bull in a china shop, breaking one boundary after another.


> I suggest reading up on wifi and RF before going further.

I'd suggest neither matter in the face of how the problem is solved in the consumer cards the OP was talking about. They solve it by locking down the firmware that controls the radios.

The reality is most routers do that too. You can replace the firmware in most of them with OpenWRT or something similar. You still can't exceed regulatory limits because of the signed blobs of firmware in the radios.

Nonetheless, here we are getting comments like yours, which imply all the firmware in the device must be behind a proprietary wall because a relatively small firmware blob inside it must be protected. That blob has its own protections. It doesn't need to be protected by the OS or the applications that run on top of it.

Yet it's in those applications where most of the vulnerabilities show up. Making them consumer-replaceable would help solve the problem. Protecting the radio firmware is not a good reason not to do it.


I was responding to the original post about open standards. My point is that anything with an RF transceiver will never be as open as a standard PC with replaceable components. The radio portion will always be blocked off. That relatively small blob will always limit how much control you can exert over the device.

We don't have to look far. The embedded space with Arduinos, ESP32s and even RPis is a hacker's paradise. Yet the radio stack is restricted in all of them. For instance, it's not possible to take an ESP32 board and turn its single antenna into a MIMO configuration, even if you make a custom PCB with trace antennas.


> My point is that anything with an RF transceiver will never be as open as a standard PC with replaceable components. The radio portion will always be blocked off.

Sure, but again, why would the RF transceiver on my desktop PC or in my laptop be any different from the one in my router?


This topic, about how to turn anything into a router, is tangentially related: https://news.ycombinator.com/item?id=47574034

Your take is far better than the OP's. I couldn't figure out what point the OP was trying to make.


> Seems like everyone, everywhere is overworked, underpaid, and under supported. How much longer can we frogs survive the boiling?

I'm Australian. In Australia, if you are forced to work overtime, the rate of pay goes up: by 50%, or if it's extreme, double. As a consequence, "underpaid" isn't a common complaint of people working lots of overtime.

This has some negative consequences of course. If labour is plentiful you can have lots of people on hand and pay them on an hours-worked basis. The same deal applies - if you go beyond 40 hours a week their rate of pay goes up, but that shouldn't happen if labour is plentiful and management is on the ball.

But if, as in this case, labour isn't plentiful, then they are going to have to fix it some other way - like paying to train more staff. What the employers can't do is offload the problem entirely onto their employees, so there are forces compelling them to get their act together.

The OP makes it sound like the dynamic is very different in the US.


The USA has time and a half overtime above 40 hours as well under the FLSA. This applies to ATC.
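The mechanics of time-and-a-half pay can be sketched in a few lines (the numbers here are purely illustrative):

```python
def weekly_pay(hours, base_rate, ot_threshold=40, ot_multiplier=1.5):
    """Pay for one week: hours beyond the threshold earn base_rate
    times the overtime multiplier (FLSA-style time and a half)."""
    regular = min(hours, ot_threshold)
    overtime = max(hours - ot_threshold, 0)
    return regular * base_rate + overtime * base_rate * ot_multiplier
```

At heavy overtime the multiplier dominates: a 60-hour week at a 1.5x rate pays the same as 70 straight-time hours.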

Unfortunately, this is now priced into certain government jobs in the USA and people rely on it. Americans see the obscene amounts of money and hours as a challenge until they actually burn out.

ATC isn't even the worst offender. Law enforcement and prison guards can pull 100+ hours a week on a regular basis. This is how prison guards can pull $400k/year.


> ATC isn't even the worst offender. Law enforcement and prison guards can pull 100+ hours a week on a regular basis. This is how prison guards can pull $400k/year.

There are definitely elements of that, but part of it is that many pensions are based on the two highest-earning years of your career, so it's "common" among cops who are planning to retire to spend two years working every possible piece of OT available, to maximize their pension income.


Sounds like a weird incentive for sure. Why not base the pension on the average over all the years worked, as in many other countries? When you offer such incentives, people will naturally optimise for them.


Because you'll lose half a career's worth of inflationary salary rises that way. Also, women might work part time after having children, which would skew the average annual salary down. Over a 40-year career, just from inflation alone, you'd be getting about half your final salary that way, even ignoring any increases later on from being better qualified or taking on more responsibility.
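The arithmetic behind that claim is easy to check. Assuming pay only tracks inflation (a deliberate simplification), the career-average salary works out to roughly half the final salary at around 4% annual inflation:

```python
def career_avg_vs_final(years=40, inflation=0.04):
    """Ratio of career-average salary to final-year salary when pay
    grows only with inflation (a deliberate simplification)."""
    salaries = [(1 + inflation) ** y for y in range(years)]
    return (sum(salaries) / years) / salaries[-1]
```

With zero inflation the ratio is exactly 1; at 4% over 40 years it drops to roughly 0.5, which is where the "about half your final salary" figure comes from.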

Mind you, in the UK, defined benefits pension schemes are very rare nowadays, but where they exist they are defined as a percentage of the final year salary with that company, so the highest 2 year thing seems a bit weird to me but for a different reason.


The highest-2-year rule is an attempt to address the edge cases around using a single year (especially the final year).


You can adjust for inflation and only exclude years where you don't work full time.


In the US, social security is based on the 35 highest-earning years. If that system is good enough for social security, I don't see why we don't do the same for government pensions.


Much more obvious solution is to not include overtime pay in the pension calculation.


But wouldn't it be cheaper for them to just hire more people to do the same amount of hours so that no overtime was used? And they would get better work output as well, since people would be rested.


Yes, but it's a local maximum since hiring more people is going to be expensive/difficult until overtime is fixed.

Some state prisons have escaped the overtime pit by offering huge sign-on bonuses and doing a hiring surge. But it takes longer to train ATC than a CO.


It would, yes. There's large worker/union pressure in many of these fields to not take away overtime by reducing hours, though, since it is such a huge part of total compensation.


It would be cheaper.

But then you don't get to go on stage with a chainsaw and brag about how you're downsizing government.


Workers in these jobs in the US have fewer protections than the private sector, as they are deemed imperative to operating the country. As such, it is illegal for them to strike for better wages, but they do receive 1.5x wages during their mandatory overtime work, and have a base wage over twice the annual median income, before their significant overtime income. I think the burnout is a bigger cause.


> The OP makes it sound like the dynamic is very different in the US.

The obvious reason that US air traffic control has been understaffed for "a while now" is that, roughly a decade ago, the FAA caved in to political pressure to stop having so many white controllers by decommissioning any hiring practices that posed a risk of hiring white controllers.

This meant the size of the workforce froze, stressing the system.

Tracing Woodgrains went into a good amount of depth on the scandal: https://www.tracingwoodgrains.com/p/the-full-story-of-the-fa...


That scandal exacerbated the problem, but there would still be a severe shortage had it never happened. The core issues, pay and grueling hours, predate that scandal by decades.


I've met truck drivers in the US who were driving 16 hours per day. I'm not sure if it is legal or not, but it certainly wasn't considered exceptional. It's insane the kind of pressure some jobs put you under. Now ATC obviously has more potential for misery than truck driving; still, a passenger bus / truck collision isn't a small thing either.


16 hours is generally not allowed unless there are severe adverse conditions, but it's only recently, with ELD (Electronic Logging Device) mandates, that these rules are being enforced to a degree. Before that, many drivers would simply go as many hours as they humanly could to keep moving.

See: https://www.fmcsa.dot.gov/regulations/hours-service/summary-...


They would keep multiple overlapping logbooks so they could always present a "legitimate" log to DOT.


It's mostly about engineering your logbook so you have enough downtime to "move" your "driven" hours into.

For long-haul it's probably a bit different, but other routes have a lot of annoying delays.

E.g. waiting at a port, waiting for a trailer replacement, waiting for receiving, etc.

Afaik, these are all classified as driving hours for logbook purposes.

It creates a situation where you legally have to park a truck on the side of the road when you hit your cap, even though 1/2 of your day might have been waiting around for something.

Imho, that's a bit ridiculous, and I'm sympathetic to shadow logbooks there.

For the 16-hours-straight cross-country pounders, less so. But long-haul is what autonomous trucking will likely eat first.


The toll it takes on your sleep schedule is also brutal, because the rule is 10hr on / 8hr off. If those 8 "off" hours happen to coincide with sleeping hours you might get some rest but that won't be frequent, or enough. It would be better, smarter, and safer to just drive 16hr and then sleep for 8hr. But the rules are the rules, they don't have to make sense.

I forgot about this, you're right. I remember some of my family members talking about this. (much of my extended family was in trucking)


Much of my extended family was in the trucking industry one way or the other. Before the electronic books you had manual log books. Lying in your log book was a very big deal; I want to say you could get in trouble with the law in addition to getting fired. Before that, though, it was even more the wild west than it is now. My step-father knew my grandfather's "outfit", and he would joke that if they had a chain long enough to go around it, they would haul it, no questions asked.

This is from a popular 90s country song:

sleep would be best

but I just can't afford to rest

have to be in Denver at morning light

- much too young to feel this damn old


This was a while ago and I was absolutely shocked. In Europe they'd impound your truck.


The hours truck drivers are on the road need to be logged, per law. Most of this is done (or perhaps MUST be done) electronically.


Things are quite DOGEy in the US.


> The rest of us do not have the upfront capital to purchase these trucks.

You don't need any upfront capital. Do it when the truck becomes due for refurbishment. Then it's almost a no-brainer, as it's cheaper to convert it to an EV: https://www.januselectric.com.au/


> What a fantastic company HP used to be,

The company you are thinking of still exists. It was split from HP in 1999. It is called Agilent Technologies. HP kept the name and went into the business of flogging commodity computer products; Agilent continued to design and sell low-volume, high-end gear and kept the engineering culture that requires.

HP later split again into consumer and corporate. To put the result into perspective, HP Inc's (consumer) revenue is $55B/yr, HP Enterprise's is $37B/yr, and Agilent's is $7B/yr.

Given the crap being thrown here you would think the splits were a disaster. I don't know if the engineering culture of Agilent would have survived if it hadn't happened.


> Our neighbors are exactly the ones to blame.

This is a bad road to go down.

If you start blaming people rather than processes, the obvious fix is to disenfranchise the people (or worse). If you blame the process and then change it to get a better outcome, everyone wins.

There is a lot of low-hanging bad fruit in how the USA runs its democracy. You allow gerrymandering, and you allow politicians to make it difficult for people to vote. The small voter turnout means fringe single-issue voters get a disproportionate say. You use first past the post, which means the candidate the majority thinks is the "least worst" may not get elected. (No voting system is perfect, but FPP is by far the worst.) Your political donation laws favour corporates, who by definition have no interest in voter welfare.


> Not learning from new input may be a feature.

Learning is OpenClaw's distinguishing feature. It has an array of plugins that let it talk to various services - but lots of LLM applications have that.

What makes it unique is its memory architecture. It saves everything it sees and does. Unlike an LLM context, its memory never overflows. It can search for relevant bits on request. Its recall is nowhere near as good as the attention heads of an LLM, but apparently good enough to make a difference. Save + recall == memory.
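A toy sketch of the save-everything-then-search idea (this is not OpenClaw's actual implementation, just an illustration of the shape):

```python
class Memory:
    """Append-only log plus keyword recall: crude next to an LLM's
    attention heads, but it never overflows the way a fixed-size
    context window does."""

    def __init__(self):
        self._log = []

    def save(self, text):
        # Everything seen or done gets appended; nothing is evicted.
        self._log.append(text)

    def recall(self, query, limit=5):
        # Score each saved entry by word overlap with the query and
        # return the best matches.
        q = set(query.lower().split())
        scored = [(len(q & set(t.lower().split())), t) for t in self._log]
        hits = [(s, t) for s, t in scored if s > 0]
        hits.sort(key=lambda st: -st[0])
        return [t for _, t in hits[:limit]]
```

Real systems would use embeddings or full-text search for recall, but the architecture is the same: unbounded save plus on-demand retrieval.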


