For whatever it's worth, which is probably not much, I'm in my late 40s and I never really liked sourceforge either. Too many clicks to do anything (still true), and I didn't like cvs (also still true, but thankfully now irrelevant).
(My SF account dates from June 2004. I expect I was thinking about using it as version control for a FOSS project I was working on at the time, though I don't know why, as it seems SF didn't support svn until 2005. Maybe I couldn't find any better options? The pre-GitHub ecosystem was pretty bad! But, luckily, I ended up not having time for any FOSS stuff from about autumn 2004, so: problem solved. And when I next looked, in early 2010, everything seemed to be git+github, and all the better for it.)
CVS was the best option when SourceForge began, and Subversion was barely an improvement. SourceForge was critical to the growth of Open Source and Free Software in the 00s. Projects no longer needed to maintain their own revision control server, file server, forum, issue tracker, etc. SF.net wasn't great compared to any of the current generation of hosting services, and by the time GitHub arrived in 2008 most Open Source projects were in an uncomfortable state of looking around for alternatives, because SF was slow to adopt newer technologies and was running on a skeleton crew. Most of my projects had their own forums/issue trackers, and were self-hosting git, by then. Ads stopped being a viable revenue strategy, so SF.net stopped being able to keep up with what developers wanted.
But, it had a few years where every OSS developer I knew had nothing but positive feelings toward SourceForge. It gave one of the projects I work on thousands of dollars' worth of transit over the years. It's hard for folks who've only ever worked on an "everything for small developers is a loss leader" internet to understand that we used to pay for and manage our own servers. I had a $200/month bill for just my Open Source projects on a couple of colocated servers.
Yes, SourceForge went through a lot of shitty stuff. The overtly hostile stuff (adware inserted in OSS projects) happened after it changed hands. But the root of it was that the revenue of their original model dried up and they couldn't stay on top of new development (being slow to offer a good git experience was a fatal mistake).
Anyway, it's not great now (though it is now owned by seemingly decent folks, who haven't really been able to find a way to make it work), and it went through a period where it was a borderline criminal enterprise, but it started out as a genuinely helpful part of the OSS community.
My parents used to measure us in feet and stones. I still know my height in feet, because it hasn’t changed in decades. My weight, unfortunately, I could only tell you in kilograms.
Is it also their policy to botch the significant digits? 300 mph is obviously a crude estimate. Converting to 483 km/h implies an unreasonable degree of precision.
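To make the point concrete, here's a small sketch. The `round_sig` helper is a hypothetical illustration, not any publication's actual style rule; the idea is just that a conversion shouldn't carry more significant figures than the original estimate.

```python
import math

MPH_TO_KMH = 1.609344  # exact by definition (1 mile = 1609.344 m)

def round_sig(x, sig):
    """Round x to `sig` significant figures (hypothetical helper)."""
    if x == 0:
        return 0.0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

exact = 300 * MPH_TO_KMH     # 482.8032 km/h
print(round(exact, 1))       # 482.8 -- false precision
print(round_sig(exact, 1))   # 500.0 -- matches the estimate's one significant figure
```

If "300 mph" is really a one-significant-figure guess, "about 500 km/h" (or "about 480 km/h" at two figures) is the honest conversion; "483 km/h" is not.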
Are you sure you're not an LLM? There is no way anybody writing 6502 would do anything else, because there's no other way to do it.
(You can squeeze in a cheeky Txx instruction afterwards to get a 2-or-more-for-1, if that's what you need - but this only saves bytes. Every instruction on the 6502 takes 2+ cycles! You could have done repeated immediate loads. The cycle count would be the same and the code would be more general.)
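A quick back-of-envelope comparison, using the standard NMOS 6502 timings, shows the trade-off: the transfer wins on size only.

```python
# Byte and cycle counts for two ways to copy a known value into X,
# per the standard NMOS 6502 instruction timing tables.
COSTS = {
    # mnemonic: (bytes, cycles)
    "TAX":      (1, 2),  # transfer A -> X (implied addressing)
    "LDX #imm": (2, 2),  # load X with an immediate operand
}

for op, (nbytes, cycles) in COSTS.items():
    print(f"{op:9} {nbytes} byte(s), {cycles} cycles")
# Both take 2 cycles; TAX only saves a byte, never time.
```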
I suppose using Txx instructions rather than LDx is more of an idiom than a deliberate space saving. Also, could an LDx #0 potentially be 3 cycles in the edge case where the PC crosses a page boundary? (I'm probably confused? Red herring?)
I don't know how the 6502's PC increment was actually implemented, but it was an exception to the general rule of page crossings (or the possibility thereof) incurring a penalty, or, as was also sometimes the case, being ignored entirely. (One big advantage of the latter approach: doing nothing takes 0 cycles.)
The full 16 bits would be incremented after each instruction byte fetched, and it didn't cost any extra if there was a carry out of the low byte into the high byte.
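In other words, the behaviour described above is just a free-running 16-bit increment with wraparound, with no extra cycle on a carry (unlike some indexed addressing modes, where crossing a page boundary can add one). A minimal sketch:

```python
# Sketch of the PC increment described above: a full 16-bit
# increment with wraparound, and no cycle penalty on carry.
def next_pc(pc):
    return (pc + 1) & 0xFFFF

print(hex(next_pc(0x12FF)))  # 0x1300 -- carry into the high byte, same cost
print(hex(next_pc(0xFFFF)))  # 0x0    -- wraps around at the top of memory
```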
I want to make clear to the US folks here that there are only two or three cafés that still sell traditional eels, and it's explicitly a London food, not wider British cuisine. From the number of videos and articles I see about them, though, you'd think the country was covered in eel cafés. Honestly, covering them at all is tabloid ragebait content at this point.
Runs nicely on my M4 Max Mac Studio - which, going by the PassMark numbers, is about the same speed as an iPhone 17. Testament, I think, to how well this site is optimised for the sort of underpowered device, hopelessly inadequate for modern workflows, that many sites would not bother to cater for.
This doesn't apply here - I don't think? The article claims X; so it is surely no sin for the post rebutting it to straight up state that X is, in fact, not the case.
The LLM tic, by contrast, has a noticeable tendency to be deployed even when X has never been previously mentioned. It is a valid rhetorical technique, and I assume that's why the LLMs have picked up on it - but it has to be deployed judiciously. Which is something LLMs appear absolutely incapable of doing. And that is why people notice it, and think it sucks.