It used to be the case with Intel Macs and their atrocious confluence of cooling system, thermals, and power delivery (the CPU itself was not really to blame).
But when RAPL and similar mechanisms are used to throttle the CPU, the throttling time gets reported under kernel_task; on Linux it similarly shows up as one of the kernel threads.
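On Linux, the RAPL limits themselves are visible through the powercap sysfs interface. A minimal sketch, assuming the standard intel-rapl driver paths (values in those files are plain integers in microwatts; the exact domain layout varies by machine):

```python
# Read the current RAPL package power limit via the Linux powercap
# sysfs interface (intel-rapl driver). Files report integers in microwatts.
from pathlib import Path

RAPL_ROOT = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain


def microwatts_to_watts(raw: str) -> float:
    """powercap files contain plain integers in microwatts."""
    return int(raw.strip()) / 1_000_000


def read_power_limit():
    """Return the long-term package power limit in watts, or None
    if RAPL is unavailable or we lack permissions."""
    limit_file = RAPL_ROOT / "constraint_0_power_limit_uw"
    try:
        return microwatts_to_watts(limit_file.read_text())
    except (FileNotFoundError, PermissionError):
        return None


if __name__ == "__main__":
    limit = read_power_limit()
    print(f"package power limit: {limit} W" if limit is not None else "RAPL not available")
```

Note that this only shows the configured limit; the time the CPU spends being throttled still shows up as kernel-side CPU time, as described above.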
The process described is literally an attempt at canceling benefits by the "frog boiling" method. If the Tories went straight to canceling benefits outright, they would end up in trouble; instead, by making the process as onerous as possible, they can frame it as "verifying eligibility and ensuring benefit funds are not scammed out of the system".
Similar approaches are utilized in other areas of British government, unfortunately.
Around the time the K8 was released, I remember reading official Intel roadmaps announced to the general public, and they essentially planned that for at least a few more years they would segment the lineup into increasingly consumer-only 32-bit x86 and IA-64 at the high end.
They were trying to compete with Sun and IBM in the server space (SPARC and Power) and thought that they needed a totally pro architecture (which Itanium was). The baggage of 32-bit x86 would have just slowed it all down. However, an x86-64 sitting in the middle would have confused customers.
I think back then it was all about massive databases; that was where the big money was, and x86 wasn't really set up for the top-end load patterns of databases (or OLAP data lakes).
In the end, Intel did cannibalize themselves. It wasn’t too long after the Itanium launch that Intel was publicly presenting a roadmap that had Xeons as the appealing mass-market server product.
Yeah they actually survived quite well. Who knows how much they put into Itanium but in the end they did pull the plug and Xeons dominated the market for years.
They even had a chance in mobile chips with Atom, but ARM was too compelling, and I think Apple was sick of the Intel dependency, so when there was an opportunity in the mobile space not to be so deeply tied to Intel, they took it.
I think the difference was that replacing Itaniums with Xeons on the roadmap didn't seriously hurt margins (it probably helped!).
The problem with mobile was that it fundamentally required low-margin products, and Intel never (or way too late) realized that was a kind of business they should want to be in.
> and thought that they needed a totally pro architecture (which Itanium was).
Was it, though? They made a new CPU from scratch, promising to replace Alpha, PA-RISC, and MIPS, but the first release was a flop.
The only "win" of Itanium that I see is that it eliminated some competitors in the low- and mid-range server market: MIPS and PA-RISC, with SPARC left on life support.
The deep and close relationship of Compaq with Intel meant that it also killed off Alpha, which, unlike MIPS and PA-RISC, wasn't going out by itself. (Itanium was explicitly meant as the PA-RISC replacement, and in fact started as one, while SGI had issues with MIPS. SPARC was reeling from the radioactive cache scandal at the time but wasn't in as bad a condition as MIPS, AFAIK.)
I never used them, but my understanding is that the performance was solid. In a market with incumbents, though, you don't just need to be as good as them; you need to be significantly better or significantly cheaper. My sense was that it met expectations but that it wasn't enough for people to switch over.
Merced (the first-generation Itanium) had hilariously bad performance, and its built-in "x86 support" was even slower.
Later HP-designed cores were much faster and omitted x86 hardware support, replacing it with software emulation where needed, but ultimately IA-64 rarely ran with good performance as far as I know.
Pretty sure it was Itanium that finally turned "Sufficiently Smart Compiler" into the curse phrase it is today, and it definitely popularized it.
> It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.
RISC was absolutely dominating the server/workstation space; this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into the server/workstation space, and they knew they needed a high-end RISC CPU. It was kind of general knowledge in the computer space at the time that RISC was the future.
Exactly! But this was not just obvious in retrospect, it was what Intel was saying to the market (& OEMs) at the time!
The only way I can rationalize it is that Intel just "missed" that servers hooked up to networks running integer-heavy, branchy workloads were going to become a big deal. OK, few predicted the explosive growth of the WWW, but looking around at the growth of workgroup computing in the early 1990s, shouldn't this have been obvious?
Well, TBH it wasn't all FUD. Hanging on to x86 eventually (much later) came back to bite them when x86 CPUs weren't competitive for tablets and smartphones, leading to Apple developing its own ARM-based RISC CPUs (which run circles around the x86 CPUs they replaced) and dumping Intel altogether.
It is interesting how so much of the speculation in those days was about how x86 was a dead end because it couldn’t scale up, but the real issue ended up being that it didn’t scale down.
Well, it turns out that it could scale up, it just needed more power than other architectures. As long as it was only servers and desktop PCs, you only noticed it in more elaborate cooling and maybe on your power bill, and even with laptops, x86 compatibility was more important than the higher power usage for a long time. It's just when high-performance CPUs started to be put in devices with really limited power budgets that x86 started looking really bad...
Interesting: apparently it did scoreboarding like the CDC 6600 and allowed multiple memory loads in flight, but I can't find a definite statement on whether it did renaming (i.e. whether writes to the same register stalled). It might not be OoO per the modern definition, but it is also not a fully in-order design.
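The distinction matters for exactly the stall mentioned above. A toy sketch (not a model of any real CPU) of why a scoreboard without renaming stalls on write-after-write hazards, while renaming makes them disappear:

```python
# Toy illustration of the scoreboard-vs-renaming distinction.
# Scoreboard model (CDC 6600 style): a second write to the same
# architectural register must wait until the first write retires
# (a WAW hazard). With register renaming, each write gets a fresh
# physical register, so the hazard cannot occur.

def count_waw_stalls(instructions):
    """instructions: list of (dest_reg, latency) pairs, issued one
    per cycle. Returns cycles lost waiting on WAW hazards."""
    busy_until = {}  # dest register -> cycle its pending write completes
    cycle = stalls = 0
    for dest, latency in instructions:
        if busy_until.get(dest, 0) > cycle:
            stalls += busy_until[dest] - cycle  # wait out the earlier write
            cycle = busy_until[dest]
        busy_until[dest] = cycle + latency
        cycle += 1  # one issue per cycle
    return stalls


def count_waw_stalls_renamed(instructions):
    """With renaming, every write targets a fresh physical register,
    so no destination is ever still busy: zero WAW stalls."""
    return 0


# Two long-latency writes to r1 back to back: the scoreboard stalls
# the second one; renaming would let both proceed.
prog = [("r1", 5), ("r1", 5), ("r2", 1)]
print(count_waw_stalls(prog))
print(count_waw_stalls_renamed(prog))
```

This ignores RAW/WAR hazards and issue-width entirely; it only isolates the one behavior the comment is asking about.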
Most of BeOS IPC is in the mainline Linux kernel [1]; the difference here seems to be implementing some of the services that are supposed to be available (related to the filesystem, etc.) and the userland side of it (raw IPC does very little without another layer on top).
[1] There's a reason why a bunch of the BeBook reads the same as some of the oldest parts of the Android documentation.
I am distributing an SVG file. It's a program that, when run, produces an image of Mickey Mouse.
By your description of the law, this SVG file is not infringing on Disney's copyright, since it's a program that, when run, creates an infringing document (the rasterized pixels of Mickey Mouse), but it is not an infringing document itself.
I really don't think my "I wrote a program in the SVG language" defense would hold up in court. But I wonder how many levels of abstraction are needed before it's legal. If I write the Mickey-Mouse generator in Python, does that make it legal? If it generates a variety of randomized images of Mickey Mouse, is that legal? If it uses statistical analysis of many drawings of Mickey to generate an average Mickey Mouse, is that legal? Does it have to generate different characters when asked before it is legal? Can that be an if statement, or does it have to use statistical calculations to decide what character I want?
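The first rung of that ladder is easy to make concrete. A hypothetical generator, drawing a generic smiley face rather than any copyrighted character, is itself just a program whose output is an image description (the SVG markup), which a renderer then turns into pixels:

```python
# A trivial "image generator": a program whose output is an SVG
# document, which is itself markup that a renderer turns into pixels.
# (Drawing a generic face here, not any copyrighted character.)

def generate_face_svg(radius: int = 40) -> str:
    """Return an SVG string: a yellow face with two eyes."""
    cx = cy = radius + 10
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{2 * cx}" height="{2 * cy}">'
        f'<circle cx="{cx}" cy="{cy}" r="{radius}" fill="yellow"/>'
        f'<circle cx="{cx - 15}" cy="{cy - 10}" r="5" fill="black"/>'
        f'<circle cx="{cx + 15}" cy="{cy - 10}" r="5" fill="black"/>'
        "</svg>"
    )


if __name__ == "__main__":
    print(generate_face_svg())
```

Each step up the ladder in the questions above (randomization, statistical averaging, conditional character selection) is just more code wrapped around the same output path, which is exactly why the "it's a program" framing is a shaky place to draw the legal line.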
The SVG file is a representation of Mickey Mouse and thus possibly touches Disney's copyright (depending on exactly which form of Mickey it represents, as I believe some versions recently went public-domain-equivalent). It's not capable of being something else without substantial rework. Therefore it is a derivative work.
Generally, to pass the test of not being a derivative work, it would need to be generic enough that it creates non-copyrighted works as well; then the responsibility shifts over. Can the program exist without a given copyrighted work (not the general idea, but specific copyrighted works)? Then it's quite probably not derivative.
Anything Microsoft lacking IPv6 is a configuration issue. Ever since Vista, Windows networking (in corporate environments) treats IPv4-only as a somewhat "degraded" configuration. (Some time ago there was even a funny news post about how Microsoft was forced to keep guest WiFi with IPv4 enabled, having switched everything else to IPv6-only.)
This is core to Plan 9's "everything is a filesystem", a generalisation of the Unix "everything is a file", and surprisingly a direct analog of Sun's Spring OS RPC+namespace model.
The anti-nuclear position in Germany is very old, and core to the existence of Greenpeace and the green parties in the DACH region (down to RPGs being fired at reactors).
Does Russia benefit and probably fund it? Sure.
But DACH environmentalism grew out of anti-nuclear protests, not the other way around, and will thus boycott nuclear even when it goes against their modern stated goals.