am I crazy for thinking that the 16GB Pi 5 is just there to absorb money from people who purchase the most expensive version of things? Like really nobody needs that much RAM on a Pi?
I am running a bunch of stuff on my 8GB Pi and I've run out of memory to put more stuff on. I use it as a low-power server running a bunch of Docker containers. Some of these require at least 200 MB and some use 2 GB of memory.
I was going to buy a small NUC and load it up with memory, but I've acquired an old Mac Mini with 16GB of RAM, which will do.
Yes, you are crazy for thinking that. The extra RAM is useful for small LLMs and for running lots of Docker containers. The very low power consumption makes it ideal for a low-end home server.
I use the 16GB SKU to host a bunch of containers and some light debugging tools, and the power it sips at idle, compared to my previous home server, will probably pay for the whole board within about 5 years.
Docker is about containerization/sandboxing; you don't need to duplicate the OS. You can run your app as the init process of the sandbox with nothing else running in the background.
"just as well"? lmao sure i guess i could just manually set up the environment and have differences from what i'm hoping to use in production
> 1GiB machine can run a lot of server software,
this is naive
it really depends if you're crapping out some basic web app versus doing something that's actually complicated and has a need for higher performance than synchronous web calls :)
in addition, my MQ pays attention to memory pressure and tunes its flow control based on that. so i have a test harness that tests both conditions to ensure that some of my backoff logic works
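to sketch the idea: backoff that scales with observed memory pressure might look roughly like this (hypothetical names and scaling factors - real MQ clients expose pressure signals differently):

```javascript
// Hypothetical sketch: scale backoff delay with an observed
// memory-pressure signal in the range 0..1.
function backoffDelay(attempt, memoryPressure, baseMs = 100, maxMs = 30000) {
  const exponential = baseMs * 2 ** attempt;     // standard exponential backoff
  const pressureFactor = 1 + 4 * memoryPressure; // back off harder under pressure
  return Math.min(maxMs, exponential * pressureFactor);
}

console.log(backoffDelay(3, 0)); // 800  - no pressure
console.log(backoffDelay(3, 1)); // 4000 - high pressure, 5x longer wait
```

a test harness would then just feed both low-pressure and high-pressure values through the same path and assert the delays diverge as expected.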
> if RAM is not wasted on having duplicate OSes on one machine.
I think that on Linux, Docker is not nearly as resource intensive as on a Mac. Granted, I'm not sure of the actual memory pressure from things like processes not sharing shared libs.
Only Java qualifies under your arbitrary rules, and even then I imagine it's trying to catch up to .NET (after all, Blu-ray players execute Java), which can run on embedded systems https://nanoframework.net/
I listed some popular languages that web applications I happened to run dockerised are using. They are not arbitrary.
If you run normal web applications they often take many hundreds of megabytes if they are built with some popular languages that I happened to list off the top of my head. That is a fact.
Comparing that to cut down frameworks with many limitations meant for embedded devices isn't a valid comparison.
It’s an incredibly lopsided machine. The Pi 5 is decently powerful, but you really really should not be attempting to use one as a desktop replacement. While theoretically possible you are so much better off with a $50 used SFF PC.
Browsers treat RAM as infinite; if you want to, for whatever reason, open LinkedIn, you might wanna get a bigger model. I'd personally rather buy more RAM than I need than deal with the cost of fixing / working around the issue in the future.
No, you are not crazy. It's silly to try to use a Raspberry Pi 5 16GB (or an equivalently priced product) as a desktop workstation with a GUI on it when much better actual x86-64 workstations exist. Ones with real amounts of PCI-E I/O lanes, NVMe SSD interfaces on the motherboard, multiple SATA3 interfaces on the motherboard, etc. In very small form factors, same as you'd see in any $bigcorp office cubicle.
Old web stuff is still around. RSS feeds are out there. Some parts of masto are generally chill and filled with people having interesting convos.
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers
I'll bite: how do we take advantage of ZFS layering if not via the docker-style layering?
I find Dockerfile layering unsatisfying because step 5 might depend on step 2 but not on steps 3 or 4... linearising a DAG makes images harder to maintain and harder to cache cleanly (and it also pushes us toward monster single-line commands in the resulting images).
Proofs tend to get generated upstream of people trying to investigate something concrete about our models.
A computer might be able to autonomously prove that some function has some property, and that proof is entirely useless when nobody cares about that function!
Imagine if you had an autonomous SaaS generator. You'd end up with “flipping these pixels from red to blue as a service”, “adding 14 to numbers as a service”, “writing the word ‘dog’ into a database as a service”.
That is what autonomous proof discovery might end up being. A bunch of things that might be true but not many people around to care.
I do think there’s a loooot of value in the more restricted “testing the truthfulness of an idea with automation as a step 1”, and this is something that is happening a lot already by my understanding.
I really don’t! I switched it all off months ago - autocomplete, autocaps, all of it. I reached a point where the constant frustration had to be worse than any productivity gain it was hoping to offer.
A few months on… I like it! The frustration is all gone, any errors are just on me now, and it forces me to slow down a bit and use the brain a bit more!
Not having to use stuff like whiteout and having undo is quite nice. Getting layers "for free" is nice. I've given myself permission to even do some digital manipulation like resizing on the fly rather than redrawing some eye.
But watching some pros go at it on paper + pen, I do get this feeling that when you don't have the undo button you really do gotta force yourself to get good at the nitty gritty. Really you need to get good at drawing lines nicely the first time when you're inking to paper.
Also, when going through this stuff slowly and annoyingly, or tracing other people's art, you really start internalising how some visual effect is achieved with just a handful of lines. Six well-placed lines give you a sense of very voluminous hair, for example.
it does feel like touching the lower level parts of a craft can help so much with having good fundamentals at a higher level.
Who hasn't, as a kid, thought "Oh, I can draw bubble letters", then realized that it's actually kinda tough, and then, after mastering it, gained some new appreciation for spacing lines out properly and knowing where the pen goes?
Seems like a useful way to get a feel for things. Everyone "knows" how perspective works, yet a lot of people can't commit it to a page. There's clearly some understanding of how things work hidden in being able to do the thing, isn't there?
> But watching some pros go at it on paper + pen, I do get this feeling that when you don't have the undo button you really do gotta force yourself to get good at the nitty gritty. Really you need to get good at drawing lines nicely the first time when you're inking to paper.
Totally! A lot of artists recommend that young folks get good at pen and paper and inking by hand before they dive into Procreate / Illustrator. The lack of an undo button forces you to make choices and commit to them. You also hear a lot of artists talk about how, past a certain point in creating a piece, you are now "solving problems" to finish it.
I highly recommend the Draftsmen podcast as a wonderful resource to learn. Marshall Vandruff is a master teacher and has many thoughtful things to say.
The precision and concentration also force you to slow down and think about the part once again. Is it correctly dimensioned and sized? Is the material the correct one? Can it be machined and assembled that way? How can it be inspected? Etc.
> But watching some pros go at it on paper + pen, I do get this feeling that when you don't have the undo button you really do gotta force yourself to get good at the nitty gritty. Really you need to get good at drawing lines nicely the first time when you're inking to paper.
Often you envision what the line will look like in your head before placing it, and then you have the motor skills/experience to recreate that line well. These are just some of the micro-skills that make up "drawing".
I think on the first point, we have to start calling out authors of packages who (IMO) have built out these dep trees to their own subpackages basically entirely for the purpose of getting high download counts on their GitHub account
Like seriously... at 50 million downloads maybe you should vendor some shit in.
Packages like this which have _7 lines of code_ should not exist! The metadata of the lockfile is bigger than the minified version of this code!
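For context, a micro-package of this size typically looks something like this (a hedged sketch in the style of is-number, not the exact published code) - i.e. something trivially vendorable:

```javascript
// Roughly what a seven-line "is-number" style utility looks like.
// Easy to paste into a shared utils module instead of adding a dependency.
function isNumber(value) {
  if (typeof value === "number") return value - value === 0; // excludes NaN/Infinity
  if (typeof value === "string" && value.trim() !== "") {
    return Number.isFinite(+value); // numeric strings count
  }
  return false;
}

console.log(isNumber(42));     // true
console.log(isNumber("3.14")); // true
console.log(isNumber(NaN));    // false
console.log(isNumber(""));     // false
```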
At one point in the past, something like 5% of create-react-app's dep list was from one author who had built out their own little dep graph in a library they controlled. That person also featured download counts on their GitHub page. They have since "fixed" the main entrypoint to the rat's nest, thankfully.
> entirely for the purpose of getting high download counts on their github account
Is this an ego thing or are people actually reaping benefits from this?
Anthropic recently offered free Claude to open source maintainers of repositories with over X stars or over Y downloads on npm. I suppose it is entirely possible that these download statistics translate into financial gain...
I'm completely apathetic about spicy autocomplete for coding tasks and even I wonder which terrible code would be worse.
The guy who wrote is-even/is-odd was for ages using a specifically obscure method that made it slower than %2===0, because JS engines were optimising that but not his arcane bullshit.
from a security perspective this is even worse than it looks. every one of those micro packages is an attack surface. we just saw the trivy supply chain get compromised today, and that's a security tool. now imagine how easy it is to slip something into a 7-line package that nobody audits because "it's just a utility." the download count incentive makes it actively dangerous because it encourages more packages, not fewer.
> There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
I remember seeing this one guy who infiltrated some gh org, and then started adding his own packages to their dependencies or something to pad up his resume/star count.
As usual, there's a cultural issue here. I know it's entirely possible to paste those seven lines of code into your app. And in many development cultures this will be considered a good thing.
If you're working with JavaScript people, this is referred to as "reinventing the wheel" or "rolling your own", or some variation of "this is against best practice".
I think the fact that everyone cites the same is-number package when saying this is indicative of something though.
Like, I legit think that we are all imagining that this cultural problem is widespread. My claim (and I tried to do some graph theory stuff on this in the past and gave up) is that we are in fact seeing something downstream of a few "bad actors" who are going way too deep on this.
I also dislike things like webpack making every plugin an external dep but at least I vaguely understand that.
Even there, the "problem" was left-pad being used by one or two projects that were in turn used in "everything".
So the problem isn't that everyone is picking up small deps, but that _some_ people who write libs that are very popular are picking up small deps and causing this to happen.
This is different because it doesn't really say that all JS developers are looking to include left-pad. But I _do_ think that lots of library authors are too excited to make these kinds of dep trees
The point isn't that everyone necessarily needs to write the same code manually. It's that an author could easily combine the entire tree of seven-line packages into the one package that create-react-app uses directly. There's no reason to have a dozen or so package downloads, each with seven lines of code, instead of one package that's still under a hundred lines; that's still a pretty small network request, and it's not like dead code analysis to prune unused functions isn't a thing. If you somehow find yourself in a scenario where you would be happy to download seven lines of code but downloading a few dozen more would be an issue, that's when you might want to consider pasting the seven lines manually, but I honestly can't imagine when that would be.
The article and (overall) this comments section has thankfully focused on the problem domain, rather than individuals.
As the article points out, there are competing philosophies. James does a great job of outlining his vision.
Education on this domain is positive. Encouraging naming of dissenters, or assigning intent, is not. Folks in e18e who want to advance a particular set of goals are already acting constructively to progress towards those goals.
People aren't criticizing the development philosophy in this subthread. This has been done by the article itself and by several people before.
What people are criticizing is the approach in pushing this philosophy into the ecosystem for allegedly personal gain.
The fact that this philosophy has been pushed by a small number of individuals shows it is not a widespread belief in the ecosystem. That they are getting money out of the situation suggests there is probably more to the philosophy than its technical merits.
As usual, he's copying someone else who's been doing this for years:
https://www.npmjs.com/package/is-number - and then look and see shit like is-odd and is-even (yes, two separate packages, because who can possibly remember how to get/compare the negated value of a boolean??)
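For reference, here's roughly what those two packages boil down to (a sketch, not their exact published code):

```javascript
// The entire substance of an is-odd / is-even pair:
const isOdd  = (n) => Math.abs(n % 2) === 1; // abs handles negative numbers
const isEven = (n) => !isOdd(n);             // one boolean negation

console.log(isOdd(3));   // true
console.log(isOdd(-3));  // true
console.log(isEven(4));  // true
```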
Honestly, for how much attention JavaScript has gotten in the last 15 years, it's ridiculous how shit its type system really is.
The only type related "improvement" was adding the class keyword because apparently the same people who don't understand "% 2" also don't understand prototypal inheritance.
That's a good point, it's only been around for 30 years, and used on 95% of websites. It's not really popular enough for a developer to take an hour or two to read how it works.
The word "used" is doing some heavy lifting there. Not all usage is equal, and the fact that it's involved under the hood isn't enough to imply anything significant. Subatomic physics is used by 100% of websites and has been around for billions of years, but that's not a reason to expect every web developer to have a working knowledge of electron fields.
Let's compromise and say that whoever is responsible for involving (javascript|electron fields) in the display of a website, should each understand their respective field.
I don't expect a physicist or even an electrical engineer or cpu designer to necessarily understand JavaScript. I don't expect a JavaScript developer to understand electron fields.
I do expect a developer who is writing JavaScript to understand JavaScript. Similarly I would expect the physicist/etc to understand how electrons work.
The issue with this framing is that understanding something isn't a binary; you don't need to be an expert in every feature of a programming language to be able to write useful programs in it. The comment above describing prototypical inheritance as esoteric was making the point that you conflated the modulus operator with it as if they're equally easy to understand. Your responses don't seem to indicate you agree with this.
It sounds like you expect everyone to understand 100% of a language before they ever write any code in it, and that strikes me as silly; not everyone learns the same way, and some people learn better through practice than by reading about things without practice. People sometimes have the perception that anyone who prefers a different way of learning than them is just lazy or stupid for not being able to learn in the way that they happen to prefer, and I think that's both reductive and harmful.
Given that they literally changed the language to support the class keyword, I think we can safely assume it isn't just the beginners who never bothered to learn how prototypical inheritance works.
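For anyone following along, the class keyword really is just sugar over the prototype mechanism - these two are equivalent (illustrative names):

```javascript
// Prototype style: constructor function plus a method on the prototype.
function Dog(name) { this.name = name; }
Dog.prototype.speak = function () { return this.name + " says woof"; };

// Class style: same machinery underneath, nicer syntax on top.
class Cat {
  constructor(name) { this.name = name; }
  speak() { return this.name + " says meow"; }
}

console.log(new Dog("Rex").speak()); // Rex says woof
console.log(new Cat("Mia").speak()); // Mia says meow
console.log(typeof Cat);             // "function" - still a constructor function
```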
I don't exactly know the system behind which restaurants pull out the disposable chopsticks, but I think that, for example, "normal" tempura, katsudon, or soba restaurants tend to be the ones.
I almost associate the cheapo reusable plastic chopsticks with some food courts or Matsuya at this point.