Hacker News | api's comments

I remember one take I had in 2024 after the election.

We're all familiar with some of the "defund the police" experiments that went too far in places like Portland and San Francisco and resulted in things like epidemics of casual shoplifting.

Well, what we just did is basically the white-collar crime equivalent. We now have a wide-open free-for-all for all forms of white-collar crime. You can just insider trade, launder money, commit investment fraud, anything you want, the way you saw random people just walking into CVS drug stores years ago in SF and grabbing stuff and walking out.

But as usual: when someone steals $100 worth of stuff on the street, that's a national crisis and those people are scum, but when people steal billions, that's fine because they're wearing suits.


> You can just insider trade, launder money, commit investment fraud, anything you want, the way you saw random people just walking into CVS drug stores years ago in SF and grabbing stuff and walking out.

Something I'd disagree with is... enforcement will not help against what causes people to go out and steal from stores. Fix widespread poverty, get people out of homelessness, help people legitimately get off drugs, help them get jobs even when they have convictions on the books, and then they won't need to become members of what are, essentially, small and hyperlocal crime networks.

In contrast, insider traders and billion-scale fraudsters do not have the need for survival pushing them to commit crime. It is pure unchecked greed that drives them.


It’s a myth that petty shoplifting is something done by poor people. The people doing it are usually part of organised crime (that is not “hyperlocal”) and generally are doing better than actual poor people.

The idea poor people are somehow criminal is a myth that needs to be eradicated.


I listened to a podcast a while back (human-authored, I'm pretty sure) about low-quality, gutter-level streamer content and how popular it is, speaking of personalities like Asmongold and a vast number of even worse imitators.

This content is made by humans but is pointless, grindingly stupid filler spiced with a dash of obviously performative offensiveness. You're basically listening to a complete loser (or someone LARPing as one) telling you about their boogers, then being racist, then playing video games for 6 hours.

But it's wildly popular. Millions of people stream this kind of shit for hours every day.

There's a lot of people out there who just want to numb their brains, and there seems to be no floor. You can just keep making it dumber. The stuff people stream (and doom scroll) on the Internet makes 1980s daytime soaps look like high art from a lost golden age.

So it's not at all surprising that millions of people listen to low-quality un-curated AI slop podcasts.

I actually unsubbed from the podcast I heard. Meta discussion of crap like this isn't much better than the content itself. Keep driving. Do not look at the car accident.

I had kind of an epiphany like that in the last year. The Information Age means information is free. It costs $0 and is produced to infinity. That means you are not missing anything. Your attention is actually 100% yours, and if you choose to ignore the car wreck that's fine. There are infinity car wrecks. There are infinity everything. Keep driving.


The problem is I want to live in the "correct information age", and that qualifier is hard to find. I suspect that "correct" will cost money; unfortunately I don't know how to pay for it. Many of the major publishers are also using AI with questionable fact checking. Where I most need correct information is my local small-town news, and there isn't even a newspaper anymore. (There is the nearby big-city newspaper, but they don't cover my local issues well.)

The correct Information Age is the one you get when you let your prefrontal cortex play DJ and pick your media based on what will enrich your life or teach you something.

If you let your brain stem drive you’ll spend your life scrolling political rage bait and slop.

Whether the slop is made by humans or machines doesn’t much matter. I kind of think the AI thing is a red herring, though AI does make it possible to make a lot of slop. So maybe AI is the thing that forces the issue.


"No one ever went broke underestimating the intelligence of the American people."

--H. L. Mencken (or at least attributed so.)


One of the real costs of the end-game attention economy is that when your "car" crashes, no one is going to stop to help. When the market you engage in gets swallowed up, everyone will buy the swill that outcompetes you on perceived surface-level value. Communities get fractured. Organizations that used to be community pillars (church) become self-serving. All these things create a positive feedback loop of intellectual degradation.

The vast majority of people tuning into those kinds of slop streams are not really active listeners. It's more akin to turning on the radio while you work/clean/perform some other task that doesn't require strict focus or attention, with the added benefit that you can personally interact with the streamer (chat) when you have attention to spare. But I'd wager most viewers never directly interact or even pay much active attention to the stream at all.

I wouldn't be surprised if the same dynamic is playing out with these AI slop podcasts.


"It is simple to use, easy to organize and manage, and very robust."

This is why nobody uses it. Cloud stuff has to be as baroque as possible.


To misquote Douglas Adams:

There is a theory which states that if ever anyone discovers exactly what Kubernetes is for and how it works, it will instantly disappear and be replaced by something even more bizarre and inexplicable.


>if ever anyone discovers exactly what Kubernetes is for

This part is easy: Kubernetes is for your CV. /s


It's simple to use only for toy use cases; that's why nobody uses it. The article everyone in this thread seems to like only goes as far as 'I pushed to git so it must be OK', which is laughable, and I'm not even DevOps.

What happens if it errors on deployment, or after that? You wanna write custom (bash? :D) hooks for that? What about upgrading your 'very vertically scalable' box? What if it doesn't come up after the upgrade? Your downtime is suddenly hours, oops.

The k8s denial is strong and now rivals frontend frameworks denial. Never fails to amuse.


Fair points, and yes, failed deploys need to be handled explicitly.

In our case, the answer is not "hope and bash". We deploy versioned images, use health checks, monitor the result, and keep rollback simple: redeploy the previous known-good image/config. Host upgrades are also treated as maintenance events, with backups and a recovery path, not as something Compose magically solves.
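A minimal sketch of that flow, assuming the compose file pins the app image to a tag taken from an environment variable (the service, registry, and tag names here are all made up for illustration):

    # compose.yaml (fragment):  image: registry.example.com/app:${APP_TAG}

    # deploy a specific, immutable version
    export APP_TAG=v42
    docker compose pull app && docker compose up -d app

    # verify: requires a healthcheck defined for the service;
    # "myapp-app-1" follows the default Compose v2 container naming
    docker inspect --format '{{.State.Health.Status}}' myapp-app-1

    # rollback = redeploy the previous known-good tag
    export APP_TAG=v41 && docker compose up -d app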

But I think there is an opposite mistake too: assuming every production system should be operated like a high-scale tech company.

Many production workloads are boring, predictable, and business-critical. They do not need aggressive autoscaling, multi-node orchestration, or constant traffic-spike handling. They need reliable deploys, backups, monitoring, health checks, and a clear rollback path.

That is where Compose can be a good fit: simple operational model, understood failure modes, low moving parts.

Kubernetes becomes much more compelling when you actually need automated failover, rolling deploys, autoscaling, multi-node scheduling, and stronger deployment primitives.

Not needing Kubernetes is not necessarily denial, it is just choosing the complexity budget that matches the problem.


Definitely not a one-size-fits-all choice, but Kubernetes can be so easy and there are so many benefits that get you from one small app to a medium sized business that it seems like a no-brainer for someone starting out. Spinning up k3s is pretty minimal overhead, but right away you can handle storage and backups very easily, automatic certs for all your apps with cert-manager is pretty much a one-and-done, traffic management for external and internal tools is easy, and even logins for websites is just an annotation in a yaml file. You can spin up and try out any software you want without spending time configuring it or setting up additional servers- and when you do need more hardware, it's one command on a virtual server, and just about as easy with physical hardware.
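As a flavor of how small that YAML can be, here is a hedged sketch of an Ingress that gets an automatic TLS certificate from cert-manager (it assumes a ClusterIssuer named "letsencrypt" already exists, and "myapp"/"app.example.com" are placeholders; the login-via-annotation trick mentioned above is ingress-controller-specific and not shown):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        # cert-manager sees this and issues/renews the certificate
        cert-manager.io/cluster-issuer: letsencrypt
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
      tls:
        - hosts: [app.example.com]
          secretName: myapp-tls  # cert-manager stores the issued cert here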

2-3 miniPCs, cloudflare, tailscale, and k3s can save (possibly tens of) thousands on SaaS products, and would probably scale you to a company of dozens AND host your product.


>2-3 miniPCs, cloudflare, tailscale, and k3s can save (possibly tens of) thousands on SaaS products, and would probably scale you to a company of dozens AND host your product.

outline a simple real-world system to illustrate?


Sure!

Get a few Beelink SER5s or SER9s and install Nextcloud to cover files, document editing, and communications (to save on Microsoft 365). Then you can have Gitea (and Gitea Actions) for your source code and builds (skipping GitHub Enterprise), Harbor to host and scan your containers, Frappe for HR, etc. Pretty much anything you pay enterprise rates for, you can self-host a version that will get your company from 1 to 100s with minimal extra work. If it's not on https://github.com/awesome-selfhosted/awesome-selfhosted, you can probably vibe code it in a couple hours.

I just started to run a k3s cluster with an almost enterprise-grade software factory and a few (light) production workloads on a single cheap minipc.

https://scottyah.com/cluster


The concept totally works, but I would worry about using a Beelink in a business context where I had to support it.

For up to low hundreds of users I think you're better off just with 1 vertically scalable box for all the officey / web server workloads.

You mitigate the hardware failure stuff with a vendor contract where you can get someone on-site and get parts overnighted, and by keeping things super boring. Volume replication is not boring; avoid it at all costs. NAS or SAN if you have to, but keep all disks in the main box for as long as you can.

For a 20-person SME maybe a 2-bay Synology or similar; for a heavier company, a low-end 2U with hardware support. Proxmox under the OS for low-worry snapshots, rollback, backup, etc. Proxmox is there for operational flexibility; resist the temptation to create a network of VMs, you just need 1 CT or VM with all the workload inside it.

For container workloads on 1 host, Portainer works as well as k8s IMHO; it gives you the key property you want: you can IaC everything declaratively with terraform + compose over an API.

Caveat that if CI gets heavy you might need to scale that out but you can keep it stateless.


I checked your page. Wanted to ask, are you using Longhorn with k3s for replicated volumes? How beefy a box do you need for that (CPU/MEM/disk speed)?

I have several VMs in clouds with a similar k3s architecture to yours and am wondering if there are any benefits to installing Longhorn vs sticking to logical (postgres, mimir, whateveritis) replication instead.


I think people are using different meanings of “production environment.”

I agree with gear54us and upvoted their comment, but I also understand what the author of the root comment is saying.

I have also delivered systems using Docker Compose that are actually running in production. The point I want to make is that people may define “production” differently depending on the number of active users, operational requirements, and risk level.

To me, this debate feels similar to the broader monolith vs. microservices debate.


"Not just for my own projects but for $500 million dollar companies and more."

Seems reasonable to assume these are serious production environments, no?!


Not necessarily. When you get to those numbers you're seeing dozens of teams with their own silos and deployment methods. So they might be responsible for the core business that's running 30 nodes and serving 100MM users a day, or they might be working on some internal portal or a WordPress site.

When I mentioned that, it was for a company that got acquired by a bigger company. I can't give specifics on revenue/profits, but it is a 10+ year old online SaaS business and all of their web apps are served by Docker Compose with a non-trivial amount of direct customer-facing traffic.

Lots of data, caching, web apps, background workers and lots of various API integrations. No fancy React front-end, no fancy crazy system architectures. Just a typical LAMP stack but running in Docker Compose, cranking away serving value to customers with very good uptime and a very low cloud cost relative to revenue. With that said, a managed database was involved but all of the web traffic was served by apps running through Docker Compose with a simple git push model of deployment that handled thousands of deployments over the years without much fuss.


That as well as different definitions of scale. I've done small bits of consulting work for a research company for the past four years, deploying and managing Kubernetes clusters for them as well as helping get some of the main applications up on it. This is all internal tooling, though. Their customer-facing sites are just Drupal instances running on bare EC2.

Internally, though, they wanted to self-host a chat server, Apache Airflow, Overleaf for collaborative editing of research proposals, three separate Git servers, a container registry, and many other things, all with extremely strict multi-tenancy isolation requirements for storage and networking, because they're handling customer data and their own customers audit them for it. That was a hell of a lot easier to do with Kubernetes than trying to figure out some giant universe of barely related technologies with vastly different APIs, having to buy specialized appliances for network and storage that probably also need their own control plane software hosted somewhere else.

But if you just look at "scale" as number of http requests a particular URL gets per some unit of time, the customer-facing sites have far greater scale. If you're trying to attribute revenue, beats me. They wouldn't sell anything without the customer-facing sites, but they wouldn't have anything to sell without the internal tooling. Solo web devs get into this tunnel vision view of ops because, to them, often the web site is the product. That's not the case for most businesses.

And, of course, they'd probably just use someone else's SaaS for tooling. But if you're in a heavily regulated space where that isn't possible and you have to self-host most of your business systems, then what?


The post-receive hook gives you real-time feedback, since it runs in the terminal where you did the git push. If something broke during the deployment, you'd get notified by looking at the output. If it's running in CI, you'd see a CI failure and get notified through whatever existing mechanisms you have in place for deployment pipeline failures.
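For anyone who hasn't used this pattern, a minimal sketch of such a hook (the path, branch, and compose usage are hypothetical; adapt to your layout):

    #!/usr/bin/env bash
    # hooks/post-receive in the bare repo on the server. It runs during
    # `git push`, and everything it prints streams back to the pusher's
    # terminal as "remote:" lines.
    set -euo pipefail
    APP_DIR=/srv/myapp
    GIT_WORK_TREE="$APP_DIR" git checkout -f main   # update the working copy
    cd "$APP_DIR"
    docker compose up -d --build                    # rebuild + restart the stack
    docker compose ps                               # quick did-it-come-up feedback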

Zero-downtime server upgrades are easy. You could make a new server, ensure it's working in private, and then adjust DNS or your floating IP address to point to the new server when you're happy. I've used this pattern hundreds of times over the years to do system upgrades safely and without interruption. The only requirement is that your servers are stateless, but that's a good pattern in general.


Define production. Docker compose is fine for running a small internal service in production for dozens of users (i.e. not for developing said service, but for using it). I would assume it isn't fine to run a hyperscaler (but I wouldn't know). Those are extremes, and there are going to be a ton of situations in between.

I can't personally speak to what the limit of docker compose is, as I have only worked on the lower end of this: self hosting for personal use and for small internal services serving maybe 20 users.


From my personal experience, if the deployment strategy is thought through, then Docker running through Compose can handle a few hundred thousand users per day without an issue, and could probably handle more with proper hardware upgrades.

Most people/apps only need the toy cases... If you're writing an internal tool for a company that will only have a handful of users, then doing much more than Compose for deployments is a violation of KISS and YAGNI.

Are you really going to try to get 4+ 9's of uptime for a small, one-off app? Do you really need to use a cloud distributed data store that only slows things down for no real gains in practice? Do you really think the cloud services are never down, and you're willing to spend a f*ck-ton of money to create a distributed app when historically an Access DB or VB6 app would have done the job?

I've moved applications deployed via Compose pretty easily... compose down -t 30, then literally sftp the application to a backup location, then to the new server, which only needs the Docker Engine community stack installed... then compose up -d ... tada! In terms of deployment, you can use GitHub Actions runners if you want, or anything else... you can even do it by hand pretty easily.
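Spelled out, that move is roughly the following (hosts and paths are hypothetical, and rsync stands in for the sftp step described above):

    docker compose down -t 30                            # stop gracefully, 30s timeout
    rsync -a /srv/app/ backup-host:/srv/app/             # copy to a backup location
    rsync -a /srv/app/ new-host:/srv/app/                # copy to the new server
    ssh new-host 'cd /srv/app && docker compose up -d'   # tada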


I can manipulate space-time. Enter the on-ramp to the highway and accelerate. I am now aging slightly slower than you are.

I can also manipulate gravity by charging my phone. Since E=mc^2 my phone weighs slightly more when charged.
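For scale, assuming a roughly 15 Wh phone battery:

    E = 15 Wh ≈ 54,000 J
    m = E / c^2 ≈ 54,000 / (3×10^8)^2 ≈ 6×10^-13 kg

so "slightly" is doing a lot of work here: roughly half a nanogram.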


Hey, I saw your comment on my suggestion of switching to drama instead of switching animals on the Cocaine Bear thread and just wanted to correct an apparent misunderstanding of meth. I can't comment over there anymore because I missed it for too long, so I'm doing it here, which I hope isn't so against the rules as to annoy anyone. Meth is both excreted unchanged and metabolized, with the metabolites then excreted in urine, but the metabolites are pretty psychoactive as well. Metabolism happens primarily via the liver enzyme CYP2D6, which turns it mostly into amphetamine and 4-hydroxymethamphetamine, both of which are psychoactive. The end result is 30-50% of the methamphetamine excreted unchanged, with the rest as metabolites, most of which are the two mentioned and psychoactive themselves. The ratio is primarily determined by urine pH. The more you know (tm)

Uhm... Hm...

This person is a meth scientist? What. Okay. Don't want to know.


A comedian making a joke about a Cocaine Bear movie sequel, and me addressing factual inaccuracies in a reply to the joke. Not a meth scientist, just fascinated by biology and chemistry.

Hey, I am not shaming you. You do you, okay? If the apocalypse ever happens, we will need people like you.

I've got a time machine! It only goes forwards, reverse is currently broken, and right now it only goes at one second per second.

Mine's under the seat.

(Futurama: https://www.youtube.com/watch?v=v0Ns8EtR7RA)


TBH Meshtastic's code isn't great either. It's neat to play with but not robust.

It sucks how everything feels like a toy. I think Meshtastic is the closest thing to a “product”. They made a bunch of bad architectural decisions that are haunting them now, like how nodes broadcast their info.

It doesn't surprise me. This is a deep networking problem and very few CS people know anything about networking or how to design clean, fast, low-overhead network protocols and systems.

If IP were designed today the packets would have 500+ bytes of plain text JSON as headers and the spec would support hundreds of extensions.


Is there a better designed mesh project like those two getting built that you know of? Reticulum?

It's a fundamentally really hard problem that looks easy on the surface. There is no solution that works well beyond the small scale. Many people have tried. It's the same kind of thing that draws people to try to write IPv8.

Yeah, openmanet with reticulum seems the most “professional” right now

Heh nice, I have 4 openmanet nodes on HaLow right now

Have you seen that IPvwhatever proposal from a handful of weeks back that has OAuth/OIDC in the packet spec?

7 OSI layers were too many. What if we ONE BIG ONE!

Because they are toys. For real work it makes so much more sense to use the internet. With the new satellite tech you can reach the internet everywhere.

Mesh radio is a fun way to chat with radio nerds in your area. Not a serious infrastructure.


So what’s the real solution for when Starlink is too expensive and too high-power? I really want a solution for remote mountaineering communication that’s not just GMRS. And what about remote weather sensors? I really don’t need a full internet connection just to send a tiny payload every 5 minutes.

Meshtastic should be the obvious answer for this but in my limited experience the app(s) and code are buggy on even the most typical hardware. Wish it wasn’t the case but it is.


How remote is "remote"?

If you're talking about a few miles/km between nodes, plain old LoRaWAN might be more than sufficient, esp. for the sensor use case. The nice thing about using LoRaWAN is that it's literally providing an IPv6 overlay, so you can run e.g. MQTT or a text-based messaging protocol designed for regular TCP/IP use. UDP is preferable to avoid frequent session resets and keepalive traffic chewing up your available bandwidth.

Meshtastic and MeshCore can theoretically provide "infinite" range so long as there are peers between the nodes you want to connect. Theoretically, mobile peers can also serve as store-and-forward nodes so that reachability doesn't need to be constant, just frequent enough to handle the messaging you want to do.

I would absolutely not rely on either for a safety-critical application, though. If you want emergency comms in case something happens while you're out on the mountain, use a satellite communicator. There are a ton of these marketed for outdoor/portable use, and they have much more robust "SOS" capabilities (up to and including direct dispatch of search-and-rescue).


LoRaWAN seems interesting, but the documentation and availability are either "crypto hobby project from Seedstudio" or "strange telecom companies selling $900 base stations that still expect an internet connection (for licensing?)". Maybe I'm missing something, but LoRaWAN doesn't seem to sell itself very well when half the vendors are behind "contact for quote" pages.

Of course, for real emergencies I have a Garmin SOS device. It would just be "nice" to have something for local 2-5 km communication that doesn't need a clear view of the sky, works partially underground, etc. GMRS is "fine", but from a physics perspective a digital signal with chirp encoding should go further and be more reliable.

Seems like JS8Call or packet radio might be more in line with what I want. It's just surprising that something like Meshtastic hasn't replaced them.


You said it yourself:

> Of course, for real emergencies I have a Garmin SOS device.

That's why the mesh radio/LoRaWAN-type ecosystems suck. I don't mean to be rude or snarky; just to point out a very contextually relevant example against your argument.

For the average consumer who needs this functionality seriously, there's a proprietary (and often costly) solution. Subtract those mission-critical-remote-comms devices and you're left with hobbyist needs, so you get hobbyist-quality ecosystems.


Is there any implementation of the store and forward for mobile nodes?

From what I recall, MeshCore deduplication only tracks something like the last 256 messages, so it could quickly fail to deduplicate.


Meshtastic supports store and forward for ESP32 nodes that have a few MB of RAM, but not for the nRF52 devices that can't practically buffer much. I've only used the latter class of devices, so I don't have any experience with how well Meshtastic's store and forward works in practice.

Depends what exactly it is you want. But phones these days can communicate with satellites for emergency messaging.

I think people need to think more about what the actual scenario they have in mind is, because it seems most people think of mesh radio as some backup for the government shutting the internet down, when in reality it's almost useless for that since it's so easy to jam or flood mesh radio.


For emergency communication? Iridium, Zoleo, JS8Call, packet radio.

Not LoRa.


We may see a day when the internet is not available, or when interacting with it represents an unacceptable risk. It's a good idea to know how to set up your own.

In that day, whatever is jamming Starlink will just jam mesh radio too. It'll likely be even easier.

Probably not short-range connections. The application layer will have to change, but we can still have an internet that operates when we pass each other on the street or share an elevator--the primary bandwidth carrier being devices physically moved through space, with cross-device chatter being opportunistic.

Also, it might not be jamming. It might be that whoever is operating the satellites at the time denies access unless you enable inspection, and then sells that info to somebody who would hurt you--or whatever other can't-trust-the-middleman dystopia you care to imagine.


It's a different jamming scenario, however. Starlink is comparatively centralised, and reliant on both terrestrial (ground stations) and satellite communication. While the terminals themselves are sparse and widely distributed, the backbone infrastructure is far less so. It's possible to target the satellites, ground stations and critical service dependencies (e.g. GPS) rather than needing to target the hundreds of thousands/millions of terminals directly.

The mesh networks are dealing, by definition, with a sparse and widely distributed set of devices which are independently configured and controlled, and in their current widely available form they only deal with terrestrial communication. Without that point of centralisation you would need to focus on targeted regional jamming, as from a practical standpoint you cannot perform wideband RF jamming over an entire country - signal jammers don't scale that well, and geographic features come into play. As an example, you might effectively block mesh networks from operating reliably in a given city, but if people were to move outside of that area then the mesh would operate again. Geography is both a strength and a weakness here: a mountain range will impede direct communication with someone on the other side, but it will also have the same effect on jammers, which vastly increases the cost of deploying them in a ubiquitous fashion.


I suspect jamming LoRa could be a lot easier than most radio though. LoRa signals are incredibly weak and long range. A jammer which jams at a massively higher power level could cover a massive area. You can also just flood the network with messages that nodes will happily relay further for you.

That's a DoS attack, not "jamming". RF jamming usually relies on flooding frequencies with garbage which doesn't get interpreted as valid protocol traffic but does "crowd out" legitimate use.

The protocol-aware class of attack you describe does require some knowledge of the radio parameters being used, since LoRa runs on very narrow bands and uses both time and frequency-hopping to avoid congestion on any one virtual channel. They even apply (very basic) encryption to messages to prevent unknown senders from flooding the channel.

Unfortunately, both systems come preconfigured out of the box to use a default configuration which most users never override. So like cheap FRS/GMRS walkie talkies, all it takes is a few jerks who don't care about common use to overwhelm everyone with bogus messages. If you fire up a new device running the default Meshtastic firmware in any kind of dense urban environment, odds are it will more or less immediately get inundated with spam: "ping", "test", "hello from <neighborhood>", etc.

And since MT + MC both flood the shared channels to push messages across intermediary nodes, they pretty much self-DDoS by doing...nothing.


That’s really the killer for survivalist mesh ideas. It’s trivially easy to jam, and if it’s open it’s also easy to DDoS.

Jamming is done in military scenarios too, but in that case it’s limited by the fact that a jammer is a big transmitter painting itself with a big sign that says “fire missile here.” Civilian mesh doesn’t have that fallback.


Neglect is a bigger killer than active denial. If the Internet goes down it will likely be because a few execs decided to replace competent network admins with AI, or because all the competent network admins decided to quiet-quit because they aren't being paid jack compared to the folks hawking AI vaporware.

Battlestar Galactica opened my eyes to this problem more than the electronic warfare in games of the day did. It's freaky (read: terrifying) that we're getting to a point where people are starting to take "embedded information (and decision)" systems seriously enough to deploy them into meatspace.

True. But look at the situation in Iran. As much as the internet seems like an essential part of daily life, there is the possibility for governments to shut it down.

> not a serious infrastructure

I've been tinkering with the tech to make city-wide FLRC meshes joined together over the internet; my estimate is that it should be able to support at least thousands of users per region.


This has been tried with mqtt bridges in Meshtastic. But it’s ultimately kind of pointless because if you are planning some kind of internet alternative, you don’t want to build something that falls over the moment the internet goes down.

I know, I'm not too worried that I can't reach Billy in Ottawa, but you should still be able to text your mother six blocks away. /shrug

That works with just basic mesh radio. The internet bridges thing is tempting but ultimately a bit useless and doesn't push people to extend the mesh natively.

Don't get me wrong, I like the mesh/* ideas around everyone being able to prop up a router/repeater, but I've seen what that can do in an urban environment... unfortunately for some, I don't plan on letting every Tom, Dick, and Harry set up their own towers.

Usability-wise, MeshCore is better due to static routing and enabling (far) longer paths.

And also them calling out Andy for the key? Stupid.

The official Android app (blessed by the "community") still has in-app purchases up. It gates remote repeater management, afair one of the things Andy's MeshOS app for the TDeck is also gating.

The underlying protocol is open source, but the companion app isn't.

Yes, in the current version of the MeshCore app it's possible to manage the repeater without the key, after a wait period, but that changed only recently and they still nudge towards the in-app purchase.

Similarly Andy's firmware* can be used for free, without purchasing a key, unless the user wants the full functionality.

*is it even his, considering it's been AI-generated?

A big mess. Also, the network is a big mess; now I understand why.


This is not a Rust issue but an inherent issue with dependencies in all languages. External dependencies rot.

For Rust code for serious industrial use cases or firmwares, it's always best to minimize dependencies as much as possible to avoid this. Making local copies of dependencies is also a thing for certain use cases.
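For Rust specifically, `cargo vendor` is the stock way to make those local copies; it snapshots every dependency into the source tree and prints the config needed to use them:

    cargo vendor

    # then, as the command's own output suggests, in .cargo/config.toml:
    [source.crates-io]
    replace-with = "vendored-sources"

    [source.vendored-sources]
    directory = "vendor"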


There is a difference between C and Rust culture. Embedded C projects rarely have external dependencies, and in the rare cases where there are dependencies (e.g. most projects use vendor SDKs nowadays), they are pinned and there is an expectation of API compatibility anyway.

Rust, on the contrary, incentivises using dependencies, and embedded software especially is hard to write without external packages (e.g. cortex-m-rt, bytemuck and many others).


in what way is it incentivized by Rust?

imo it's just so much easier


Well, ease is one incentive, yes :)

Another is the complexity of the language when it comes to low-level programming. E.g. bytemuck I've mentioned before solves a problem that is hard to even explain to a C developer.


I think a big difference is that the less unsafe you want in your own code, the more you rely on crates to provide a safe abstraction for unsafe code in a centralized place where soundness holes are likely to be found.

Of course it was always understood that you could have bugs in C libraries and some of them may include memory unsafety, but the culture is very different when there's no explicit way to demarcate the parts of the code most deserving of scrutiny.
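A minimal sketch of the pattern being described (not bytemuck's actual code, just its shape): the unsafe reinterpretation lives in one small function whose preconditions are checked and documented, and everything else calls the safe wrapper.

    /// Reinterpret a byte slice as a slice of u32, or return None if the
    /// soundness preconditions (alignment, exact length) don't hold.
    fn bytes_as_u32s(bytes: &[u8]) -> Option<&[u32]> {
        let aligned = bytes.as_ptr().align_offset(core::mem::align_of::<u32>()) == 0;
        let sized = bytes.len() % core::mem::size_of::<u32>() == 0;
        if !(aligned && sized) {
            return None;
        }
        // SAFETY: alignment and length were checked above, and u32 has no
        // invalid bit patterns, so this reinterpretation is sound.
        Some(unsafe {
            core::slice::from_raw_parts(
                bytes.as_ptr().cast::<u32>(),
                bytes.len() / core::mem::size_of::<u32>(),
            )
        })
    }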


I think that was true maybe 5-10 years ago.

We have Rust code in a living code base that is more than 5 years old and it's required maybe one touch in the last 5 years to fix some issues due to stricter rules. It was simple enough it could have been automated.


The EU has the talent to ramp local production of panels and batteries in years, which as the parent said is how long a panel or battery embargo would take to really cause a crisis.

I mean the EU has ASML, the Large Hadron Collider, and ITER, among other things. There is no engineering talent problem.

If they couldn’t do it, it’s a political problem.


I’m more concerned that we do not have the supply chain. Like, sure, we have people who can build solar panels, but are the components local? I wouldn’t expect so; we would very likely import from China. Developing effective supply chains takes decades; it’s not really something you can do right away with the level of precision required by modern technology.

Look at how fast various nations ramped up advanced (for the time) military production before and during WWII, or the Manhattan Project, or the Apollo program, or China's rapid rise.

Engineers who know how to build factories, batteries, and solar panels could sit down and create a "war plan" to build out and scale infrastructure quickly if you asked them to do it and then got out of the way.

The EU has plenty of talent with the know-how to do this. If it couldn't be done even in a crisis situation, that's a political problem.


it’s my understanding that inefficient bureaucracy is the biggest stumbling block for rapid infrastructure or technological growth. engineers can get it done but the bottleneck will likely be how fast government bodies can move

it’s my understanding that inefficient bureaucracy has always been a significant stumbling block for infrastructure

panels themselves are highly simplified chip-like production. silicon crystals and some dopants. anyone can make extruded aluminum. anyone can build power electronics, make copper or aluminum wire.

the only interesting parts here from a supply chain perspective are power transistors. europeans have been known to design these, but idk how easy it would be to start producing them locally. they have macroscopic feature sizes though.

it would take several years of iteration to get a functioning pipeline that ran at volume, but none of this is hugely complicated. certainly not decades.

the real problem is financialization. you have to float that plant with the understanding that it's not going to be competitive.


China copied the US. Now the US should copy China. At least with some things, like industrial policy.

The US copied a lot of British technology in the late 18th and 19th centuries.

https://en.wikipedia.org/wiki/Industrial_Revolution_in_the_U...

Chinese industrial policy: dominate world manufacturing (consumer goods, light industry, heavy industry, hardware, software, everything); acquire technology and know-how by any means necessary (buy technology, companies, joint ventures, espionage, send students abroad and return them); move supply chains as much as possible to China (buy raw minerals, mines, mining rights, ship ores back to China for refining and processing); become independent of other countries as much as possible (prefer domestic coal, gas, oil, domestic synthetic fuels; in the long term minimize all imports).


That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.


But why? A Roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences these sensations and no longer responds to stimuli.

How is that different than a cell?


You simply defined consciousness as life, which seems like an unusual but also not very useful definition.

> an unusual ... definition

I don't think it's that unusual. It seems to me just to be a narrower version of panpsychism:

https://en.wikipedia.org/wiki/Panpsychism


Someone who has recently died has pretty much the same biology as when they were alive. The consciousness is the main difference, I would say.

I think this gets to the conflation we naturally make between consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not; a tree acts more like a clonal colony than a single organism.

It may be helpful here to ask: at what point does a sense of self, of varying degrees, become evolutionarily advantageous?

An animal that doesn't have some kind of pair bond or social arrangement, and doesn't raise its young, has a lot less need for some of this emotional hardware than we do.

Whereas K-selected species that raise their kids have broadly the same need for it as humans.

That doesn't categorically mean it evolved with the first pair-bonding K-reproducer, or that birds have parallel-evolved emotional hardware like ours, but there's plenty of behavioural evidence there - the last common ancestor of birds and humans was small-brained and primitive, but investing in individual children probably evolved around the time of amniote eggs, just because they were so much more biologically expensive to produce than amphibian or fish eggs.


Is someone tripped out on mushrooms, experiencing ego death and total disruption of their sense of self, still conscious? They may even contend they are more conscious than in normal life, what with all the communing with the universe and whatnot.

Trees react to the world around them in many ways.

