Great intro that quickly explains the reasoning for the proposed new measure:
> Virtually everyone would agree that a 20-meter tree is twice as tall as a 10-meter tree. Conversely, everyone would agree that the 10-meter tree is twice as short as the 20-meter tree. There is no threshold or “shortness line” above or under which these relationships cease to hold: a 5-meter tree is twice as short as a 10-meter tree, a 1-meter tree is twice as short as a 2-meter tree, and so on. This reasoning remains valid when considering other multiples: a 1-meter tree is three times shorter than a 3-meter tree. To be sure, when assessing the height of a single tree, different people may disagree whether it is short or tall, as their judgment will depend on the benchmark they use for their assessment. However, when comparing two different trees, virtually everyone would make similar cardinal comparisons. In mathematical terms, shortness is the reciprocal of tallness. [...] In this paper, I apply the same logic to define a new poverty measure
And it's silly. A person earning $100 a year is not "twice as poor" as a person earning $200 in any meaningful sense; both are extremely poor and will require essentially the same amount of public support. But this metric treats the difference as so huge (80 hours to earn $1 vs 40) that it drowns out any differences in the rest of the income distribution.
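To make the objection concrete, here's a back-of-the-envelope sketch (the incomes are made up) of how a reciprocal-of-income measure behaves:

```python
# Hypothetical illustration: if poverty is measured as the reciprocal of
# income, differences at the very bottom of the distribution dominate
# everything else in the aggregate.

def poverty(income):
    """Reciprocal-of-income poverty score (higher = poorer)."""
    return 1.0 / income

# Two extremely poor people who need roughly the same support...
gap_bottom = poverty(100) - poverty(200)        # 0.01 - 0.005

# ...versus two comfortably middle incomes.
gap_middle = poverty(20_000) - poverty(40_000)  # 0.00005 - 0.000025

print(gap_bottom / gap_middle)  # 200.0: the bottom gap dwarfs the rest
```

So a $100 difference at the bottom weighs 200 times more than a $20,000 difference in the middle, which is the "drowns out" effect described above.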
If they truly did, there wouldn't be a huge amount of humans whose role is basically "Take what users/executives say they want, and figure out what they REALLY want, then write that down for others".
Maybe I've worked for too many startups, and only consulted for larger companies, but everywhere in businesses I see so many problems that are basically "Others misunderstood what that person meant" and/or "Someone thought they wanted X, they actually wanted Y".
> Our name for this new CMS is EmDash. We think of it as the spiritual successor to WordPress. It’s written entirely in TypeScript. It is serverless, but you can run it on your own hardware or any platform you choose. Plugins are securely sandboxed and can run in their own isolate, via Dynamic Workers, solving the fundamental security problem with the WordPress plugin architecture. And under the hood, EmDash is powered by Astro, the fastest web framework for content-driven websites.
To me this sounds like the polar opposite of the direction CMSs need to go. Instead, simplify and go back to the "websites" roots, where a website is static files hosted wherever: it's fast, easy to cache, and just so much easier to deal with than server-side rendered websites.
But of course, then they wouldn't be able to sell their own "workers" product, so suddenly I think I might understand why they built it the way they built it, at the very least to dogfood their own stuff.
I'm not sure it actually solves the "fundamental security problem", though. I guess that remains to be seen.
I love building static (or statically generated) websites, but all too often, customers want dynamic content. And what's worse, they don't tell you up-front, because they don't really understand the difference.
"I need a website for my bakery". "What's supposed to be on it?" "Our address, opening times, a few pictures". I build them a static website.
"Now I need a contact form". Ok, that doesn't really fit into a static website, but I can hack something together. "Now I need to show inventory, and allow customers to pre-order". A static website won't cut it anymore.
When you develop for clients, especially those that you don't know very well, it's a bad idea to back yourself into a corner that's not very extensible. So from that perspective, I really get why they give plugins such a central spot.
a friend of mine owns a very popular psych/stoner label
until 3 days ago the website was a bunch of static pages, updated by the "webmaster", no shopping cart, no search, no contact form, just the email on the website
he and his employees have been living off selling records and band merchandising for more than a decade, before he even created a real company
wanna buy a record?
press a button that sends you to the paypal cart
wanna pre order?
there is a preorder product on paypal, where you can put in your shipping address, and when it's ready, it'll be shipped to you
he's been selling in Europe and overseas in the US since the day he started
Now it's gotten to the point where he needed to show different currencies for different regions, plus taxes and tariffs (UK, USA), so he built a new website that (automatically, I guess) shows the prices in the local currencies and stuff like that
This is the main reason why WordPress is so popular still to this day. You can cache the crap out of the frontend to the point that it’s basically a static site at that point but then it’s still all running on top of a dynamic platform if you need that flexibility in the future.
I got my start in webdev slinging WordPress sites like a lot of self taught devs and I definitely see the pain points now that I’ve moved on to more “engineering” focused development paradigms but the value proposition of WP has always been clear and present.
Given how WP leadership is all over the place at the moment, I can see how Cloudflare sees this as an opportunity to come in and peel away some market share when they can convince these current WP devs to adopt a little AI help and write applications for their platform instead.
This is the most underrated point about CMS choice. The cost of migrating from static to dynamic is way higher than starting with a dynamic platform and caching aggressively. WordPress won precisely because you could start with a $5 blog and end up with a full e-commerce site without a rewrite. The question is whether EmDash can match that upgrade path while actually keeping the security promises.
I think this is true. However, the non-coding clients I've worked with really do like the ability to make minor edits to a site with a UI rather than having to continually ping a developer.
The problem with WordPress (and it looks like this solution largely just replicated the problem) is that it's way too cumbersome and bloated.
It really is unlike any modern UI for really any SaaS or software in general.
It's filled with meaningless admin notices, the sidebar is 5 miles long and about 98% of what the user sees is meaningless to them.
Creating a very lightweight, minimal UI for the client to edit exactly what they need or like you said, just static files really is the best solution in most cases. The "page builders" always just turn into a nightmare the clients end up handing over for a dev to "fix" anyways.
Not sure why so many people feel the need to continue on the decades of bloat and cruft WordPress has accumulated, even if it's "modernized."
There are two types of WordPress sites from my perspective as someone who got their start in webdev in that ecosystem.
The first and arguably largest is exactly what you describe. Little sites for small businesses who just want an online presence and maybe to facilitate some light duty business development with a small webshop or forum. These sites are done by fly by night marketers who are also hawking SEO optimization and ads on Facebook and they’ll host your site for the low low price of $100/mo while dodging your phone calls when the godaddy $5/mo plan they are actually hosting your site on shits the bed.
The second, and more influential group of WordPress users, are very large organizations who publish a lot of content and need something that is flexible, reasonably scalable and cheap to hire developers for. Universities love WP because they can set up multisite and give every student in every class a website with some basic plugins and then it's hands-off. Go look at the logo list for WordPress VIP to see what media organizations are powered by WP. Legit newsrooms run on mostly stock WP backends but with their own designers and some custom publishing workflows.
These two market segments are so far apart though that it creates a lot of division and friction from lots of different angles. Do you cater to the small businesses and just accept that they'll outgrow the platform someday? Or do you build stuff that makes the big publishers happy because they pay for most of the engineering talent working on the open source project more generally? And all that while maintaining backwards compatibility and somewhat trying to keep up with modern-ish practices (they did adopt React after all).
WordPress is weird and in no way a monoculture is what I guess I’m trying to say.
4) Don't use a stack of plugins; if you must use any, keep them as dumb as possible and stick to those with a longstanding reputation.
A basic instance, set to auto-update, installed on a shared webhost where OS/web server updates are someone else's problem is pretty foolproof. A VPS running a long-term distro set to auto update is almost as good.
---
That said I personally dropped Wordpress for static site generation years ago because I realized I didn't actually need any of the dynamic features and wasn't using the WYSIWYG editor. Now I write Markdown in to a file in a git repo and then trigger a regeneration whenever I update it.
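For what it's worth, the hook logic for that workflow is tiny. Here's a sketch where the `hugo` command and the Markdown-only trigger are just my assumptions; swap in whatever generator you actually use:

```python
# Sketch of a git post-commit hook's decision logic: regenerate the
# static site only when Markdown content changed. The "hugo" command is
# an assumption for illustration; any SSG invocation would do.
import subprocess

def needs_rebuild(changed_paths):
    """Rebuild only if any committed file is Markdown content."""
    return any(p.endswith(".md") for p in changed_paths)

def on_commit(changed_paths):
    if needs_rebuild(changed_paths):
        # regenerate and (optionally) upload; check=False so a failed
        # build never blocks the commit itself
        subprocess.run(["hugo", "--minify"], check=False)

# the changed paths would come from `git diff --name-only HEAD~1 HEAD`
```

The point is that the "CMS" collapses down to a text editor, git, and one small trigger.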
I wrote my own CMS, as the core WordPress functionality wasn't too much to replicate.
But eventually the WordPress ecosystem was too strong, and the real value proposition was plugins and familiarity. That continues to be true to this day, which is why no CMS has de-throned WordPress in spite of significantly better UX, architecture and developer experience. None of it matters when the client has a suite of plugins they have been using for 10+ years, that are now core to their business.
> To me this sounds like the polar opposite of the direction CMSs need to go. Instead, simplify and go back to the "websites" roots, where a website is static files hosted wherever: it's fast, easy to cache, and just so much easier to deal with than server-side rendered websites.
To me this wording is strange, since traditional web frameworks do render pages server-side. The specific functions of their templating engines are often even called "render" (https://jinja.palletsprojects.com/en/stable/api/#jinja2.Temp...) or "render_template" or similar (https://docs.djangoproject.com/en/6.0/topics/templates/#djan...). I guess "server-side rendered" has been co-opted by the JS ecosystem for some time now, as if they had come up with the very idea of rendering pages on the server side.
It would do the world some good, if people could just look at a technical term, understand its meaning by its components, and then not go: "Ah yes, I will use the same term, but no, no, no, I mean something different by that!"
For this example:
(1) "server-side": happening on the server
(2) "rendering pages": various meanings in different contexts, but on the web meaning filling in information and creating parts of the HTML tree, to get a full HTML document.
This has been done for decades, and the results are usually, for the browser, static web pages. Static as in the opposite of dynamic, where dynamic means that the pages react to user interaction, meaning scripting, meaning JS.
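In that traditional sense, server-side rendering is just this. Sketched here with the stdlib's `string.Template` so it's self-contained; Jinja2's `Template.render` or Django's `render_template` do the same job with far richer templating:

```python
# Server-side rendering in the traditional sense described above: the
# server fills in a template and ships finished HTML. The template text
# is made up for illustration.
from string import Template

page = Template("<h1>$title</h1><p>$body</p>")
html = page.substitute(title="Hello", body="Rendered on the server.")
print(html)  # the browser receives plain HTML, no JS required
```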
If it uses Astro, then it's literally a static website generator, but with modern React components available if you need anything on top of that. Same with plugins: I assume people don't have to use them, but the important thing is that you can if you want to.
Distribution of the content as static html or in any other format is a very tiny aspect of managing content and mostly a solved problem for any CMS nowadays. Focusing on that minimal aspect seems grotesque as there are much bigger challenges in making potentially large amounts of content actually manageable by a potentially very heterogeneous group of content creators with varying skills, responsibilities and relationships.
But "back to CMS roots" is absolutely not what the WordPress ecosystem is about. It's about the absolute galaxy of plugins that provide you with an entire digital experience "in a box". You can just install whatever plugins for ecommerce, CRM, forms management, payments, event calendars. They will all plugin to both the template system and the MySQL database. There are a lot of well-known and reputable plugins with huge installed bases (woocommerce, gravity forms, yoast seo) but there's a ton of shady ones that can infect your install. Cloudflare is directly addressing the shortcomings of the existing plugin architecture indicating they intend for EmDash to fill a similar niche as an All-in-One digital experience and not just a simple CMS.
The question then is whether they'd be building some brand-new thing not compatible with WordPress. Supposedly the proposition is to steal people away from WordPress, not just to attract people building something from scratch who are looking for a new framework. I'm guessing the recent lawsuits also provide some momentum.
It's not compatible with WordPress, though. It slurps a WordPress export, which is quite literally static data. They expect you to code up anything dynamic using their agent skill.
You either write them by hand, or use a tool that generates it locally, upload everything and you're done. Perfect security. Great performance.
It's in this sense that static generators go back to the source: they simply produce dumb HTML files that you upload/publish to a web server that doesn't need to run any code. Just serve files.
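A toy generator in that spirit, where the page names, content, and output directory are made up for illustration:

```python
# A minimal "dumb HTML" generator: turn a dict of pages into plain
# files you can upload to any web server that just serves bytes.
from pathlib import Path
from string import Template

LAYOUT = Template("<!doctype html><title>$title</title><main>$body</main>")

pages = {
    "index": ("Home", "<p>Welcome to the bakery.</p>"),
    "hours": ("Opening times", "<p>Mon-Sat, 7am-6pm.</p>"),
}

out = Path("public")
out.mkdir(exist_ok=True)
for name, (title, body) in pages.items():
    # each page becomes a plain file; no code runs at request time
    (out / f"{name}.html").write_text(
        LAYOUT.substitute(title=title, body=body)
    )
```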
Imho a CMS is just a tool that generates static HTML files on the server. The distinction is a bit artificial: CMSes have static HTML caching, and CDNs will let you "one-click" firewall the dynamic administration and cache the static HTML for you.
Static website generators are a cool way for programmers to do that work on their own machines, but in the end the difference in what gets served is very small (if you set up the basics).
CMSs allow non-technical people to update the site - that's why WordPress, Drupal, and all of the shambling corpses of "digital experience platforms" still command the dollars and eyeballs that they do.
Go ahead and give your content people access to a static site builder and see how quickly the process falls apart. Static site generators are perfect for engineers but terrible for the marketing people that are the actual "customers" of your public-facing website.
I used Hugo, told the marketing people to send me a markdown file and I'd load it up to Hugo. That was clearly too painful for them. So I told them to send me a Word doc and I'd convert it to markdown and load it up. That was too painful. I told them to send me an email with the words and images and I'd work out the rest. That was too painful.
They got some marketing agency to rewrite the entire marketing site in Wordpress, and then we had to implement some godawful kludges to get our backend to redirect to their shitty WP host for the appropriate pages. It was awful.
But the marketing folks were finally happy. They could write a blog post (that no-one read) themselves in the actual CMS and see it go live when they pushed the button.
We spent thousands, in a cash-strapped startup, dealing with this bullshit.
Reminds me of Vercel and Next.js, where a popular framework's design is constrained by, or runs optimally on, their infra, but then comes with pains or unusual behavior if self-hosted (e.g. middleware). Vendor lock-in plays are a big red flag.
It looks like they designed it so you can plug in local components of your choice, though? The security model does assume you have MAC-containerized environments available at your fingertips, so something like DHH's Once is probably a soft minimal dependency if you want to do it yourself.
> it’s disturbing to see people clamoring to deny others their freedom in a FOSS context
How does "allow building Linux to be IPv6-only" somehow "deny others their freedom" exactly? I'm willing to wager most distributions will still be dual v4+v6, but if they aren't, isn't that something for you to bring up with your distribution rather than that the kernel just allows something?
Coupling this patch with language about “legacy IP”, along with the follow up comments from the person who submitted the patch, it is clear that the submitter is hostile towards IPv4. I also see hostility towards IPv4 in the comments here and other similar discussions.
I have no problem with allowing optional IPv4 or IPv6 only builds as long as both are kept well-maintained.
> it is clear that the submitter is hostile towards IPv4
But so what? It still doesn't remove v4, in any shape or form, and if that was proposed to the kernel, I'm again fairly confident it'd be rejected.
> I also see hostility towards IPv4 in the comments here and other similar discussions
Ah, yeah, that might be. I just saw your comment first, with no context for what you were actually answering, so it looked like you were replying "to the submission", which really isn't denying any freedoms. I guess I was confused about that, my bad. Still, wouldn't it be better to reply directly to those comments, rather than "replying" to an argument/debate that is actually happening elsewhere?
Somehow IPv4 versus IPv6 has become one of those noxious political-technical debates like Android versus Apple or GPL versus BSD/MIT, in which both sides are dug in and think that the other side must be destroyed.
The reason that I don’t like seeing patches like this, even as a “joke”, is that there are real people who would like to see IPv4 removed (possibly by government intervention) in order to achieve their dream of an IPv6 only internet. The whole idea is preposterous, but here we are. It’s about as realistic as banning cars but that doesn’t stop the endless flame wars about it.
Someone has to step in to point out that v4 and v6 were designed to coexist, this is fine, please don’t remove common standards for your personal preferences.
The website mentions "giving you full control over performance"; what are those knobs and levers, exactly? What do those knobs and levers influence, and what sort of tradeoffs can you make with the provided controls?
Unlike other UI libraries, I would say Sycamore has a very clear execution model. If you've used something like React before, there's all this stuff about component lifecycles and hook rules, where component functions run over and over again when anything changes. This can end up being fairly confusing and has a lot of performance footguns (looking at you, useRef and useMemo).
In Sycamore, the component function only ever runs a single time. Instead, Sycamore uses a reactive graph to automatically keep track of dependencies. This graph ensures that state is always kept up to date. Many other libraries also have similar systems, but only a few of them ensure that it is _impossible_ to read inconsistent state. Finally, updates propagate eagerly, so it is very clear at any time when any expensive computation might be happening.
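Not Sycamore's actual API, but a rough sketch of the fine-grained-signal idea: the body runs once, reads are tracked automatically, and writes propagate eagerly to everything that read them:

```python
# Conceptual sketch of a reactive graph with eager propagation.
# Names (Signal, create_effect) are illustrative, not Sycamore's API.
_current = None  # the effect currently being (re)run, if any

class Signal:
    def __init__(self, value):
        self.value = value
        self.subs = set()

    def get(self):
        if _current is not None:
            self.subs.add(_current)     # dependency recorded automatically
        return self.value

    def set(self, value):
        self.value = value
        for effect in list(self.subs):  # eager propagation on every write
            effect()

def create_effect(fn):
    def run():
        global _current
        _current = run
        try:
            fn()                        # reads inside fn register deps
        finally:
            _current = None
    run()  # the "component body" runs once; reruns are driven by signals

count = Signal(0)
log = []
create_effect(lambda: log.append(count.get()))
count.set(1)
count.set(2)
print(log)  # [0, 1, 2]
```

Because propagation is synchronous and eager, you always know exactly when a write triggers recomputation; that's the tradeoff versus React's rerun-the-whole-function model.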
Dioxus originally was more like ReactJS and used hooks. However, they have since migrated to using signals as well which makes Dioxus and Sycamore much more similar.
One remaining major difference is that Dioxus uses a VDOM (Virtual DOM) as an intermediary layer. This has a few advantages such as more flexible rendering backends (they also support native rendering for desktop apps), at the cost of an extra layer of indirection.
Creating native GUI apps should also be possible in Sycamore, and something I'm interested in although there is currently no official support. However, I think one of the big differences with Dioxus would be that Dioxus supports "one codebase, many platforms" whereas I think that is a non-goal with Sycamore. Web apps should have one codebase, native apps should have another. Of course, it would still be possible to share business logic but the actual UI code will be separate.
How does it compare to Leptos? Leptos is roughly based on SolidJS and uses signals to enable fine-grained reactivity and avoid a VDOM. Why Sycamore over Leptos?
With Tauri you also get the freedom of choosing frontend frameworks and can reuse existing frontend code/skills. Yes React has issues, for example Svelte handles reactivity in a much better way. I don't see real benefits of re-implementing the whole thing in Rust.
A word to the wise: similar to how foam is mostly air, Tauri is mostly marketing. Most of those 15MB "lightweight" bundles expand to 2 GB+ RAM in practice. Of course, devs still shamelessly (ignorantly, in all likelihood) call the apps "lightweight", while taking up, say, 6 GB of RAM for a 5 button UI. Tauri have also proven reticent [0] to correct the record. One supposes the sole advantage of sharing memory with other Tauri apps is not a sufficient sell to overcome Electron's single-browser-engine advantage.
A pure Rust app takes up ~60 MB for the same UI, with a large portion of that going towards graphics (wgpu table stakes).
You can't fit browser JS ergonomics into Rust and expect zero friction, because once you wire up a stateful UI with the kind of component churn you get in React, you spend more time satisfying the type system, and you also give up hot reload plus a decade of npm junk for odd corner cases.
> What I am most concerned about is the maintainability of the project and how we will get this live.
I'm not sure if it's something that got "lost in translation" or whatever, but are you really saying this project has been under development for more than a year, yet no one has attempted to deploy it to a live environment? If so, it's understandable that you're concerned about it. A lot of the time when I jump onto projects that got stuck in development hell in order to unblock them, this is a huge thing that gets in the way for teams. My approach is usually to focus on getting the whole "know we want a change -> implement change -> deploy to test -> testing -> deploy to production" process down first, before anything else, together with practicing at least one rollback.
It really ties into everything you do when working on a project, as this process basically decides how confident you can be about changes, and how confident you can be that bad changes can easily be rolled back, even in production.
Besides that, having non-technical people trying to contribute to a technical project, is a great way for those people to unintentionally damage how well technical people can actually work on the project. I think, explaining to them exactly what you said here, that it isn't feasible long-term, that it's hard for you to have a clear mental model if they're just chucking 10K PRs at you and that you need to understand the code you deploy, should be enough to convince them. If it doesn't, you might want to ask yourself if that's the kind of environment you want to work in anyways.
The project is deployed to a test and "live" environment, but since it is a rebuild of a very old project that is currently running their business, we don't have to be in production yet. They needed the rebuild because the project currently in production is not maintainable anymore because of (ironically) technical debt. I agree it is still a weakness that it is not in production, and it takes a strong vision from their side to invest for one or two years in a project without seeing any revenue. However, the environment does not feel right; I've rarely felt such a misalignment on a software project.
> If you're not using comments, you're doing agent coding wrong.
Comments are ultimately there so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to write no comments at all (and not every model can actually skip them, looking at you, Gemini), you're doing agent coding wrong.
You're wasting context re-specifying what the code should already say; defining an implementation once should be enough. Otherwise, try another model that can handle programming correctly.
"DSLs" can both mean "Using the language's variant of 'arrays' to build a DSL via specific shapes" like hiccup in Clojure does, and also "A mini-language inside of a program for a specific use case" like Cucumber is its own language for acceptance testing, but it's "built in in Ruby" in reality.
Clojure favors the "DSLs made out of shapes" rather than "DSLs that don't look/work like lisp inside of our programs".
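The first kind is just plain data plus a tiny interpreter. Here's a hiccup-ish sketch where the `(tag, attrs, *children)` convention is my own made-up shape, not hiccup's actual vector format:

```python
# A "DSL made out of shapes": the document is ordinary nested tuples
# and strings, and render() is the interpreter that walks them.
def render(node):
    if isinstance(node, str):
        return node  # text nodes pass through as-is
    tag, attrs, *children = node
    attr_s = "".join(f' {k}="{v}"' for k, v in attrs.items())
    inner = "".join(render(c) for c in children)
    return f"<{tag}{attr_s}>{inner}</{tag}>"

doc = ("ul", {"class": "nav"},
       ("li", {}, "Home"),
       ("li", {}, "About"))
print(render(doc))
# <ul class="nav"><li>Home</li><li>About</li></ul>
```

Because the "program" is just data, you can build, inspect, and transform it with the host language's ordinary functions, which is exactly why this style stays close to Lisp rather than inventing new syntax.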
Yes, maybe that's the sort of DSL you're talking about. The other person mentioned "Clojure style discourages building DSLs", which I'm fairly sure is about the other kind of DSL and is also true, hence the whole "you're talking/reading past each other".
> Clojure style discourages building DSLs and the like and prefers to remain close to Clojure types and constructs
This, to me, seems to indicate they're talking about "DSLs not built with Clojure types and constructs". I'm just trying to have the most charitable reading of what people write and help you understand why it seems you're not actually disagreeing, just talking about different things.