I’ve been using OPNsense/pfSense [0] for years and would highly recommend it. It has a great automatic update experience, config backups, built-in WireGuard tunnels, and advanced features like intrusion detection via Suricata.
When I am doing network management on my weekends, I’m so glad I’m not stuck in the Linux terminal learning about networking internals and can instead just go to a webui and configure my router.
I agree in principle, but I often find that the GUI abstractions don't always map to the Linux tooling/terminology/concepts, which often ends with me bashing my head against the wall thinking "this is Linux, I know it can do it, and I can do it by hand, but what is this GUI trying to conceptualize?!?!"
I was recently introduced to a Barracuda router, and bashed my head against the wall long enough to discover it had an SSH interface and a Linux userland, and was able to solve my immediate problem by directly entering the commands to get it to [temporarily] do what I needed. (Of course, using the GUI to reapply settings wiped my manual configuration...)
I've used pfsense, OpenWRT, Barracuda, Verizon's OEM router (Actiontec) and they all represent the same functionality wildly differently.
> I've used pfsense, OpenWRT, Barracuda, Verizon's OEM router (Actiontec) and they all represent the same functionality wildly differently.
Worth noting that pfSense (and OPNsense) are not Linux-based, they're based on BSD, specifically FreeBSD. While it's possible to have standard router OS web UIs that are cross platform, the underlying technology is different, so it's not really a surprise that there will be differences in how the devices running these OSes are configured.
The primary reason I stick to iptables instead of nft is that I already learned iptables decades ago, and some software I interact with still defaults to iptables and/or does not have full support for nft.
Why do you doubt the sanity of people sticking to iptables? What makes nft compelling?
My main reason is that nft applies configs atomically. It also has very good tracing/debugging features for figuring out how and why things aren't working as expected.
That said, I think many distros are shipping `iptables` as the wrapper/compatibility layer over nft now anyways.
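For what it's worth, the atomic-apply difference is concrete: with nft you load a whole ruleset from a file and it either all applies or none of it does, instead of mutating rules one command at a time. A minimal sketch (the table/chain names and ports here are illustrative):

```
# /etc/nftables.conf -- illustrative ruleset
flush ruleset

table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iifname "lo" accept
    tcp dport { 22, 80, 443 } accept
  }
}
```

Applying it with `nft -f /etc/nftables.conf` replaces the ruleset in one transaction; a syntax error anywhere leaves the currently running rules untouched, which is exactly what a sequence of individual iptables commands can't guarantee.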
As someone who recently switched from iptables to nftables on one of my machines, the only things that are clearly better with nftables are sets and maps...
And, like, maybe I'm missing something, but I've found that sets are insufficiently powerful and maps are insufficiently well-documented. You can't have nested sets... that is, sets that are defined (partially or completely) in terms of other sets. You also can't share sets across tables (or have "global" sets)... so that list of interfaces that'd be really good to apply to all of your rules? Yeah, you've gotta duplicate it in every damn table. And maps? My big beef with them is that the documentation makes two things very unclear:
1) What part of the nftables rule is going to do a lookup of the key in the map and what part will get the value. Like, seriously. Check out the nft(8) man page and look at their mapping examples. The k:v selection and insertion logic is clear as mud. I can guess a couple of possible interpretations, but if they explicitly state the logic, I must have skipped over it.
2) If it's even possible to have a multi-component key, to -for example- cook up a "verdict map" that fills out the statements:
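For what it's worth, a multi-component key does seem to be expressible as a concatenation (the `.` operator), including in verdict maps, though I agree the man page makes you dig for it. A sketch, assuming a reasonably recent nft:

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    # concatenated key: (protocol, destination port) -> verdict
    meta l4proto . th dport vmap { tcp . 22 : accept, udp . 53 : accept }
  }
}
```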
You also lose the really nice tabular status display that 'iptables -L -n -v' provides you... instead you get a nested abomination that (on the one hand) thankfully isn't fucking JSON, but (on the other hand) isn't JSON, so you have to cook up a processor if you want to transform it. You also lose the really nice, well-thought-out CLI help text for doing basic shit, like "List the goddamn rules in the fucking ruleset". Even the nft(8) man page takes its sweet time getting around to telling you how to do that really fundamental task.
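To be fair, nft does have a machine-readable mode: `nft -j list ruleset` emits JSON, so the processor you have to cook up is at least mechanical. A minimal sketch in Python; the sample document here is hand-written to mimic libnftables' schema rather than captured from a live system:

```python
import json

# Sample mimicking `nft -j list ruleset` output: a top-level "nftables"
# array of single-key objects (metainfo, table, chain, rule, ...).
SAMPLE = json.dumps({"nftables": [
    {"metainfo": {"json_schema_version": 1}},
    {"table": {"family": "inet", "name": "filter", "handle": 1}},
    {"chain": {"family": "inet", "table": "filter", "name": "input",
               "handle": 1, "type": "filter", "hook": "input",
               "prio": 0, "policy": "drop"}},
    {"rule": {"family": "inet", "table": "filter", "chain": "input",
              "handle": 2, "expr": [{"counter": None}]}},
]})

def rules_table(doc):
    """Flatten every rule object into (family, table, chain, handle)."""
    return [(r["family"], r["table"], r["chain"], r["handle"])
            for obj in json.loads(doc)["nftables"]
            if (r := obj.get("rule"))]

for row in rules_table(SAMPLE):
    print("%-6s %-10s %-10s %d" % row)
```

From there it's a short hop to whatever tabular display you miss from 'iptables -L -n -v'.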
"The CLIs are much less nice to use" is kind of a theme I've noticed with some of these replacement networking-management tools. 'bridge' is way less nice to use than 'brctl' [0], 'ss' is quite a bit more obnoxious than 'netstat', etc, etc.
Though, to be clear, I find 'ip' to be a much better tool than 'ifconfig'... at least the Linux version of 'ifconfig'. Maybe the BSD version is great.
[0] It doesn't help at all that you have to use both 'ip' and 'bridge' to manage bridges.
Are they? I recently had to learn nftables and they seem to be iptables but with a slightly nicer syntax and without pre-defined chains. But otherwise, nftables directly maps to iptables and neither of them seem similar to pf.
I guess I'm different. I typically want my router/firewall/network services box to Just Work. I've made a career in deep-in-the-weeds system administration and engineering. Having to hunt down man pages, examples, tutorials, etc for the dozen or so fiddly bits that make up a modern Linux- (or BSD-) based router was fun the first time, not so much the 10th. Been there, done that, got the t-shirt.
I will concede that the OPNsense UI is far from perfect. I would really like to see a device-centric view that lets me set all the things related to that device from one screen (or possibly one screen with multiple tabs). For example, if I add a Roku device to my network, I want to enter in the MAC address and then be taken to a screen where it will let me set the hostname, pick a static IP address, hand it a specific DNS resolver IP, see all of the traffic going to/from the device, only allow it access to the Internet during certain hours, etc. All of this currently requires jumping around between multiple disconnected parts of the OPNsense UI.
I feel almost exactly the same as you on the subject. When I was young and starry-eyed I built my own router out of a PC running OpenBSD, all by hand. Nice learning experience, interesting OS, but definitely not maintenance-free, especially around system updates, as back then OpenBSD package and system upgrades required recompiling everything. Now I do the same mini-PC thing as the OP's article, but I just put OPNsense on it. Agree the UI can be maddening at times, but the thing is rock solid and has very polished update and upgrade mechanisms. Built-ins/plugins are great: Unbound, WireGuard, OpenVPN, Suricata, backups to Git, etc. Also I like that it is BSD-based; my network experience was learned on Ciscos and Junipers in an ISP setting, and Linux networking has always driven me crazy.
I've been running OpenBSD as a router for almost 20 years I think? These days, the only ongoing maintenance it requires of me is running `syspatch` and `pkg_add -u` periodically to keep things up-to-date, and then `sysupgrade` when a new release comes around. It's way more hassle-free than in the old days.
I had a similar experience with FreeNAS (now called TrueNAS): I'm sure it's great for some people, but I ended up fighting the abstraction layer way more than I benefited from it. I personally found it easier to just run Samba on plain FreeBSD/OpenZFS.
I'm at a stage where I don't want to be doing network management on my weekends. I have a Ubiquiti router that's pretty good, and for my router I'd like something like TrueNAS for my NAS, a distribution that completely turns the hardware into an appliance I can configure once and forget about.
Pfsense/opnsense would be one option (based on FreeBSD). For Linux there is OpenWRT, which you can either run as an alternative firmware on quite a few consumer routers/access points, or install on a PC or Pi or similar.
Caveat: I have only used OpenWRT on a high end consumer router (GL.inet MT6000) out of those. That works well, anything else is based on reading about people using those options.
For all of those, once you set it up you don't really need to do much except install updates a couple of times per year, or if you want to forward a new port or such.
I recently dumped OPNsense because they took a stand against a few things I was trying to do (e.g., web UI on the WAN port, IIRC), which makes sense at a high level. But I _HATE_ devices that think they know better than me. I was trying to configure it on a _LAN_ such that the identified WAN side was actually my local LAN, and I spent an hour hacking it to work and was like "you know, if they can't get this shit right, I'm out". There are a lot of places in the technology world where someone who thinks they understand my use case makes a decision based on some narrow world view, because they can't understand that not everyone trying to use their product is some idiot home user using it for their home network.
I've been a fan of opnSense for a few years now - I'm actually using it as the WAN device for our office, as well as a VPN concentrator in other contexts.
Some recent changes are driving me up the wall though - their new UIs for configuring VPNs (IPSEC and OpenVPN) are far less intuitive than what they've termed the 'legacy' UI and I note that recent versions have introduced a firewall rule migration feature that I'm not touching with a 9-ft barge pole.
These changes are making me wary about using opnSense in future, which is a pity because other than pfSense there isn't really a fully-featured, open-source firewall OS that comes close to matching it (and pfSense has its own issues). Linux is great and all - and I do use it for routing/firewall/VPN in places on our network - but there doesn't seem to be a dedicated network appliance distro that bundles in a comprehensive Web UI. Apart from OpenWRT and its ilk, but I'm not convinced that that's suitable for enterprise deployment.
Yep, this is the way. You will learn loads using Linux but this is not something you want to go wrong.
I used a low-power Intel Atom mini PC with an additional NIC as a router for years. I tested it and found it could route around 300 Mb/s, which was plenty.
But then I got gigabit internet. So I bought an Intel 4-port GigE card from eBay and now run OPNsense as a VM. If you get the right Intel card you can pass through ports to VMs individually, which is nice for playing (don't know the exact details, but look for cards with virtualisation support; mine is an 82575GB I think).
To be fair, my setup still probably has too much to go wrong, due to the VM thing, but I just haven't got round to getting dedicated hardware, and it's worked fine for a couple of years now.
I think that's the problem. I used to find it far superior to google. Now, there are a lot of queries where I am unimpressed with the results and end up trying google just to get better results. (like I used to do with DDG)
I've had a few experiences now where someone is standing over my shoulder asking me to look something up, and I search kagi, find nothing, then search google and find what they asked me to look up. Then when they ask "what was that other search engine you used first?" I don't feel compelled to vouch for kagi :(.
Cool project! How do language servers work with this system? Suppose I am developing PyTorch+cuda code on a remote machine, do I need to have that same PyTorch version installed locally?
If you run the language server remotely, how do you sync the file before it has been saved so that the user gets autocomplete?
Good question. To quickly answer, no you don't need it installed locally but you will benefit from having the source available.
Just so we have a common reference, look at https://github.com/edaniels/graft/blob/main/pkg/local_client.... The main idea is that we are always matching the local current working directory to the corresponding synchronization directory. Using that idea, we serve an LSP locally that rewrites all JSON-RPC messages that utilize URIs (https://github.com/edaniels/graft/blob/main/pkg/local_client...) from local to remote, and back. The local LSP and the remote LSP we launch are none the wiser. Because of this proxy, when you go to definition, you load the local source definition; when you run an LSP format tool, it runs remotely and the file sync gets you the results locally.
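If it helps to picture the rewriting step, here's a toy version of the idea in Python (the roots and message shape are made up for illustration, not graft's actual code): walk the decoded JSON-RPC payload and swap URI prefixes between the local sync directory and the remote working directory.

```python
# Hypothetical URI-rewriting proxy step for LSP JSON-RPC messages.
# LOCAL_ROOT / REMOTE_ROOT are illustrative paths, not graft's API.

LOCAL_ROOT = "file:///home/me/project"   # assumed local sync directory
REMOTE_ROOT = "file:///srv/project"      # assumed remote working directory

def rewrite_uris(value, src, dst):
    """Recursively replace URI prefixes anywhere in a JSON-RPC payload."""
    if isinstance(value, str) and value.startswith(src):
        return dst + value[len(src):]
    if isinstance(value, list):
        return [rewrite_uris(v, src, dst) for v in value]
    if isinstance(value, dict):
        return {k: rewrite_uris(v, src, dst) for k, v in value.items()}
    return value

msg = {"method": "textDocument/definition",
       "params": {"textDocument": {"uri": LOCAL_ROOT + "/main.go"}}}
out = rewrite_uris(msg, LOCAL_ROOT, REMOTE_ROOT)
```

Running the same function with the roots swapped handles the response direction, which is presumably why neither LSP is any the wiser about the proxy.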
The LSP functionality is pretty barebones but has been working for me in Sublime when I open a project of mine in a graft-connected directory. I've tested it on Go and TypeScript. I believe Python should work, but I suppose dependencies could be funky depending on how you synchronize them (uv, pip, etc.).
For Go, I used this in my LSP settings and it worked great. What doesn't work great is if you get disconnected :(. Making the LSP very reliable is another story for another task for another day.
Crash? The software, or physically? 200 Hz as a minimum control loop rate seems on the fast side as a general default, but it all depends on the control environment - and I may be biased, as I've done a lot more bare-silicon control than ROS.
Physically crash. When we would block the control loop at all (even down to 100 Hz), we would get errors, and then occasionally the arm would erratically experience massive acceleration spikes and crash into its nearby surroundings before e-stopping.
Re: the other comment. Yes, this was with UR3e's, which by default have update rates of around 500 Hz.
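For anyone curious what "blocking the loop" means numerically: at 500 Hz the budget is 2 ms per tick, so even a 12 ms stall means several missed commands in a row. A toy, deterministic deadline check (the numbers are illustrative, not UR-specific):

```python
# Toy deadline monitor for a fixed-rate control loop. Timestamps are fed
# in explicitly so the logic is deterministic; in a real loop they would
# come from a monotonic clock.

def missed_deadlines(timestamps, rate_hz, slack=0.25):
    """Count iterations that overran their nominal period by more than
    `slack` (expressed as a fraction of the period)."""
    period = 1.0 / rate_hz
    limit = period * (1.0 + slack)
    misses = 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > limit:
            misses += 1
    return misses

# A 500 Hz loop (2 ms period) with one 12 ms stall in the middle:
ticks = [0.000, 0.002, 0.004, 0.016, 0.018]
```

A watchdog like this is the kind of thing that would flag the stall before the arm does.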
I'd love to develop some MCP servers, but I just learned that Claude Desktop doesn't support Linux. Are there any good general-purpose MCP clients that I can test against? Do I have to write my own?
(Closest I can find is zed/cody but those aren't really general purpose)
In many ways this has already started happening. TS has enums, Svelte has runes, React has jsx. None of these features exist in JS, they are all compile-time syntax sugar.
While it is admittedly confusing to have all these different flavors of JS, I don’t think this proposal is actually as radical as it seems.
Recently gpt-4-turbo started rejecting writing some tests because it 'knows' it would exceed the max context. (This frustrated me deeply -- It would not have exceeded the context)
0: https://opnsense.org/