I find it a bit odd how much people talk up the Rust aspect of Tauri. For most cases you'll be writing a Typescript frontend and relying on boilerplate Rust + plugins for the backend. And I'd think most of the target audience would see that as a good thing.
I'm working on a project using Tauri with htmx. A bit uncommon, I know. The backend uses axum and htmx; there's no JS/TS UI. It's fast, reliable, and it works well. Plus it's easy to share/reuse the lib with the server/web version.
I am considering a Tauri app, but I'm still wondering about architecture design choices, which the docs are sparse about. For instance, the web side may constitute a more full-blown webapp, say NextJS, and include the database persistence (say SQLite-based) on the web side too, closest to the webapp. That goes against the sandboxing (and likely best practice), where all platform-related side effects are handled platform-side, implemented in Rust code. I wonder if it is a valid choice. There is a trade-off between ease of use and straightforwardness vs. stricter sandboxing.
At least with Tauri it's easy to both make the choice and change it later if you want to. I think the docs are sparse because it's your decision to make. I've done it both ways and there are pros and cons. If you use the sqlite plugin and write the actual statements on the JS side then you don't need to worry about the JS<->Rust interface and sharing types. Easier to just get going. If you write your own interface then you probably want to generate TS types from Rust. I think a big advantage to the Rust interface way is that it makes it easier to have the web side be dual purpose with the same code running on the web and in Tauri - the only difference being whether it invokes a tauri call or an API call.
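A minimal sketch of that dual-purpose idea. The command name `list_todos`, the `/api/todos` endpoint, and the `Todo` shape are all hypothetical; in a real app you'd pass in the actual `invoke` from `@tauri-apps/api` when running inside Tauri, and the rest of the frontend never cares which backend it got:

```typescript
// Sketch only: one frontend data layer, backed by either a Tauri command
// or an HTTP API. "list_todos" and "/api/todos" are made-up names.
type Todo = { id: number; title: string };

interface TodoBackend {
  listTodos(): Promise<Todo[]>;
}

// invoke is injected so the web build never imports @tauri-apps/api;
// in the Tauri build you'd pass the real invoke() from that package.
function makeBackend(
  inTauri: boolean,
  invoke: (cmd: string) => Promise<unknown>,
): TodoBackend {
  if (inTauri) {
    // Tauri build: call the Rust command.
    return { listTodos: () => invoke("list_todos") as Promise<Todo[]> };
  }
  // Web build: hit the server's HTTP API instead.
  return {
    listTodos: async () => {
      const res = await fetch("/api/todos");
      return res.json() as Promise<Todo[]>;
    },
  };
}
```

The injection also makes the seam trivially testable, since you can hand it a fake `invoke`.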
I'll note that I have gone down a slightly different path for the main app I wrote: I've written adapters on the JS side that generate SQL or API calls depending on where the code is running, and I wrote my own select/insert/update/delete Tauri commands. The reason I ended up with what seems like a hybrid of the approaches I suggested above is that the JS side knows more about what it wants and therefore generates SQL/API calls with the appropriate joins and preloads. On the Tauri side I wanted to intercept at the data layer for a custom sync engine, which the frontend doesn't need to know about. That said, I may have ended up at that solution only because I added the Tauri side after writing for the web.
It may be interesting with event sourcing, having the message bus + eventstore be on the rust side, and SQL projections be exposed in a sqlite db on the web side.
I built a vibe-coded personal LLM client using Tauri, and if I'm being honest the result was much worse than either Electron or just ditching it and going full ratatui. LLMs do well when you can supply them a verification loop, and Tauri just doesn't have the primitives to expose one. For my personal tools, I'm very happy with ratatui or non-TUI CLIs in Rust, but for GUIs I wouldn't use it. Just not good dev ex.
Not related to the tech bits of this, but I finally got around to watching Aftersun a couple of days ago. It's a great, sad film about somebody watching home video from their childhood and reevaluating what was going on.
In the context of the Epstein files, I think Schmidt's actual quote looks pretty good ("If you have something that you don't want anyone to know, maybe you shouldn’t be doing it in the first place").
The problem is that even if Schmidt didn't do anything wrong (I don't know but all the link says is he may have been invited to a dinner but probably didn't attend), he nevertheless had something to fear.
There are shops elsewhere in Europe with Arabic signs. You can go there and buy things. They're not outside of the ordinary statistical distribution of shops.
Apparently you can turn it on with about:config / dom.webgpu.enabled
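For pages that want to degrade gracefully instead of just breaking, the standard feature check is `navigator.gpu` (the real WebGPU entry point, which is undefined when the flag is off or the browser doesn't support it); a tiny sketch:

```typescript
// Feature-detect WebGPU before trying to use it. navigator.gpu is the
// spec-defined entry point; the parameter default lets the check run
// even in environments where navigator itself doesn't exist.
function hasWebGPU(
  nav: { gpu?: unknown } = (globalThis as { navigator?: { gpu?: unknown } }).navigator ?? {},
): boolean {
  return typeof nav.gpu !== "undefined";
}
```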
But personally, I'm not going to start turning on unsafe things in my browser so I can see the demo. I tried firefox and chromium and neither worked so pfft, whatever.
I'm fairly agnostic on the headline question of whether social media should be banned for under-16s. The part that seems interesting to me is whether this will entail linking online activity to real-world identity for the rest of us. It doesn't have to, but in practice I guess that's probably what'll happen. Unfortunately all the debate is "but freedom of speech" vs "but think of the kids", and nobody will be lobbying for a better (or less worse) implementation.
> Is this not the job of the operating system or its supporting parts, to deal with audio from various sources
I think that's the point? In practice the OS (or its supporting parts) resample audio all the time. It's "under the hood" but the only way to actually avoid it would be to limit all audio files and playback systems to a single rate.
I don't understand, then, why they need to deal with that when making a game, unless they are not satisfied with the way the OS resamples under the hood.
You cannot avoid it either way then, I guess. Either you let the system do it for you, or you take matters into your own hands. But why do you feel it necessary to take matters into your own hands? I think that's the actual question that needs answering. Are you unsatisfied with how the system does the resampling? Does it result in worse quality than your own implementation of resampling? Or is there another reason?
I don't feel it necessary to take matters into my own hands. If you read my original message again:
> Either my game has to resample from 44.1kHz to 48kHz
> before sending it to the system, or the system
> sound mixer needs to resample it to 48kHz, or the
> system sound mixer needs to resample the other software
> from 48kHz to 44.1kHz
I expressed no preference with regard to those 3. I was outlining the theoretically possible options, to illustrate that there is no way to avoid resampling.
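For what it's worth, all three of those options are the same underlying operation, just performed in different places. A toy linear-interpolation sketch of it (illustrative only; real mixers use much better filters, e.g. windowed sinc):

```typescript
// Toy resampler: converts a mono buffer from srcRate to dstRate by
// linear interpolation between neighbouring input samples.
// Good enough to show the mechanics; not production audio quality.
function resampleLinear(
  input: Float32Array,
  srcRate: number,
  dstRate: number,
): Float32Array {
  const outLen = Math.round((input.length * dstRate) / srcRate);
  const out = new Float32Array(outLen);
  const step = srcRate / dstRate; // input samples advanced per output sample
  for (let i = 0; i < outLen; i++) {
    const pos = i * step;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1); // clamp at the buffer end
    const frac = pos - i0;
    out[i] = input[i0] * (1 - frac) + input[i1] * frac;
  }
  return out;
}
```

Whether this runs in the game, in the OS mixer for the game's stream, or in the OS mixer for everyone else's streams is exactly the choice the three options describe.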
I got a different impression, because you also wrote:
> If only it was that simple T_T
Which to me sounded like _for you_ it's not simple for some reason, which led me to believe that you _do_ want to take it into your own hands, making it not simple, ergo not being able to let the OS do it. Now I understand what you mean, thanks!
I suppose the option you're missing is you could try to get pristine captures of your samples at every possible sample rate you need / want to support on the host system.