(I'm kidding, but I'm sure someone has a pie-in-the-sky geoengineering startup
about to disrupt topography using either AI, blockchain, or both.)
Well, there was that plan to use scores of nuclear bombs to alter the geography of Egypt in such a way that the Mediterranean could be drained into the Qattara Basin [1]. I think the story is somewhat well-known now, but it proves, at least, that pie-in-the-sky geoengineering startups are not a phenomenon unique to the 21st century. And given that nuclear bombs essentially were the blockchain of the 1950s, that is altogether unsurprising.
That's my approach to recreating a soft drink (ClubMate), much like OP is trying to recreate Coke (etc.). I'd also love to learn something about the traditional recipe.
For the use cases outlined in the OP, a 36% performance gain for an optimization that complex would be considered a waste of time. OP was explicitly not talking about code that cares about the performance of its hot path that much. Most applications spend 90% of their runtime waiting for IO anyway, so optimizations of this scale don't do anything.
Adding a few borrows and lifetime annotations is not "an optimization that complex". Use Arc at first, then find the bottlenecks via profiling and fix them.
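A minimal sketch of the kind of change being described, assuming a simple shared-config example (the `Config` type and functions are made up for illustration):

```rust
use std::sync::Arc;

struct Config {
    verbose: bool,
}

// First pass: clone the Arc wherever shared access is needed.
fn process_with_arc(cfg: Arc<Config>) -> usize {
    cfg.verbose as usize
}

// After profiling shows the hot path matters, the same function can
// often just borrow instead -- only the signature and a few call
// sites change, which is hardly "an optimization that complex".
fn process_with_borrow(cfg: &Config) -> usize {
    cfg.verbose as usize
}

fn main() {
    let cfg = Arc::new(Config { verbose: true });
    let a = process_with_arc(Arc::clone(&cfg));
    let b = process_with_borrow(&cfg); // deref coercion: &Arc<Config> -> &Config
    assert_eq!(a, b);
    println!("both paths agree: {}", a);
}
```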
> Most applications spend 90% of their runtime waiting for IO anyway, so optimizations of this scale don't do anything.
Again, depends on what you are doing. If you are doing web servers, electron apps or microcontrollers, sure. If you are doing batch computation, games, simulation, anything number crunchy, etc: no. As soon as you are CPU or memory bandwidth bound, optimisation does matter. And if you care about battery usage you also want to go to sleep as soon as possible (so any phone apps for example).
Funnily enough, in the blog post you linked Scott Alexander also ruminates about how he never previously questioned journalistic attempts to dox Satoshi Nakamoto.
> Signal insists on using your phone number too, refusing user ids or anything that will make analysis hard.
That is no longer true, you can use user IDs now.
For the other problem, you can enable self-deleting messages in group chats, limiting the damage when a chat does become compromised. Of course, this doesn't stop any persistent threat, such as law enforcement (is that even the right term anymore?) getting access to an unlocked phone.
It doesn't mean much if it isn't the default. Even then, people who signed up before that still use phone numbers; you can protect yourself, maybe, but not the other people in the group. But it's good they're doing this now.
Right, but I think that the Recycle Bin is exactly what is causing the issue here. Users have been taught for decades that if they delete something, it is not really gone, as they can always just go back to their Recycle Bin or Deleted Items folder and restore it. (I have worked with clients that used the Deleted Items folder in Outlook as an archive for certain conversations, and would regularly reference it.)
So users have been taught that the term "delete" means "move somewhere out of my sight". If you design a UI and make "delete" mean something completely different from what everyone already understands it to mean, the problem is you, not the user.
> Users have been taught for decades that if they delete something, it is not really gone
There are stories all over the internet involving people who leave stuff in their recycle bin or deleted items and then are shocked when it eventually gets purged due to settings or disk space limits or antivirus activity or whatever.
Storing things you care about in the trash is stupid behavior and I hope most of these people learned their lessons after the one time. But recycle bin behavior is beneficial to a much larger set of people, because accidental deletion is common, especially for bulk actions. “Select all these blurry photos, Delete, Confirm, Oh, no! I accidentally deleted the last picture of my Grandma!”
Recycle bin behavior can also make deletion smoother because it allows a platform to skip the Confirm step since it’s reversible.
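The reversible-delete pattern described above can be sketched in a few lines (the `Library` class and its names are hypothetical, just to show the mechanism):

```python
# "Delete" moves an item to a trash list instead of destroying it,
# so no confirmation dialog is needed and restore is always possible.
class Library:
    def __init__(self, items):
        self.items = dict(items)   # id -> item
        self.trash = {}            # id -> item, purged later by some policy

    def delete(self, item_id):
        # No "Are you sure?" step -- the action is reversible.
        self.trash[item_id] = self.items.pop(item_id)

    def restore(self, item_id):
        self.items[item_id] = self.trash.pop(item_id)

    def purge(self):
        # What eventually bites people who use the trash as an archive.
        self.trash.clear()

photos = Library({1: "blurry.jpg", 2: "grandma.jpg"})
photos.delete(1)
photos.delete(2)        # oops
photos.restore(2)       # reversibility saves the day
print(sorted(photos.items.values()))  # ['grandma.jpg']
```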
What you describe is basically event sourcing, which is definitely popular. However, for OLAP, you will still want a copy of your data that only has the actual dimensions of interest, and not their history - and the easiest way to create that copy and to keep it in sync with your events is via triggers.
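A runnable sketch of that trigger idea, using SQLite so it stays self-contained (the schema and names are invented; PostgreSQL or MS SQL would use their own trigger dialects for the same pattern):

```python
import sqlite3

# Append-only event table plus a "current state" copy for analytics,
# kept in sync by a trigger so OLAP queries never touch the history.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer_events (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    name TEXT NOT NULL,
    recorded_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- OLAP-friendly copy: only the latest value per customer, no history.
CREATE TABLE customers_current (
    customer_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TRIGGER sync_current AFTER INSERT ON customer_events
BEGIN
    INSERT OR REPLACE INTO customers_current (customer_id, name)
    VALUES (NEW.customer_id, NEW.name);
END;
""")

con.execute("INSERT INTO customer_events (customer_id, name) VALUES (1, 'Alice')")
con.execute("INSERT INTO customer_events (customer_id, name) VALUES (1, 'Alicia')")

# The event table keeps both rows; the current table only the latest name.
print(con.execute(
    "SELECT name FROM customers_current WHERE customer_id = 1"
).fetchone()[0])  # prints 'Alicia'
```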
Business processes and the database systems I described (and built) existed before event sourcing was invented. I built what is essentially event sourcing using nothing more than database tables, views, and stored procedures.
Well, Microsoft SQL Server has built-in Temporal Tables [1], which even take this one step further: they track all data changes, such that you can easily query them as if you were viewing them in the past. You can not only query deleted rows, but also the old versions of rows that have been updated.
(In my opinion, replicating this via a `validity tstzrange` column is also often a sane approach in PostgreSQL, although OP's blog post doesn't mention it.)
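A sketch of that validity-range idea, emulated with plain from/to columns so it runs under SQLite (PostgreSQL's `tstzrange` would collapse the two columns into one; all table and function names here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE prices (
    product_id  INTEGER NOT NULL,
    price       REAL    NOT NULL,
    valid_from  TEXT    NOT NULL,
    valid_to    TEXT    NOT NULL DEFAULT '9999-12-31'
)
""")

def set_price(product_id, price, as_of):
    # Close the currently valid row, then insert the new version.
    con.execute(
        "UPDATE prices SET valid_to = ? "
        "WHERE product_id = ? AND valid_to = '9999-12-31'",
        (as_of, product_id),
    )
    con.execute(
        "INSERT INTO prices (product_id, price, valid_from) VALUES (?, ?, ?)",
        (product_id, price, as_of),
    )

def price_as_of(product_id, ts):
    # "AS OF" query: the row whose validity range contains ts.
    row = con.execute(
        "SELECT price FROM prices "
        "WHERE product_id = ? AND valid_from <= ? AND ? < valid_to",
        (product_id, ts, ts),
    ).fetchone()
    return row[0] if row else None

set_price(1, 9.99, "2024-01-01")
set_price(1, 12.50, "2024-06-01")

print(price_as_of(1, "2024-03-15"))  # old version: 9.99
print(price_as_of(1, "2024-07-01"))  # current version: 12.5
```

Updated rows are never lost: closing `valid_to` and inserting a new version is exactly what temporal tables automate for you.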
MariaDB has system-versioned tables, too, albeit somewhat worse than MS SQL's: you cannot configure how the history is stored, so old row versions are basically hidden away in the same table or in a partition: https://mariadb.com/docs/server/reference/sql-structure/temp...
This has, at least with current MariaDB versions, the annoying property that you really cannot ever again modify the history without rewriting the whole table, which becomes a major pain in the ass if you ever need schema changes and history items block those.
MariaDB still has to find a proper balance here between change safety and developer experience.
[1]: https://en.wikipedia.org/wiki/Qattara_Depression_Project#Fri...