That's an incredibly broad statement; we are talking about what movies people have watched on Netflix. What possible non-anonymous dataset could you be cross-referencing to de-anonymize it? Even if you could, who cares?
Why would there be a warning? Your example does exactly what autovivification is supposed to do. Obviously you don't normally use it as obscurely as you've done here.
I hate C#'s way of assigning a value to a dictionary, without autovivification:
You also have to use ContainsKey before you try to use [] to retrieve a value, because [] throws an exception if the key doesn't exist in the dictionary. Perl's autovivification, which assumes you know what you're doing and expect the key to be there when you ask for the value rather than explicitly checking for the key, makes much more sense and is a lot easier to work with.
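For readers who know neither language, Python can show both behaviors side by side: a plain dict raises on a missing key, much like C#'s indexer, while collections.defaultdict quietly creates the entry on first access, which is roughly what Perl's autovivification does. This is a rough analogy, not a claim about either language's internals:

```python
from collections import defaultdict

plain = {}
try:
    plain["missing"]           # plain dict: missing key raises, like C#'s [] getter
except KeyError:
    print("KeyError")

auto = defaultdict(dict)       # missing keys are created on first access
auto["missing"]["nested"] = 1  # no ContainsKey-style check; "missing" springs into existence
print(auto["missing"])         # {'nested': 1}
```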
This is exactly my point: there should be no autovivification. I'd wager most people using Perl these days don't know what they're doing; they're reluctantly editing some legacy code.
It's a dangerous default behaviour that can lead to very hard-to-debug errors. Most languages don't do this, so people don't expect it; I don't know C# but can immediately tell what the code you wrote is doing. Explicit is better.
It's extra confusing because if you try to dereference undef without accessing a value, it's an error:
perl -Mstrict -WE 'my $test; say @{ $test };'
Can't use an undefined value as an ARRAY reference at -e line 1.
What harm is there in a warning? You can disable classes of warnings in perl if you want to, or turn them off completely. I would much rather be explicit so I don't accidentally use some magic.
Explicit isn't better when the explicit boilerplate overwhelms and hides the intent of your code.
For the C# example, this can throw an exception:
var val = dict["yes"];
If the dictionary contains an entry with the key "yes", the line retrieves its value. But if there is no entry with that key, a KeyNotFoundException is thrown. On the write side, the Add() method throws an exception if the key does exist, and works if it doesn't. I can't imagine any common use case of dictionaries where this behavior makes sense.
So what you're left with is a bunch of boilerplate and an if/else branch wrapped around every place you want to read a value from a dictionary. It makes dictionary use very cumbersome, which obscures the intention of code that uses dictionaries and, worse, discourages their use. C#'s design here takes away a very handy data structure that can make entire classes of problems easy to solve.
Perl's dictionaries, aka hash types, are used everywhere in Perl code. They're extremely useful and flexible, and a lot of that is due to the ease of use that autovivification brings.
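As a sketch of why that ease of use matters, consider counting pairs in a nested structure. In Perl, `$count{$host}{$path}++` autovivifies the inner hash automatically; a hedged Python analogy (using defaultdict, and hypothetical sample data) shows the explicit-check style next to the autovivifying style:

```python
from collections import defaultdict

# Hypothetical access-log pairs, just for illustration.
hits = [("a.com", "/"), ("a.com", "/x"), ("a.com", "/")]

# Explicit style: every level must be checked before use.
explicit = {}
for host, path in hits:
    if host not in explicit:
        explicit[host] = {}
    if path not in explicit[host]:
        explicit[host][path] = 0
    explicit[host][path] += 1

# Autovivifying style: missing levels are created on demand.
auto = defaultdict(lambda: defaultdict(int))
for host, path in hits:
    auto[host][path] += 1

print(explicit["a.com"]["/"])  # 2
```

Both loops build the same counts; the second is the one-liner per entry that the comment above is describing.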
I don't think it's fair to judge a language (or any tool) by how difficult it is to use by someone who rarely uses it and has no desire to learn, rather than by how usable it is for someone who uses it all of the time.
> than by how usable it is for someone who uses it all of the time.
Most of my career since 1996 has been in developing Perl. I'm a contractor who gets hired (and paid well) to help with Perl projects. By no stretch can you say I "rarely use it".
And I still consider Perl one of the more difficult languages to deal with because it's very easy to rely on implicit behaviour that - whilst perfectly clear as you develop it - makes life inordinately hard a year later when you're trying to debug an edge case because now, e.g., people can have two different delivery methods in the same basket (whoops, the second one overwrote the first one in the hash!)
> Perl's autovivification, which assumes you know what you're doing
Which is fine if you're the only person who will look at this code and we're talking about a timespan of a few hours.
When someone comes to deal with this code a year later, it makes life more difficult than it needs to be; at least the C# way is explicit - you can come back to that years later and know exactly what's happening.
(Which is the basic problem with most of Perl's magic - coming back to it hours/months/years later makes life harder for no actual gain.)
Money kept in the bank is guaranteed by your government to retain (some) value; it is backed by the economy of your country. Bitcoin is not backed by anything other than speculation on implied value.
No, Facebook cannot recover all messages. Read the article: messages that have not yet been delivered can potentially be read; once a message has been delivered, it cannot be retrieved.
You go read the article. The deciding factor is not whether the message has been delivered, but whether WhatsApp servers report to the device that the message has been delivered. There's nothing stopping them from claiming that no messages have been delivered and thus recovering all messages (as long as they had been preselected for false delivery reports) despite true delivery status.
> but whether WhatsApp servers report to the device that the message has been delivered
It is hard to check what WhatsApp does, but in Signal it is not the server but the recipient who sends the delivery receipt. WhatsApp then has to either recognize encrypted receipts or allow only a one-way conversation during the attack. Carrying out the whole attack just to decrypt "hi, are you here?" is not really interesting.
The delivery receipt is the message that is directly sent after the message has been delivered. Not too hard to distinguish those from other text messages.
So they can recover the messages, right? However, wouldn't these messages still be encrypted? Sure, they force a key change, and the messages are encrypted using the new key and sent. Theoretically, an attacker could have multiple copies of the same message, but these messages would still be encrypted under a variety of different keys right? Wouldn't the content of the messages still be secure?
Unless the key-change forces the user to be using an insecure key-pair, but is that actually happening?
The new encryption (public) key is selected by the attacker, so he knows the corresponding decryption (private) key. Basically, the attacker just forces the real device offline and registers his own device in its place.
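The mechanics of that attack can be sketched with a toy model. This is not the Signal protocol (which uses real asymmetric ratcheting); XOR stands in for encryption purely to show who ends up holding the key that the queued message is re-encrypted under:

```python
import secrets

def xor_crypt(key: bytes, msg: bytes) -> bytes:
    # Toy XOR "cipher" as a stand-in for real encryption; XOR is its own
    # inverse, so the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(msg, key))

# A message the sender queued, which the server reports as undelivered.
msg = b"hi, are you here?"

# Attack: the real device is forced offline and the attacker registers a
# new device, so the "recipient's" key is now one the attacker generated.
attacker_key = secrets.token_bytes(len(msg))

# The sender's client, trusting the server's key-change notice,
# re-encrypts the queued message under the new key and resends it.
resent = xor_crypt(attacker_key, msg)

# The attacker holds the matching key, so the resent ciphertext is readable.
print(xor_crypt(attacker_key, resent))
```

The point of the sketch: once the attacker chooses the key the sender re-encrypts under, confidentiality of the queued messages is gone, regardless of how strong the cipher itself is.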
I disagree - a simple solution exists, but it means you cannot use the mic and a button at the same time. Apple invested time and money to solve this problem in a non-trivial way; it seems perfectly reasonable to me that headset manufacturers wanting to use Apple's solution should pay royalties.
I can't believe the time and money they "invested" was at all significant, certainly not enough to justify any non-trivial amount of royalties.
But frankly I just don't care about Apple's costs: I'm far more interested in the broader economic effects of moves like this, and it's clear to me that both consumers and device-ecosystem manufacturers are harmed economically by patents like this.
So who "wins"? Do we grant the monopoly because Apple is oh so clever (they're really not, in this case), and allow Apple to enrich itself at the detriment of others, or do we look out for the greater good? I argue the latter is the correct choice in a civilized society.
David Bigagli also says in the first post in that thread:
> IBM does not have a technical answer to OpenLava and to the benefit its users have.
> They cannot articulate why their software is better than OpenLava for the money they charge for it.
> IBM fears OpenLava because it does provide a better functionality than their own software and
> that's why it can only reply with a lawsuit by hiring:
> Kirkland & Ellis LLP.
Sounds like there might be slightly more to this than is outlined in comments.