https://taler-ops.ch/ is live in Switzerland and allows exactly this: anonymous microtransactions. What law exactly would prevent someone from doing the same in the US?
Could I send money to an intermediary and then have them pay? Example: Etsy knows who I am, but maybe each individual store does not? Similarly, Patreon knows who I am, but each person I'm supporting does not? How about OnlyFans?
They don't. Start doing a lot of activity in cash, and the banks file a report on that too. It's called a Currency Transaction Report. They may not be able to find you as easily, but they can still flag you as an individual with an unusual predilection for conducting economic activity in cash.
Hot water has less oxygen, the fish suffocated. This was on the Japanese news over a week ago, but without the "nobody knows why" part: it was blamed on unusually warm ocean temperatures.
The article is focusing on the technical side, but the economics are also silly. The ECB says that the Digital Euro will be free of charge for consumers as a "public service", and that it will ensure low fees for merchants through mandatory acceptance laws. But they also claim that the Digital Euro will be operated by commercial payment service providers. So who will want to be in the business of operating a public service where they can only charge low, capped fees to merchants? Obviously the existing high-fee payment service providers will not line up here to ruin their working business models.

However, the model will work for one group: criminals, who run completely insecure, low-quality payment services, fail to provide good customer service, or even steal customers' money at large scale. That business model will work because the Digital Euro is designed to be a liability of the central bank, even though the operation will be done by commercial operators.

So they didn't just mess up the technology (as you would expect from big government), they also messed up the economics (which may surprise some, given that this is largely a central bank proposal).
The logical solution for browser vendors is to also roll back the URL bar by 10 years, where we had different indicators for extended validation, normal certificates and plaintext. I guess a blue EU-logo whenever Article-45 compliant CAs are used would make sense. Then we just have to teach people: blue is for "government snoop mode".
eIDAS in fact forces browser vendors to do that, but there are two problems with what you're suggesting:
1. Good luck teaching 99% of people to be wary when they see the blue address bar. People generally do not understand address bars, which is a large part of why browsers removed the EV indicator.
2. There is a strong possibility that a future version of eIDAS will force businesses in the EU to get certificates from an eIDAS CA. At that point, people in the EU will be seeing the blue address bar constantly, and most of the time the certificate will in fact be legit.
Teaching users is of course the tricky part, and I'm not trying to excuse the insane draft regulation here. That said, eIDAS doesn't forbid browser vendors from visually distinguishing Article 45-forced CA certificates from traditional CAB CA certificates, and I doubt they considered the possibility. So re-adding the distinction is a valid band-aid. Your second point can be addressed relatively easily by businesses getting multiple certificates. Then, the browser shows 'trusted' only if at least one of the certificates is not from an Article 45-forced CA.
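That fallback rule could look something like this, as a sketch (the CA names and the indicator labels are made up for illustration):

```python
# Hypothetical sketch of the multi-certificate trust rule: show 'trusted'
# only if at least one presented certificate chains to a CA that is NOT
# on the (hypothetical) Article 45-forced list.

ARTICLE_45_FORCED_CAS = {"EU-Gov-CA-1", "EU-Gov-CA-2"}  # placeholder names

def indicator_for(cert_issuers: list) -> str:
    """Return the address-bar indicator for a site presenting these certs."""
    if not cert_issuers:
        return "insecure"      # plaintext, no certificate at all
    if all(issuer in ARTICLE_45_FORCED_CAS for issuer in cert_issuers):
        return "article-45"    # only government-mandated CAs: blue bar
    return "trusted"           # at least one traditional CAB-audited CA

print(indicator_for(["EU-Gov-CA-1", "LetsEncrypt"]))  # trusted
print(indicator_for(["EU-Gov-CA-2"]))                 # article-45
```

A business that adds one traditional certificate alongside its mandated one would then keep the normal 'trusted' indicator.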
I thought the blue address bar would have a person's name and country in it. That person would have a good lawsuit against the government if it were faked. Or are we worried the DE government will make up a fake Larry Ellison and MITM oracle.com with it? Larry Ellison would easily win that lawsuit.
I thought that was the whole intention of eIDAS. Everyone gets a government approved certificate they can use to sign their websites if they want to, and then the URL bar shows their identity. They don't have to sign websites with their identity, but they have the option to.
Simple. Give the manufacturers the choice: either they must provide full (FLOSS) source code and documentation (full schematics) to the user to enable them to maintain, patch and thus secure their devices (see also: right to repair), OR they are liable for all damages (direct, indirect) for a 30 year expected lifetime that arise from security issues with the device AND must have insurance to cover those damages (so that they cannot get out of that liability by bankruptcy). Most will opt for FLOSS, and none will have the excuse that it would be more secure to make it proprietary. And then users will at least be able to fix issues -- and the security community will be way more effective at finding issues as it wouldn't have to do the slow reverse engineering.
30 years of expected support is pretty unreasonable. Stating a requirement like this turns the discussion into one about competing dogmas. Rather, it should be about the right way to keep devices operational as long as possible while also allowing companies to remain viable.
30 years of support expectations immediately makes the cost of any device go up to hedge against the risk of fines during the entire 30 years. It also makes it harder to disrupt an industry with hardware at its core.
I don't have a single computing device that has lasted longer than 10 years. Reasonably speaking, either performance or features start to make the device largely obsolete and unusable.
I think a better way to propose this would be the expectation that when a product reaches EOL, it should be supportable by the buyer for a certain period. This requires figuring out the right period of support. I'd propose something that scales the period based on cost or device class. A $1200 phone should be usable for 10 years, while a $10 disposable glucose sensor with a battery should not.
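A price-scaled rule like that could be sketched as follows (the tiers and numbers here are invented purely for illustration, not a real proposal):

```python
# Hypothetical sketch of a support mandate that scales with retail price.

def required_support_years(price_usd: float) -> int:
    """Map a device's retail price to a minimum post-EOL support window."""
    if price_usd < 50:      # disposable sensors, cheap gadgets
        return 1
    if price_usd < 300:     # routers, smart-home devices
        return 5
    return 10               # phones, appliances, premium hardware

print(required_support_years(10))    # 1
print(required_support_years(1200))  # 10
```

Device class (medical, automotive, consumer) could be an additional input, since price alone is a crude proxy for expected lifetime.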
Sorry, but some people will run routers (and other IoT devices) for > 10 years, and long past some random 2-year EOL a manufacturer may set. We need less e-waste, and if manufacturers have to warrant security for 30 years, they may also invest enough to make the hardware itself last longer. More expensive is totally fine if the product is useful for longer! Oh, and please double-check whether you really have no 1st-generation Raspberry Pi anywhere, or maybe some ancient Arduino. What about your washer? Modern washers are IoT devices. My washer (admittedly not yet an IoT device) is > 10 years old. Or take your car. Sure, you may buy a new one every 10 years, but there are plenty of cars > 10 years old on the road. Do you want all of them to be vulnerable and out of warranty in the future?
> 30 years of expected support is pretty unreasonable.
I happen to know, having been with a Ford unit at the time, that the Ford EEC-IV engine control unit in 1980s Ford cars and trucks was designed for a 30 year lifetime. Many are still working.
The average age of light vehicles in the US is 12.2 years.
> I don't have a single computing device that has lasted longer than 10 years. Reasonably speaking, either performance or features start to make the device largely obsolete and unusable.
Are you just buying cheap junk? An i7-3770 PC - a good example of an 11 year old PC, and one I happen to use every day - can be quite usable today.
If parts of the supply chain aren't FLOSS, then manufacturers would have to lean on those suppliers to change their licensing or find different suppliers. Same with other regulations around things like lead in consumer products. Anyone wanting to be part of consumer product software supply chains would have to start offering it as FLOSS if they want any customers, so the supply chain would adjust to the new reality.
We do need to establish common sense liability if it's not already there. If you modify your circular saw to remove the guard and injure yourself, that's your fault. If you modify some software to run outside of safe design parameters and it malfunctions/injures you, that's your fault.
I don't see why zero-trust is incompatible with user-modified devices. In fact it's in line with the spirit of zero-trust: don't assume just because something is able to talk to one of your servers (e.g. because it's on your VPN/LAN) that it's friendly. People should already always be assuming customer-owned hardware will potentially be completely controlled by a malicious actor and acting accordingly.
I'm working on an IoT device for industrial use, and we're wrestling with this very problem.
The answer we're probably going to go with is that the device is 'leased' to the customer. It's part of their subscription.
This solves a ton of problems about FLOSS and support of the same. It's now a closed device, and you have no rights to the code inside. If we go out of business, you have a brick that you don't have to pay for anymore.
I think it's always better for the customer to have access to the code inside. I'll actively recommend FLOSS solutions to customers even if they're not quite as good as the competition on paper right away. Simply because a large part of the cost of industrial hardware is actually supporting it for a long time. And support is SO MUCH EASIER if you have all the source code and schematics. Of course big customers get to demand this kind of arrangement (floss, escrow, or even just "give us all the paper") while small industrial operations end up paying a premium for inferior service.
> The answer we're probably going to go with is that the device is 'leased' to the customer. It's part of their subscription.
1000% wrong answer, unless you straight up sell a service up front, with an installer making a site visit to deploy chattels of the service,
such as satellite television or DSL internet.
When you swap handfuls over the counter before any contractual agreement (i.e. a clickthrough TOS), you are selling hardware, and that means user ownership.
I find that revealing; it seems direct end-user engagement has really stung you. Is there something other than people being people, or are there onerous requirements that are not worth it?
Ah, the utopian dream of a world where every manufacturer gives away their intellectual secrets just so users can play tech guru. You're suggesting that companies offer up decades of R&D and risk their competitive edge, or else face 30 years of liability? With the speed at which technology evolves, we're lucky if a device is even relevant after 30 months. And let's not forget the minor detail of skyrocketing costs. Want a device built under these fantasy rules? Hope you're ready to pay through the nose—think 10 times the current price. Because nothing says 'accessible technology' like pricing out the average consumer.
I favor something like this, if less strong. It should be required that a product that reaches end-of-life as defined by the manufacturer should have all documentation and source code released and open sourced; prior to end-of-life (and perhaps for one year after), they're required to provide security updates. The manufacturer is then free to decide the point at which closed source is no longer worth the maintenance cost.
A few additional thoughts:
- Perhaps hardware design/specs should be released as well?
- A government body should probably host this information after EOL.
Well, lots of people have been not-buying liquorice their whole life, but nothing's changing. The market for liquorice candy is alive and well.
Less snarky: if other people still want to buy certain products, manufacturers will provide. But that's not a bad thing. Different folks have different preferences.
Why did you suggest not-buying as a better action than regulation if you acknowledge that it doesn't work? Are you a manufacturer of low-quality IoT devices? People don't prefer insecure devices, they just want convenience and manufacturers are not being upfront about how dangerous these "convenient" devices are. Ergo, regulation.
Not when the consumer doesn't know the trade off they are making. Buying a bottle of colorful poison and drinking it and dying because it looked tasty is not a legitimate preference.
You are being willfully ignorant of the power dynamics and information disparity that exist between manufacturers and consumers. The whole point of the label is to better inform consumers.
So what we need is giant warning stickers on products whose parent companies don't follow good practices. Kind of like tobacco products.
"Leaks your personal data to unknown servers." Or: "Manufacturer typically does not support their products beyond 2 years, after which critical features and functions may stop working."