Yeah but the core issue is that all apps for digital services for both private and government, at least in my EU country, are only shipped for the iOS/Android duopoly.
So yet another FOSS Linux phone that won't run those apps is pointless. It's a chicken-and-egg problem: apps with feature parity probably won't ship until these phones reach some critical mass of adoption, and they won't reach critical mass because they don't run the popular apps.
If this is similar to LineageOS, then it's always potentially only a matter of time until some banking and payment apps stop working due to failing security attestation pushed by a Google update.
We need native apps that pass attestation out of the box for that phone/OS, not relying on hacks that may or may not work in the future.
That's not good UX, and it poisons the well: you push users to a new platform, and then they discover that some apps don't work as you promised.
Because FIDO2 is not enough for non-tech-savvy people.
The main issue is potential confusion about what transaction they’re actually signing. For example, a malicious browser extension can pretend the site sends money to X while actually sending it to Y.
The European PSD2 directive mandates that the 2FA scheme must let the user see what they’re about to sign. At the very least, that includes the amount and part of the recipient’s IBAN. FIDO2 doesn’t have that.
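To make the "dynamic linking" idea concrete, here's a minimal sketch (not any bank's actual protocol; the key, challenge, and IBANs are made up for illustration): the authentication code is computed over exactly the amount and payee the signing device displays, so tampering with the recipient invalidates the code.

```python
import hashlib
import hmac

def auth_code(device_key: bytes, amount: str, iban: str, challenge: bytes) -> str:
    """Illustrative PSD2-style dynamic linking: the code is bound to the
    displayed amount and payee IBAN, not just to a login session."""
    msg = b"|".join([amount.encode(), iban.encode(), challenge])
    return hmac.new(device_key, msg, hashlib.sha256).hexdigest()[:8]

key = b"per-device-secret"        # hypothetical per-device key
chal = b"bank-challenge-123"      # hypothetical per-transaction challenge

code = auth_code(key, "250.00", "DE89370400440532013000", chal)

# The bank recomputes the code over the transaction it actually received;
# if a man-in-the-middle swapped the IBAN, the codes no longer match.
assert code == auth_code(key, "250.00", "DE89370400440532013000", chal)
assert code != auth_code(key, "250.00", "FR7630006000011234567890189", chal)
```

A plain FIDO2 assertion, by contrast, only proves possession of the key for a login challenge; nothing in it commits to an amount or a recipient.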
It’s the reason I own a device that looks like this [0]. Without it, I wouldn’t be able to transfer money at all due to the lack of banking apps that work on Linux phones.
In this case, wouldn't FIDO2 only be used to log into the bank's website, not to sign individual transactions? (Corresponding to Mode2 in the Wikipedia article you provided?) Would this "mode2" only usage be allowed under European law, given that there is no transaction involving an amount of money taking place?
Banks used to give us those RSA tokens for securely logging in to the web UI, but then discovered they could cut costs, since everyone has a smartphone from one of two ecosystems.
No doubt. At least with FIDO2, people can provide their own hardware key, and get real security rather than a rolling number generated by a compromised algorithm [1].
Your point seems to be "Some Jolla phones can run some Android apps," while GP's issue is that "It's not true that all Jolla phones can run all Android apps."
All true, but it's a circular argument: these are unhealthy products because they're _designed_ that way. That design is directed from the top, nowhere more so than at Facebook/Instagram. Zuckerberg retains a controlling interest in Meta, so he can't use the excuse of other public firms where CEOs throw up their hands and say "yeah, but we need to deliver shareholder return - it's out of my hands". Zuckerberg could choose differently. As GP notes, he hasn't - he's gone consistently hard the other way.
> It’s clear, people want to be addicted to social media
I'd say people are susceptible to addiction rather than wanting it. Suppliers of any addictive product - whether it's tobacco, class A drugs, alcohol, gambling or social media - know that. Going too hard the other way into full prohibition is impractical because it starts to impinge on civil liberties: as a capable adult, why shouldn't I be able to smoke/drink/doomscroll Instagram if I want?
That's why it's difficult; neither extreme liberty nor extreme prohibition is the answer. It's a grey area, as GP notes. The trouble is it creates opportunities for people like Zuckerberg to exploit the middle ground and amass huge personal wealth paid for, in part, by the health detriment of those unable to self-regulate the addiction.
I must just lack empathy then. I feel it's Zuck's role to build the best wine, whisky, casino game, meth, cigar, etc. he can. It's the consumer's job to use it responsibly. They won't, so that's when it's time for regulation - which is probably now/soon. And yes, he gets to amass wealth during this time. I wouldn't say it's all been exploitative though. I'd say many people have healthy addictions. Just like the average American who drinks 10 alcoholic beverages a week, every single week. They're adults, they aren't alcoholics, they just need a drink every day. They're not being exploited; it's a vice of sorts. But it's an opt-in vice.
I think that yes, it's a lack of empathy stemming from the belief that everything can ultimately be distilled into personal responsibility.
In reality we are not so much in control; our psyche is easily manipulated by nudges, and design that leaves you on the cusp of a dopamine hit is much more addictive. Developing a vice on your own is different from being manipulated into developing one. Morality should come into play in the latter case; otherwise it's a free-for-all to discover the most effective ways to manipulate you into behaviours that are unhealthy but profitable.
> Just like the average American who drinks 10 alcoholic beverages a week, every single week. They’re adults, they aren’t alcoholic, they just need a drink
Drinking every day and "needing" a drink look like good indications of alcoholism to me.
It's how people unwind from a stressful day, just like doomscrolling. But most of these people aren't considered alcoholics by society; it's fairly normal behavior until it affects other parts of your life. From what I can tell, anyway. I also don't drink much, so I don't get it when people need to have a glass of wine or whatever after a completely normal day.
I have children and don't work in tech. I was able to self-moderate and keep my kids away from it. It's simply not that hard to see it for what it is, and it never has been. It's bad. Glad people are finally seeing what's obvious.
Honestly, it takes some minimal effort to tell the kid no and give them other outlets for their boredom. I never did tablets or small screens at all. Parenting today, and for the last decade or so, instead puts infants in front of tablets. It's insane. All media is then altered to steal attention and maximize engagement. It's to be expected. Zuck is basically Cocomelon: garbage that people love to eat.
Oh, and we did ban Cocomelon. My kids watch plenty of TV and I'm not going to rave about it; it's crappy kid TV, though we try to push some educational stuff too. But it was obvious that when Cocomelon was on, the kids' eyes glazed over: they forgot to blink, had no idea what was going on around them, and just looked like zombies staring at a TV/screen. Let's be honest though, that's what most parents like about it.
If you distill everybody involved down to a single function, this makes sense. But that's not all we are. It is not a physical law that Zuckerberg must make his products as addictive and harmful as they can be; he can choose to be more responsible with his influence. Consumers cannot always simply choose not to be addicted; when you grow up with these things and people and companies are constantly pushing you to try them, it's very hard to avoid.
I think that's the point. The underpinning exhortation is to "think about design" where the outcome is something that successfully addresses users' needs, is feasible to create, and is commercially viable.
"Design Thinking" as a brand has codified that in several ways - not all successful. But the underlying principle is sound: there are plenty of examples of products/services that failed to address one or more of the 3 dimensions.
I found this quote from the linked article [0] more helpful:
> Design thinking can be described as a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.
I’m going to presume good faith rather than trolling. Some questions for you:
1. Coding assistants have emerged as one of the primary commercial opportunities for AI models. As GP pointed out, LWN is the primary discussion forum for kernel development. If you were gathering training data for a model, and coding assistance is one of your goals, and you know of a primary source of open source development expertise, would you:
(a) ignore it because it’s in a quaint old format, or
(b) slurp up as much as you can?
2. If you’d previously slurped it up, and are now collating data for a new training run, and you know it’s an active mailing list that will have new content since you last crawled it, would you:
(a) carefully and respectfully leave it be, because you still get benefit from the previous content even though there’s now more and it’s up to date, or
(b) hoover up every last drop because anything you can do to get an edge over your competitors means you get your brief moment of glory in the benchmarks when you release?
I train coding models with RLVR because that's what works. There's ~0.000x good signal in mailing lists that isn't in old mailing lists. (and, since I can't reply to the other person, I mean old as in established, it is in no way a dig to lwn).
You seem to be missing my point. There are zero incentives for AI training companies to behave like this. All that data is already in the common crawls that every lab uses. This is likely from other sources. Yet they always blame big bad AI...
Old scrapes can't have data about new things though; you have to continuously re-scan to avoid being stuck with ancient info.
Some scrapers might skip already-scraped sources, but it's easy to imagine that some/many just wouldn't bother (you don't know whether it's been updated until you've checked, after all). And to some extent you do have to re-scrape, if only to find links to the new stuff.
> it's really just a spec that gets turned into the thing we actually run. It's just that the building process is fully automated. What we do when we create software is creating a specification in source code form.
Agree. My favourite description of software development is specification and translation - done iteratively.
Today, there are two primary phases:
1. Specification by a non-developer and the translation of that into code. The former is led by BAs/PMs etc. and the output is feature specs/user stories/acceptance tests etc. The latter is done by developers: they translate the specs into code.
2. The resulting code is also, as you say, a spec. It gets translated into something the machine can run. This is automated by a compiler/interpreter (perhaps in multiple steps, e.g. when a VM is involved).
There have been several attempts over the years to automate the first step. COBOL was probably the first; since then we've had 4GLs, CASE tools, UML among others. They were all trying to close the gap: to take phase 1 specification closer to what non-developers can write - with the result automatically translated to working code.
Spec-driven development is another attempt at this. The translator (LLM) is quite different to previous efforts because it's non-deterministic. That brings some challenges but also offers opportunities to use input language that isn't constrained to be interpretable by conventional means (parsers implementing formal grammars).
We're in the early days of spec-driven development. It may fail like its predecessors or it may not. But to first order, there's nothing sacrosanct about using 3rd-generation languages to represent the specification. The pivotal challenge is whether the starting specification can be reliably translated into working software.
Related: Michael Kennedy moved TalkPython [0] hosting to Hetzner in 2024. There's a blog post about the move here [1] and a follow-up after Hetzner changed some pricing policy [2].
He's also just released a book on hosting production-scale Python apps [3]. I haven't read it yet, but I'd assume the topic gets covered there in more detail too.
Yes, though perhaps stating the obvious: it depends on what they do with it.
Ladybird currently has 8 full-time devs [1] and is making impressive progress on delivering a browser from scratch. Wise investment in small, focused, capable teams can go a long way if they're not chasing VC-driven Unicorn status (or in stasis as a Google anti-trust diversion).
That's not challenging your point though: in the face of competing budgets at US tech giants, EUR17Mn still barely registers above noise level. Nevertheless, it's a start. We can only hope it grows and doesn't get shut down by some political lobbying by the aforementioned US behemoths. A modest budget might actually help there - not yet big enough to cause concern to incumbents.
What is the point of a new browser engine? What will be the advantage over WebKit/Blink/Gecko?
Sure it "isn't monetized", but nothing stops you from making non-monetized forks of Chromium or Firefox. And nothing stops a company from forking Ladybird and monetizing it, either.
Hopefully a new, independent voice on questions of platform features, development and future direction.
Google could do pretty much anything with the web platform if it were not for Apple and iOS. And that's a thin safeguard, because if the two align on something, it will get into the platform.
Firefox unfortunately seems to be infected by Silicon Valley people that seem to be quite obedient to the status quo.
Ladybird is at least developed by people from all around the world.
Kind of. The intent is good and the wording disallows some of the dark patterns. The challenge is that it stands square in the path of the adtech surveillance behemoths. That we ended up with the cesspit of cookie banners is a result of an (almost) immovable object meeting an (almost) irresistible force. There was simply no way that Google, Facebook et al were ever going to comply with the intent of the law: it's their business not to.
The only way we might have got a better outcome was for the EU to respond quickly and say "nope, cookie banners aren't compliant with the law". That would have been incredibly difficult to do in practice. You can bet your Bay Area mortgage that Big Tech will have had legions of smart lawyers poring over how to comply with the letter whilst completely ignoring the intent.
> The standard, developed in 1994, relies on voluntary compliance [0]
It was conceived in a world with an expectation of collectively respectful behaviour: specifically that search crawlers could swamp "average Joe's" site but shouldn't.
We're in a different world now but companies still have a choice. Some do still respect it... and then there's Meta, OpenAI and such. Communities only work when people are willing to respect community rules, not have compliance imposed on them.
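Python's standard library makes the voluntary nature of the scheme visible: the *crawler itself* parses robots.txt and decides whether to honour the answer (the bot name and URLs below are illustrative).

```python
from urllib import robotparser

# A site declaring "everyone except GPTBot may crawl".
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
    "User-agent: *",
    "Allow: /",
])

# The answer is advisory: nothing in the protocol enforces it.
print(rp.can_fetch("GPTBot", "https://example.com/archive"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/archive"))  # True
```

A crawler that never calls `can_fetch` (or ignores its result) suffers no technical consequence at all, which is the whole point of the comment above.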
It then becomes an arms race: a reasonable response from average Joe is "well, OK, I'll allow anyone but [Meta|OpenAI|...] to access my site". Fine in theory, difficult in practice:
1. Block IP addresses for the offending bots --> bots run from obfuscated addresses
2. Block the bot user agent --> bots lie about UA.
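Option 2 is a one-liner to implement, which is exactly why it only stops honest bots. A minimal WSGI middleware sketch (the UA substrings are ones these companies have published for their crawlers, but as noted above, a bot can send any UA it likes):

```python
# Substrings of crawler User-Agents to refuse; easily defeated by a
# bot that simply lies about its UA.
BLOCKED_UA_SUBSTRINGS = ("GPTBot", "ClaudeBot", "CCBot", "meta-externalagent")

def block_bots(app):
    """Wrap a WSGI app: return 403 for requests whose User-Agent
    matches a blocked crawler, pass everything else through."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in BLOCKED_UA_SUBSTRINGS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware
```

IP-range blocking (option 1) at least can't be spoofed in the same way, but as noted, bots can move to fresh addresses.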
Thanks for the info. However, people seem to think that robots.txt will protect them, while it was created for another world, as you nicely stated. I guess Nepenthes-like tools will become more common in the future, now that the tragedy of the commons has entered the digital domain.