The unstated reason they are doing this is that governments want to be able to identify everyone online, everywhere it matters, at all times. They want to strip anonymity from computing.
Apple and Google can now credibly claim to governments that they have nearly ubiquitous computing platforms guaranteed not to run any software that is unapproved or antithetical to the goals of the authorities. This makes the device safe for storing things like government IDs. OSes and browsers will be required to present these IDs, or at first just attest to them.
Before posting online, renting a server, or using an app, you will have to identify yourself using your phone or a similarly locked-down PC (e.g., a Mac).
The introduction comes, as always, under the guise of protecting the children. In reality they are removing your rights to privacy and free speech.
For now I would be happy if it just explored the problem space, identified the choices to be made, and filtered them down to the non-obvious and/or more opinionated ones. Bundle these together, ask me all at once, and then it's off to the races.
IMHO, LLMs are better at Python and SQL than Haskell because Python and SQL syntax mirrors more aspects of human language, whereas Haskell syntax reads more like a math equation. These are Large _Language_ Models, so naturally intelligence learned from non-code sources transfers better to more human-like programming languages. Math equations assume the reader has context, not included in what is written down, for what the symbols mean.
They are heavily post-trained on code and math these days. I don't think we can infer that much about their behavior from just the pre-training dataset anymore.
SQL: somewhat; Python: no. LLMs that write code only work well with proper guardrails. Dynamic languages like Python lack essential guardrails.
Once the project can't fit in the model's usable context window (~150k tokens, even for 1M-token models), you need the code fighting back and leaving breadcrumbs for the model to follow.
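A toy illustration of the guardrails point (the function and values here are made up for the example): in Python, a type mismatch only surfaces when the offending line actually executes at runtime, whereas a statically typed language would reject the same call at compile time, before the model's mistake ever ships.

```python
def add_tax(price, rate):
    """Compute price plus tax; nothing constrains the argument types."""
    return price + price * rate

# The bad call below passes a string where a number is expected.
# No tool in the language itself stops it; the error appears only
# when this exact code path runs.
caught = False
try:
    add_tax("100", 0.2)  # "100" * 0.2 raises TypeError at runtime
except TypeError:
    caught = True

print(caught)
```

With correct arguments the same function works fine (`add_tax(100, 0.2)` gives `120.0`), which is exactly why such bugs can hide until an untested path is exercised.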
I suspect you're probably right, but just for completeness, one could also make the argument that LLMs are better at writing Haskell because they are overfit to natural language, and Haskell would avoid a lot of the overfit spaces and thus generalize better. In other words, less baggage.
I would guess they’re better at Python and SQL than Haskell because the available training data for Python and SQL is orders of magnitude more than for Haskell.
I disagree. I have a successful software product that I vibe coded using Claude Code starting last June. It does something novel and useful that wasn't yet offered on the App Store or by any app on Android.
I am not going to say what it is because all of the AI haters will immediately flock to leave it bad reviews and overwhelm my support systems with bad faith requests (something that has already happened).
I've been writing software for 25 years; I know what I am doing. Every bug I shipped was my fault, either because I didn't test well enough or because I did not possess enough platform knowledge to know the right way to do things myself. "Unknown unknowns."
But I have also learned better ways to do things and fixed every bug using AI tools. I don't read the code. I may scan it to gain context and then tweak a single value myself, but beyond that I don't write or read code anymore.
It's not a magical few-shot-prompt-then-reap-profits machine. I just feel like a solopreneur ditch digger who got a lease on a new CAT excavator. I can get work done faster, but I can also do damage faster if I am not careful.
We need something like SETI@home/Folding@home but for crawling and archiving the web or maybe something as simple as a browser extension that can (with permission) archive pages you view.
This exists, although not in the traditional BOINC space: it's Archiveteam^1. I run two of their warrior^2 instances in my home k3s cluster via the Docker images. One of them is set to the "Team's choice" project, where it spends most of its time downloading Telegram chats. However, when they need the firepower for sites at imminent risk of closure, it will switch itself to those. The other one is set to their URL shortener project, "Terror of Tiny Town"^3.
Their big requirement is that you not do any DNS filtering or block access to anything it wants, so I've got the pod DNS pointed at the unfiltered Quad9 endpoint, with rules in my router to let the machine it's running on bypass my Pi-hole enforcement and outside-DNS blocks.
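For anyone curious what running one of these looks like, here is a minimal deployment sketch. The image path and port are from memory of the ArchiveTeam wiki and may have changed, so treat them as assumptions and check the current docs first:

```shell
# Launch an ArchiveTeam Warrior container; project selection
# (e.g. "ArchiveTeam's choice") is done in the web UI it serves
# on port 8001 after startup.
docker run --detach \
  --name archiveteam-warrior \
  --publish 8001:8001 \
  --restart unless-stopped \
  atdr.meo.st/archiveteam/warrior-dockerfile
```

In a k3s setup like the one described above, the equivalent would be a Deployment wrapping this image, with `dnsPolicy`/`dnsConfig` overridden so the pod resolves through an unfiltered resolver rather than the cluster default.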
In the US at least, there is no expectation of privacy in public. Why should these websites that are public-facing get an exemption from that? Serving up content to the public should imply archivability.
Sometimes it feels like AI-use concerns are a guise to diminish the public record, while on the other hand services like Ring or Flock are archiving the public forever.
Why? It's an excellent recruiting tool. I used to read it as a kid (along with every other paper or digital encyclopedia I could get my hands on), and it certainly made me interested in the CIA.
Because intelligence agencies generally have a vested interest in spreading subtle propaganda, such as by distorting facts.
Now, I have yet to see any cases of the CIA doing this to the World Factbook, since that would tank its credibility, but I also don't browse the Factbook too often.
You are looking at the trees, and missing the forest. The subtle propaganda that the Factbook exists to spread is “the CIA is a neutral and trustworthy gatherer and purveyor of facts”.
I think that’s a secondary or even tertiary goal. The primary goal is to provide a public service to public and private parties who want to become better informed, to show the American people that their tax dollars are at work, and to reduce the risk of having their funding cut.
The part before the “and” is the how of the propaganda I described, the part after the “and” is one of the outcomes the propaganda is intended to influence; neither is an alternative to the propaganda function.
I think the problem is people are acting like propaganda is inherently bad, so this subconsciously comes across as “the CIA is problematic because they have a source of factlettes for people to peruse”.
They have multiple competing interests. One of their interests is telling the truth to their local military and politicians - getting caught in a lie to their side is the worst that could happen to them.
The World Factbook was mostly things that the military or politicians might care about the truth of, and data they need anyway. Mostly what is there were things where there wouldn't be much value in spreading lies, and whatever value there might be is outweighed by the fact that everything can be fact-checked (with a lot of work), so lies are likely to be caught.
Not saying they are perfect, but this isn't a place where I would expect they would see distorting facts help them.
> One of their interests is telling the truth to their local military and politicians - getting caught in a lie to their side is the worst that could happen to them.
It's definitely not the worst that can happen. It happens fairly often - google "CIA lying to congress". Getting audited is the worst thing that can happen to the CIA. I.e., the U.S. Government Accountability Office (GAO) last actively audited the Central Intelligence Agency in the early 1960s, discontinuing such work around 1962.
I remember a few amusing examples which weren't strictly inaccurate but were pretty blatant official lines. For instance, the US uniquely got to stress a "strong democratic tradition" as its political system, whereas everywhere else in the Western world was just "parliamentary democracy" or "constitutional monarchy", and at least the Cold War era editions had a "Communists" line item which purported to show how few people in democratic societies were members of Communist parties...
The degradation does not need to be in the inference itself; it can be in how often inference is used.
It is closed source, but the algorithms that decide what Claude Code does, and when, could behave differently when API responses are slower. Maybe it does fewer investigatory greps, or performs fewer tasks, to get to "an" answer faster and with less load.
Unless it’s icy or rainy or you’re on a dirty road and the cameras can’t see anything. Enjoy getting out to wash your camera lenses off every quarter mile.
Apple's locked-down ecosystem is enabling the rollout of Digital ID, which will eventually be required for Internet access and by age-verification laws. This is why Google is locking down their ecosystem now too.