The question is, when it screws up, who gets blamed, and who pays. If it's the customer, and you can afford to lose a small fraction of customers, it may be worth it. It's just another form of crappy customer service. If it's internal, and it's all output, no input, and the internal organization doesn't really need that info that badly, that might work out.
But give it the authority to do something and there's real trouble.
It's not oscillating at 50 MHz. Look at the waveform, with the big spike in the middle. That's a spike at some lower frequency, wider than the screen, followed by ringing. You'd need to zoom out the time base some more to see the period of the big spikes. It's no higher than 4 MHz (the screen is 12 units wide) and possibly much lower. (Assuming that M:20ns on the display means 20 ns/grid division. The manual is a bit hazy on that part of the UI.)[1]
The power regulator IC mentioned is normally run at 500 kHz. There's a reasonable chance that this is the power regulator spike not being damped out. Easy enough to check with a scope handy.
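The bound above is just arithmetic; a quick sketch, assuming "M:20ns" really does mean 20 ns per division and a 12-division-wide screen (both guesses from the display):

```python
# Rough bound on the spike repetition rate from the scope settings.
# Assumes "M:20ns" means 20 ns per horizontal division and that the
# screen is 12 divisions wide; both are assumptions about this scope's UI.

NS_PER_DIV = 20e-9       # assumed time base: 20 ns per division
DIVS_ON_SCREEN = 12      # assumed screen width in divisions

min_period = NS_PER_DIV * DIVS_ON_SCREEN   # spikes are farther apart than one screen
max_freq = 1.0 / min_period

print(f"period >= {min_period * 1e9:.0f} ns, so f <= {max_freq / 1e6:.2f} MHz")
```

That gives a period of at least 240 ns, i.e. a fundamental no higher than about 4.2 MHz, consistent with the 4 MHz figure above.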
It seems to be drawing more on pre-1900 info than on anything close to 1930. It doesn't know about the Great Depression (1929-WWII). It knows about WWI if you ask it specifically, but talks about European politics as if it's 1900 or so.
On technology, it knows who Edison is, at roughly the Wikipedia level, but credits him with a 125MPH car. About a dial telephone, it is confident and totally confused. It has the traction voltage for the London Underground right. But then it goes on with "Thus, if the current be strong enough to force its way through a resistance of 100 ohms, it is said to have a pressure of 100 volts; and, if it can overcome 1,000 ohms, its pressure is 1,000 volts." Which is totally wrong.
There's a general pattern. The first sentence or two has info you might get from Google. Then it riffs on that, drifting off into plausible nonsense.
Don't ask this thing questions to which you do not know the answer. You will pollute your brain.
Such an interesting perspective; it never crossed my mind that a brain could be polluted! My approach has always been to fill it with as wide an array of information as possible, the more different from existing information the better.
What are some other things that you think "pollutes your brain"?
Your information diet. Social media. Gossipy and negative people. Mulling over old failures/regrets/slights etc. The mind is easily pulled along by negativity and outrage... as can be observed in our current global psychological state.
All those are fine, as long as you're able to process it in a healthy way after. I guess personally I focused more on bettering that processing, as sometimes you don't get to control what information you get served, so at least it works in all cases.
Idk, I find that carefully tending the garden of the mind, sowing the seeds I want to harvest later, eradicating the weeds with prejudice, and in general not entertaining things which are not useful to my purposes is, for me, a highly beneficial practice.
This does not mean to ignore things that are unpleasant, but rather to not allow things that do not benefit your diverse goals to occupy your productive potential, focusing instead on things that inform your path, actionable and relevant information, tools rather than distractions.
> in general not entertaining things which are not useful to my purposes is
Yeah, I think I do more or less the same as you describe, except my barrier to figuring out what is "not useful to my purposes" requires it to first exist in my mind for a while before I can discard it as not applicable, as sometimes seemingly random things in one context somehow relate to completely different things.
I've chosen to do stuff sometimes that made no sense besides "It's fun but a waste of time" and it ended up leading me to realizations and experiences I wouldn't have had otherwise. But if I focused too much on avoiding things and optimizing "what I let in", I'd never be open enough to learn what I didn't know I could learn from it.
Don’t be so optimistic about your ability to “process information healthily”. You are more of a slave to your instincts than you think and can’t always know whether you’re actually doing a good job at this; literally, it’s not possible to faithfully introspect on this.
Considering I'm pretty much as content as I could be in life, and I know others who live their lives pretty much opposite from how I do and are also content, I think there is room for both types of people to be happy and fare OK :)
> What are some other things that you think "pollutes your brain"?
Horror movies. They go straight to the amygdala, but there are no vampires or zombies to be afraid of. The nightmare fuel of my childhood prepared me wrong.
Not who you asked, but Neil Postman's "Amusing Ourselves to Death" is an excellent book about polluting your brain.
As for my personal experience, internet comment sections will pollute one's brain.
Filling your brain with reasonably reliable information is good, but filling it with people online just saying things isn't.
For example, when 30 reddit comments all repeat the same "fact" (for which their source is other reddit comments), it can subtly work its way into your subconscious as something you know is true but can't remember where you first heard it, which is only one step away from seeming like "common knowledge."
Now imagine a similar effect with a politically charged news story instead of some random fun fact. Now imagine all the comments are actually just AI run by propagandists with the specific intention of making you believe things that aren't true.
One way I've tried to avoid the worst effects is by being very careful to remember my source for anything I know. I never say "It turns out xyz," I only say "according to abc, xyz." It's probably not enough, I think it might be time to just get off internet forums entirely.
> it can subtly work its way into your subconscious as something you know is true
I dunno, I know this is something some people struggle with, but I'm not sure how I could personally end up here. You can repeat something as many times as you want; it doesn't make it true. If anything, seeing people repeat the same "fact" like that would probably trigger the reverse in my brain, almost automatically going out of my way to disprove it while reading it.
Maybe it's a matter of being connected to the internet early in my life and essentially making "Don't trust anything you read on the internet" the most important rule in processing whatever you read.
Sounds like you always knew something it took me a decade to realize.
> seeing people repeat the same "fact" like that would probably trigger the reverse in my brain, almost automatically going out of my way to disprove it while reading it.
I think that's a very fundamental difference between you and me. I'm too lazy to fact check most of what I read.
One day I decided I would never run my mouth about something unless I felt I could write a five paragraph essay about it, and now I don't run my mouth very much because apparently there aren't a lot of things I'm willing to research even that much.
Still, I highly recommend Amusing Ourselves to Death. It has more and better insights into stuff like this, and I seriously believe everyone should read it.
Mixing metaphors, there is signal and noise. You can keep asking for noise, but the suggestion is to not train your neural networks with it as it will impair your inferencing. That said, we all have our own cost and reward functions...
Assuming brains work like computers, maybe yeah, that'd make sense :) You also won't know what's a signal vs noise until you've read and tried to understand it, and at that point you've already read it. Besides, something could be "noise" at the point you read it, but be a "signal" in a completely different context and/or time.
But that's just "learning", doesn't matter if what you learn is totally wrong or totally right. Some things we learn are right when we learn them, but wrong at a later point. And then it's more learning once you learn that it's right or wrong, or maybe it's a bit wrong in that case, but mostly wrong in another, or it oscillates between wrong/right depending on year, location or even mood. There are no universal truths anyways, might as well just roll with it :)
I did that a long time ago, moderating forum categories like pedophilia, drug usage, suicide ideation and a bunch of others. Even ended up moderating a thread where a forum user committed suicide while live streaming it to forum members and the public, made big news at the time.
Still don't think my mind is polluted from it, although I've certainly seen, read and heard a lot of "sick" stuff through my years on the internet.
No, but people were likely aware of the name just some years later:
> The term "The Great Depression" is most frequently attributed to British economist Lionel Robbins, whose 1934 book The Great Depression is credited with formalizing the phrase, though Hoover is widely credited with popularizing the term, informally referring to the downturn as a depression, with such uses as "Economic depression cannot be cured by legislative action or executive pronouncement" (December 1930, Message to Congress), and "I need not recount to you that the world is passing through a great depression" (1931). - https://en.wikipedia.org/wiki/Great_Depression#Naming
> Are you already aware of terms that will only be coined in 2027? But 2027 is so close, why shouldn't you already know?
I think Wikipedia's information about the naming is likely only what could be sourced. Also, the 1934 date is about "formalization", while the 1930/1931 uses are official messages that copies still exist of. It wouldn't be a stretch to assume the term could have been used in more informal contexts some years before that.
It'd be trivial to check: if the dataset is known, just grep for "Depression" and "Great Depression" and see what comes up. I still don't think it's impossible the name was in use before someone decided to write to Congress about it.
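A sketch of that check, assuming a hypothetical corpus laid out as one plain-text file per document with the date leading the filename (the real dataset's format is unknown):

```python
# Scan a dated corpus for the earliest occurrences of a phrase.
# The directory layout (one .txt per document, named "YYYY-MM-DD_source.txt")
# is an assumption for illustration; adapt to however the dataset is stored.

import re
from pathlib import Path

def earliest_mentions(corpus_dir, phrase, limit=5):
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    hits = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):  # name sort == date sort
        if pattern.search(path.read_text(errors="ignore")):
            date = path.name.split("_")[0]
            hits.append((date, path.name))
    return hits[:limit]
```

Running it for "Great Depression" and plain "depression" would show whether the phrase predates the 1930 Hoover messages in that particular corpus.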
I'd argue that even by early 1930, people probably wouldn't have seen it as significantly different from other short market downturns. It's only with the benefit of hindsight that we can see its impact was long lasting and worthy of being given a name.
"During the period from 1924 to 1929, there was a general rise in stock exchange values, the average level at the end of 1929 being 18 per cent. above that of 1924. The setback in 1930 has carried the average down to 8 per cent. above the 1924 level, and the decline has been accentuated by the break in Wall Street. The present situation is uncertain, but hopes are entertained of a recovery."
It also knows about Smoot-Hawley, predicting that it will "stimulate home production and expand employment" - and when pressed for potential downsides says only that "consumer prices may rise a little more than otherwise".
We're used to thinking of the inter-war years as a single period, but there were actually two distinct phases: rising optimism during the 1920s, followed by economic retrenchment and a turn towards authoritarianism in the 1930s. The dividing line is fuzzy, somewhere between Kellogg-Briand in 1928 and the first 1931 Sterling crisis.
The pre-1931 cutoff date for this model is probably as close to the end of the optimistic age as it's reasonable to get. I'd love to see a 1936 variant for comparison!
Interesting questions (and responses). Nota bene: the 1927 Bugatti Type 35 had a top speed of ca. 125 mph, so there were cars that fast pre-1930. I have no idea if Edison made, repped, or had anything to do with one such car, though.
Register your domain name as a trademark. It costs a few hundred dollars and can be done online. This gives you stronger standing with ICANN, the registrar, and the courts against anybody who illicitly acquires the domain, including typosquatters. You can send intimidating lawyer letters, and quickly escalate from the registrar's support department to lawyer-to-lawyer phone calls.
"You get what you pay for" has been true ever since capitalism was invented. Whether it be the "get a registered trademark" route or the "pay more for a competent domain registrar" route, you pay either way.
Registering a trademark won't prevent screwups such as the original posting here. What it will do is help you apply pain to the registrar until they fix the problem.
Personally (not our official position), I would never try to bring a trademark into this type of dispute. Once you make a trademark claim the domain gets locked to prevent any further changes and you get directed to file a UDRP. We will then act based on the ruling, which could take months.
Same for trying to send "intimidating lawyer letters" (or having your attorney contact us at all). Outside of a few narrow cases, nothing obligates us to spend money on legal resources to respond. But once you demand specific treatment under the law, we have to direct you to a court holding jurisdiction over us to rule in your favor.
> We will then act based on the ruling, which could take months.
That's fine, you're paying for all lost business revenue during that time since it was obviously caused by your gross negligence (liability for which cannot be waived). Hmm. Might be in your interest to undo the mistake quickly?
Somebody using this name claimed my site is fraudulent. Now, please show your ignorance regarding trademarks and ask how he could use the name if I have an old trademark....
A trademark means nobody but you has the right to use the name in commerce. It doesn't mean Facebook is forced to give you a site (whatever that means).
What? Where can you register online? In Canada you basically have to hire a lawyer and the wait time for issuance is crazy. The backlog peaked at a 4 year wait after Covid.
I’d love to be able to register a trademark online.
So it has to be a US trademark? Even for businesses outside of the US? That seems a bit of a racket. Unless it depends on the TLD and you're saying it because it's a .com?
Military exercises where the commanders and staff are real but the troops are simulated are called command post exercises.[1] The US military's approach seems to be less like gaming and more like doing it for real. Five day 24-hour training exercises, using the same people and gear the real command post uses, with 1:1 real time. Somewhere in the back are umpires using computers to track what's happening. The objective is not so much to learn tactics as to see who and what breaks. Screwing up can set back real-world careers.
There are people pushing for more paper war-gaming, but they're in the minority.[2]
"Train like you fight" is an Army mantra. But the U.S. Army War College is trying.[3]
There's a lot of heavy thinking going on around how to defend Taiwan.
I think the biggest problem to simulate is the fog of war and ambiguity of what's going on.
When you're staring at a paper map with a bunch of units scattered about, with tangible values assigned to things, it's "easy" to get a solid grasp of the essentially static situation and mull on it.
The "Real World" is not so clear.
If you ever wanted to bump your heart rate, try playing one of the old time Air Traffic Control games where you have to juggle the planes flying in your air space in real time. They can get busy, and things can start falling apart. And this was where you had perfect information, and perfect command.
However, one of the most interesting aspects that one game did was that when you sent a command to a plane, they had a chance to a) ignore the command, or b) do it wrong. When that happens, the cognitive load just spikes.
Similarly with these exercises. You have real people, interpreting (perhaps wrongly) real commands, in real time, in a fuzzy information environment, against others who may not necessarily be playing by the rules. There are stories of opposing forces swapping uniforms and insignia to cause confusion, perhaps much like the Germans did during the Battle of the Bulge.
Very hard to replicate those conditions on paper (or in a computer).
> Very hard to replicate those conditions on paper (or in a computer).
I wonder if it is hard to replicate on computer, or in what ways it is hard.
I expect that modern military commanders, at least those in command posts, interface with the rest of the military mostly via computer. Theoretically the computer could be made to output the same things it would in reality. In a way, it makes simulation easier than pre-computer.
I lack expertise on the nature of the errors - the kind, magnitude, etc. - that commanders see, those that constitute the 'fog', though I could imagine that is well-studied. Could those be simulated automatically? Or, if not automatically, could they be scripted efficiently enough to be practical?
Someone showed me an old text-based computer wargame (for entertainment, not for militaries). I forget the name, but managing a map was up to you and the only information you got was a flood of one-line intelligence reports; you couldn't slow them down and often they were vague, conflicting, or inaccurate. For example, it might just say 'armor seen on X road, civilians fleeing south' - going which direction? how many of what kind? whose armor?
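That degraded-report mechanic is easy to mimic; here's a toy sketch of turning a ground-truth sighting into that kind of vague one-liner (every format string and probability is invented for illustration):

```python
# Degrade a precise sighting into a vague, possibly wrong one-line report,
# like the intelligence feed in the wargame described above. The report
# formats and error rates here are made up for illustration.

import random

def degrade(unit, strength, location, rng):
    est = max(1, round(strength * rng.uniform(0.3, 1.8)))   # miscounted strength
    report = f"{unit} seen near {location}, est. strength {est}"
    if rng.random() < 0.25:
        report = f"unidentified vehicles near {location}"    # unit type lost in transit
    if rng.random() < 0.10:
        return None                                          # report never arrives
    return report

rng = random.Random(1)
for _ in range(3):
    print(degrade("armor", 40, "the X road", rng))
```

Feed a commander (human or AI) only the degraded stream and the cognitive load spikes the same way it did in those ATC games.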
The real-world training exercises will discover weaknesses that the paper ones won't detect (also, they are more fun - and soldiers probably need some sort of activity from time to time), but you can have a lot of paper exercises for the cost of one real-world exercise.
A large scale exercise like that nearly started WWIII back in 1983. The Soviets had become extremely worried about an American first strike and were watching for any signs that one might be imminent. Then a bunch of high-level NATO people did exactly what they would have done if nuclear war were breaking out. Thankfully the exercises ended before the Soviets decided they needed to carry out a preemptive strike. https://en.wikipedia.org/wiki/Able_Archer_83
I beg to differ, insofar as my own experience has been the exact opposite. I enjoy fixing other people's mistakes. And I especially enjoy outsmarting the LLMs. I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.
I think I might enjoy it for a little bit and then become very depressed at the idea that it will never end, a future of fixing things that should never have been broken in the first place and which won't stay fixed.
> I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.
I can do that too. Most programmers can.
That's because it requires less skill! Critiquing something is always easier than doing it.
I can literally keep an LLM fixing things forever by just saying things like "this is not scalable", "this is not maintainable", "this is not flexible", or "this is not robust", etc., ad nauseam.
That doesn't take skill at the level to actually write the software. For the market which is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation and not just, as many think, productivity gains.
They have no reason to double output, but they'd sure love to first halve the people employed, and then halve the salaries of those people (supply/demand + a glut of programmers in the market), and then halve salaries again because almost no skill necessary...
> That's because it requires less skill! Critiquing something is always easier than doing it.
No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.
How is that “no” and “the other way around”? The desire to rewrite comes from the ease with which one can critique existing code for being “too hard” to understand.
There is a very valid reason why the creator of Erlang said, back in the day, something along the lines of "you need to iteratively remake your software, improving it each time".
As your knowledge about a topic grows, your initial mistaken implementation may become more and more obvious, and it may even mean a full rewrite.
But yes, a person who instantly says "rewrite" before they understand the software is likely very inexperienced and has only worked on greenfield projects with few contributors (likely only themselves) before.
We humans cannot scan 100,000 articles looking for the golden nugget; AI data mining can do it and present the results in seconds. Obviously we need to verify the data.
A couple of decades ago, we didn't trust compilers; we wrote assembly by hand. Today presents the same barrier: some developers will explode in productivity while others will be left behind.
Not really. Any patterns got optimized and automated. If you're still seeing patterns, then you need to look harder, because they will be similar only superficially.
Who really truly enjoys that and doesn't see it as a chore?
I find the real way to review other people's code is to program with it and then I start seeing where the problems are much more clearly. I would do a review and spot nothing important then start working on my own follow-on change and immediately run into issues.
I usually don't mind, but tend to split reviews into two types. Either I understand the context and can quickly do an in depth review, or I have to take some time to actually learn about the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness.
I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.
> Who really truly enjoys that and doesn't see it as a chore?
This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.
Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.
Plenty of LLM-written code runs great until it doesn't, though we see this with human-written code too, so it's more about investing more time in the hopes of spotting problems before they become problems.
> Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.
Well, there you go. Letting AI write the tests is a mistake IMO. When I'm working with other people I write tests too and when I see their tests I know what they're missing out because I know the system and the existing tests. Sometimes I see the problem in their tests when I'm working on some of my own. If you absent yourself from that process then ....
Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.
I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"
btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.
The problem is the LLMs completely change the equation. Before LLMs, beyond very junior (needs serious coaching) levels, reviewing was typically faster than writing the code that was reviewed. With LLMs, writing code is orders of magnitude faster than reviewing it. We already see open source projects getting buried in LLM slop and you have to find the real human or at least carefully curated contributions among the slop.
I would not be surprised if many open source projects will outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.
But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love have a system where someone has to open an issue with a plan first and by approving you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others would create features that are essentially there to rein in Copilot etc.
Shallow geothermal works fine for heating. And you can use the ground as a heat sink. But if you want to generate power, you need to get down to where temperatures can boil water. That's deeper than most oil wells. Fervo Energy claims to have found 270C at 3350 meters well depth. That's progress.
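To put the depth claim in perspective, a back-of-envelope sketch; the ~25-30 °C/km range is a typical continental gradient, not a site measurement:

```python
# Depth needed to reach a target rock temperature at a given geothermal
# gradient. Typical continental crust runs roughly 25-30 C/km; the values
# here are rough averages, not measurements for any particular site.

def depth_for_temp_km(target_c, surface_c=15.0, gradient_c_per_km=27.5):
    return (target_c - surface_c) / gradient_c_per_km

print(f"{depth_for_temp_km(270):.1f} km to reach 270 C at an average gradient")
```

That works out to over 9 km at an average gradient, versus the 3.35 km claimed by Fervo, which is why site selection matters so much.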
"New Zealand has an abundant supply of geothermal energy because we are located on the boundary between two tectonic plates. ... Total geothermal electricity capacity in New Zealand stands at over 900 MW, making us the fifth largest generator of geothermal in the world. It has been estimated that there is sufficient geothermal resource for another 1,000 MW of electricity generation."
That's not all that much. That total would be about equal to the 75th largest nuclear plant in the world.
Good sites where high temperatures are near the surface are rare. California has a few, but no promising locations for more.
May not be much in world terms but here in NZ national demand maxes out at around 5.5GW so bringing another GW on stream would be quite handy. Most of the geothermal is a lot closer to Auckland* than our hydro is so so that would be another positive aspect.
* Auckland has 25% of the population so a corresponding amount of energy has to be pushed its way.
We don’t have many people. It gets worse, though: we burn coal and are looking to fund a gas terminal. We have abundant other ways of generating power and subsidise an aluminium smelter for some reason.
There are also places in the US with boiling water at the surface. I live near one of those places so always curious about geothermal. There's a spot near my house in a creek bed where snow always melts even in deep winter so apparently I have some potential heat source. Our well water is cold though.
You don't necessarily have to choose one or the other. The Blue lagoon in Iceland is a famous example. The water comes from a power plant nearby.
You're not required to site them that close either, because of how regional the conditions usually are. A couple miles plus or minus doesn't change things too much.
> But if you want to generate power, you need to get down to where temperatures can boil water.
Why is that the case? Can't you go down to where it's like 70-80 deg C and close the gap using heat pumps? Yes, you need to put some energy in, but I would expect that the whole process would still be energy-positive at some temperature that's lower than 100C?
Nope. To efficiently tap geothermal energy, you need to boil something, but not necessarily water. Isopentane, for example, boils at 28 °C at standard pressure, so they pressurize the secondary loop to raise the boiling point close to whatever the primary loop temperature is.
The idea that geothermal only works well at steam temperatures is outdated 20th-century thinking.
Yes, the efficiency is worse, but as is also the case for solar power you need to get used to not caring much about efficiency. It is nuclear energy where the primary side is provided free of charge. The Carnot efficiency is almost without relevance.
In geothermal there is still a lot of interest in efficiency and exploring different working fluids because binary systems now have efficiencies of 10-20%. That is why you see companies like Sage Geosystems working on developing / deploying supercritical CO2 turbines to try and boost practical power densities.
There are so many ways around this - for a start, you can use some other working fluid that boils at a lower temperature. Or you can choose a different thermodynamic cycle that doesn't involve phase change.
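The thermodynamic ceiling for the temperatures being discussed is easy to sketch; real binary-cycle plants reach maybe a third to half of these limits, consistent with the 10-20% efficiencies mentioned above:

```python
# Carnot limit: the maximum fraction of heat convertible to work between
# a hot source and a cold sink (temperatures in kelvin). Working fluid
# choice cannot beat this; it only determines how close a real cycle gets.

def carnot_limit(t_hot_c, t_cold_c=25.0):
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

for t in (80, 150, 270):
    print(f"{t:>3} C source: at most {carnot_limit(t):.1%}")
```

An 80 °C source tops out around 16%, while 270 °C allows about 45%, which is why the hotter, deeper resource is so much more attractive for power generation even before cycle losses.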
I think this looks interesting, but it's still very early stage. The “150 GW revolution” sounds more like theoretical potential, not something we will see soon in real deployment.
Main problems: drilling is still expensive, managing induced seismic activity is not trivial, permitting can take a long time, and you also need transmission infrastructure. It's also not yet proven that companies like Fervo can scale this in a reliable, low-cost way.
Oh, Fervo Energy again. They're trying to IPO, hence the hype.
Wikipedia's warning: This article reads like a press release or a news article and may be largely based on routine coverage. (February 2026)
This article may have been created or edited in return for undisclosed payments, a violation of Wikipedia's terms of use. It may require cleanup to comply with Wikipedia's content policies, particularly neutral point of view.
This isn’t really an evaluation of the company, just explaining how they had to use different financing approaches as they grew and derisked their technology (which makes sense).
Compared to some other new approaches for getting clean base load power, it seems like they’ve been pretty grounded and methodical.
They're way ahead of the microwave drilling people.
There's no reason why this shouldn't work. But they've been at it for 9 years, with considerable funding, and it doesn't really work yet. That's a concern.
> There's no reason why this shouldn't work. But they've been at it for 9 years, with considerable funding, and it doesn't really work yet. That's a concern.
It does work. They've had a pilot project producing 3 megawatts since 2023. But scaling takes a lot of time and money, particularly when it's something new and you have to go through a lot of operational learning.
Shale took something like 30 years to become a thing. 9 years is nothing in the energy space.
It does work technically; I think it's still an open question whether it can work economically. There are issues of commercially viable flow rates and thermal decline rates that are harder physical limits you run up against, and the pilot design doesn't address them. In human timescales it's more like heat mining than renewable heat, due to the thermal depletion rate vs. the replenishment rate. These systems have a targeted lifetime of ~20-30 years, and net power will decline over that timespan.
Geothermal has had the same problem for its entire history. That problem is that the water being heated goes through the ground (not in a pipe) to "gather" more energy. But this means that when the water comes back up, it carries a lot of weird salts (and other things). Those salts cause corrosion, lots and lots of corrosion, far more than even a maritime environment. So the plant needs to be shut down a lot of the time for repairs, and that's what makes it uneconomical. Also, the salts often contain things that require special handling, which also increases costs.
PS This is why geothermal works in Iceland where there is so much geothermal heat they can use pipes. In CA, they can't so it doesn't work there.
Fervo uses engineered reservoirs in granitic basement rock so this is less of an issue. Hot rock in a working fluid can still dissolve silicates out of the granite and lead to scaling / degradation of the flow rates through the reservoir and that is a risk but chemical anti scaling treatments are used to reduce this.
CA has the world's largest geothermal power complex in The Geysers. That one field produces as much power as all the geothermal in Iceland, and there are others.
1. In the essay version of the Turing test, an examiner decides which of two essays was written by a human and which by a machine. Convince the examiner that you are the human.
If the examiner is any good, they'll realize that's no longer possible.
2. Is body language a language?
Definitional question. The usual vocabulary is too small for a general purpose language.
3. Are dreams more like movies or video games?
Video games. You have some agency.
4. ‘Only animals who are below civilization and the angels who are beyond it can be sincere’ (W.H. AUDEN). Discuss.
The brighter animals can deceive. Ever been fooled by a crow? Can't speak to angels; never met one.
5. Should the UN pass a declaration of rights extending beyond humans?
No. They have enough problems.
6. Invent a new punctuation mark!
We have enough emoji already.
7. Is the contemporary art market a form of tulip fever?
No, it's a form of status signalling. A lek.
8. When did the beautiful become the good?
Some time before Plato.
9. Should Job Centres offer opportunities for sex work?
It definitely is. AI isn't perfect yet. It still has well-known flaws like the inability to count letters or say "I don't know". Definitely harder in the form of an essay than a conversation though. Especially because there's a decent chance someone has written "the answer" on the web somewhere and AI can just regurgitate it.
These are flaws from 6-12 months ago. You might want to spend some time talking to Opus 4.7 or GPT 5.5. I can assure you that they can count letters just fine.
You’re right that AI isn't perfect, but it’s pretty good. Especially since December last year which was an inflection point in capability.
Those don't seem to be available for free so I'll take your word for it on the letter counting. They still can't say "I don't know" though can they? I think it would still be pretty easy to weed out AI in a Turing test with a competent examiner and a human that wants to prove they are human.
> They still can't say "I don't know" though can they?
Of course they can. Even older models can. They do better at this when given permission to say so, just like a very anxious student facing a maths exam question may need to be reminded "find the exact square root of 2 in the form a/b, or prove this isn't possible".
The easy part of spotting an LLM is how few people ever change the default settings; my personalisation includes telling it to say so when unsure or that it doesn't know, along with some of the other weaknesses of LLMs.
There are other patterns in LLMs, but the better the tools are wielded the harder it is to spot them.