They mention it uses MXFP4 quantization, which is a Blackwell capability, but it looks like this is also supported by the Ascend 950 series, according to marketing material.
I don't know why this keeps coming up. This has always been the least reliable way to know the cutoff date (and indeed, it may well have been trained on sites with comments like these!)
Just ask it about an event that happened shortly before Dec 1, 2025. Sporting event, preferably.
You can't, but it's pretty reproducible across the API, Codex, and other agents, so I just thought it was odd. Full text it gives:
Knowledge cutoff: 2024-06
Current date: 2026-04-24
You are an AI assistant accessed via an API.
# Desired oververbosity for the final answer (not analysis): 5
An oververbosity of 1 means the model should respond using only the minimal content necessary to satisfy the request, using concise phrasing and avoiding extra detail or explanation.
An oververbosity of 10 means the model should provide maximally detailed, thorough responses with context, explanations, and possibly multiple examples.
The desired oververbosity should be treated only as a *default*. Defer to any user or developer requirements regarding
response length, if present.
I wonder if they put an older cutoff date into the prompt intentionally, so that when asked about more current events it leans toward tool calls / web searches.
I wonder if the cutoff date is the result of so many people posting about the date over time and poisoning the data. "Dead cutoff date theory," perhaps.
Whatever it is, the cutoff date reporting discrepancy isn't new. Back when Musk was making headlines about buying/not buying Twitter, I was able to find recent-ish related news that was published well after the bot's stated cutoff date.
ChatGPT was not yet browsing/searching/using the web at that point. That tool didn't come for another year or so.
That sort of test isn't super reliable either, in my experience.
You're probably better off asking something like "what are the most notable changes in version X of NumPy?" and repeating until you find the version at which it says "I don't know" or hallucinates.
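That probe can be sketched as a simple scan over releases, assuming knowledge is roughly monotone (once the model stops knowing a version, it won't know later ones). The release list and the `knows` callable are illustrative stand-ins here; in practice `knows` would wrap an actual "what changed in version X?" query to the model, and you'd judge the answer yourself.

```python
# Hypothetical sketch: estimate a model's training cutoff by probing
# what it knows about successive NumPy releases.
NUMPY_RELEASES = [
    ("1.24", "2022-12"),
    ("1.25", "2023-06"),
    ("1.26", "2023-09"),
    ("2.0",  "2024-06"),
    ("2.1",  "2024-08"),
]

def estimate_cutoff(releases, knows):
    """Return (version, date) of the last release the model appears to
    know, or None if it knows none. `knows(version)` is a stand-in for
    asking the model about that version and checking the answer isn't
    an "I don't know" or a hallucination."""
    last_known = None
    for version, date in releases:
        if not knows(version):
            break  # monotonicity assumption: later versions unknown too
        last_known = (version, date)
    return last_known

# Example with a stubbed model that "knows" releases up through 2.0:
stub = lambda v: v in {"1.24", "1.25", "1.26", "2.0"}
print(estimate_cutoff(NUMPY_RELEASES, stub))  # → ('2.0', '2024-06')
```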
The MDM profile requirement is suspect, though I get why they're doing it. But it doesn't inspire confidence to see that their profile is unsigned and still using the default MicroMDM SCEP challenge...
Lol with 2 VMs per VM you can do an infinite VM linked list where each macOS hosts a "guest" and a "next host". I'm too lazy to test this out. Any takers?
IIRC, that's only for Linux guests, which can nest. macOS can only go one level deep. That is: you can't have a macOS guest (running on the Apple hardware host) make its own macOS guest.
I tried the periodic table from their examples using Sonnet 4.6 on the $20/mo plan. After a few minutes Claude told me it had reached the max message length and bailed. I pressed continue and it eventually generated the table, but it wasn't inline, it was a JSX artifact, and I've now hit my daily usage limit.
I'm intermittently getting artifacts vs. the new visuals API, depending on which version of the Claude app I use. The iOS/iPadOS apps don't yet support the visualization API, and I don't see an App Store update yet.
Since the underlying tool seems to be named something like "widget," I found I can nudge it into this embedded interactive output instead of artifacts by saying, "show me a widget that..."
Claude models with 'extended thinking' toggled on answer very quickly, and the quality of the answers is far ahead of what GPT 5.2 'instant' provides. I won't even bother using the non-thinking version of ChatGPT because the quality of the answers is awful and usually incorrect.