the model generates a probability distribution over the next token; you then set the probability of every disallowed token to 0 before sampling (whether you sample deterministically or probabilistically)
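A minimal sketch of that masking step, assuming a toy vocabulary and plain logits (in practice the mask is applied to the logits before softmax, which is equivalent to zeroing the probabilities):

```python
import math
import random

def sample_masked(logits, allowed, greedy=True):
    """Sample a token index, restricted to the `allowed` set."""
    # Setting disallowed logits to -inf makes their probability exactly 0
    # after softmax (math.exp(-inf) == 0.0).
    masked = [x if i in allowed else float("-inf") for i, x in enumerate(logits)]
    # Numerically stable softmax over the masked logits.
    m = max(masked)
    exps = [math.exp(x - m) for x in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    if greedy:
        # Deterministic: pick the highest-probability allowed token.
        return max(range(len(probs)), key=probs.__getitem__)
    # Probabilistic: sample from the renormalized distribution.
    return random.choices(range(len(probs)), weights=probs)[0]

# Toy example: a 4-token vocabulary where only tokens 0 and 2 are allowed.
logits = [2.0, 5.0, 1.0, 0.5]
print(sample_masked(logits, allowed={0, 2}))  # 0: token 1 is masked out
```

Since the disallowed tokens get probability 0, even probabilistic sampling can never emit them, which is what makes grammar-constrained generation reliable.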
The patches could have been written by humans; it doesn't matter that much. Or written by a clanker and polished by engineers. The difficult part is usually not writing the patches that fix such vulnerabilities, but finding the vulnerabilities in the first place. And these days it's even harder to exploit them, since you need to bypass modern hardening features.
In March 2025, Anthropic was claiming that 90% of code would be written by LLMs within three to six months, and "essentially all" code within twelve months. This was one week after closing a $3.5 billion Series E round, and just as they began working on their $13 billion Series F round. You shouldn't need more than that to understand what's going on here.
The Claude Code leak revealed that Anthropic runs Claude-operated bots on the internet. One should be very cautious about getting swept up in the fund-raising process when they are not seeing first-hand the fruition of all the flattering claims being presented by strangers on the internet.
>March 2025, Anthropic was claiming that 90% of code would be written by LLMs in three to six months, and "essentially all" code within twelve months.
There's a pretty big difference between "We predict in X time frame our model will be capable of Y" and "Our model did Y."
This is like watching someone measure the size of an object and saying "I don't believe you because you guessed it was X before you pulled out your tape measure."
https://github.blog/changelog/2026-01-16-github-copilot-now-...