
Then you’ll be happy to learn it’s not Chinese

GP is stating that the second best in the field, the Chinese models, is so far behind the best in the field, GPT 5.5, that it is not even worth testing anything else.

Thanks for the translation, I did not express it very clearly. Anything else that I try is so much worse.

Is GPT 5.5 the best in the field? I think Opus is still better despite Anthropic's recent stumbling.

Because that allows us to create useful tools that we didn't have before. For me it feels like a carpenter going from a hand saw to an electric saw: it still requires the skills of a good carpenter, but it's faster and easier.

… so a bunch of people just decided that rights we granted to humans also apply to their tools? Without any discussion? This isn't how anything is supposed to work when it comes to common rules!

Common rules are common because we agree on them. In this case we do not agree, on principle, on what the rule should be, and the situation is in a way unprecedented. We'll converge on a societal agreement soon enough. I hope society abstaining from these tools will not be the answer.

And the process by which we agree is lawmaking.

I've created my own DSL, and instruct Claude Code how to generate code for this DSL using skills.

Since this is a new language, documented neither on the web nor on GitHub, Claude's ability is not based on stolen IP. At most it's trained on other language concepts, just as we can train ourselves on code from GitHub.

Maybe a good reason to create a new programming language?
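
For anyone curious: a Claude Code skill is just a directory containing a SKILL.md file (conventionally under .claude/skills/<skill-name>/) with a short YAML header plus instructions. A minimal sketch of what one for a home-grown DSL might look like; the DSL and all of its rules here are made up for illustration:

    ---
    name: my-dsl
    description: How to write programs in MyDSL. Use when generating or editing .mydsl files.
    ---

    MyDSL is a small rule language.

    - A program is a list of rule blocks: rule <name> { when <condition> then <action> }
    - Conditions are plain field comparisons, e.g. amount > 100 or status == "open".
    - No loops, no recursion; emit one rule per business case.

Claude then picks the skill up whenever the description matches the task, so the DSL never needs to have appeared in its training data.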


Interesting, but I still do not think it is that easy. The AI model is still trained on existing works, and the code it generates in the new DSL or programming language is still based on higher-level ideas and expressions it consumed during training. You have just added one more level of indirection. The output can no longer be a verbatim copy of an existing work or of non-trivial snippets; however, it may still carry "expression" that is substantially similar to something pre-existing.

Note: IANAL. The above is just from my current understanding.


Opus and GPT are generic LLMs with knowledge on all sorts of topics. For specific use cases you probably don't need all the parameters. Suppose you want to generate code with opencode: which parts of the generic LLM are needed, and which parts can be removed?

We're already doing that: it's called distillation, and it's how models like DeepSeek are trained.
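
For readers unfamiliar with the technique, here is a minimal sketch of the classic soft-label distillation loss (after Hinton et al.); the function and variable names are illustrative, not from any particular framework:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Soften both output distributions with temperature T, then push
        # the student's distribution toward the teacher's.
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_student = F.log_softmax(student_logits / T, dim=-1)
        # kl_div expects log-probs as input and probs as target; the T^2
        # factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)

    # Usage: run the same batch through both models; backprop only the student.
    # loss = distillation_loss(student(batch), teacher(batch).detach())

The small model never sees the raw training data directly; it only learns to imitate the big model's output distribution.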

I run a lot of Linux on my MacBook Pro M1: in Parallels if I want a desktop, in Docker or Podman if I don't. I prefer Linux on my MacBook to my previous ThinkPad (P1 Gen 2, 64 GB, 2 TB, 4K OLED, Core i9). The ThinkPad feels less solid, its battery life is horrible, and its keyboard is surprisingly bad.

Typing on a Mac is also subpar: I use a mechanical keyboard that can easily switch between the Mac and an iPad. Same typing experience on both.

I think Oracle PL/SQL was also based on Ada; basically Ada with SQL embedded. So it may be the most widely used version of "Ada".

Pretty much [close enough for government work]; see: https://stackoverflow.com/questions/7764656/who-is-diana-and...

Wow, that’s a factoid I’d love to learn more about!



That is because they know their users. Users are very sensitive to this: if the outside hasn't changed, they assume the internals can't have improved much. You see it with cars: cars need a new design, otherwise customers will think nothing much changed. Customers will usually buy newer over better, because they assume newer must have improvements, and styling signals new. Same with computers; witness all the disappointment when Apple releases a new MacBook without changing the exterior...


Generating software still costs tokens; generating something like MS Word would still cost a significant amount, and would take a lot of human effort to prompt and validate. Having a proven solution still has value.


You can already generate surprisingly complex software with an LLM on a Raspberry Pi, including live voice assistance, all offline. People's hardware can write its own software pretty readily now. The cost of tokens is a race to zero.


That is not what I'm seeing. I've been coding intensively with Claude Code for the last 3 months: 200k lines of Go, 1200+ commits, mostly using Opus. I don't think I could have done this with a local LLM. Maybe on an M5 Pro?


Qwen 3.5 122B is competitive with Opus 4.6, and runs at 35 t/s on a Strix Halo. It is my daily driver.

Unlike with Opus, I can run abliterated models with the censorship removed, so they can be used for security research, reverse engineering, and whatever else I want, with privacy, offline.

It makes hosted models feel like kids' toys.
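
For reference, serving a local model like that typically looks something like this with llama.cpp (the GGUF file name and the exact settings here are assumptions, not a tested recipe):

    # -ngl 99 offloads all layers to the GPU; -c sets the context window.
    llama-server -m qwen3.5-122b-q4_k_m.gguf -ngl 99 -c 32768
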


Same here: not in a flat, but I rely on street parking. There are at least 20 public charging points within walking distance of my home.

