Man, for some reason the ass-kissing of Claude is just insane. Claude this, Claude that. I do Elixir for a living, and Claude is just absolute garbage compared to Gemini Pro. But hey, everyone hates Google, despite them being the only model provider that is competent, and the only one that explicitly states, right under the chatbox, that they don't use any paid users' data for training.
Gemini is great, but when I last tried it, it wasn't as good as Claude Code for agentic workflows. When I tried Antigravity it was very unreliable, as if the tooling hadn't yet caught up to the model and couldn't fully leverage its intelligence and capabilities. I think it comes down to how you're using the models, so I'll ask: how are you interfacing with the LLM?
> every delay to AGI results in deaths that AGI could have prevented
Sure, that's what AGI would be used for /s
In other news, we are not even close to AGI, and even with the current experimental technology, frontier AI model companies are already fighting to help departments of war, the very thing that results in the most deaths. What makes you think AGI wouldn't be used in ways that lead to the same millions of deaths?
It's a classic deflection tactic: when they can't refute you on the merits, they respond with a question about something completely different from what was said, and BOOM, the discussion is now about something else entirely, far from the original issue. I honestly can't tell these days whether it's bots or humans doing this so often, but they're getting pretty good at it.
Every other company sends out cold emails to prospects outside of the company, but Oracle is the only company to send out cold emails to their own employees. Gotta give it to them...
I wonder if this has any connection to the recent string of attacks, including the FBI director getting hacked. The attack surface is large, and it was executed extremely cleanly, almost as if done by a high-profile state-sponsored actor, just like in Hollywood movies.
The NPM ecosystem is a joke. I don't even want anything to do with it, because my stack is fully Elixir. But, just because of this one dependency that is used in some interfaces within my codebase, I need to go back to all my apps and fix it. Sigh.
JavaScript and its entire ecosystem are just a house of cards, I swear. What a fucking joke.
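For what it's worth, if a single compromised npm package is the only thing dragging you back into that ecosystem, one mitigation is pinning a known-good version across all your apps via the `overrides` field in each app's package.json (assuming npm >= 8.3; the package name below is hypothetical):

```json
{
  "overrides": {
    "some-compromised-pkg": "1.2.3"
  }
}
```

Then run `npm install` again and check with `npm ls some-compromised-pkg` that only the pinned version resolves. Yarn has an equivalent `resolutions` field.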
Yeah, but this was also strategically in Apple's interest: selling iPads with a nerfed iPadOS as a separate lineup. I love Steve Jobs and all, but this take did NOT age well. The millions of people using Surface and Surface Pro devices will absolutely disagree with it.
Yeah, I have a Surface Laptop Studio. Windows 11 is generally awful, to the point where I've switched to Bazzite on my desktop, but the form factor with touch (and pen) support is great. Easel mode is great for drawing; tablet mode is pretty good for drawing too, and also for casual browsing or displaying D&D character sheet info. Even in laptop mode I sometimes find myself using it to scroll a bit on pages.
> I would like to know when someone is trying to have the tool do all of their work for them.
Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That history is for me: when, six months down the line, I'm looking back and retracing my steps to reaffirm my thought process and direction of development, I need absolute clarity. It's also because I take pride in my work.
If you let an AI commit gibberish into the history, that pollution is going to cost you down the line. I'll inevitably end up going "WTF was it doing here? Why was this even approved?", and that's a situation I never want to find myself in.
Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?
There will always be room for craftsmen stamping their work, like the expensive Japanese bonsai scissors. Most of the world just uses whatever mass-produced scissors were made by a rotating cast of people with no clear owner or maker. There's plenty of middle ground for makers who put their mark on their product.
If you architect and review everything, but someone else does the implementation, and you iterate, do you believe you did not do anything? I let AI write the commit message too, and the motivation behind the PR is the first thing in it. With my guidance, of course.
Imagine just having the Copilot extension installed becoming, at some point, an excuse for them to steal our code to train their AI models. Not sure if they already do this.
> Copilot may include both automated and manual (human) processing of data. You shouldn’t share any information with Copilot that you don’t want us to review.
So they're reserving the right to process whatever it looks at.
You're sending them your codebase already, as part of the prompt for generating new snippets, debugging, etc. So they have access to it.
They'd be absolute fools not to be using the results of sessions to continue refining their models, and they've already reserved the right to look at what you send them, so yeah, they're doing it.
Also, for some reason that site hijacks your scrolling and tries to "smooth" it, which just makes it feel more unresponsive, since most browsers already have smooth scrolling.
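For anyone wondering what that hijacking usually looks like under the hood, here's a sketch of the common anti-pattern (a hypothetical reconstruction, not that site's actual code): intercept wheel events, kill native scrolling, and replay the movement through a custom easing loop, which adds latency on top of whatever smoothing the browser already does.

```javascript
// Common scroll-hijack anti-pattern: replace native scrolling with a
// per-frame easing loop. Each frame moves only a fraction of the
// remaining distance, which is what makes input feel laggy.
function smoothStep(current, target, factor = 0.12) {
  // Move `factor` of the remaining distance toward the target.
  return current + (target - current) * factor;
}

// Typical browser wiring (guarded so the sketch also loads under Node):
if (typeof window !== "undefined") {
  let target = window.scrollY;
  window.addEventListener(
    "wheel",
    (e) => {
      e.preventDefault(); // suppress native scrolling entirely
      target += e.deltaY; // accumulate the user's intended movement
      const tick = () => {
        const next = smoothStep(window.scrollY, target);
        window.scrollTo(0, next);
        if (Math.abs(target - next) > 0.5) requestAnimationFrame(tick);
      };
      requestAnimationFrame(tick);
    },
    { passive: false } // required, or preventDefault() is ignored
  );
}
```

Note the `{ passive: false }`: modern browsers treat wheel listeners as passive by default, so sites doing this have to explicitly opt out, trading scroll responsiveness for the custom effect.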
That's the TOS for the broader Microsoft Copilot, not for the GitHub one, which has its own TOSes (depending on whether your last renewal was before or after March 5) that don't include the "entertainment" wording.
> New Section J — AI features, training, and your data: We’ve added a dedicated section that brings all AI-related terms together in one place. Unless you opt out, you grant GitHub and our affiliates a license to collect and use your inputs (e.g., prompts and code context) and outputs (e.g., suggestions) to develop, train, and improve AI models.
We should not be using Copilot in the first place.
I think anyone on a "Team" or enterprise plan of ChatGPT/Claude/Copilot doesn't have their data used for training; that's the same across the board.
Yeah, but it's still a shitty move: it should be opt-in by default, rather than opt-out. Imagine continuing to code normally, consciously avoiding Copilot, only to find out that GitHub has been secretly training their models on your code just because you forgot to toggle off a setting that was turned on without your knowledge, and which they didn't even have the decency to email you about, just posting it on a blog no one reads.
And Anthropic isn't even the saint that everyone pitches them out to be:
https://www.cnbc.com/2026/02/12/anthropic-gives-20-million-t...