Claude Code Team: Please fix the core experience instead of branching out into all these tertiary features. I know releasing new features is fun and profitable, but you need to go deeper into existing features, not broader into "here be dragons" territory.
There is that (you can still use coding agents to polish the other 80%, by the way), but to me this situation underscores the value of a good "editor": someone who says "this is good to ship" vs. "not this, not now."
Depends where you live. In most places they don't bother anymore, and in the few that do, a VPN obviously gets around it, but it's incredibly unlikely you'd be doing enough to ever be on the radar, let alone get caught. That battle was lost long ago.
I believe it is less that they stopped caring, and more that most piracy these days is web streaming, which is much harder to detect than torrenting or similar. AFAIK most major American ISPs are still fairly strict about pirate torrents.
Or when I would try to place ads in newspapers for my internet companies and they wouldn't run them because they "don't run ads for competitors." Okay then, how did that work out for you? Did you stop the internet?
The fundamental understanding is vague. I bet half of the people don't understand what we're talking about, and 90% don't understand how they think, I guess. Moreover, 90% of people, 80% of the time, probably don't really think logically; they run on routine in a rudimentary thinking mode.
Honestly it is unclear to me still. Last month I wrote a whole coding-principles AGENTS.md/CLAUDE.md file, but it's not clear whether the produced code has changed _because of_ that or for other reasons. A couple of things give me confidence that the models actually do read and follow them.
1) I have a sentence that says: "I am your pairing buddy Bob. Occasionally address me by my name." And you will see them address you by name, occasionally.
2) I do have per-project "Project Tenets." Each tenet is a catchy couple of words, like "Design for Agent First" or "Tolerant Interfaces over Robust Interfaces." If you notice the coding assistant referring to these, you know it is aware of them, and it does sometimes refer to them. I also have a common list of principles, but I think those are less useful because they are generic software-engineering advice.
3) Occasionally, I will do the following prompt:
i) Who am I?
ii) What is our design philosophy and tenets?
iii) Go through the repo and ensure the architecture and codebase adhere to our design philosophy and tenets.
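To make that concrete, the relevant part of such a CLAUDE.md might look roughly like this (a minimal sketch; the wording and tenet names are just the illustrative ones from above, not a required format):

    # CLAUDE.md (excerpt)
    I am your pairing buddy Bob. Occasionally address me by my name.

    ## Project Tenets
    - Design for Agent First
    - Tolerant Interfaces over Robust Interfaces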
> i) Who am I? ii) What is our design philosophy and tenets? iii) Go through the repo and ensure the architecture and codebase adhere to our design philosophy and tenets.
I also use similar questions for different situations, and different ones for different projects. So, to avoid writing these questions every time, I created commands for each question.
There is also a "/capture" command that analyzes the dialogue and suggests what to save from it, what commands and rules to create.
The "/insights" command not only analyzes the current dialogue, but also suggests ideas for creating new commands and rules.
If you have asked the same question several times in the dialogue, these commands will suggest creating new commands specifically for these questions.
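As a rough sketch of what one of these looks like, a command can be just a saved prompt, e.g. a hypothetical .claude/commands/tenets-review.md (assuming Claude Code's convention of exposing markdown files under .claude/commands/ as slash commands; the filename and wording here are made up):

    Who am I?
    What is our design philosophy and tenets?
    Go through the repo and ensure the architecture and codebase adhere
    to our design philosophy and tenets.

Typing /tenets-review in a session then runs that prompt without retyping it.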
I would love to see the prompt history. I'm always curious how much human intervention/guidance is necessary for this type of work, because when I read the article I come away thinking you just prompt Claude and it comes out with all these results. For example: "So Claude went after the app instead. Grabbed the Android APK, decompiled it with jadx." Did it do that all by itself, or did the author have to suggest and fiddle with bits?
Why would anyone watch a live stream of someone else poking a computer into completing a task? It’s barely more interesting than having someone tell you about a dream they had.
Unclear how much of this is autonomous behavior versus human-induced behavior. Two random thoughts:
1) Why can't we put GitHub behind one of those CloudFlare bot-detection WAFs?
2) Would publishing a "human only" contribution license/code of conduct be a good thing? (I understand bots don't have to follow it, but at least you have something to point at.)
> Why can't we put GitHub behind one of those CloudFlare bot detection WAFs
At the small scale of individual cases it's useless. It can block a large network with known characteristics; it's not going to block OpenClaw driving your personal browser with your login info.
> Would publishing a "human only" contribution license/code of conduct
It would get super muddy with edge cases. What about Dependabot? What about someone sending you an automated warning about something important? There's nothing here that is bot-specific either: a basic rule like "posting rants rather than useful content will get you banned" would be appropriate for both humans and bots.
...but, like, why even offer an API at that point? Now every API-initiated PR is going to be suspect. And this will only work until the bots figure out the internal API or use the website directly.
Unfortunately enforcing "human behind the push" would break so many automations I don't think it's tenable.
But it would be nice if GitHub had a feature that would let a human attest to a commit/PR in a way that couldn't be automated. Like signed commits combined with a captcha or biometrics or something (which I realize has its own problems, but I can't think of a better way to do this attestation).
Then an open-source maintainer could just automatically block unattested PRs if they want. And if someone's AI is running rampant, at least you could block the person.
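Signed commits alone don't give you the "couldn't be automated" part, but as a crude approximation of the idea, a maintainer can already reject PRs whose commits aren't GPG-signed by a known contributor key. A minimal sketch, assuming git is available, the PR branch is checked out locally, and the allowlisted public keys are in the local keyring (the key ID below is made up):

    import subprocess

    # Hypothetical allowlist of GPG key IDs belonging to known human contributors.
    # The corresponding public keys must be in the local keyring for git to
    # report a "good" signature status.
    TRUSTED_KEYS = {"ABCD1234EF567890"}

    def commits_in_range(base: str, head: str) -> list[str]:
        """Commits that the PR branch (head) adds on top of base."""
        out = subprocess.run(
            ["git", "rev-list", f"{base}..{head}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.split()

    def is_attested(commit: str) -> bool:
        """True if the commit has a good GPG signature from a trusted key.
        %G? is the signature status (G = good), %GK is the signing key ID."""
        out = subprocess.run(
            ["git", "log", "-1", "--format=%G? %GK", commit],
            capture_output=True, text=True, check=True,
        )
        status, _, key = out.stdout.strip().partition(" ")
        return status == "G" and key in TRUSTED_KEYS

    def check_pr(base: str = "origin/main", head: str = "HEAD") -> bool:
        commits = commits_in_range(base, head)
        return bool(commits) and all(is_attested(c) for c in commits)

    if __name__ == "__main__":
        print("all commits attested" if check_pr() else "unattested commits found")

That only covers the signed-commit half; the captcha/biometric part is exactly what GitHub itself would have to provide.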
Not so clear whether this matters for harm intensity: anything an AI can be induced to do by a human, which it can then do at scale, some human will immediately induce. Especially if it's harmful.
Somebody still needs to do lower-level work and understand machine architecture. Those feeling like they might be replaced in web or app dev might consider moving down the stack.