Let's say I have the canonical example of a stack from main via a backend-pr and a frontend-pr. When my stack is done I send it for review to one frontend reviewer and one backend reviewer.
Usually when you develop a "full stack" thing you continuously massage the backend into place while developing the frontend. If you have 10 commits for frontend and 10 for backend, they might start with 5 for backend, then 5 commits to each branch to iron out the interface and communication, and finally 5 more on the frontend. Let's call these commits B1 through B10 and F1 through F10. Initially I have a backend branch based on main with commits B1 through B5.
Then I have a frontend branch based on B5 with commits F1 through F5. But now I need to adjust the backend again and I make change B6. Now I need to rebase my frontend branch to sit on B6? And then I make F6 there (and so on)?
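The workflow in question can be sketched with plain git (branch names and empty placeholder commits are assumed here purely for illustration; requires a reasonably recent git for `git init -b`):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.name dev && git config user.email dev@example.com
git commit -q --allow-empty -m "base"

git checkout -q -b backend                # B1..B5 on top of main
for n in 1 2 3 4 5; do git commit -q --allow-empty -m "B$n"; done

git checkout -q -b frontend               # F1..F5 on top of B5
for n in 1 2 3 4 5; do git commit -q --allow-empty -m "F$n"; done

git checkout -q backend                   # backend changes again: B6
git commit -q --allow-empty -m "B6"

git rebase -q backend frontend            # re-seat frontend on top of B6
git log --format=%s backend..frontend     # F5, F4, F3, F2, F1
```

Forges with stack support automate that final rebase; done by hand it's one `git rebase` per branch above the change.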
And wouldn't this separation normally be obvious e.g. by paths? If I have a regular non-stack PR with 20 commits and 50 changed files, then 25 files will be in /backend and 25 in /frontend.
Sure, the reviewers who only review /frontend/* might now see half the commits being empty of relevant changes. But is that so bad?
> If you have 10 commits for frontend and 10 for backend
In this model, you tend to want to amend, rather than add more commits. And so:
> they might start with 5 for backend, then 5 commits to each branch to iron out the interface and communication,
You don't add more commits here, you modify the commits in your stack instead.
> Now I need to rebase my frontend branch to sit on B6?
Yes, when you change something lower in the stack, the things on top need to be rebased. Because your forge understands that they're stacked, it can do this for you. And if there's conflicts, let you know that you need to resolve them, of course.
But in general, because you are amending the commits in the stack rather than adding to it, you don't need to move anything around.
> And wouldn't this separation normally be obvious e.g. by paths?
In the simplest case, sure. But for more complex work, that might not be the case. Furthermore, you said you have five commits for each; within those sets of five, this separation won't exist.
I also kind of wish Rust had a two-tier system where you could opt into (or out of) the full thing.
The lighter version he describes would make some assumptions that reduce not just the mental overhead but, importantly, the _syntax_ overhead. You shouldn't have to write Arc<dyn Animal> when what you're describing is just an Animal, and readability is infinitely more important than performance there. But if you could create a system where, in the lighter Swift/C#-like syntax, things are automatically cloned/boxed while in the heavier Rust syntax they aren't, then maybe you could use Rust everywhere without a syntax tax.
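A small sketch of the syntax tax being described — the `Animal` trait and `Dog` type here are hypothetical stand-ins, not from any real codebase:

```rust
use std::sync::Arc;

trait Animal {
    fn name(&self) -> String;
}

struct Dog;
impl Animal for Dog {
    fn name(&self) -> String {
        "dog".to_string()
    }
}

fn main() {
    // What you conceptually mean is "a list of Animals"; what you
    // must actually write is a list of reference-counted trait objects:
    let animals: Vec<Arc<dyn Animal>> = vec![Arc::new(Dog)];
    for a in &animals {
        println!("{}", a.name());
    }
}
```

A Swift- or C#-like surface syntax would let you write something closer to `Vec<Animal>` and insert the reference counting and boxing implicitly.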
When you use LLMs via APIs, I at least see the history as a JSON list of entries, each tagged as coming from the user, from the LLM, or as a system prompt.
So presumably (if we assume there isn't a bug where the roles are ignored in the CLI app) the problem is that encoding this state for the LLM isn't reliable. I.e. it gets what is effectively one undifferentiated stream of text.
Someone correct me if I'm wrong, but an LLM does not interpret structured content like JSON. Everything is fed into the machine as tokens, even JSON. So your structure that says "human says foo" and "computer says bar" is not deterministically interpreted by the LLM as logical statements but as a sequence of tokens. And when the context contains a LOT of those sequences, especially further "back" in the window, that is where this "confusion" occurs.
I don't think the problem here is a bug in Claude Code. It's an inherent property of LLMs that context further back in the window has less impact on future tokens.
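A toy illustration of that point — the message format and the serialization markers here are invented for the example, not any vendor's actual wire format:

```python
# Hypothetical chat history, as an API client might hold it:
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "human says foo"},
    {"role": "assistant", "content": "computer says bar"},
]

# Before inference, structure like this gets serialized into one flat
# sequence; the role labels survive only as ordinary tokens in the
# stream, not as enforced metadata the model must respect:
flat = "".join(f"<|{m['role']}|>{m['content']}" for m in history)
print(flat)
```

Nothing downstream of tokenization distinguishes a `<|user|>` marker the client wrote from one the model generated itself — which is exactly how a model can end up "hallucinating" user turns inside its own output.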
Like all the other undesirable aspects of LLMs, maybe this gets "fixed" in CC by trying to get the LLM to RAG their own conversation history instead of relying on it recalling who said what from context. But you can never "fix" LLMs being a next token generator... because that is what they are.
That's exactly my understanding as well. This is, essentially, the LLM hallucinating user messages nested inside its outputs. FWIW I've seen Gemini do this frequently (especially on long agent loops).
This isn't about security of the same kind as authentication/encryption etc where security by obscurity is a bad idea. This is an effort where obscurity is almost the only idea there is, and where even a marginal increase in difficulty for tampering/inspecting/exploiting is well worth it.
My point is: the claims "security through obscurity is bad" and "security through obscurity isn't real security" are both incorrect as blanket statements.
They apply to different threats and different contexts. When you have code running in the attackers' system, in normal privilege so they can pick it apart, then obscurity is basically all you have. So the only question to answer is: do you want a quick form of security through obscurity, or do you not? If it delivers tangible benefits that outweigh the costs, then why would you not?
What one is aiming for here is just slowing down and annoying an attacker. Because that's the best you can do.
Trusting the messages to contain specific keywords seems optimistic. I don't think I've ever used "emergency" or "hotfix". "Revert" is sometimes created automatically by tooling (e.g. when un-merging a PR).
For the stuff I've worked on, if you want to know about bugfixes and emergency releases, you'd go to Jira where those values are formalized as fields. Someone else in the comments here had a suggestion which just looks for the word "fix" which would definitely capture some bugfix releases, but is more likely to catch fixes that were done during development of a feature.
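The keyword approach under discussion can be sketched with plain git — the repo and commit messages below are fabricated to show the false-positive problem:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.name dev && git config user.email dev@example.com
git commit -q --allow-empty -m "Add login page"
git commit -q --allow-empty -m "Fix crash on empty password"
git commit -q --allow-empty -m "Fix typo found while developing login"

# --grep matches the word anywhere in the message, so fixes made
# during feature development are counted alongside genuine bugfixes:
git log --oneline -i --grep=fix | wc -l    # 2
```

Both "Fix" commits match, but only one of them corresponds to a shipped bug — the distinction lives in the work-tracking system, not the commit message.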
Yes, for meaningful context you need both the source repo and the work-tracking system. But today most systems have APIs (jira, ADO, gh, ...) so this should be fairly doable, especially using a bot like copilot cli. But it's not doable as a little shell script.
The roadmaps look messy if you look at them as coming from one company. But if you remember that Windows and .NET (or DevDiv) are more like competing companies, then it makes more sense. Then for one side it's Win32, MFC, WinSDK and for the other it's Win32 (WinForms), WPF, MAUI.
If Microsoft hadn’t been preoccupied with a failed mobile bet then this wouldn’t have happened. It’s a lost decade followed by a (much more successful) cloud pivot. The reason desktop is ignored is because it can be. No one is eating their lunch on desktop.