If you think about it, git is really just a big undo/redo button and a big "merge 2 branches" button, plus some more fancy stuff on top of those primitives.
"Merge 2 branches" is already far from being a primitive. A git repository is just a graph of snapshots of some files and directories that can be manipulated in various ways, and git itself is a bunch of tools to manipulate that graph, sometimes directly (plumbing) and sometimes in an opinionated way (porcelain). Merging is nothing but creating a node (commit) that has more than one parent (not necessarily two) combined with a pluggable tool that helps you reconcile the contents of these parents (which does not actually have to be used at all as the result does not have to be related to any of the parents).
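To make the "a merge is just a node with several parents" point concrete, here's a minimal scratch-repo sketch (branch names invented): the porcelain `git merge` happily takes more than one branch and creates a single octopus merge commit, and the parent list can be read straight off the graph.

```shell
# Scratch repo; identity config is only so the demo commits succeed anywhere.
git init -q -b main demo && cd demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m base
git branch feat-a
git branch feat-b
git switch -q feat-a && git commit -q --allow-empty -m a
git switch -q feat-b && git commit -q --allow-empty -m b
git switch -q main
# Porcelain: one commit, multiple parents (an "octopus" merge).
git merge -q -m megamerge feat-a feat-b
# Read the merge commit's parent hashes directly off the graph.
git log -1 --format='%P'
```

Nothing about the merge commit is special beyond that parent list; `git commit-tree` (plumbing) can build the same node by hand.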
(you may know that already, but maybe someone who reads this will find this helpful for forming a good mental model, as so many people lack one despite working with git daily)
> Basically, in the megamerge workflow you are rarely working directly off the tips of your branches. Instead, you create an octopus merge commit (hereafter referred to as “the megamerge”) as the child of every working branch you care about. This means bugfixes, feature branches, branches you’re waiting on PRs for, other peoples’ branches you need your code to work with, local environment setup branches, even private commits that may not be or belong in any branch. Everything you care about goes in the megamerge. It’s important to remember that you don’t push the megamerge, only the branches it composes.
> You are always working on the combined sum of all of your work. This means that if your working copy compiles and runs without issue, you know that your work will all interact without issue.
You don't even push the megamerge to the origin. Or perhaps you don't even need to push it. You can just... work off it.
> You don't even push the megamerge to the origin.
But why would I do that with git anyway? My local branch is what I'm working off; if I'm not ready to push, why would I? I can, as you say, just work off it...
And when I'm ready to push, I prep my commit, because I'm expecting it to be immutable and pulled by others 'as-is'. Again, I must be missing something. I think the tool is just not for me, yet at least.
I've often found myself needing to work on two features at once, esp. where one has a dependency on the other. Maybe that branch is the real goal, and the other branch is an independent chunk of work that the feature needs.
Both get iterated on, because it's hard to know everything about a feature before it's done; maybe you find bugs that need fixing, or you realise you were missing something.
Rebasing the dependent branch onto the tip of the other branch gets you there, but as a workflow it's not pleasant, especially if you're not the only person working on the features... It's a recipe for conflicts, and worse, for that rebased branch conflicting with another person's view of it.
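For reference, the stacked-branch dance in plain git looks something like this (branch and file names invented): `dependent` is cut from `base-feature`, `base-feature` moves on, and you replay `dependent` on top of the new tip.

```shell
# Scratch repo demonstrating the rebase-onto-the-moving-base workflow.
git init -q -b main stack && cd stack
git config user.email demo@example.com
git config user.name demo
echo root > README && git add README && git commit -qm root
git switch -q -c base-feature
echo lib > lib.txt && git add lib.txt && git commit -qm base-work
git switch -q -c dependent
echo app > app.txt && git add app.txt && git commit -qm dependent-work
git switch -q base-feature
echo more >> lib.txt && git commit -qam more-base-work
# Replay dependent's commits onto the moved tip of base-feature:
git switch -q dependent
git rebase -q base-feature
```

This works, but every time `base-feature` moves you repeat it, and if anyone else has the old `dependent` checked out, the rewritten history bites them; that's the unpleasantness being described.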
Another example: a simple monorepo that holds both the latest _BlaJS_ frontend and the backend API.
You are working on stuff in the backend, but it sure would be nice to see it in the frontend, so you jury-rig something in the frontend to display your work, along with some console.log() calls. Then you forget to revert the frontend changes before pushing the branch.
In jj you would start with these as separate branches. Then you work on a merge of these. Then you use the absorb command to auto-move the code you are working on to the correct branch or you squash the changed files to the branch. Then you can push the backend branch to server and PR it. Then you throw away the frontend branch or just leave it there so you can use it in a future feature.
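For anyone who hasn't tried jj, that flow looks roughly like this (untested sketch; bookmark and file names are invented, and exact flags vary between jj releases, so check `jj help absorb` and `jj help squash`):

```shell
jj new backend frontend-harness         # working copy = merge of both branches
# ...edit the API, plus throwaway display glue and console.log()s...
jj absorb                               # route each hunk into whichever parent last touched those lines
jj squash --into backend api/views.py   # or move specific files by hand
jj git push --bookmark backend          # push only the real branch; the merge stays local
```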
A real case from my work. I had to work on an old Python project that used Poetry and some other stuff that was just not working correctly on my computer. I did not want to touch the CI/CD pipeline by switching fully to uv, but I created a special uv branch that moved my local setup to uv. Then I went back up the tree to main and created a feature branch from there, merged them together, and worked from that merge, moving all the real changes to the feature branch. Now whenever I enter that project I have this uv branch that I can merge in with all the feature branches to work on them.
Imagine you have 3 branches: local-dev-tuneup, bugfix-a, feature-b
Remember in JJ you're always "in a commit", so the equivalent of the git working tree (i.e. unstaged changes in git) is just a new commit, often with no description set yet. (Because in JJ a commit also doesn't need a description/commit message immediately).
So in a mega-merge you can have a working tree that pulls from local-dev-tuneup, bugfix-a, and feature-b, and you can then squash or split changes out of it onto any of those source branches. Like you can avoid serializing those branches before you're ready to.
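A sketch of that setup with the three branches above (untested; command details vary by jj version):

```shell
jj new local-dev-tuneup bugfix-a feature-b   # the megamerge: working copy with three parents
# ...hack away on the combined tree...
jj squash --into bugfix-a                    # send the working-copy diff (or -i for hunks) to one parent
jj split                                     # or carve the working copy into separate changes first
```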
I've definitely faced the scenario in Git where I have unmerged changes that I want to use while continuing work on a feature branch. I end up creating a PR for the first, smaller feature (e.g. local-dev-tuneup -> master), then a second PR pointing at the first (feature-a -> local-dev-tuneup). It works, but it's all a bit cumbersome, even more so if feature-a ends up needing to land before local-dev-tuneup. JJ has better tools for handling this.
Or potentially a team member has critical changes with a PR open and you want to start building on top of their changes now. Manageable in Git but you're locked in on a branch of their branch. Now add a second set of critical changes. Can be done in git but you'll be jumping through hoops to make it happen.
Of course you might say that all indicates a workflow/process problem, but in my experience these situations are common enough to matter, even if not frequent.
(I haven't actually used megamerges myself yet but the article has me ready to try them out!)
Assume VCs are brainless profit maximizers who don't understand ethics. How do you get them to say "I'm gonna stop you right there"?
Answer: Make it unprofitable to collect this data. Change the incentives.
So really, the correct answer IS on the legal level. Make a set of laws which make it burdensome at best and completely unprofitable at worst, and then the incentives within the system align.
Agree with your point and the solution. Make it risky to operate, so that most VCs would wash their hands of it due to legal risks. Kind of like what happened to the crypto space. But it always gets worse before it gets better: tons of rug pulls happened before the SEC took action.
Edit: Oh wait, no, I was thinking of the Drew's Campfire double pendulum video. That video was extra interesting because the creator is not a typical content producer. He just has a few videos without any views, then dropped what might be one of the best videos of all time, and then went back to his technical videos.
This is where STEM people are weak: a lack of knowledge of history. In another forum, someone would have chipped in that England's virgin forests were fully deforested by 1150. And someone else would have pointed out that this deforestation produced the economic demand for coal that drove the Industrial Revolution in the first place.
Still, that kind of underscores OP's point. Yes, natural resources were not completely unlimited prior to the Industrial Revolution; Jonathan Swift predated Watt's steam engine, after all. Still... Neither were information resources 10 years ago. Intellectual property laws did exist prior to AI, of course. The legal systems in place are not completely ignorant of the reality.
However, there's an immense difference in scale between post-industrial strip mining of resources, and preindustrial resource extraction powered solely by human muscle (and not coal or nitroglycerin etc). Similarly, there's a massive difference in information extraction enabled by AI, vs a person in 1980 poring over the microfilm in their local library.
The legal system and social systems in place prior to the Industrial Revolution proved unsuitable for an industrial world. It stands to reason that the legal system and social systems in today's society would be forced to evolve when exposed to the technological shift caused by AI.
Both animals and water power go way back. The early steam engine was measured in horsepower because that’s what it was replacing in mines. It couldn’t compete with nearby water power which was already being moved relatively long distances through mechanical means at the time.
Hand waving this as unimportant really misunderstands just how limited the Industrial Revolution was.
Irrelevant. Here's Bret Devereaux (an actual historian) explaining this distinction and precisely why those are irrelevant in the context of the Industrial Revolution:
> Diet indicators and midden remains indicate that there’s more meat being eaten, indicates a greater availability of animals which may include draft animals (for pulling plows) and must necessarily include manure, both products of animal ‘capital’ which can improve farming outputs. Of course many of the innovations above feed into this: stability makes it more sensible to invest in things like new mills or presses which need to be used for a while for the small efficiency gains to outweigh the cost of putting them up, but once up the labor savings result in more overall production.
> But the key here is that none of these processes inches this system closer to the key sets of conditions that formed the foundation of the industrial revolution. Instead, they are all about wringing efficiencies out the same set of organic energy sources with small admixtures of hydro- (watermills) or wind-power (sailing ships); mostly wringing more production out of the same set of energy inputs rather than adding new energy inputs. It is a more efficient organic economy, but still an organic economy, no closer to being an industrial economy for its efficiency, much like how realizing design efficiencies in an (unmotorized) bicycle does not bring it any closer to being a motorcycle; you are still stuck with the limits of the energy that can be applied by two legs.
So yeah, actual historians would be dismissive of your exact response, basically saying "I know, I know, but I don't care". You're still just talking about a society mostly 'wringing efficiencies out the same set of organic energy sources'. It IS unimportant, and you completely misunderstand how the Industrial Revolution reshaped production if you think it is important.
I think I prefer the 'STEM people' approach of trying to say true things, rather than this superior approach of just saying things and then, when they turn out to be false, dismissing them as irrelevant. If the truth of the claim is irrelevant, why did you make it in the first place!
The statement IS true anyways, the problem is that you failed to distinguish between an example and a universal claim. You want to argue on logic? I'm an engineer, I can argue on precision too:
The (true!) statement is "However, there's an immense difference in scale between post-industrial strip mining of resources, and preindustrial resource extraction powered solely by human muscle (and not coal or nitroglycerin etc). Similarly, there's a massive difference in information extraction enabled by AI, vs a person in 1980 poring over the microfilm in their local library."
I said there is a major difference in scale between "modern strip mining" and "a preindustrial extraction method powered only by human muscle", and I made an analogous point about AI-enabled information extraction versus 1980s manual archival research. That statement is purely true. Nothing in that statement says the muscle-powered-extraction example was the only preindustrial mode of production, just as "someone using microfilm in 1980" does not imply microfilm was the only way information was accessed in 1980. The fact that other information formats existed in 1980 is irrelevant to the truth of the example.
So no, nothing I said "turned out to be false". You are attacking a claim I never made because you failed to parse the logic in the one I did. Most importantly, this direction missed the big picture dialectical synthesis that I was introducing as well, and just kept decomposing the argument into locally falsifiable atoms which lost the thread of what was actually being discussed.
Is your counter argument that you’re not wrong just attacking a straw man? Because it really sounds to me like you are just clueless.
Strip mining goes back thousands of years; it's a simpler technology than making tunnels. And no, it wasn't limited to human power to crack rock; several more powerful methods existed.
Roman mining literally destroyed a mountain, operating within an order of magnitude of the largest mines today. That's what makes what you say false. It's not some minor quibble over details; you are simply speaking from ignorance.
It’s almost like you’re intentionally trying to be wrong.
You don't seem to understand how analogies work. I’m not talking about strip mining vs tunnel mining, I was comparing scale of human powered mining to mining with nitroglycerin.
I'll let you figure out on your own how the scale of mining "going back thousands of years" differs from modern explosive mining. Go google "iron production by year" or something. Hint: it took the Romans generations to strip a small hill that a modern midsize mining company could level in a few days.
How so? Being precise and correct is IMO worth preserving in a world of handwaving slop.
The Industrial Revolution ran from ~1760–1840. It was a major shift, but it doesn't cover everything that happened between 1760 and now, nor did it overwhelm many existing trends.
Yeah - really struggling to understand why people are not grasping this point.
Yes, Easter Island was deforested far earlier - but you wouldn't compare the steam engine's capability in resource extraction to what people on Easter Island were doing.
It feels like people are almost straining to not understand the point - I think it's quite clear how ML + AI serve to extract data as a resource at an unheard-of scale.
It's the autism. And I say that endearingly. I'm an engineer who probably likes trains way too much.
I intentionally pointed out the STEM-esque responses of pedantic correction as a symptom of a disciplinary blind spot: technically correct nitpicking that misses the forest for the trees, a tendency to atomize arguments and lose the structural point, and that tendency is a weakness, not a strength.
There's also a lack of historical training to contextualize their own objection. That's also why I brought up Devereaux as an authority hammer: the actual domain experts consider those objections and dismiss them.
It is hard to convince a man of that which his income is dependent on him not understanding. -Upton Sinclair
You aren't wrong. There's definitely going to be a need to drag people kicking and screaming to enlightenment unfortunately. Too much money to be made at stake otherwise.
the issue is that the conclusion doesn't follow from the premise.
the laws and enclosure happened basically orthogonally to the resource constraints, so there's no actual comparison to draw.
if you insist on a causation, i'd go with the opposite - the laws making ownership and forcing people off of land enabled the exploitation and innovation, not that it was cleanup for exploitation that was already happening. existing exploitation across all kinds of degrees was already being managed without the enclosure.
if you just want to make stuff up, you can reference anything you want, like that some elaborate thing happened in star wars, and thus the same thing must be happening with AI