
I prefer modal editing. It feels the best to me, but that could be because I learned it.

I had to learn vim back when I was a basic system administrator so when it came time to start doing real coding it was an obvious choice. I also prefer properly open source software, and if possible, non-corporate backed. So it checks all my boxes especially now that I'm not "missing" anything from VSCode.

When people ask me, though, I recommend VSCode to most normies; I don't think vim is worth it nowadays.


> Using Jujutsu, “amending a commit” also produces a new commit object, as in Git, but the new commit has the same change ID as the original.

This is confusing to me, though to be fair I'm a "git expert" by trade. If you're amending a commit surely the "change" has changed so the change ID should also change? If the "change" isn't tracking the actual changes then what could it be tracking?

Overall I think this is just more confusing than using git but I think it's cool that people are building alternative clients. That's definitely the way to go if you want adoption.

Making history manipulation easier seems like a bit of a recipe for disaster given my experience training people. That old XKCD about git comes to mind, and honestly that's where most people stay. If you bothered to learn git beyond that level, then things like Jujutsu are probably harder for you to use; if you aren't interested in learning git to that level, then I doubt you want or need something like Jujutsu.

For those curious the "multiple branches" at a time thing they're selling can be done with git, IMO easily, using worktrees: https://git-scm.com/docs/git-worktree
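In case it's useful, a quick self-contained demo (the repo here is a throwaway temp dir, purely for illustration):

```shell
# Throwaway demo repo so the commands below have something to act on
repo=$(mktemp -d)
git init -q "$repo" && cd "$repo"
git config user.email demo@example.com
git config user.name demo
echo hello > file.txt && git add file.txt && git commit -qm init

# Check out a new branch in a second working directory,
# leaving the original checkout untouched
git worktree add -b feature "$repo-wt"

# Both checkouts share one object database but have independent HEADs
git worktree list
```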


> If you're amending a commit surely the "change" has changed so the change ID should also change? If the "change" isn't tracking the actual changes then what could it be tracking?

The author is using newer terminology around "changes", but I prefer the older "revisions", as being less overloaded. But yes, the revision/change ID remains the same even if the commits underneath change. `jj obslog` will show you the history of commits underlying a revision. This stability is what we want when rebasing, and git doesn't provide it.

> Making history manipulation easier seems like a bit of a recipe for disaster ...

I used to think this too, but it was really due to git and its CLI. Under jj, history manipulation is easy, consistent, and easily reversible with `jj undo`. Because it's safer and easier, I routinely do way more rebasing, and stacked PRs are much less painful to incorporate feedback on. Basically, it makes git's more advanced operations feel like everyday tools.

(Of course, jj doesn't fix the problem of rewriting history that's already been shared with other people, but even there, its notion of immutable commits tries to stop you from breaking other people's histories.)

> the "multiple branches" at a time thing they're selling can be done with git, IMO easily, using worktrees

Worktrees aren't quite the same thing the author is describing. Worktrees allow you to check out multiple branches at the same time to different directories, but each is still effectively a separate, independent checkout.

The author is making the working revision a merge revision where every parent is a branch they want to work on. This allows them to see what the code will do when those branches are merged. They can also add revisions to all branches simultaneously by working on the merge rev, and using `jj squash` to choose which parent branch to push work to on the fly. When done for the day, `jj abandon` the merge commit. AFAICT, it's both lighter and more flexible than worktrees.
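Roughly, the workflow looks like this (branch names here are made up; treat it as a sketch rather than an exact transcript):

```shell
# Start a working-copy change whose parents are BOTH feature branches;
# your working directory now shows the code as it would look merged
jj new feat-a feat-b

# ...hack away, test against the combined state...

# Send a finished hunk into one of the parent branches
jj squash --into feat-a

# Done for the day: drop the scratch merge change
jj abandon
```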


imo revision is worse.

I feel like the best terms are patch id and patch revision id.


I mean none of the options are great, imo.

- "commit" is overloaded with git, and jj still uses commits for other things under the hood.
- "patch id" is overloaded with patch files, and jj still uses git's snapshots, not patches (unlike darcs/pijul, iiuc).
- "patch revision id" isn't bad, but it's a bit wordy.
- "change id" just seems vague, since it's unclear where one change begins and another ends.

"revision" at least captures the idea that you are revising the same piece of functionality, but then you might expect each snapshot/commit to be a different revision, and not have the same ID, which also isn't quite right.


I am sad I read this, because patch is perfect, but I doubt they will change the language again.


patch sounds too specific... like an actual patch file tied to the actual contents of the patch.

change is probably the right word: you want to change something, and the exact operations of the change (multiple revisions of different patches) can evolve over time.


Maybe because I have never used an actual patch file, but patch just feels right to me. As an end user, a patch is an intentional delta blob resulting in some difference to the software. Writing software is just organizing those deltas. If I need to cherry pick between branches, pulling a patch from one to another feels more right than “changes” as a collective object.

Oh well, naming things is hard.


Think of the change ID as a "symbolic name", as opposed to the commit ID which identifies a particular snapshot.

As you amend a commit, it creates a new commit ID each time. Only the commit ID of the most recent amendment is in your final graph. All the old ones are orphaned for garbage collection.

The change ID never changes as you make amendments (because you're still working on the same change), and this change ID will always be in the graph. So you can refer to the change ID (which doesn't change) instead of the commit ID (which changes with every amendment).

Another way to think of it would be akin to filenames vs inodes in a filesystem (it's not 100% the same, but the concept should help you visualize). If you delete a file and create another one with the same name, its filename will be the same, but its inode number will be different because it's technically a different file. The old inode gets marked deleted so that it can be reaped somehow. If you make a symbolic link to the file, you'll always get the intended one (because a symbolic link refers to a path). If you make a hard link to the file, you'll get an outdated file after something replaces it (because a hard link refers to an inode).
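If it helps, here's a toy model in Python of that distinction (nothing like jj's actual implementation, just the idea):

```python
import hashlib
import uuid

class Change:
    """Toy model: a stable change ID fronting a mutable chain of commit IDs."""

    def __init__(self, content: str):
        self.change_id = uuid.uuid4().hex       # assigned once, never changes
        self.commit_ids = [self._snapshot(content)]

    @staticmethod
    def _snapshot(content: str) -> str:
        # Like git, a commit ID is a hash of the content it points at
        return hashlib.sha1(content.encode()).hexdigest()

    def amend(self, new_content: str) -> None:
        # Amending produces a brand-new commit ID...
        self.commit_ids.append(self._snapshot(new_content))
        # ...while self.change_id stays put.

c = Change("draft")
old_change, old_commit = c.change_id, c.commit_ids[-1]

c.amend("draft, typo fixed")

assert c.change_id == old_change        # stable handle (the "filename")
assert c.commit_ids[-1] != old_commit   # snapshot moved (the "inode")
```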


There’s still a git-compatible commit ID which changes along with the contents. There’s also an immutable revision/change identifier that persists even as you continue working on it.

This works extremely well in practice and makes rebase-heavy workflows practical even when collaborating with others.


  …can be done with git, IMO easily, using worktrees
Like many things in git, the capability exists, but I would not call worktrees easy. The few times I have tried to play with worktrees resulted in enough friction that it felt safer to use a clone in a separate directory.


I’ve also run into problems with tools that aren’t worktree aware so often that I’ve stopped using it.

I’ve been using jujutsu for about 6 months now, and the only time I’ve reached back for git was when I had to rebase and amend someone else’s branch to get it merged (when they weren’t available to do so themselves of course).

Switching between changes in jujutsu has been a pleasant experience for me thus far, although I'm not as good with it as I was with stacked-git at keeping local-only changes (things I'm hacking to match my workflow / local setup) out of change sets.

The way it displays diffs is also still something I am getting used to, and have made plenty of mistakes when pulling in changes from trunk. That’s probably more of a case of “old dog new tricks” than jujutsu.


Yeah, after the first month of jj, I abandoned git forever, because it's already so much better. There are some hiccups, though.

I switched over to colocation for all repos, because too many things expect git directories to be where they expect.

I think the revset language is cool and powerful, but if I'm honest, it's tempting me to spend too much time trying to master, when 99% of the time all I need is, "show me the nearby ancestors and descendants within k revisions".

I think the diffs need work. Or I need to get comfy with 3-way diffs. It's unfamiliar, and an obstacle to fixing conflicts. Luckily I get maybe 1/10th the conflicts I used to under git.


> I think the revset language is cool and powerful, but if I'm honest, it's tempting me to spend too much time trying to master, when 99% of the time all I need is, "show me the nearby ancestors and descendants within k revisions".

I just spend enough time to write a new function for what I want to do, and then just know the basics for regular day to day stuff. I feel like that gets me really far.
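For example (the alias name is made up, and the exact revset functions available depend on your jj version), in ~/.config/jj/config.toml:

```toml
[revset-aliases]
# "everything within a few hops of the working copy"
'nearby' = 'ancestors(@, 5) | descendants(@)'
```

Then `jj log -r nearby` covers the 99% case without remembering the full language.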


Yeah, I've done that as well. I wonder if the revset docs should be split into "Basic" and "Advanced" sections.


> I think the diffs need work. Or I need to get comfy with 3-way diffs. It's unfamiliar, and an obstacle to fixing conflicts.

You should get comfy, you won't regret it. I haven't got around to trying jj yet, but I use 3-way diffs in git; I frequently see people messing things up or just having a hard time resolving a conflict that they wouldn't if they used (& understood) diff3.

In brief: a regular 2-way diff shows you the current state and what you want to change it to, right? Well, 3-way just adds an extra bit of information (the middle), which shows you the original state you were changing from to get to the bottom.

So say you have:

  <<<<<<< HEAD
  def wazzle(widget):
    try:
      widget.wazzle()
    except Exception:
      return False
    return True
  |||||||
  def wazzle(widget):
    widget.wazzle()
  =======
  def wazzle(widget):
    from wazzler import wazzle
    wazzle(widget)
  >>>>>>> (deadbeef Abstract wazzle implementation to own package)

If you didn't have the middle, it might not be at all clear why you're getting the conflict, or what the appropriate fix is. It allows you to see: "Ah, OK, master (or whatever I'm rebasing onto) has changed wazzle to return a bool indicating success or error; that's fine, I was just trying to change the wazzle method to pass the widget to a library function instead."

Or you might have it that the same change is already in the HEAD part at the top, but there's a conflict because they put the import elsewhere. The middle then allows you to see that you were making essentially the same change, you don't care where the import goes (or like their idea better), you can just remove it and stick with the changes on HEAD.

My point is that it's strictly more information, that can help or make it easier to resolve the conflict. It shouldn't be confusing at all, because the same 2-part thing you're accustomed to is there too.
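If you want to try it, it's one config switch in git (zdiff3 is the trimmed variant, available in newer gits, 2.35+ iirc):

```shell
# Show the merge base ("the middle") in conflict markers from now on
git config --global merge.conflictStyle diff3

# Newer gits also offer a variant that trims lines common to all three sides:
#   git config --global merge.conflictStyle zdiff3
```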


Others have explained the change IDs already in detail, but I want to try to give a short and sweet explanation: changes in Jujutsu are a mutable, high-level abstraction built on git's immutable commits. Change IDs are stable across mutations.


Sounds like how Mercurial handles commits!


The people who work on Jujutsu explicitly draw a lot of inspiration from Mercurial :)


worktrees are pretty great, but for some people they make things a bit more conceptually complicated compared to just checking out two copies


TIL about worktrees, a feature I've needed in the past but didn't expect to even exist.


Jj also supports worktrees, though they call them workspaces.


Whenever I see tools like this I always think "that would've been great at my old job where we didn't do post mortems."

But nowadays I think if I can automate a runbook can I not just make the system heal itself automatically? If you have repeated problems with known solutions you should invest in toil reduction to stop having those repeated problems.

What am I missing? I think I must be missing something because these kinds of things keep popping up.


A lot of on-call teams lack the capability to do that automation, either because ops takes the pages and can't code (or can't code well enough), or because dev takes the pages and has no access to or knowledge of the infra APIs they could use for self-healing.

These platforms can form a sort of "common ground" where dev can see the infra APIs and the "code" is simple enough for ops people that don't code to rig stuff up.

I don't think these platforms are built for the kind of places where being able to write a Python script to query logs from CloudFront is just table stakes for all ICs regardless of role.


Writing post-mortems is generally pretty kludgy. You might have a Slack bot that records the big picture items, but ideally, a post-mortem would include connections to the nitty-gritty details while maintaining a good high-level overview. The other thing most post-mortems miss is communicating the discovery process. You'll get a description of how an engineer suspected some problem, but you rarely get details as to how they validated it such that others can learn new techniques.

At a previous job, I worked with a great sysadmin/devop who would go through a concise set of steps when debugging things. We all sat down as a team, and he showed us the commands he ran to confirm transport in different scenarios. It was an enlightening experience.

I talked to him and other DevOps folks about Rundeck, and it was clear that the problem isn't whether something can be automated, but rather whether the variables involved are limited enough to be represented in code. When you do the math, the time it would take to write code to solve some issues is not worth the benefit.

Iterating on the manual work to better communicate and formalize the debugging process could fit well into the notebook paradigm. You can show the scripts and commands you're running to debug while still composing a quality post-mortem, as the incident is happening where things are fresh.

The other thing to consider is how often you get incidents and how quickly you need to get people up to speed. In a small org, devs can keep most things in their head and use docs, but when things get larger, you need to think about how you can offload systems and operational duties. If a team starts by iterating on operational tasks in Notebooks, you can hand those off to an operations team over time. A quality, small operations team can take on a lot work and free up dev time for optimizations or feature development. The key is that devs have a good workflow to hand off operational tasks that are often fuzzier than code.

The one gotcha with a hosted service IMO is that translating local scripts into hosted ones takes a lot of work. On my laptop, I'm on a VPN and can access things directly, whereas you need to figure out how to allow a 3rd party to connect to production backend systems. That can be a sticky problem that makes it hard to clarify the value.


> if I can automate a runbook can I not just make the system heal itself automatically

The runbooks are still codified by a human in the current scenario. We are experimenting with some data to see if we can generate accurate runbooks for different scenarios, but haven't had much luck with it yet. I do think that some % of issues will be abstracted away in the near future, with machines doing the healing automatically.

> you should invest in toil reduction to stop having those repeated problems.

Most teams I speak to say that they try their best to avoid repeating the same issue again. Users typically use PlayBooks for:

(a) A generic scenario where you have an issue reported / alerted and you are testing 3-4 hypotheses / potential failure reasons at once.

(b) You want to run some definitive sequence of steps.


For self-hosting I've found https://k3s.io to be really good from the SUSE people. Works on basically any Linux distro and makes self-hosting k8s not miserable.


For handling secrets we manage helm via Pulumi (previously Terraform) and pass in the secrets to values from Secrets Manager or whatever cloud provider.

I haven't found a good alternative to Helm. Pulumi is probably the best if you just want to create manifests (their k8s provider is great), but we ultimately want to shift the kubernetes manifests left, and helm is pretty OK for that.


As an SRE with a database background this should be required reading for any development team.


The declarative DSL for defining user interfaces reminds me of QML, and I'm not sure why I would use this over Qt, really, given that Slint seems to have a similarly weird licensing model.


Slint author here. Allow me to share a few reasons why we believe Slint is worth choosing over Qt:

- Qt is a C++ framework for C++ developers; its bindings for other languages are second-class citizens and not truly idiomatic (I know, I've made bindings for Rust before). C++ may not always be the optimal language for writing GUI application logic. Slint is designed with a smaller API surface and better bindings for multiple programming languages (e.g., we don't duplicate half of the C++ stdlib). With our JavaScript binding, we are a good lightweight alternative to Electron.

- The Slint language uses static typing, and each file is self-contained, simplifying tooling. We already offer IDE support via the language server protocol, featuring a Live Preview. We're also developing a WYSIWYG editor. Catching errors at compile time is better than at runtime.

- Slint is optimized to work on less powerful hardware, even supporting microcontrollers with less than 300K of RAM, while Qt requires 100 to 1,000 times that amount.

- As a young and small company, every customer is invaluable to us. We offer personalized support and implement the features our customers' products need. Even small customers receive the attention they deserve.

- In general, Slint is what we think QML could have been without its legacy, by starting afresh. We acknowledge that this is an ambitious project that will take time to mature, but we encourage you to give it a try.


Thanks for creating slint! Qt still has not delivered on the promise of rapid prototyping with qml, after 14 years. The biggest issue I see is that half of the Qt devs themselves do not like qml. See my rant I wrote a while ago: https://kelteseth.com/post/20-04-2023-current-issues-with-th...


The author of Slint worked on Qt and is one of the designers of the QMetaObject system and QML.


Most of this codebase seems to be Rust. I'm not sure how good the Qt/Rust experience currently is; there might well be room for another player in that space.

More generally, I get the impression that the current state of the art in cross-platform desktop-first UI development leaves a lot to be desired, so, tentatively, I applaud any and all efforts in this space, especially the ones that attempt to implement something from the ground up, rather than introducing unnecessary layers of leaky abstraction.


Qt is a heavily object-oriented framework, which is unlikely to map well onto Rust bindings. Currently, afaik, there is no Qt Widgets binding for Rust.


The Qt licensing model is actually a bit less restrictive, since Qt is distributed under the LGPL, and Slint is full-fat GPLv3


Slint developer here.

Slint is under multiple licenses. The GPLv3 and also the Slint Royalty-free license.

For desktop applications, the Royalty-free license is actually less restrictive than the LGPL, since you can do static linking and don't need to share your modifications.

For embedded products, the LGPLv3 can often not be an option anyway, and we hope that users find our pricing model better than Qt's.


Can you share a ballpark estimate of what embedded runtime licenses cost? I really don't want to engage your marketing department to find out.


For non-commercial and personal projects, the runtime licenses are free.

For commercial projects, the online form https://slint.dev/get-price (from https://slint.dev/pricing page) automatically sends an email with pricing info (no marketing person is involved :) ). Note: you need to enter your business email in the form.


While I can't contest that SCons is comprehensive, I would never recommend it as a source of learning "what to do".

SCons is not idiomatic Python and it abuses things like `eval` which gives it terrible performance.
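The eval overhead is easy to see with a generic micro-benchmark (nothing SCons-specific, just the general cost of the pattern):

```python
import timeit

# Same computation, run directly vs. routed through eval()
direct = timeit.timeit("1 + 2", number=100_000)
evaled = timeit.timeit("eval('1 + 2')", number=100_000)

# eval() must re-parse and re-compile the string on every call,
# so it is far slower for identical work
print(f"direct: {direct:.4f}s, eval: {evaled:.4f}s")
assert evaled > direct
```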

Source: I used to work for MongoDB and my full time job was to make SCons faster, which I eventually did by making it a Ninja generator (which has now been upstreamed). But the code is still pretty bad.

Using SCons however is much nicer than using make / autoconf IMO, especially now that you can farm the builds out to Ninja.


Oh definitely agree on the non-idiomatic Python. I was thinking less of "how to correctly implement and write the python code" and more of "learn the history and decisions on why it might do things, and the user experience of defining a build in python". I really liked its consistency of repeatable build steps, the way it handled its dependencies, and accepted the tradeoff it presented in slowness. But it's been perhaps 10 years since I looked at it, likely longer, and rose-coloured glasses are applicable.

And nice work making SCons faster!! Not an easy thing to do at all.


While there is definitely a higher barrier to entry, once I got comfortable with Rust (and finally stole someone's working cross-compile / publish github actions for it) it has supplanted Golang in this use case because it does spark joy for me.


Man I love Make. But recently started a new job and we decided to use Just* and it's been fantastic. I doubt I would use Make again unless I was planning to use it as a real build system (which has been the minority use case of my use cases in the last 5 or so years).

* https://github.com/casey/just
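For reference, a minimal justfile (recipe names made up); it looks a lot like a Makefile but without the file-target/timestamp semantics:

```
# justfile — recipes are just named commands, no dependency timestamps
build:
    cargo build --release

# Recipes can depend on other recipes
test: build
    cargo test

# Recipes can take arguments
deploy env:
    ./scripts/deploy.sh {{env}}
```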


Same, except I do not love make. Love did not grow in the three years I have had to use it for C and C++ building and as a command file. I don't understand the love. There are so many footguns. Nothing you want from it is easy. `make` experts always tell you that your makefile is wrong, even when it does what it's intended to, and when you ask to see a good makefile, you're given a document you can barely understand even though you thought you had good `make` knowledge. Correct handling of header files needs dependency files generated by the compiler; the alternative is recompiling everything when you change one header file.

I now use `xmake` for building (found love at last) and basic scripts or `just` for command files
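(For the curious: the "files generated by the compiler" bit refers to GCC/Clang's .d dependency files, typically wired up roughly like this — a common pattern, not the only one:)

```make
# Ask GCC/Clang to emit a .d file per object, listing its header deps
CFLAGS += -MMD -MP

SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

app: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

# Pull in the generated dependency files (tolerating their absence
# on the first build)
-include $(OBJS:.o=.d)
```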


I love make, personally, because no other system gives me the level of flexibility and power. I use it for all sorts of things that aren't even building code.

> There are so many footguns.

That's true. That sorta comes with flexibility and power.

> `make` experts always tell you that your makefile is wrong even when it does what it is intended to

I wouldn't do this. "If it's stupid and works, it's not stupid."

> Correct handling of header files needs files generated by the compiler the alternative being recompiling everything when you change one header file.

I'm not sure I understand what you mean here.


I suspect people love the concept of Make, not its implementation; that's just Stockholm Syndrome after using it for too long. It does fit beautifully into the "everything is a file" paradigm tho!

Never used Xmake, looks nice and clean - the Zig way seems promising as well…


Huge vote for Just from me as well.

It has revolutionised how I build my projects, particularly with monorepos and is a massive productivity booster when compared to manual task running.


I don't get it. Isn't "just" simply a "friendlier make", but only for linear pipelines that could have been simple bash scripts in the first place?

(in which case, why not just use a bash script to begin with?)

