That doesn’t quite work for cases where you’re either the primary author of a commit (asking the model for some touch-ups) or heavily editing model output. Easier to just say “this is who’s driving the AI” and keep it to your username.
I could be wrong, but I’m not sure those settings are enough to mitigate Copy Fail.
If your distro offers a patched kernel, it’s best to upgrade to that one and reboot.
You can also disable the vulnerable module (how to do that depends on your distro). But if you stay on an old unpatched kernel, you might be exposed to other vulnerabilities.
You are misinterpreting my goal here. I have already patched my kernel against Copy Fail, but I am thinking of ways to harden my setup against future kernel CVEs.
So the question is: before I learned about Copy Fail, what could I have done to limit the damage this vulnerability could do to me? CapabilityBoundingSet is one answer, and rootless podman, as mentioned in this article, is another. They don’t prevent everything, but at least `su` is useless.
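For the systemd angle, a minimal sketch of what that looks like in practice (the `sleep 1000` workload is just a placeholder):

```sh
# Transient unit with an empty capability bounding set; NoNewPrivileges
# additionally stops setuid binaries like `su` from gaining privileges.
sudo systemd-run -p CapabilityBoundingSet= -p NoNewPrivileges=yes sleep 1000

# The same directives work in a unit file or drop-in for a real service:
#   [Service]
#   CapabilityBoundingSet=
#   NoNewPrivileges=yes
```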
If so, I would look into applying a decent seccomp profile.
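For container workloads, a rough sketch with podman (the denied syscall list below is illustrative, not a vetted profile; a proper allowlist with `SCMP_ACT_ERRNO` as the default action is stronger but needs per-workload tuning):

```sh
# Deny a handful of high-risk syscalls that kernel exploits commonly
# lean on, allow everything else. OCI/Docker seccomp JSON format.
cat > seccomp-deny.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["mount", "umount2", "kexec_load", "init_module",
                "finit_module", "delete_module", "bpf", "userfaultfd"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
EOF
podman run --rm --security-opt seccomp=seccomp-deny.json alpine echo ok
```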
Other hardening options are to run the workloads inside a microVM like Firecracker, or under a user-space kernel like gVisor. But that might be more work to set up than seccomp.
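If you already run containers, gVisor is cheap to try; assuming `runsc` is installed at its default path:

```sh
# runsc runs the container on gVisor's user-space kernel, so guest
# syscalls don't hit the host kernel directly.
podman run --rm --runtime /usr/local/bin/runsc alpine uname -a
# Prints gVisor's emulated kernel version, not the host's.
```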
`user.email` is always my email.
`user.name` is either my account name or a model name like `gpt-5.5-high`.
I can easily filter & blame which lines were written by me or by a specific AI.
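Concretely, assuming that convention (the email and file name are placeholders):

```sh
# Record the model as the author of a one-off commit without
# flipping user.name back and forth:
git commit --author="gpt-5.5-high <me@example.com>" -m "model-written change"

# Filter that model's commits:
git log --author=gpt-5.5-high --oneline

# Count lines per author in a file:
git blame --line-porcelain somefile | grep '^author ' | sort | uniq -c
```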