Hacker News | cube2222's comments

Honestly, for now they seem to be buying companies built around open-source projects which otherwise didn't really have a good long-term story for funding their development anyway. And it seems like the primary reason is just expertise and tooling for building their CLI tools.

As long as they keep the original projects maintained and those aren't just acqui-hires, I think this is almost as good as we can hope for.

(thinking mainly about Bun here as the other one)


And how likely is that?

Once you’re acquired you have to do what the boss says. That means prioritizing your work to benefit the company. That is often not compatible with true open source.

How frequently do acquired projects seriously maintain their independence? That is rare. They may have more resources but they also have obligations.

And this doesn’t even touch on the whole commoditize-and-box-out strategy that so many tech giants have employed.


> Once you’re acquired you have to do what the boss says.

Or quit, and take the (Open Source) project and community with you. Companies sometimes discover this the hard way; see, for instance, the story of how Hudson became Jenkins.


A pretty cool part of this approach (duplicating the commits, though I suppose you could also just add your own bookmark to the existing commit) is that you can easily diff the current PR state against what you last reviewed, even across rebases, squashes, fixups, etc. I'll have to give that a go.

Unfortunately, GitHub still doesn't make that easy, and force-pushes to a branch make it really hard to see what changed; it would be amazing if they ever fixed that.
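A minimal git-only sketch of the same idea, in a throwaway repo (the file and tag names here are just for illustration): tag the state you last reviewed, let the history get rewritten as a force-push would, then diff the tag against the new tip.

```shell
set -e
# Throwaway repo to demo reviewing across a history rewrite.
dir=$(mktemp -d) && cd "$dir" && git init -q
git -c user.email=r@x -c user.name=r commit -q --allow-empty -m base
echo v1 > file.txt && git add file.txt
git -c user.email=r@x -c user.name=r commit -q -m change
git tag reviewed                      # mark what we last reviewed
echo v2 > file.txt && git add file.txt
git -c user.email=r@x -c user.name=r commit -q --amend --no-edit  # simulate a force-push rewrite
git diff reviewed HEAD -- file.txt    # only the delta since review: -v1 / +v2
```

Because the tag pins the old commit, the diff survives amends, rebases, and squashes on the branch itself.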

In general, with the rise of agentic coding and the growing share of review work, I hope we see some innovation in the "code review tooling" space. Not AI reviewers (those are useful too, but already work well enough)! I want tools that help the human review code faster, more effectively, and in a more pleasant way.

Of course can't end the comment without the obligatory "jj is great, big recommend, am not affiliated, check out the blog post I wrote a year ago for getting started with it[0]", ha! I'm still very happy with it, no going back.

[0]: https://kubamartin.com/posts/introduction-to-the-jujutsu-vcs...


> I hope we see some innovation in the "code review tooling" space.

There are some things happening, for sure, but GitHub finally working towards supporting Stacked Diffs will, I hope, accelerate the general demand for better tooling.


In git you can compare against the version prior to the latest fetch, using the reflog:

    git diff origin/master@{1} origin/master
or if you vaguely know the time:

    git diff origin/master@{5.weeks.ago} origin/master


This is actually... pretty cool?

Definitely won't use it for prod ofc but may try it out for a side-project.

It seems that this is more or less:

  - instead of modules, write specs for your modules
  - on the first go it generates the code (which you review)
  - later, diffs in the spec are translated into diffs in the code (the code is *not* fully regenerated)

This actually sounds pretty usable, especially if you like writing. And wherever you want to dive deep, you can drop down into the code and do micro-optimizations by rolling something on your own (with what seems to be called "mixed projects" here).

That said, I'm not sure I need a separate tool for this, tbh, instead of just having Markdown files and telling Claude to look at the Markdown diff and adjust the code accordingly.


We'd love to hear your feedback! Feel free to come to our Discord to ask questions or share your experience: https://l.codespeak.dev/discord


Spacelift | Remote (Europe) | Full-time | Senior Software Engineer | $80k-$110k+ (can go higher)

We're a VC-funded startup (recently raised $51M Series C) building an infrastructure orchestrator and collaborative management platform for Infrastructure-as-Code – from OpenTofu, Terraform, Terragrunt, CloudFormation, Pulumi, Kubernetes, to Ansible.

On the backend we're using 100% Go with AWS primitives. We're looking for backend developers who like doing DevOps-y stuff sometimes (because in a way it's the spirit of our company), or who have experience with the cloud-native ecosystem. Ideally you'd have experience working with an IaC tool, e.g. Terraform, Pulumi, Ansible, CloudFormation, Kubernetes, or SaltStack.

Overall, we have a deeply technical product, we're trying to build something customers love to use, and we have a lot of happy, satisfied customers. We promise interesting work, the ability to open-source parts of the project which don't give us a business advantage, as well as healthy working hours.

If that sounds like fun to you, please apply at https://careers.spacelift.io/jobs/3006934-software-engineer-...

You can find out more about the product we're building at https://spacelift.io and also see our engineering blog for a few technical blog posts of ours: https://spacelift.io/blog/engineering


This seems to agree with my own previous tests of Sonnet vs Opus (not on this version). If I give them a task with a large list of constraints ("do this, don't do this, make sure of this"), like 20-40, Sonnet will forget half of it, while Opus correctly applies all directives.

My intuition is that this is just related to model size / its "working memory", and will likely be fixed neither by training Sonnet on Opus outputs nor by steadily optimizing its agentic capabilities.


I'd agree that this effect is probably mainly due to architectural parameters such as the number and dimensions of attention heads, and the hidden dimension, but not so much the total model size (number of parameters) or the amount of training.

I saw something about Sonnet 4.6 having had a greatly increased amount of RL training over 4.5.


Attention is, at its core, quadratic wrt context length. So I'd believe that to be the case, yeah.
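A back-of-the-envelope sketch of the scaling (in shell, since the exact constants don't matter): the attention score matrix has every one of n tokens attending to every other, so doubling the context length quadruples the work.

```shell
# Rough cost of the attention score matrix: n tokens each take a
# dot product over d dimensions against n tokens, so work ~ n^2 * d.
attn_cost() { echo $(( $1 * $1 * $2 )); }
small=$(attn_cost 1024 64)
large=$(attn_cost 2048 64)
echo $(( large / small ))   # doubling n quadruples the cost: prints 4
```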


So, tl;dr, it seems like it's:

- a reasonable improvement over sonnet 4.5, esp. with agentic tool use

- generally worse than opus 4.6

Probably not worth it for coding, but a win for anybody building agentic AI assistants of any sort with Sonnet.


It’s similar to or better than Opus 4.5 per benchmarks, while being 2x-3x cheaper; it's definitely worth it over Opus 4.6 if cost per token is the concern.

As a reminder, Opus 4.5 was SOTA 2-3 weeks ago.


Yes, but Opus 4.6 is a massive step up. Some applications don’t need that power, though.


I'm not an expert, but I've done a bunch of reading on this previously, and also skimmed the article which also mentions some parts of this.

First, when taking omega-3 supplements, you generally care about increasing the ratio of omega-3 to omega-6. Hemp hearts have much more omega-6 than omega-3, so they're not very effective at improving the ratio.

Second, hemp hearts contain ALA, while what you generally want to increase is EPA and DHA (this is also covered in TFA). The body can convert ALA to EPA and DHA, but the conversion isn't efficient.

So, all in all, if omega-3 for the article's stated benefits is what you want, this is not the way. I'd recommend looking into eating more fish or, if you want a vegan route, algae-based supplements. [0] is a decent NIH source on foods and their omega-3 content, split by ALA/EPA/DHA.

[0]: https://ods.od.nih.gov/factsheets/Omega3FattyAcids-HealthPro...


The ratio of omega-6 to omega-3 needs to be below 4:1 for something to count as a good source of omega-3, and hemp hearts are at 3:1, so they're listed as a good source.

Flax seeds are even better for omega-3 alone, at 1:3, but hemp hearts have other benefits, like more protein, which is why I called them out. That said, I eat a fair amount of flax seeds as well.


Just to reiterate, both of those (hemp hearts and flaxseed) contain only ALA, while what you're generally looking for is EPA and DHA. TFA also explicitly mentions it's only talking about EPA.

This is not to say that they're unhealthy of course.

EDIT: see the sibling comment by code_biologist, it's much more comprehensive than what I've written.


Your body converts ALA into EPA and DHA, however, so plants are fine sources of both.


I think the main problem in estimating projects is unknown unknowns.

I find that the best approach to solving that is taking a “tracer-bullet” approach. You make an initial end-to-end PoC that explores all the tricky bits of your project.

Making estimates then becomes quite a bit more tractable (though still has its limits and uncertainty, of course). Conversations about where to cut scope will also be easier.


But how long will it take you to make that PoC? Any idea? :P

