
Not soooo much, though. It's heavily subsidized for residential consumption, but industrial power rates are almost comparable to the US (depends on the state, etc.).

It's funny how adding AI to Notion actually made it a lot more usable. Most products force it on you, but here I feel like it's actually a massive benefit. It was hard to find content, the filters felt clunky, and the whole UI, whether in the browser or their app, feels buggy and slow. But with Notion AI / MCP it's gotten super easy to get information in and out.


Soo can I put this on top of e.g. grafana?


Interesting that the MCP-Atlas score for 4.6 jumped from 59.5% to 75.8%: https://www.anthropic.com/news/claude-opus-4-6

There are other small single-digit differences, but I doubt the benchmark is that unreliable...?


The page has been updated to state:

MCP-Atlas: The Opus 4.6 score has been updated to reflect revised grading methodology from Scale AI.


Huh, neat. Somehow I completely missed rules/ + path filters as a way to extend CLAUDE.md.


There's probably a more precise way, but if you're on uv:

  rg litellm --iglob='*.lock'
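
A slightly more precise option: uv.lock is plain TOML, so you can query it structurally. A sketch, assuming uv's usual [[package]] layout (tomllib needs Python 3.11+):

  # sketch: find litellm and whatever depends on it in uv.lock
  import tomllib

  with open("uv.lock", "rb") as f:
      lock = tomllib.load(f)

  for pkg in lock.get("package", []):
      deps = {d["name"] for d in pkg.get("dependencies", [])}
      if pkg["name"] == "litellm" or "litellm" in deps:
          print(pkg["name"], pkg.get("version"))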


Funnily enough, the model switching is mostly thanks to litellm, which dspy wraps.
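
e.g. dspy takes litellm-style "provider/model" strings directly, so switching is a one-liner (a sketch, recent dspy versions):

  # sketch: model switching in dspy via litellm model strings
  import dspy

  dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))
  # swap providers later without touching the rest of the program:
  dspy.configure(lm=dspy.LM("anthropic/claude-3-5-sonnet-20240620"))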


DSPy's advertising aside, imho it is a library only for optimizing an existing workflow/prompt, not for the use cases described there. Similar to how I would not write "production" code with sklearn :)

They themselves are turning into wrapper code for other libraries (e.g. the LLM abstraction, which litellm handles for them).

Can also add:

Option 3: use instructor + litellm (probably Pydantic AI too, but I haven't tried that yet)
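
For option 3, the wiring is roughly this (a sketch; the model string and prompt are placeholders, uses instructor's litellm integration):

  # sketch: structured output via instructor + litellm
  import instructor
  from litellm import completion
  from pydantic import BaseModel

  class UserInfo(BaseModel):
      name: str
      age: int

  client = instructor.from_litellm(completion)

  user = client.chat.completions.create(
      model="gpt-4o-mini",  # any litellm-supported model string works
      response_model=UserInfo,
      messages=[{"role": "user", "content": "Jason is 25 years old"}],
  )
  print(user.name, user.age)  # -> Jason 25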

Edit: As others pointed out, their optimization algorithms are very good (GEPA is great and lets you easily visualize/track the changes it makes to the prompt)
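
Roughly like this; a sketch only, since GEPA's exact arguments and the metric signature vary between dspy versions:

  # sketch: optimizing a dspy program with GEPA
  import dspy

  dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

  qa = dspy.ChainOfThought("question -> answer")

  trainset = [
      dspy.Example(question="What is 2+2?", answer="4").with_inputs("question"),
      dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
  ]

  def metric(gold, pred, trace=None, pred_name=None, pred_trace=None):
      # toy exact-match score; GEPA can also work with textual feedback
      return float(gold.answer.lower() == pred.answer.lower())

  optimizer = dspy.GEPA(metric=metric, auto="light",
                        reflection_lm=dspy.LM("openai/gpt-4o"))
  optimized = optimizer.compile(qa, trainset=trainset, valset=trainset)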


The sklearn comparison, to me, mirrors the insane amount of engineering that exists/existed to bring Jupyter notebooks to something more prod-worthy and reproducible. There's always going to be re-engineering of these things; you don't need to use the same tools for all use cases.


Hmm, not quite what I meant. Sklearn has its place in every ML toolbox; I'll use it to experiment and train my model. For deploying it, though, I can e.g. just grab the model's weights and run it with numpy in production, without the heavy dependencies that sklearn adds.
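
Something like this, as a sketch (made-up logistic regression; a real model drags along whatever preprocessing you trained with):

  # sketch: train with sklearn, serve with numpy only
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  # training side (sklearn available)
  X = np.random.rand(100, 3)
  y = (X[:, 0] > 0.5).astype(int)
  clf = LogisticRegression().fit(X, y)
  np.savez("weights.npz", coef=clf.coef_, intercept=clf.intercept_)

  # serving side (numpy only)
  w = np.load("weights.npz")

  def predict(x):
      logits = (x @ w["coef"].T + w["intercept"]).ravel()
      return (logits > 0).astype(int)  # sigmoid(z) > 0.5  <=>  z > 0

  assert (predict(X) == clf.predict(X)).all()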


That initial drop reminds me of one of the things that stuck with me from my thermodynamics lectures/tests: if you want your coffee at a drinkable temperature at t=15 min, will it be colder if you add the milk right away or if you wait 15 min and then add it? (Answer: waiting 15 min, because the hotter black coffee has a greater temperature differential with the room and so loses more heat.) Almost useless fact, but it always comes up when making coffee.
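
Easy to sanity-check with Newton's law of cooling. A minimal sketch; the numbers (90 C coffee, 5 C fridge milk at ~1/6 of the cup, 20 C room, k = 0.1/min) are made up:

  # sketch: milk-first vs milk-last under Newton's law of cooling
  import math

  T_ROOM, K, MINUTES = 20.0, 0.1, 15  # assumed room temp and cooling constant

  def cool(t0, t):
      # Newton's law of cooling: T(t) = T_room + (T0 - T_room) * exp(-k*t)
      return T_ROOM + (t0 - T_ROOM) * math.exp(-K * t)

  def mix(coffee, milk, milk_frac=1 / 6):
      # instantaneous mixing, equal heat capacities assumed
      return (1 - milk_frac) * coffee + milk_frac * milk

  first = cool(mix(90.0, 5.0), MINUTES)  # milk in at t=0, then 15 min of cooling
  last = mix(cool(90.0, MINUTES), 5.0)   # cool black coffee, milk at t=15

  print(f"milk first: {first:.1f} C, milk last: {last:.1f} C")
  # milk first ends up warmer (~32.5 vs ~30.5 C here): the pre-mixed cup
  # sits closer to room temperature the whole time, so it loses less heat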


This is true if the milk is in the fridge the whole time. With the milk out the whole time it's nearly the same; the exact answer depends on the geometry of both containers.


If it quacks like a duck...?


To an extent, but with caution and charity. A lot of exceptionally good people have come from bad families - one of the Borgias was a saint, for example. A lot of exceptionally nasty people seem to have perfectly nice families.

Of course sometimes people who are, for example, brought up to be racist, are racist.

