The real benchmark should be comparing the amounts with a human guess. And as far as I know, with diabetes, if you are within 30% when guessing carbs, you should be fine.
The 3D scan is generally used as a base for your CAD model; you don’t print it directly. Instead, you replicate the shapes in your CAD software, which gives you pretty much infinite precision thanks to NURBS.
My guess is that both 3D printed fans and production fans get balanced, but the production fans have an extra bit of design that lets the blade profile work well across a wider speed range and peak at a higher speed.
Imagine you are the top engineer at your company. Everybody wants your attention: many meetings, design sessions, and of course code reviews.
With Claude Code, I use GitLab for reviewing code, and then I let Claude pull the comments.
It looks like the new UI has a big focus on multiple agents. While it feels wrong, the more you split up your work into smaller merge requests, the easier it is to review the work.
Chat first is the way to go, since you want the agent busy making its code better. Let it first make plans and come up with different ideas; then, after coding, let it make sure it fully tests that everything works. I can keep an agent occupied for over an hour with e2e tests, and it’s only a couple hundred lines of code in the end.
I have found that maximising AI coding is a skill on its own. There is a lot of context switching. There is making sure agents are running in loops. Keeping the quality high is also important, as they often take shortcuts. And finally, you need somewhat of an architectural vision to ensure agents don’t just work in a single file.
This is all very tiring and difficult. You can be significantly better than other people at this skill.
This is not an argument for its revolutionary utility. Balancing rocks on the beach is very tiring and difficult for some people, and you can be significantly better at it. Not really bringing anything to the immediate conversation with that insight.
There are also software model checkers that can model distributed processes. You have to simplify the state a bit; otherwise you get a state-space explosion.
I tried it out myself: I let AI add action transitions throughout the code, like `// A -> B: some description`. Then I validate via a test that every action transition defined in my model is also commented somewhere in the code, and the other way around: that every commented transition exists in the model.
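A minimal sketch of that bidirectional check, assuming a hypothetical model that is just a set of `(from, to)` pairs and using Python-style `#` comments for the annotations (the commenter's actual setup uses `//` and a real model-checker spec):

```python
import re

# Hypothetical: transitions defined in the model. In practice these would
# be extracted from the model-checker spec, not hard-coded.
MODEL_TRANSITIONS = {
    ("Idle", "Connecting"),
    ("Connecting", "Connected"),
    ("Connected", "Idle"),
}

# Matches annotations of the form "# A -> B: description".
TRANSITION_RE = re.compile(r"#\s*(\w+)\s*->\s*(\w+)\s*:")

def transitions_in_source(source: str) -> set:
    """Collect every '# A -> B: ...' annotation found in the code."""
    return set(TRANSITION_RE.findall(source))

def check_coverage(source: str):
    """Return (in model but not code, in code but not model)."""
    annotated = transitions_in_source(source)
    return MODEL_TRANSITIONS - annotated, annotated - MODEL_TRANSITIONS

if __name__ == "__main__":
    code = """
    def connect(self):
        # Idle -> Connecting: user requested a connection
        ...
        # Connecting -> Connected: handshake succeeded
        ...
    def close(self):
        # Connected -> Idle: socket closed
        ...
    """
    missing, extra = check_coverage(code)
    assert not missing and not extra, (missing, extra)
    print("all transitions covered both ways")
```

In a real test suite you would walk the source tree instead of a string literal, and any mismatch in either direction fails the build.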
Finally, I let AI write model-check queries for particular properties. If I notice a particular bug, I ask AI to analyze the model and the model-check queries to explain why it could happen, and then ask it to strengthen them.
It sounds like a lot of effort, but I got it working in half an hour.