I gave Claude Code with Sonnet 4.6* a try a few weeks ago. I pointed it at a hobby project with under 1 kloc of C (about 26,500 characters) across ~10 modules and asked it to summarize what the project does. It used about $0.50 worth of tokens and produced a summary that was part spot-on and part hallucinated. I then asked it how to fix a simple bug with an easy solution. It identified the right place to make the change, but its entire suggested fix was a one-liner invoking a hallucinated library method.
I use LLMs pretty regularly, so I'm familiar with the kinds of tasks they work well on and where they fall flat. I'm sure I could get at least some utility from Claude Code if I had an unlimited budget, but the voracious appetite for tokens even on a trivially small project -- combined with a worse answer than a curated-context chatbot prompt -- makes its value proposition very dubious. For now, at least.
* I considered trying Opus, but the fundamental issue of it eating through tokens meant, for me, that even if it worked much better, the cost would dramatically outweigh the benefit.
(Not the OP) I use my Zoom F3 (which is a 32-bit float recorder) for field recordings for hobby music production. I'm not a professional in any respect.
But I've found f32 to be incredibly useful because it allows me to very spontaneously capture unexpected sounds with little to no setup or preparation. In fact, sometimes I even forgo monitoring in favor of just quickly getting out the recorder and microphones and hitting record -- since I don't have to fiddle with gain, I know I can capture something usable rather than missing the opportunity altogether.
When I have time to prepare a recording and it's not going to have a crazy amount of dynamic range, then sure, f32 isn't a make-or-break feature, and needing to post-process 100% of the time before the audio is usable in non-f32 contexts could be seen as a drawback. But for my use cases, it's absolutely useful and worthwhile.
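To make the "don't have to fiddle with gain" point concrete: in 32-bit float, peaks above full scale aren't clipped at capture time, so you can just scale them down in post. Here's a minimal NumPy sketch (my own illustration, not the Zoom F3's actual pipeline) contrasting that with fixed-point clipping:

```python
import numpy as np

# Hypothetical over-hot capture: two peaks exceed full scale (+/-1.0).
signal = np.array([0.2, 0.9, 2.5, -1.8, 0.4], dtype=np.float32)

# Fixed-point capture hard-clips at full scale; the overs are destroyed.
clipped = np.clip(signal, -1.0, 1.0)

# 32-bit float capture keeps the overs; normalizing in post recovers
# the waveform shape with no information lost.
recovered = signal / np.abs(signal).max()

print(np.abs(clipped).max())    # 1.0 -- peaks flattened, shape lost
print(np.abs(recovered).max())  # 1.0 -- peaks in range, shape intact
```

The key difference: `clipped` can't be un-clipped, while `recovered` is just `signal` rescaled, so no post-hoc gain decision was needed at record time.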
I’m not closely familiar with this benchmark, but data leakage in machine learning is all too easy to introduce accidentally, even with the best of intentions. It really does require diligence at every stage of experiment and model design to strictly firewall the test data from any and all training influence. So it's not surprising when leakage breaks highly publicized benchmarks.
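As a minimal sketch of one classic leakage mode (my own illustration, nothing to do with this specific benchmark): computing preprocessing statistics over the full dataset before splitting lets held-out rows influence the features the model is later evaluated on.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(200, 3))
X_train, X_test = X[:150], X[150:]

# Leaky preprocessing: mean/std computed over ALL rows, test rows included.
leaky_mu, leaky_sd = X.mean(axis=0), X.std(axis=0)
X_test_leaky = (X_test - leaky_mu) / leaky_sd

# Firewalled preprocessing: statistics come from the training split only,
# then get applied to the test split.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
X_test_clean = (X_test - mu) / sd

# The leaky features have silently absorbed held-out information:
print(np.allclose(X_test_leaky, X_test_clean))  # False
```

The fix is mechanical (fit statistics on the training split only), but the bug is invisible in the model code itself, which is exactly why it takes diligence at every stage to catch.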
Mandating the wearing of seatbelts isn’t entirely about protecting the person wearing the seatbelt. An unbelted occupant becomes a projectile in a sufficiently violent collision, and that projectile can cause harm to people outside of the vehicle.
Heck, I recently saw a video (may be an old one) of a driver who fell out of his car while showing off his acceleration. Now the entire car is an uncontrolled projectile.
Interesting that pleasure craft have dead-man kill switches you can optionally clip onto yourself. They’re also designed to circle anti-clockwise indefinitely if nobody is at the wheel.
I guess because there aren’t seatbelts and these boats are usually open-top.
Isn't this dangerous to the person falling off (assuming no dead-man switch)? You fall off only to be run over by your own craft one turn later... It does mean that the boat won't run away far though, so there's that.
From the paper: "The traps were baited with ~100 g of mackerel that was enclosed in a mesh bag to allow the development of an odour plume, but prevent the amphipods consuming any of the bait that might otherwise affect POP levels in downstream assays."
So even if the mackerel did contain the pollutants, they weren't transferred to the amphipods.
My impression from the article was that the barrier to entry for new hires was less the language and more the domain-specific knowledge of the industry and the institution.
Salient quote from the article: "the time before a new employee can stand on their own feet is 2–3 years"
Ever tried it? I would not recommend it, especially with Xcode but generally for most OS X apps.
It's fine for running multiple instances of the simulator, if that's all you need to do. You'll be sorry if you try to actually work with code, though: settings will be overwritten, caches will be corrupted, and some things simply won't work. The problem is that OS X isn't designed with this in mind at all, and you're working against that design when you do it.