People have been talking about the book on here since it came out; I see no reason to believe people aren't genuinely interested in it. I loved it, personally.
In my experience, the Epic downloader would frequently lead to degraded performance and/or system instability when I'd leave it running; I've never noticed such problems at all with the Steam client.
I accidentally clicked through the explanatory text after the first slide (I was still clicking the pump and didn't realize one more click was going to skip through); I have not been able to get the applet to rewind back to the beginning.
If you hard reset, it will erase your save-file cookie and forget your progress. Another option: if you push through to the end and deliver your first product, you unlock the refinery map feature and can jump back to the extraction step!
Scare quotes around words that don't warrant them, or that are unnecessarily idiosyncratic, are something I get pretty often in response text from Gemini.
In this case the use of quotes seems to have been perfectly appropriate as it's almost certainly a word they've seen many people using when giving feedback.
I'm really surprised that didn't jump out at more people; I had to scroll halfway through the comments, past the 27th mention of "Department of War", before finding the first comment pointing out that using the name is itself a capitulation.
Defense is a much more fitting name for an organization that does a million things besides prosecute wars. War is just the favorite part of their mission for these wannabe tough guys.
For me there seems to be a listing of configurable settings or something, but it only pops up for a single frame after I right-click-drag a component; this seems like a broken mouse interaction.
I strongly suspect a major component of this type of experience is that people develop a way of talking to a particular LLM that's very efficient and works well for them with it, but is in many respects non-transferable to rival models. For instance, in my experience, OpenAI models are remarkably worse than Google models by basically any criterion I can imagine; however, I've spent most of my time using the Google ones, and it's only over that time that the differences became apparent and, gradually, much more pronounced. I would not be surprised at all to learn that people who chose to primarily use Anthropic or OpenAI models over the same period had an exactly analogous experience that convinced them their model was the best.
While this is certainly very true, I find coding through an LLM to require far less effort dedicated to this cognitive switching than does writing in some programming language, primarily because I no longer have to load the mental context for converting my high level human instructions to code that a programming environment actually supports. The mental context seems more lightweight and closer to the way I think about the problem when I'm not sitting at the computer actively working on it. If an idea comes to me while I'm away from the computer I can momentarily sit down, type in whatever I just thought of, and get going almost immediately. I think it also saves a huge amount of cognitive load and stress (for me) involved with switching around between different programs and languages, an unfortunate fact of life when dealing with legacy systems.
I just kept scrolling, hoping it would learn from how long I paused over content to read it the way FB's seems to, but it seems you're right, in this case "likes" are required.