Hacker News | wa008's comments

This is the Twitter API playbook: restrict developers, win the short term, lose the ecosystem war to the next paradigm.

What I cannot build, I do not understand.


I'm not sure this is a useful way to approach "magic". I don't think I can build a production compiler or linker. It's fair to say that I don't fully understand them either. Yet, I don't need a "full" understanding to do useful things with them and contribute back upstream.

LLMs are vastly more complicated, and unlike compilers we didn't get a long, slow ramp-up in complexity, but it seems possible we'll eventually develop better intuition and rules of thumb to separate appropriate uses from inappropriate ones.


If only the opposite were true!


A report this transparent earns my trust.


Input lag is one of those things you feel before you can explain it. Good to finally have a resource that breaks down the full chain — controller, engine, display — instead of just blaming the monitor, like everyone does.

The engine section is the part most developers seem to ignore. A locked 60fps doesn't mean 16ms of latency, and that gap surprised me.
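A back-of-the-envelope latency budget makes that gap concrete. Every number below is an illustrative assumption, not a measurement, but it shows how pipelined stages add up even when each one fits inside a single frame:

```python
# Toy end-to-end latency budget for a "locked 60 fps" game.
# Each stage fits inside one ~16.7 ms frame, but the stages are
# pipelined, so their latencies add up on the way to the screen.
stages_ms = {
    "input poll":     8.0,   # ~half a 125 Hz USB poll interval (assumed)
    "simulation":    16.7,   # game logic runs one frame behind input
    "render":        16.7,   # GPU renders the frame the sim produced
    "present queue": 16.7,   # one frame buffered ahead of the display
    "scanout":        8.0,   # ~half a refresh for a mid-screen pixel
}
total_ms = sum(stages_ms.values())
print(f"end-to-end: {total_ms:.1f} ms at a locked 60 fps")  # ~66 ms
```

So a "locked 60fps" title can still sit around four frames of motion-to-photon latency, not one.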


I used to get into arguments all the time about how triple-buffering reduces latency, and I think it's because we lacked resources like this; people assume it adds the additional back buffer to a queue, when the traditional implementation "renders ahead" and swaps the most recently-completed back buffer. It's a subtle difference but significantly reduces the worst-case latency vs. a simple queue.
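The subtle difference fits in a few lines. This is a toy model of which completed frame the display picks at vsync, not real driver code; "queue" and "swap-latest" are my labels for the two behaviors described above:

```python
def next_scanout(completed, mode):
    """completed: ids of frames finished since the last vsync, oldest first.
    Returns the frame the display shows at the next vsync (toy model)."""
    if mode == "queue":
        return completed[0]    # naive queue: the oldest frame waits its turn
    if mode == "swap-latest":
        return completed[-1]   # traditional triple buffering: newest wins
    raise ValueError(mode)

# GPU finished frames 7, 8 and 9 since the last vsync:
print(next_scanout([7, 8, 9], "queue"))        # 7 -> up to two frames stale
print(next_scanout([7, 8, 9], "swap-latest"))  # 9 -> freshest completed frame
```

Same number of buffers, very different worst-case latency.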

I think most people get their information from help blurbs in settings menus for PC games, which are often hilariously vague or incorrect.


  Vulkan's presentation API makes this distinction explicit: VK_PRESENT_MODE_MAILBOX_KHR is the "replace if already queued" mode that actually reduces latency, while VK_PRESENT_MODE_FIFO_KHR is the pipeline-queue variant that adds frames ahead of time. OpenGL never standardized the difference,
  so "triple buffering" meant whatever the driver implemented, usually vendor-specific extension behavior that varied between hardware. The naming confusion outlived OpenGL's dominance because the concepts were established before any cross-platform API gave them precise semantics.


1. It doesn’t help that Windows’ “Triple buffering” option actually means forced three-frame FIFO buffering. So people had pre-established PTSD from that dreadfully laggy smoothing.

2. Triple buffering does not reduce latency compared to unsynced tearing. It’s a spatial-vs-temporal tradeoff: whether frequency mismatches manifest as tearing or as jitter. For passive consumption of motion, trading temporal consistency for spatial cohesion is the better deal, so triple buffering is appropriate. For active control of motion and its feedback, temporal consistency is absolutely critical while spatial cohesion during motion matters far, far less, so triple buffering is unacceptable in that use case.
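The worst-case numbers behind point 2 are easy to sketch. A toy model, assuming a 60 Hz display and ignoring the time scanout itself takes:

```python
REFRESH_MS = 1000 / 60   # ~16.7 ms between vsyncs at 60 Hz

def wait_before_visible_ms(finish_ms, synced):
    """How long a just-finished frame waits before any of its pixels
    reach the screen. finish_ms: time since the last vsync it completed."""
    if not synced:
        return 0.0   # tearing: scanned out immediately, mid-refresh
    return REFRESH_MS - (finish_ms % REFRESH_MS)   # held for the next vsync

# A frame that finished 1 ms after a vsync:
print(wait_before_visible_ms(1.0, synced=False))  # 0.0 ms, but it tears
print(wait_before_visible_ms(1.0, synced=True))   # ~15.7 ms of added lag
```

Any synced scheme, triple-buffered or not, pays that hold-until-vsync cost; tearing trades it for a spatial seam instead.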


I should have been more clear and contrasted with double-buffering, thanks.


It does increase input lag, in the same manner VSync does: there is a wait before the information is sent to the screen to avoid tearing.

If you want to minimize latency, you always want the most recent information available, which vsync or buffering does not provide. You trade that for tearing with those schemes.




In my experience, most health benefits come from personal habits rather than external hardware. But people care about health so much that it's a great revenue opportunity for merchants.


AI is far more powerful than humans in closed domains, like games and defense. AlphaGo was the first proof of that.


Agree. However, the described technique isn't really AI; there's no neural network or training. It's genetic-algorithm-driven exploration for testing: mutate inputs, keep what gets you further into the state space, discard what doesn't. AlphaGo optimizes for winning; testing optimizes for coverage. That said, what does apply well to testing from the AI field is the exploration during the training phase, as well as the ability to beat the game, giving you paths to branch off from to explore the test space further.
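That mutate/keep/discard loop can be sketched in a few lines. Everything here is a hypothetical harness: `run` stands in for executing the game with a given input sequence and reporting which states it reached, and the button alphabet is made up:

```python
import random

def ga_explore(run, seed, generations=200, rng=None):
    """Toy coverage-guided input mutation (hypothetical harness).
    run(inputs) returns the set of game states that input sequence reached."""
    rng = rng or random.Random(0)
    corpus = [seed]
    seen = set(run(seed))
    for _ in range(generations):
        parent = rng.choice(corpus)
        child = mutate(parent, rng)
        new_states = set(run(child))
        if new_states - seen:        # keep only inputs that reach new states
            seen |= new_states
            corpus.append(child)
    return corpus, seen

def mutate(inputs, rng):
    # Flip one random button press; the "UDLR" encoding is an assumption.
    i = rng.randrange(len(inputs))
    out = list(inputs)
    out[i] = rng.choice("UDLR")
    return out
```

Fitness here is novelty of coverage, not winning, which is exactly the contrast with AlphaGo's objective.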

