I just took a look at your last post. It asks me to sign in with a Google account or create one. Unless there is a way to try it freely, it's very difficult to get traction on HN.
After a few clicks, I noticed you posted a link to a shared document, and that I can click "make a copy" and edit my own copy. I tried clicking the "f(x)" button and typing
\displaystyle\cancel{\frac{1}{2}}
and it works :)
---
So I took a look at your last^2 post. It goes to the landing page. It looks good, but it may be too long for the TikTok generation and the era of AI-generated waiting-list pages. Also, there's no mention of LaTeX.
* Check after a few minutes that the HN server has not changed the URL to a document. (It happens when there is a canonical URL or a redirect or something; I don't know the details. In case of a problem, email dang/tomhow at hn@ycombinator.com.)
* Add a comment explaining you are the author and are happy to answer questions. Bonus points for a general description of the tech stack. Some backstory is also nice.
* Include in the comments 2 or 3 links to sample documents, like one with LaTeX formulas and one with more usual text. Add something like 'Press the "make a copy" button to edit them'. (Is it real LaTeX? Which packages does it support?) (Markdown? Some people love markdown.)
I'm not sure how viable it is to make an editor in a space that is squashed between Overleaf and Google Docs, but I wish you luck.
The Vietnam War was the first one to be "televised" on pretty much a daily basis.
Because of that coverage, average US citizens and service members came to recognize the folly in greater numbers over time.
It was the somewhat more extreme faction of the anti-war crowd that would have favored a revolution of some kind, mainly because, in their view, Nixon needed to be toppled ASAP without a doubt; they were just the most disruptive when it came to "whatever it takes."
That's why the old saying was coined, "The revolution will not be televised."
Usually there are two methods: `onMessage(long timeNow, byte[] buf)` and `onTimer(long timeNow, int timerId)`.
All output sinks and a scheduler need to be passed in on construction of the application.
Then you can record all inputs to a file. Outputs don't strictly need to be recorded, because they can be reproduced by replaying the inputs, but recording them anyway makes analysis easier when bugs happen.
I have even worked on systems with tools that you could paste recorded inputs and outputs into, and they would generate the source code for a unit test. Super useful for reproducing issues quickly.
But you are spot on in that there is an overhead. For example, if you want to open a TCP socket and then read and write to it, you need to create a separate service and serialise all the inputs and outputs in a way that can be recorded and replayed.
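A minimal sketch of this pattern in Java (all names here are illustrative, not from any particular framework): the application sees the world only through its two callbacks, a recorder journals every input before delivery, and replaying the journal into a fresh instance reproduces the run deterministically.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical application interface: all input arrives via these two callbacks.
interface App {
    void onMessage(long timeNow, byte[] buf);
    void onTimer(long timeNow, int timerId);
}

// One journaled input event; a timer event has buf == null.
record Input(long timeNow, byte[] buf, int timerId, boolean isTimer) {}

// Wraps a real app and journals every input before delivering it.
final class RecordingApp implements App {
    final App inner;
    final List<Input> journal = new ArrayList<>();

    RecordingApp(App inner) { this.inner = inner; }

    public void onMessage(long timeNow, byte[] buf) {
        journal.add(new Input(timeNow, buf, -1, false));
        inner.onMessage(timeNow, buf);
    }

    public void onTimer(long timeNow, int timerId) {
        journal.add(new Input(timeNow, null, timerId, true));
        inner.onTimer(timeNow, timerId);
    }

    // Deterministic replay: identical inputs produce identical behavior,
    // provided the app receives all sinks and the clock via construction.
    static void replay(List<Input> journal, App fresh) {
        for (Input in : journal) {
            if (in.isTimer()) fresh.onTimer(in.timeNow(), in.timerId());
            else fresh.onMessage(in.timeNow(), in.buf());
        }
    }
}

// Toy app for demonstration: accumulates byte values and timer ids.
final class SumApp implements App {
    long sum = 0;
    public void onMessage(long timeNow, byte[] buf) { for (byte b : buf) sum += b; }
    public void onTimer(long timeNow, int timerId) { sum += timerId; }
}
```

In production the journal would go to a file; replaying it into a fresh instance is what makes tooling like code-generated unit tests from recorded sessions possible.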
The Pivot to "Inference Sovereignty"
NVIDIA is shifting focus from raw training power to deterministic inference to solve the "Stochastic Wall"—the unpredictable latency jitter in current GPUs that hampers real-time AI agents.
* Feynman Architecture (1.6nm): Utilizing TSMC’s A16 node with Backside Power Delivery (Super Power Rail) to achieve a projected 100x efficiency gain over Blackwell.
* LPX Cores: Integration of Groq-derived deterministic logic to provide guaranteed p95 latency for "Chain of Thought" reasoning.
* Storage Next: Collaboration on 100M-IOPS SSDs that function as a peer to GPU memory, eliminating the "Memory Wall" for million-token contexts.
* Vertical Fusion: 3D logic-on-logic stacking that places SRAM-rich chiplets directly over compute dies to minimize token-generation energy costs.
* Supply Chain: Rumors of a strategic shift to Intel Foundry (18A) for I/O sourcing to diversify away from total TSMC reliance.
This is fine. How else do you learn but by taking things apart and rebuilding them? This obsession with productivity is incompatible with onboarding new talent. Having 1000 versions of the same concept is exactly what progress is.
Actually, the truth is that a lot of senior devs are not very good either, and provide negative value. But they have an inflated view of their own value that does not reflect reality.
Pretty much all software projects seem to peak, and then decline in quality. There are only a handful of senior devs in the world who are actually good programmers.
No, capital (i.e. money) decides. It’s called capitalism not marketism. The difference is important because it means that if you’re already rich (or are perceived as such, and thus can get loans, extensions, and the like) you can continue to survive longer than the alternatives.
This is ridiculous. New developers will learn a completely different skill path from what we learned, and they will get where we are faster than we did.
Are you selling insights from chat logs too? Until you're monetizing my health, sex life and snitching to any government agency with a shiny nickel, you're playing in the shallows.
One can have a loop where AI generates new ideas, rejects some and ranks the rest, then prioritizes.
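A loop like that is just generate → filter → rank. A sketch in Java, where the generator, filter, and scorer are placeholder functions standing in for model calls (nothing here is a real AI API):

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

final class IdeaLoop {
    // One iteration: generate candidate ideas, reject the bad ones,
    // then rank survivors by score, best first.
    static List<String> iterate(Supplier<List<String>> generate,
                                Predicate<String> keep,
                                Function<String, Double> score) {
        Comparator<String> byScore = Comparator.comparingDouble(score::apply);
        return generate.get().stream()
                .filter(keep)
                .sorted(byScore.reversed())
                .toList();
    }
}
```

Each stage could be a separate prompt; the ranked survivors become the prioritized list that seeds the next iteration.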
The Germans also underestimated the USA in WW2, saying their own soldiers were superior and the USA just had technology.
Stalin, meanwhile, said: "Quantity has a quality all its own."
Just look at the new math proofs that will come out, as one example. Exploration vs. exploitation is a thing in AI, but you seem to think that human creativity can't be surpassed by prompts like "generate 100 types of possible…"
The architecture is also important: there's a trade-off for MoE. There used to be a rough rule of thumb that a 35B-total, 3B-active (35b-a3b) MoE model would be equivalent in smarts to an 11b dense model, give or take, but that hasn't been accurate for a while.
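The rule of thumb being referenced is usually stated as the geometric mean of total and active parameters; for a 35B-total, 3B-active model that gives sqrt(35 × 3) ≈ 10B, in the same ballpark as an 11B dense model. A quick check of the arithmetic (the heuristic itself is community folklore, not a published law):

```java
final class MoeEstimate {
    // Folk heuristic: dense-equivalent size ~ sqrt(totalParams * activeParams),
    // with both sizes in billions of parameters.
    static double denseEquivalentB(double totalB, double activeB) {
        return Math.sqrt(totalB * activeB);
    }
}
```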
Anthropic’s tool calling was exposed as XML tags at the beginning, before they introduced the JSON API. I expect they’re still templating those tool calls into XML before passing them to the model’s context.
Yeah, no. Almost all companies I've chatted with, from MSPs to the C-suite of F10s, expect and demand humans in the loop. I'm also on a couple of boards, and we've aligned on the same expectation there as well.
Look, AI/ML and especially LLMs are powerful, but there does remain a degree of instability and non-determinism which will require human intervention to remediate.
A missing link right now is automated, high-quality code review. I would love an adversarial code-review agent with a persona built around the assumption that all incoming code is slop, one that leverages a wealth of knowledge, both manually written by the team and aggregated from previous code reviews. And that agent should pull no punches when reviewing code.
This would augment actual engineer code reviews and help deal with volume.