Hacker News | taoh's comments

Author here. Built this with Claude Code. Happy to answer questions.

Congratulations on your launch! Four years of work is remarkable perseverance.

The sync engine feature looks very interesting to me. There are quite a few products on the market today, but none has achieved a dominant share yet. So if this is your main strength, I'd like to see more local-first demos built with it.

Curious if you considered shipping the engine itself as a standalone infra piece.


Thank you.

> Curious if you considered shipping the engine itself as a standalone infra piece.

We are thinking about supporting something like "Bring Your Own Postgres", which would allow folks to opt into just the sync engine piece.

Right now we focused on the integrated system, because we really wanted to optimize for a delightful developer experience on greenfield projects.


I'm a Vercel customer, and I like using the Vercel AI SDK and Chat SDK. But I've found myself moving away from Vercel and Next.js whenever I start a new project. I wish they'd maintain their technical standards while achieving commercial success.

Does GPT-5.4 pro give much better results in some circumstances? What are its typical uses in your experience?

If you want it to deeply research something, pro is great. I had a problem with my oven I just couldn't figure out, so I gave it a lot of information; it went off on its own for about 2 hours and then gave me what I needed to fix the problem (the fan was turning off too quickly, which was causing the panel to overheat). I have no idea how it figured it out. I couldn't find anything after hours of googling, so it was very impressive. I even googled for it once I knew what the problem was, and I still couldn't find the solution it came up with.

Thanks for sharing this experience. Does the deep analysis cost a lot of tokens? That would drain the $100 plan's budget much faster.

I think it's going to be very hard to blow through your tokens just using chat. I mostly bought the plan so I could use Codex, and on the $200-a-month plan I've been using it basically 15 hours a day almost nonstop and I don't run out of tokens for the week.

Congratulations! The distinction between pure agentic exploration and deterministic steps is spot on. Runbooks give ops more confidence in the data exploration and save time/context.

Curious how much savings you observe from using runbooks versus purely letting Claude do the planning at first. Also, how can the runbooks self-heal if results from some steps in the middle aren't as expected?


>> How can the runbooks self-heal if results from some steps in the middle aren't as expected?

Yeah, this is a very interesting angle. Our primary mechanism here today is agent-created auto-memories. The agent keeps track of the most useful steps and, more importantly, the dead-end steps as it executes runbooks. We think this offers a great bridge to suggest runbook updates and keep them current.
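As an illustration only (Relvy's internal format isn't described here, so every name below is hypothetical), the auto-memory idea of tracking useful versus dead-end steps across runs could be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class StepMemory:
    """Hypothetical record of how one runbook step performed over time."""
    step: str          # e.g. "check logs for service frontend, faceted by host_name"
    useful: bool       # did this step ever contribute to finding the cause?
    dead_end: bool = False
    runs: int = 0      # how many investigations executed this step

@dataclass
class RunbookMemory:
    """Aggregates step outcomes so stale steps can surface as update suggestions."""
    steps: dict = field(default_factory=dict)

    def record(self, step: str, useful: bool) -> None:
        m = self.steps.setdefault(step, StepMemory(step=step, useful=False))
        m.runs += 1
        # once a step has proven useful, it stays marked useful
        m.useful = m.useful or useful
        m.dead_end = not m.useful

    def suggested_removals(self) -> list:
        # steps that repeatedly led nowhere are candidates to prune from the runbook
        return [s for s, m in self.steps.items() if m.dead_end and m.runs >= 3]
```

The bridge to "keeping runbooks current" is then just surfacing `suggested_removals()` (and its inverse, frequently useful ad-hoc steps) to the runbook owner.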

>> Curious how much savings you observe from using runbooks versus purely letting Claude do the planning at first.

It really depends on runbook quality, so I don't have a straightforward answer. Of course, it's faster and cheaper if you have well-defined steps in your runbooks. As an example, `check logs for service frontend, faceted by host_name` vs. `check logs`. The agent does more exploration in the latter case.

We wrote about the LLM costs of investigating production alerts more generally here, in case it's helpful: https://relvy.ai/blog/llm-cost-of-ai-sre-investigating-produ...
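To make the contrast concrete, here is a sketch of the two levels of specificity as runbook steps. This is illustrative YAML only, not Relvy's actual runbook format:

```yaml
# Well-specified step: the agent runs one targeted query, little exploration
- step: check_logs
  service: frontend
  facet_by: host_name

# Vague step: the agent must first decide which service,
# which time range, and which dimensions to slice by
- step: check_logs
```

The token savings come from skipping that decision-making loop in the second case.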


Re: savings, it depends on the use case. For example, one of our users set up a small runbook to run a group-by-IP query for high-throughput alerts, since that was their most common first response to those alerts. That alone cuts out a couple of minutes of exploration per incident and removes the variability of the agent deciding what data to investigate and how to slice it.

In our experience, runbooks provide a consistent, fast, and reliable way of investigating incidents (or ruling out common causes). In their absence, the AI does its usual open-ended exploration.


I've been using it in Claude Code with the Max plan for a day. The acceptance rate is noticeably higher.


Hi, I'm working on something tackling this problem. Do you mind if I contact you to discuss further?


feel free, email in bio.


It should be pretty easy to export to MP4, but using SVG is lighter and faster, which is why we created the dg tool to automatically export to SVG. Please see my previous comments if interested.


We use asciinema to record CLI tool terminal sessions and add the recordings as SVGs to our README. We also replay the recordings as part of our CI. Works great!


How do you use the recordings as part of the CI?


We made a tool using termsvg: https://github.com/DeepGuide-Ai/dg. It takes the recorded sessions and executes the non-interactive ones during CI.
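As a rough sketch of what wiring this into CI could look like, here is a hypothetical GitHub Actions workflow. The `dg` subcommand shown for replay is a placeholder I'm assuming for illustration; check the dg README for the real interface:

```yaml
# .github/workflows/demo-check.yml (hypothetical)
name: validate-cli-demos
on: [push]
jobs:
  replay:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Replay recorded non-interactive sessions
        # hypothetical subcommand; see https://github.com/DeepGuide-Ai/dg for actual usage
        run: dg ci
```

The point is that the same .cast recordings serve double duty: README demos and a regression check that the documented commands still run.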


That sounds cool! An animated SVG? How do you convert to SVG format?


We use termsvg to convert the cast files to SVG automatically. The tool is open source: https://github.com/DeepGuide-Ai/dg.


Thanks! I hadn't heard of it!


svg-term-cli, I think. I found a post about it not long ago.

https://github.com/marionebl/svg-term-cli


Upon investigation, both dg and svg-term-cli output SVG with embedded CSS animation. So it's not that SVG supports animation per se. This also reshapes my understanding of what CSS animation can do.
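For reference, here is a minimal standalone example of the technique: an SVG that animates purely through a CSS `@keyframes` rule in an embedded `<style>` element, no script involved (which is why it still animates when embedded via an `<img>` tag, e.g. in a README). The text content is made up for illustration:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="220" height="50">
  <style>
    /* CSS keyframes run inside the SVG itself */
    @keyframes blink { 50% { opacity: 0; } }
    .cursor { animation: blink 1s step-end infinite; }
  </style>
  <text x="10" y="30" font-family="monospace">$ dg capture</text>
  <text x="130" y="30" class="cursor" font-family="monospace">█</text>
</svg>
```

Terminal recorders like these generate many such rules: keyframes that toggle the visibility or position of text spans frame by frame, reproducing the playback timeline.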


Using SVG for demos is much better than GIFs or videos because of its lightweight nature. We created a tool to make recording and sharing CLI tool demos much easier: https://github.com/DeepGuide-Ai/dg. Simply call `dg capture` and it generates the SVG and content ready to paste into a README. An added benefit is that it can be used for CI validation. It uses termsvg under the hood. Would love your comments.

