Hacker News | TheDong's comments

The cost of ownership for an OpenClaw, and how many credits you'll use, is really hard to estimate since it depends so wildly on what you do.

I can give you an openclaw instruction that will burn over $20k worth of credits in a matter of hours.

You could also not talk to your claw at all for the entire month, set up no crons / recurring activities / webhooks / etc, and get a bill of under $1 for token usage.

My usage of OpenClaw ends up costing on the order of $200/mo in tokens with the claude code max plan (which you're technically not allowed to use with OpenClaw anymore), or I think over $2000/mo if I were using API credits (which I believe Klause is, based on their FAQ mentioning OpenRouter).

So yeah, what I consider fairly light and normal usage of OpenClaw can quite easily hit $2000/mo, but it's also very possible to hit only $5/mo.

Most of my tokens are eaten up by having it write small pieces of code, and doing a good amount of web browser orchestration. I've had 2 sentence prompts that result in it spinning up subagents to browse and summarize thousands of webpages, which really eats a lot of tokens.

I've also given my OpenClaw access to its own AWS account, and it's capable of spinning up lambdas, ec2 instances, writing to s3, etc, and so it also right now has an AWS bill of around $100/mo (which I only expect to go up).

I haven't given it access to my credit card directly yet, so it hasn't managed to buy gift cards for any of the friendly nigerian princes that email it to chat, but I assume that's only a matter of time.


Absolute madman :)

Giving an agent access to AWS is effectively giving it your credit card.

At most, I would give it ssh access to a Hetzner VM with its own user, capable of running rootless podman containers.


Not at all. AWS IAM policy is a complex maze, but incredibly powerful. It solves this exact problem very well.
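For what it's worth, a minimal sketch of the kind of scoping I mean (a hypothetical policy, not a hardened one): allow the agent narrow S3/Lambda access, and deny launching anything but a tiny instance type so it can't run up a big compute bill:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "lambda:InvokeFunction"],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals": { "ec2:InstanceType": "t3.micro" }
      }
    }
  ]
}
```

You'd still want billing alarms on top (IAM limits what it can do, not directly how much it spends), but the point stands that the controls exist.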

Would having a locally-hosted model offset any of these costs?

Just have to know... What the heck are you building?

Github actions has had a bunch of high-profile prompt injection attacks at this point, most recently the cline one: https://adnanthekhan.com/posts/clinejection/

I guess you could argue that github wasn't vulnerable in this case, but rather the author of the action, but it seems like it at least rhymes with what you're looking for.


Yeah that was a good one. The exploit was still a proof of concept though, albeit one that made it into the wild.

To me it seems like a pretty strong given, because context windows are a scarce and important resource.

I can tell an llm "write hello world in C", and it will produce a valid program with just that context, without needing the C language spec nor stdlib definition in the context window because they're baked into the model weights.

As such, I can use the context window to, for example, provide information about my own function signatures, libraries, and objectives.

For a language not well-represented in the training data-set, a chunk of my context has to be permanently devoted to the stdlib and syntax, and while coding it will have to look up stdlib function signatures and such, using up additional context.

Perhaps you're trying to argue that the amount of tokens needed to describe the language, the stdlib, the basic tooling to look up function signatures, commands to compile, etc is not enough tokens to have a meaningful impact on the context window overall?
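To put rough numbers on it (every figure below is a made-up assumption, just to show the shape of the argument):

```python
# Back-of-envelope: what fraction of a context window would a niche
# language's reference material permanently consume? All token counts
# here are illustrative guesses, not measurements of any real model.
context_window = 200_000        # assumed total window, in tokens
syntax_reference = 8_000        # assumed: condensed grammar + idioms
stdlib_signatures = 25_000      # assumed: function signatures + short docs
tooling_notes = 2_000           # assumed: compile/run commands, error formats

overhead = syntax_reference + stdlib_signatures + tooling_notes
fraction = overhead / context_window
print(f"{overhead} of {context_window} tokens ({fraction:.1%}) gone before any project code")
```

Even under generous assumptions, a double-digit percentage of the window is spent re-teaching the model what C or Python get for free from the weights.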


I think the AI labs need to be the ones to build AI-specific languages so they can include a huge corpus in the model training data-set, and then do RL on it producing useful and correct programs in that language.

If anthropic makes "claude-script", it'll outmog this language with massive RL-maxing. I hope your cortisol is ready for that.

If you want to try and mog claude with moglang, I think you need to make a corpus of several terabytes of valid useful "mog" programs, and wait for that to get included in the training dataset.


I personally didn't get good results until I got the $100/mo claude plan (and still often hit $180/mo from spending extra credits)

It's not that the model is better than the cheaper plans, but experimenting with and revising prompts takes dozens of iterations for me, and I'm often multiple dollars in when I realize I need to restart with a better plan.

It also takes time and experimentation to get a good feel for context management, which costs money.


I bought the $200 plan soon after my extras started routinely exceeding that. Harsh.

But, let me suggest that you stop thinking about planning and design as "prompts". I work with it to figure out what I want to do and have it write a spec.md. Then I work with it to figure out the implementation strategy and have it write implementation.md. Then I tell it I am going to give those docs to a new instance and ask it to write all the context it will need with instructions about the files and have it write handoff.md.

By giving up on the paradigm of prompts, I turned my focus to the application and that has been very productive for me.

Good luck.


plan.md / implementation.md is just a prompt.

You're not telling me to do anything different.


Unfortunately, it has been ruled that the president is immune to legal prosecution on this matter, regardless of whether it is legal or not.

https://en.wikipedia.org/wiki/Trump_v._United_States

> the Supreme Court ruled in a 6–3 decision that presidents have absolute immunity for acts committed as president within their core constitutional purview

It turns out that "checks and balances" meant "the president is unchecked and unbalanced".


Presidency is an Institution which needs to be protected at all costs. The checks and balances weren't meant to set up a system where Presidents can be sent to prison, but to prevent “crimes” (for lack of a better word) from happening to begin with. Of course our current “party over Country” system has practically killed any semblance of checks and balances…

> Presidency is an Institution which needs to be protected at all costs.

That sounds a lot like a king.

Last I checked, our founders were pretty against the whole king thing.

I would be shocked if a single one of them said that a President should be immune to prosecution for crimes they commit.


> I would be shocked if a single one of them said that a President should be immune to prosecution for crimes they commit.

Either they said it or they didn't, no? If they did, we'd have a paper trail.


They did. Hamilton even argued that presidents should be subject to “forfeiture of life and estate” where their crimes warranted it. Federalist 77.

Article I, Section 3, Clause 7 of the constitution makes it clear that impeachment is limited to removal, but that afterwards they are fair game for the ordinary criminal process.

Wilson wrote "far from being above the laws, he is amenable to them".

Anti-federalists went even farther - they believed that the Federalists' reliance on the impeachment process, for example, left far too wide of a gap to be exploited.

(They seem to have been correct.)


Federalist Papers. Go read them. Anti-Federalist Papers too. At the end of the day, we're still trying to hash out the same old song.

Does Trump want to be Mussolinied? It should always be legal to jail and hang the head of state, otherwise the head of state risks going out by a much funnier route. It's not about politics, it's simple game theory.

You know what's even easier for AI agents to use than TUIs? CLIs.

My experience has been that agents suck at using TUIs, and are good at using CLIs. I would argue that agents are a reason that TUIs might die in favor of CLIs.


I agree, agents struggle with TUIs. I do think this is easy to fix though (here's an interesting approach: https://github.com/remorses/ghostty-opentui). I think agents will have much better luck with TUIs than browsers.

The more interesting scenario IMO is having apps that are both TUIs AND CLIs where the agent uses the CLI but can pause and show the user a TUI for complex tasks where the user needs to input something.


> I think agents will have much better luck with TUIs than browsers.

I’m very skeptical. Why would you think that? TUIs inherently don’t provide programmatically accessible affordances; if they have any affordances at all, they’re purely visual cues that are unstandardized and of varying quality.

Compare that to the DOM in a browser where you’ve got numerous well-understood mechanisms to convey meaning and usability. Semantic HTML and ARIA roles. These things systematically simplify programmatic consumption.
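To make the contrast concrete, here's a toy illustration (hypothetical markup, not from any real app):

```html
<!-- DOM: role, accessible name, and state are all queryable by a program -->
<nav aria-label="Main">
  <button aria-expanded="false" aria-controls="menu">Menu</button>
</nav>

<!-- The TUI "equivalent" is just characters on a screen grid:
       ▸ Menu
     with no standardized way for an agent to discover that the arrow
     means "expandable", or which keystroke activates it -->
```

An agent driving the DOM can ask "give me every button and its state"; an agent driving a TUI has to guess from pixels-worth of text.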


> AI has reached a state of software issue, not hardware

Citation very much needed.

At the very least, OpenAI seems to believe more and larger datacenters is the path to better models... and they've been right about that every time so far.


Moreover, all the frontier labs and hyperscalers are capacity constrained, and will be for the foreseeable future.

Their story (valuation) hinges on it - therefore that’s their investment thesis when raising money.

>OpenAI seems to believe more and larger datacenters is the path to better models...

Does that mean they produce better slop, or more slop faster?


Better slop. The effect that these systems get better as you scale up [0] is real, you know.

[0]: https://arxiv.org/pdf/2001.08361
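As I recall, the headline result of that paper is a set of power-law fits of test loss against parameters N, dataset size D, and compute C (exponents quoted from memory, so treat them as approximate):

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \quad \alpha_N \approx 0.076
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
\qquad
L(C_{\min}) \approx \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C}, \quad \alpha_C \approx 0.050
```

i.e. loss falls smoothly and predictably as you scale any of the three, over many orders of magnitude.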


Slop is still slop. There is no legitimate evidence that these systems get any better just by throwing more hardware at them. Every one of the authors of this paper is involved with OpenAI, so its findings are very suspect.

I have a google nexus 7 tablet from 2013. Thanks to Google unlocking all their bootloaders by default, I can install u-boot and a modern linux kernel on it (thanks PostmarketOS).

Since linux runs on it, I can run the latest versions of great pieces of software like ed, slack in a web browser, etc.

It is 100% apple's fault that they do not open up the bootloader for devices they'll no longer offer updates for and allow the community to build a custom darwin or linux fork. Even though we paid for the hardware, we are not allowed to use it any longer than apple says.


Wow, that website is impressively cpu-intensive. Like, I'm on a beefy desktop processor (linux + firefox if it matters), and it's chewing through over 100% cpu and not keeping up. Just having the tab open causes my CPU fans to spin up to max.

The real million dollar homepage at least performs well.


> Wow, that website is impressively cpu-intensive. Like, I'm on a beefy desktop processor (linux + firefox if it matters), and it's chewing through over 100% cpu and not keeping up. Just having the tab open causes my CPU fans to spin up to max.

No such issue on my Macbook in either Firefox or Chrome.

> The real million dollar homepage at least performs well.

So would this one it seems, at least for some.

