To a certain extent, I do wonder if just letting Claude do everything and then using the bug reports and CVEs they find as training data for an RL environment might be part of the plan. “Here’s what you did, here’s what fixed it, don’t fuck up like that again.”
I’ve heard this take before, but if you’ve spent any time with LLMs I don’t understand how your take can be: “I should just let this thing that makes mistakes all the time, and seems oblivious to the complexity it’s creating because it only observes small snippets out of context, make its own decisions about architecture; this is just how it does things and I shouldn’t question it.”
I don’t know that those people were exactly out of a job, though; they didn’t do that job anymore, but I find it hard to believe that any of the people solving orbital mechanics by hand wound up with nothing to do but twiddle their thumbs for the remainder of their lives. Similarly, even if AI winds up writing all the software, I don’t know that there’s any realistic prospect that there won’t also be incentive to have people who understand it.
Once upon a time, I heard someone tell me a fairytale about this thing called a ‘law’, and they said that laws could be used to enforce compliance with standards across an entire country. Pure fantasy, I know, but a man can dream.
Presumably, there must be some point in time where the bill is made public in some form before going to a vote. If you could get the right tool into the hands of a journalist to turn whatever obscure format it’s in into something legible to an ordinary person, there’s probably value there.
I think a lot of people will do this. It remains to be seen how the actual economics of this shake out in the long run, especially considering that the existing vendors aren’t going to remain static.
Once the hardware to run inference for something like the vision-understanding module of this can run on a low- or medium-power ASIC, drones are going to be absolutely horrifying weapons.
I think this is disingenuous. People want to be able to use a tool that they pay for to do useful work on their own terms, because they paid for it and don’t see the differential pricing model offered by Anthropic as legitimate.
I don’t agree. What people want is very consequential, because those people are paying customers of a service; if they aren’t happy with it, they have every right to complain.
People should be vocal about what they do and do not think is reasonable behavior by corporations and then act based on those opinions with their wallets. Lord knows we have precious few other ways of influencing corporate behavior.
>What the people want is inconsequential here. The people also want to abolish copyright and freely share and download media too.
I already approved of the complaints against Anthropic here, you don't have to sell it this hard to me.
(Not to mention the blatant hypocrisy that their whole business is built on open copyright abuse: all that copyrighted training material, illegally obtained books and movies, etc.)