
The world is figuring out how to make this technology fit and work and somehow this is "behind" schedule. It's almost comical.


For a company that sees itself as the undisputed leader and that wants to raise $7 trillion to build fabs, they deserve some of the heaviest levels of scrutiny in the world.

If OpenAI's investment prospectus relies on them reaching AGI before the tech becomes commoditized, everyone is going to look for that weakness.


Reminds me of this Louis CK joke:

I was on an airplane and there was high-speed Internet on the airplane. That's the newest thing that I know exists. And I'm sitting on the plane and they go, open up your laptop, you can go on the Internet.

And it's fast, and I'm watching YouTube clips. It's amazing. I'm on an airplane! And then it breaks down. And they apologize, the Internet's not working. And the guy next to me goes, 'This is bullshit.' I mean, how quickly does the world owe him something that he knew existed only 10 seconds ago?

https://www.youtube.com/watch?v=me4BZBsHwZs


The investors need their returns now!

Soon, all the middle class jobs will be converted to profits for the capital/data center owners, so they have to spend while they can before the economy crashes due to lack of spending.


People who say "it's bullshit" are the ones that push the technological advance forward.


Not invariably. Some of those people are the ones who want to draw 7 red lines all perpendicular, some with green ink, some with transparent and one that looks like a kitten.


For anyone who hasn't seen what this comment is referencing: https://www.youtube.com/watch?v=BKorP55Aqvg


No, people who say "it's bullshit" and then do something to fix the bullshit are the ones that push technology forward. Most people who say "it's bullshit" instantly when something isn't perfect for exactly what they want right now are just whingers and will never contribute anything except unconstructive criticism.


Sounds like "yes, but" rather than "no"; otherwise you're responding to a self-created straw man.


That's really not true.


[flagged]


There's someone with this comment in every thread. Meanwhile, no one answers this because they are getting value. Please take the time to learn, it will give you value.


I’m a consultant. Having looked at several enterprises, there’s a lot of work being done to make a lot of things that don’t really work.

The bigger the ambition, the harder they’re failing. Some well designed isolated use cases are ok. Mostly things about listening and summarizing text to aid humans.

I have yet to see a successful application that is generating good content. IMO replacing the first draft of content creation and having experts review and fix it is, like, the stupidest strategy you can do. The people you replace are the people at the bottom of the pyramid who are supposed to do this work to upskill and become domain experts so they can later review stuff. If they're no longer needed, you're going to one day lose your reviewers, and with them, the ability to assess your generated drafts. It's a foot gun.


> Having looked at several enterprises, there’s a lot of work being done to make a lot of things that don’t really work.

Is this a new phenomenon that started post-LLM?


I mean, no, not generally. But the success rate of other tools is much higher.

A lot of companies are trying to build these general purpose bots that just magically know everything about the company and have these big knowledge bases, but they just don't work.


It gives me value but I am not even sure it is $20 a month of value at this point.

It was in 2023 but I picked all the low hanging fruit.

More importantly though, where is all the great output from the people who are getting so much value out of the models?

Is it all privately held? How can that be, with millions of people using these models?


I'm someone who generally was a "doubter", but I've dramatically softened my stance on this topic.

Two things: I was casually watching Andreas Kling's streams on Ladybird development (where he was developing a JIT compiler for JS) and was blown away by the accuracy of the completions (and the frequency of those completions).

Prior to this, I'd only ever copypasta'd code from ChatGPT output on occasion.

I started adopting the IDE/Editor extensions and prototyping small projects.

There are now small tools and utilities I've written that I'd not have written otherwise, or that would have taken twice the time invested had I not used these tools.

With that said, they'd be of no use without oversight, but as a productivity enhancement, the benefits are enormous.


For my mental health I've stopped replying to comments where it's clear the author has no intention of having a discussion and instead wants to share their opinion and have it reinforced by others.

No, we don’t have AGI or anything close to it. Yes, AI has come a long way in the past decade and many people find it useful in their day-to-day lives.

It’s difficult to know where AI will be in 10 years, but the current rate of improvement is staggering.


Something can generate value and still have negative unit economics.
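To make that concrete, here's a minimal sketch with purely hypothetical numbers (none of these figures come from any real provider's financials): a service can deliver real value on every request while still losing money on every request.

```python
# Hypothetical per-query unit economics, for illustration only.
price_per_query = 0.002   # what the provider charges per query ($)
cost_per_query = 0.005    # what inference actually costs per query ($)
queries = 1_000_000       # queries served

revenue = price_per_query * queries
cost = cost_per_query * queries
margin = revenue - cost   # negative: every query is sold below cost

print(f"revenue=${revenue:,.0f} cost=${cost:,.0f} margin=${margin:,.0f}")
```

Users got a million answers they considered worth paying for, and the provider still lost money on every one of them; growth only makes the hole deeper unless the cost side improves.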


> Meanwhile, no one answers this because they are getting value.

You're literally doing the same thing you're accusing of. Every HN thread is full of AI boosters claiming AI to be the future with no backing evidence.

Riddle me this. If all these people are "getting value", why are all these companies losing horrendous amounts of money? Why has nobody figured out how to be profitable?

> Please take the time to learn, it will give you value.

Yeah, yeah, just prompt engineer harder. That'll make the stochastic parrot useful. Anyone who has criticism just does so because they're dumb and you're smart. Same as it always was. Everyone opposed to the metaverse just didn't get it bro. You didn't get NFTs bro. You didn't get blockchain bro.

None of these previous bubbles had money in it (beyond scamming idiots), if AI wants to prove it's not another empty tech bubble, pay up. Show me the money. Should be easy, if it's automating so many expensive man-hours of labour. People would be lining up to pay OpenAI.


There’s clearly some value. People are paying for something.

> AI start-ups generate money faster than past hyped tech companies

https://www.ft.com/content/a9a192e3-bfbc-461e-a4f3-112e63d0b...


> Riddle me this. If all these people are "getting value", why are all these companies losing horrendous amounts of money? Why has nobody figured out how to be profitable?

While I agree that LLMs are not currently working great for most envisioned use cases; this premise here is not a good argument. Large LLM providers are not trying to be profitable at the moment. They’re trying to grow and that’s pretty sensible.

Uber was the poster child of this, and for all the mockery it drew, Uber is now unquestionably a profitable company.


I'm not sure I would call incinerating $11B a year sensible, to the point where you need to do one of the biggest raises ever and it doesn't even buy you a year of runway.


Based on their forecasts it's still pretty sensible. I don't personally believe the forecasts are sensible. But that's beside the point.


Think of all the search engines (AllTheWeb, Yahoo, AltaVista, ...) where so much money got poured in, and in the end there was just one winner taking it all. That's the race OpenAI is trying to win now. The competition is fierce, and meanwhile we get to play with all kinds of models for free and do nothing but complain.


> Why has nobody figured out how to be profitable?

From what I've seen claimed about OpenAI finances, this is easy: It's a Red Queen's race — "it takes all the running you can do, to keep in the same place".

If their financial position was as simple as "we run this API, we charge X, the running cost is Y", then they're already at X > Y.

But if that was all OpenAI were actually doing, they'd have stopped developing new versions or making the existing models more efficient some time back, while the rest of the industry kept improving their models and lowering their prices, and they'd be irrelevant.

> People would be lining up to pay OpenAI.

They are.

Not that this is either sufficient or necessary to actually guarantee anything about real value. For lack of sufficiency: people collectively paid a lot for cryptocurrencies and NFTs, too (and, before then and outside tech, homeopathic tinctures and sub-prime mortgages). For lack of necessity: there are plenty of free-to-download models.

I get a huge benefit even just from the free chat models. I could afford to pay for better models, but why bother when free is so good? Every time a new model comes out, the old paid option becomes the new free option.


I use them to:

• Build toys that would otherwise require me to learn new APIs (I can read python, but it's not my day job)

• Learn new things like OpenSCAD

• To improve my German

• Learn about the world by allowing me to take photos of things in this world that I don't understand and ask them a question about the content, e.g. why random trees have bands or rectangles of white paint on them

• Help me shop, by taking a photo in whatever supermarket I happen to be in and asking where I should look for some item I can't find

• Help with meal prep, by allowing me to get a recipe based on what food and constraints I've got at hand rather than the traditional method of "if you want x, buy y ingredients"

Even if they're just an offline version of Wikipedia or Google, they're already a more useful interface for the same actual content.


That's what puzzles me now. Everyone with a semblance of engineering expertise knows that if you start with a tool and try to find a problem it could solve, you are doing it wrong. The right way is the opposite: you start with a problem and find the best tool to solve it, and if that happens to be the new shiny tool, so be it; but most of the time it's not.

Except the whole tech world starting with the CEOs seems to do it the "wrong" way with LLMs. People and whole companies are encouraged to find what these things might be actually useful for.



