
> after LLM costs go down

I think 100 GB of GPU memory will always cost multiples of the equivalent CPU plus regular memory.

Using LLMs and computer vision for these kinds of tasks only makes sense at small scales. If the task is extensive and repeated frequently, you're better off using an LLM to generate a script with Selenium or similar, then running that script almost for free (compared to calling an LLM each time). o1 is very good at this, by the way. For the $0.10 per page interaction that Skyvern charges, I can generate several scripts with o1.
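To make the workflow concrete, here is a sketch of the kind of one-off scraper an LLM could generate and you could then rerun for free. Everything here is hypothetical: it uses only Python's stdlib `html.parser` (rather than Selenium, to keep it self-contained), and the sample HTML and the `name`/`price` cell classes are made-up stand-ins for a real page:

```python
from html.parser import HTMLParser

# Hypothetical page snippet standing in for a fetched page.
SAMPLE_HTML = """
<table id="prices">
  <tr><td class="name">Widget</td><td class="price">$9.99</td></tr>
  <tr><td class="name">Gadget</td><td class="price">$4.50</td></tr>
</table>
"""

class PriceParser(HTMLParser):
    """Collects (name, price) pairs from <td class="name"> / <td class="price"> cells."""
    def __init__(self):
        super().__init__()
        self.current = None   # class of the <td> we are inside, if any
        self.row = {}
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.current = dict(attrs).get("class")

    def handle_data(self, data):
        if self.current in ("name", "price"):
            self.row[self.current] = data.strip()

    def handle_endtag(self, tag):
        if tag == "td":
            self.current = None
        elif tag == "tr" and self.row:
            self.rows.append((self.row.get("name"), self.row.get("price")))
            self.row = {}

def scrape(html: str):
    """Run the parser over a page and return the extracted rows."""
    parser = PriceParser()
    parser.feed(html)
    return parser.rows

if __name__ == "__main__":
    print(scrape(SAMPLE_HTML))  # [('Widget', '$9.99'), ('Gadget', '$4.50')]
```

The point is the cost structure: you pay the LLM once to write something like this, then every subsequent page costs only the compute to run the script.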


