
It's a bit of a fool's errand, because they train on information that is no longer valid and will get stuck if you don't inform them. For instance, GPT cannot write a connection to an OpenAI endpoint, because the API was upgraded to 1.0 and broke compatibility with all the code it learned from.
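
For context, a minimal sketch of that breaking change: code written against the pre-1.0 openai Python package used module-level calls like openai.ChatCompletion.create, which the 1.0 SDK removed in favour of a client object. The model name below is just a placeholder.

    # Pre-1.0 style the model tends to reproduce (fails on openai>=1.0):
    #   import openai
    #   openai.api_key = "..."
    #   openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[...])

    # Post-1.0 style:
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)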


Is it possible for an LLM to have knowledge/input marked as "deprecated/obsolete", either by a user or through some sort of "retraining" process?
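
One user-level workaround today is to mark the stale knowledge as deprecated in the prompt itself, e.g. via the system message, rather than through retraining. A minimal sketch, assuming the post-1.0 openai Python client; the wording of the deprecation notice is just an illustration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    deprecation_notice = (
        "NOTE: the pre-1.0 openai Python API (openai.ChatCompletion.create) is "
        "deprecated. Always use the 1.0+ client: OpenAI().chat.completions.create."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": deprecation_notice},
            {"role": "user", "content": "Write code that calls the OpenAI chat API."},
        ],
    )
    print(response.choices[0].message.content)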



