GPT-5 is kinda pointless until they make some type of improvement on the data and research side. From what I've read, that's not really what OpenAI has been pursuing.
One big improvement is in synthetic data (data generated by LLMs).
GPT can "clone" the "semantic essence" of everyone who converses with it, generating new questions with prompts like "What interesting questions could this user also have asked, but didn't?" and then having an LLM answer them. This generates high-quality, novel, human-like data.
For instance, when cloning Paul Graham's essence, the LLM came up with "SubSimplify": a service that combines subscriptions to all the different streaming services into one customizable package, using a chat agent as a recommendation engine.
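In code, that loop might look something like this minimal sketch. The `chat()` helper is a hypothetical stand-in for whatever LLM API you'd actually call (here it's stubbed with canned strings so the shape of the pipeline is visible without network access):

```python
def chat(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion API)."""
    if "could this user also have asked" in prompt:
        return "What would a service that bundles streaming subscriptions look like?"
    return "A stub answer produced by the answering model."

def synthesize_pair(conversation: str) -> dict:
    """Generate one synthetic Q/A pair from a real user conversation."""
    # Step 1: ask the model for a question this user plausibly
    # could have asked, but didn't.
    question = chat(
        "Given this conversation:\n" + conversation +
        "\nWhat interesting questions could this user also have asked, but didn't?"
    )
    # Step 2: have an LLM answer the generated question, yielding
    # a new (question, answer) training example.
    answer = chat(question)
    return {"question": question, "answer": answer}

pair = synthesize_pair("User: How do startups decide what to build?")
print(pair["question"])
print(pair["answer"])
```

Run over a large corpus of real conversations, each pass yields new Q/A pairs that stay anchored to real users' interests rather than drifting into generic model output.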
Are you just blindly deciding what will make "gpt-5" more capable? I guess "data and research" is so open-ended that it encompasses the majority of any possible advancement.
I've been interviewing a ton of Googlers since my company (Salesforce) has a pretty good remote work policy going forward. We don't pay as well, but we're probably the only big tech company with better WLB.
Eh, you'd be surprised. I couldn't even get an interview with some places and got denied from others, but now I work at Google doing full stack. Every technology I use, though, is Google-internal, and I don't have much knowledge of full-stack development outside of Google. Some companies just test your algo / DS / system design knowledge, and sure, I do fine there, but on any specific technology I'm useless. TBH I don't think I'm very good at any one language either, because I often have to switch.