Do those use cases need LLMs? Probably not. But if a day of prompting (on top of the work mentioned in the article, which you have to do anyway) gets you good results with a smaller model like Haiku, why would you build a classifier before you have literally millions of customers?
The LLM solution will be much more flexible because prompts can change more easily than training data and input tokens are cheap.
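To make the point concrete, here's a minimal sketch of the prompt-based classifier approach. The labels, prompt wording, and `call_llm` stub are all illustrative; in practice `call_llm` would be a call to whatever model API you use (e.g. a small model like Haiku).

```python
# Sketch of an LLM used as a zero-shot classifier. call_llm is a
# stand-in for a real model API call; everything else is plain Python.

LABELS = ["billing", "bug_report", "feature_request", "other"]

PROMPT_TEMPLATE = (
    "Classify the following customer message into exactly one of "
    "these categories: {labels}.\n"
    "Respond with the category name only.\n\n"
    "Message: {message}"
)

def build_prompt(message: str) -> str:
    return PROMPT_TEMPLATE.format(labels=", ".join(LABELS), message=message)

def parse_label(raw: str) -> str:
    # Models occasionally add whitespace or punctuation; normalize
    # and fall back to "other" rather than crashing the pipeline.
    cleaned = raw.strip().strip(".").lower()
    return cleaned if cleaned in LABELS else "other"

def classify(message: str, call_llm) -> str:
    return parse_label(call_llm(build_prompt(message)))
```

Changing behavior here is a prompt edit, not a retraining run: add a label to `LABELS`, tweak the template, redeploy.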
I don't disagree that heavily numerical tasks like revenue forecasting are a poor fit for LLMs. But then, many data scientists didn't concern themselves with such things either (compared to business analysts and the like). Software for this has been commoditized.
As others have mentioned, one big issue is that every company does these things differently, and just because someone texts you a link doesn't mean it's phishing, even if it feels shady. In Australia I have had calls from immigration officers on suppressed numbers who wanted PII over the phone without being able to tell me the purpose of the call.
The average person self-hosts literally nothing; why would inference be different? It benefits hugely from economies of scale and efficient 24/7 utilization.
I think people are mainly confused because the AirPods Pro are quite competitively priced compared to other high-end offerings. The Max are so far off the market that it doesn't seem to make any sense, and it seems unlikely that Apple couldn't make up for lost margins with higher volume. Maybe they just literally can't, or don't want to, produce many of the Max and price them accordingly.
I'm delighted to see somebody else who refuses to use the pompous syntax Apple promote for these things. Entire thread full of people uncritically accepting that they should be referred to in actual conversation as AirPods Max as if it's a term that deserves more grammatical respect than most of them would give to attorneys general.
> This system was one of the oldest IT systems in NAV, and ran in production for 51 years, from when the National Insurance Scheme was introduced in 1967. In January 2018, Presys was put into production, which together with Pesys became the successor to DSF. At that point, DSF was also shut down.
The system is written in PL/I.
It's like the Apollo 11 code, but for social services.
We have an AirTag in our cargo bike, connected to our iPad (neither my wife nor I have an iPhone). It never actually makes a sound, and we can reliably track it on the iPad. What gives? I'd never thought about this.
Except of course the rollout will not be atomic anyway, and making changes in a single commit might lead devs to make changes without thinking about backwards compat.
Even if the rollout were atomic to the servers, you would still have old clients with cached old front ends talking to updated back ends. Depending on the importance of the changes in question, you can sometimes accept breakage or force a full UI refresh, but that should be a conscious decision. It's better to support old clients at the same time as new clients, deprecate the old behavior, and remove it over time. Likewise, if there's a critical change where you can't risk new front ends breaking when talking to old back ends (what if you had to roll back?), you can often deploy support for the new changes first and activate the UI changes in a subsequent release or behind a feature flag.
I think it’s better to always ask your devs to be concerned about backwards compatibility, and sometimes forwards compatibility, and to add test suites if possible to monitor for unexpected incompatible changes.
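A minimal sketch of the "deploy support first, activate later" pattern described above. The field names, flag, and payload shape are all made up for illustration:

```python
# Illustrative sketch: the server accepts both the old and the new
# request shape, and a feature flag controls when the UI change
# actually turns on. Names are hypothetical.

NEW_CHECKOUT_UI = False  # flip in a later release, not in the same deploy

def handle_order(payload: dict) -> dict:
    # Old clients send "price" (dollars); new clients send
    # "price_cents". Supporting both means old cached front ends
    # keep working, and rolling back the front end stays safe.
    if "price_cents" in payload:
        cents = payload["price_cents"]
    else:
        cents = int(round(payload["price"] * 100))
    return {"total_cents": cents, "new_ui": NEW_CHECKOUT_UI}
```

Once old clients have aged out, the `"price"` branch can be deprecated and removed, and the flag flipped in its own release.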
Rollout should be within a minute. Let's say you ship one thing a day and one in three shipments involves a backwards-incompatible API change. That's one minute of breakage per three days, i.e. it's broken about 0.02% of the time. Life is too short to worry about such things.
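The back-of-envelope math, spelled out:

```python
# One minute of breakage per three days, as a fraction of wall time.
broken_minutes = 1
total_minutes = 3 * 24 * 60  # three days = 4320 minutes
fraction = broken_minutes / total_minutes
print(f"{fraction:.2%}")  # 0.02%
```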
You might have old clients for hours, days, or forever (mobile). This has to be taken into account, for example by aggressively forcing updates, which can be annoying for users, especially if their hardware doesn't support updating.
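The usual shape of a forced update is a minimum-supported-version gate. A sketch, assuming a simple `major.minor.patch` scheme; in practice the minimum would come from a server or config endpoint rather than a constant:

```python
# Sketch of a client-side minimum-version gate. MIN_SUPPORTED is
# hardcoded here for illustration; a real app would fetch it.

MIN_SUPPORTED = (2, 4, 0)

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def must_update(client_version: str) -> bool:
    # Tuples compare element-wise, so (2, 3, 9) < (2, 4, 0).
    return parse_version(client_version) < MIN_SUPPORTED
```

Clients below the floor get blocked with an update prompt instead of talking to an API they can no longer use correctly.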