At this point you are probably going to raise the obvious objection: "isn't GPT impractically expensive compared to running a normal ML model?" The answer is, of course, yes.
It's also not the only LLM around, and some perfectly good models (like Facebook's LLaMA) can run on your own hardware or on a modest on-demand cloud instance with an accelerator.
Again, I get really annoyed when writers resign themselves to OpenAI's API and pricing... this is not a monopoly we are being forced into.
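To make "usable on your own hardware" a little more concrete, here's a minimal sketch of running a LLaMA-family model locally with the open-source llama-cpp-python bindings. The model path, prompt, and generation settings are my own illustrative assumptions, not anything prescribed in this post; any quantized GGUF checkpoint that fits your machine would do.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder: point it at your own quantized GGUF weights,
# e.g. a 7B LLaMA-family model that fits on a laptop or a small cloud box.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-7b.Q4_K_M.gguf",  # hypothetical local weights file
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; adjust to your hardware
)

result = llm(
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two days.'\nSentiment:",
    max_tokens=8,
    temperature=0.0,  # deterministic output for classification-style use
)

print(result["choices"][0]["text"].strip())
```

The point isn't that this is fast or cheap at scale, just that "an LLM" doesn't have to mean "a metered API bill from one vendor."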