Without proper telemetry and performance metrics, you'll get to do this all over again in a few months.
The 'one weird trick' could've been spotted in a graphical bundle analyser. But are they not caching npm packages somewhere? It seems like an awful waste to download from the npm registry over and over. I would have thought it was parsing four different versions of the AWS SDK that was so slow.
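On the bundle-analyser point, here's roughly what that inspection could look like. A minimal sketch, assuming an esbuild-based build with a src/index.ts entry point (my assumptions, not details from the post); the metafile records how many bytes each input file contributes, so several copies of the AWS SDK under nested node_modules paths show up as separate rows:

```typescript
// Hypothetical build script: emit a metafile and group bundle bytes by package
// directory, so duplicated dependencies (e.g. multiple AWS SDK versions pulled
// in via nested node_modules) stand out.
import * as esbuild from 'esbuild';

const result = await esbuild.build({
  entryPoints: ['src/index.ts'], // assumed entry point
  bundle: true,
  platform: 'node',
  outfile: 'dist/index.js',
  metafile: true, // record per-input byte counts
});

const perPackage: Record<string, number> = {};
for (const [file, info] of Object.entries(result.metafile?.inputs ?? {})) {
  // Keep the full nested path up to the package name, so
  // node_modules/foo/node_modules/aws-sdk stays distinct from node_modules/aws-sdk.
  const m = file.match(/(.*node_modules\/(?:@[^\/]+\/)?[^\/]+)/);
  const key = m ? m[1] : '(app code)';
  perPackage[key] = (perPackage[key] ?? 0) + info.bytes;
}

console.table(
  Object.entries(perPackage)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 20)
);
```

esbuild's analyzeMetafile() gives a ready-made text summary of the same data, and tools like webpack-bundle-analyzer render it graphically, which is presumably the "graphical bundle analyser" experience.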
Sadly, Grafana (Cloud) comes at a cost too. Does anyone else struggle with this horrible active-metrics-based pricing? It's not only Grafana Cloud; others price it the same way.
We moved shitloads of it to self-hosted Thanos. While this comes with its own drawbacks, obviously, I think it was worth it.
I'm really surprised that 300ms at startup would result in 25% fewer pods.... What % reduction in the total startup time is that?
Is it possible the prior measurement happened during a high traffic period and the post measurement happened in a low traffic period?
I really don't understand spinning up a whole pod just for a request
Wouldn't it be cheaper to just keep a pod up with a service running?
If scalability is an issue, just plop a load balancer in front of it and scale up with load. Surely you can't need a whole pod for every single one of those millions of requests, right? Something like the sketch below is all I'm picturing.
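A minimal sketch of that long-lived-service model (purely illustrative: the handler body and port are made up, and this ignores whatever isolation or multi-tenancy constraints might be pushing them toward a pod per check):

```typescript
// Purely illustrative: one long-lived process (one pod) serving many requests,
// so dependency/startup cost is paid once per process instead of per request.
// Horizontal scaling is then the load balancer's / autoscaler's problem.
import { createServer } from 'node:http';

const server = createServer((req, res) => {
  // Whatever per-check work would otherwise get its own pod would run here.
  res.writeHead(200, { 'content-type': 'application/json' });
  res.end(JSON.stringify({ ok: true, path: req.url }));
});

server.listen(Number(process.env.PORT) || 3000, () => {
  console.log('warm and listening');
});
```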
> Checkly is a synthetic monitoring tool that lets teams monitor their API’s and sites continually, and find problems faster.
> With some users sending *millions of requests a day*, that 300ms added up to massive overall compute savings
No shit, right?
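Back of the envelope (my numbers, not the article's): if that 300ms really is paid on every request, then 300 ms × 1,000,000 requests = 300,000 CPU-seconds, roughly 83 compute-hours of pure startup overhead per day for a single heavy user, before you even multiply across users.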
I do not understand how cloud proponents talk about the costs of self-hosting but then get into situations like this.
Spending serious engineering time to wrangle with the complexities of cloud orchestration is not something that should be taken lightly.
Cloud services should be required to carry a black-box Surgeon General's warning.
Many of the tricks we learned in the late 90s - 2000s can no longer be pulled off. We used to download jar files over the net. Running a major prop trading platform meant 1000s of dependencies. You'd have Swing and friends for front-end tables, SAX XML parsers, various numerical libraries, logging modules - all of this shit downloaded in the jar while the customer impatiently waited to trade some 100MM worth of FX. We learned how to cut down on dependencies. We built tools to massively compress class files, and traded off one big jar against lots of little jars that downloaded on demand. Better yet, we cached most of these jars so they wouldn't need to download every single time. It became a fine art at one point - the difference between a rookie and a professional was that the latter could not just write a spiffy Java frontend, but actually deploy it in prod so customers wouldn't even know there was a startup time - it would just start, like, instantly. Then that whole industry just vanished overnight - poof!
Now I write ML code and deploy it in a Docker container on GCP, and it's the same issues all over again. You import pandas-gbq and pretty much the entire Google BigQuery set of libraries becomes part of the build. Throw in a few standard ML libs and soon you are looking at upwards of 2 seconds of Cloud Run startup time. You pay a premium for autoscaling, for keeping one instance warm at all times, for your monitoring and metrics, on and on. I have yet to see startup times below 500ms. You can slice the cake any which way; you still pay the startup cost penalty. Quite sad.
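The stack above is Python, but the same measurement translates to the Node world the article is about: much of a cold start can be just parsing and initializing heavy SDKs. A minimal sketch, with @google-cloud/bigquery as a stand-in heavy dependency (my choice of example, and assuming an ESM entry point so top-level await works):

```typescript
// Time how much of a cold start goes to just parsing/initializing a heavy SDK.
// Deferring this dynamic import until a request actually needs the client
// moves the cost off the startup path entirely.
const t0 = process.hrtime.bigint();
const { BigQuery } = await import('@google-cloud/bigquery');
const importMs = Number(process.hrtime.bigint() - t0) / 1e6;
console.log(`BigQuery SDK import took ${importMs.toFixed(1)} ms`);

const client = new BigQuery(); // construction is cheap; the parse above is the tax
```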
I really enjoyed this read!
One thing that wasn't clear to me: if running npm to install dependencies on pod startup is slow, why not pre-build an image with the dependencies already installed and deploy that instead?