We don’t use Kubernetes

  • To others wondering what this company does:

    > Ably is a Pub/Sub messaging platform that companies can use to develop realtime features in their products.

    At this time, https://status.ably.com/ is reporting all green.

    Although their entire website is returning 500 errors, including the blog.

    It is very hard not to point out the irony of the situation.

    In general I would not be so critical, but this is a company claiming to run highly available, mission-critical distributed computing systems. Yet they publish a popular blog article and it brings down their entire web presence?

  • > To move to Kubernetes, an organization needs a full engineering team just to keep the Kubernetes clusters running

    Perhaps my team runs a simpler cluster, but we have been running a Kubernetes cluster for 2+ years as a team of 2, and it has been nothing short of worth it.

    The way the author describes the costs of moving to Kubernetes makes me think that they don't have enough experience with Kubernetes to actually realize the major benefits beyond the initial costs.

  • I wanted to refrain from commenting because honestly I'm not the biggest fan of relatively opaque complexity, and Kubernetes tries its hardest to be exactly that (cloud providers doing magic things with annotations, for example).

    But I have to say that Kubernetes is not the devil. Lock-in is the devil.

    I recently undertook the task of getting us off of AWS, which was not as painful as it could have been (I talk about it here [0]).

    But the thing is: I like auto healing, auto scaling and staggered rollouts.

    I had previously implemented/deployed all of this myself using custom C++ code, Salt, and a lot of Python glue. It worked super well, but it also took many years of testing and trial and error.

    Doing all of that again is an insane effort.

    Kubernetes gives you 80% of the same stuff if your workload fits into it, but you have to learn its edge cases, which of course adds tremendously to the standard Python, Linux, and Terraform stuff most operators already know.

    Anyway.

    I’m not saying go for it. But don’t replace it with lock-in.

    [0]: https://www.gcppodcast.com/post/episode-265-sharkmob-games-w...

  • The risk with using Docker in production is that you can end up building your own bad version of Kubernetes over time. K8s is fairly complex, but it solves a lot of useful problems such as zero-downtime upgrades, service discovery, declarative config, etc.

  • Ok, where to start...

    > Packing servers has the minor advantage of using spare resources on existing machines instead of additional machines for small-footprint services. It also has the major disadvantage of running heterogeneous services on the same machine, competing for resources. ...

    Have a look at your CPU/MEM resource distributions, specifically the tails. That 'spare' resource is often 25-50% of capacity, held back to cover the last 5% of usage. Cost optimization on the cloud is a matter of raising utilization. Have a look at your pods' usage covariance and you can find populations that can stochastically 'take turns' on that extra CPU/RAM (a toy sketch of this is at the end of this comment).

    > One possible approach is to attempt to preserve the “one VM, one service” model while using Kubernetes. The Kubernetes minions don’t have to be identical, they can be virtual machines of different sizes, and Kubernetes scheduling constraints can be used to run exactly one logical service on each minion. This raises the question, though: if you are running fixed sets of containers on specific groups of EC2 instances, why do you have a Kubernetes layer in there instead of just doing that?

    The real reason is your AWS bill. Remember that splitting up a large .metal into smaller VMs means that you're paying the CPU/RAM bill for a kernel + basic services multiple times for the same motherboard. Static allocation is inefficient when exposed to load variance. Allocating small VMs to reduce the sizes of your static allocations costs a lot more overhead than tuning your pod requests and scheduling prefs.

    Think of it like trucks for transporting packages. Yes you can pay AWS to rent you just the right truck, in the right number for each package you want to carry. Or you can just rent big-rigs and carry many, many packages. You'll have to figure out how to pack them in the trailer, and to make sure they survive the vibration of the trip, but you will almost certainly save money.

    EDIT: Formatting
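
    To illustrate the 'take turns' point above, here is a toy sketch in Python. All service names and usage numbers below are made up, not taken from the article; the point is only that pods whose daily CPU curves are negatively correlated can share headroom, so the co-located peak is much lower than the sum of their individual peaks.

    ```python
    # Toy illustration of "taking turns": services whose CPU usage is negatively
    # correlated can be packed onto the same machine with far less headroom than
    # sizing each one separately. All numbers here are hypothetical.
    import numpy as np

    # Hourly CPU usage (cores) for three hypothetical services over one day.
    usage = {
        "batch-reports": np.array([8, 8, 8, 7, 6, 4, 2, 1, 1, 1, 1, 1,
                                   1, 1, 1, 1, 1, 2, 3, 5, 7, 8, 8, 8]),
        "web-frontend":  np.array([1, 1, 1, 1, 2, 3, 5, 7, 8, 8, 8, 8,
                                   8, 8, 8, 7, 6, 5, 4, 3, 2, 1, 1, 1]),
        "api-gateway":   np.array([2, 2, 2, 2, 3, 4, 6, 7, 8, 8, 8, 8,
                                   8, 8, 7, 7, 6, 5, 4, 3, 3, 2, 2, 2]),
    }

    names = list(usage)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = usage[names[i]], usage[names[j]]
            corr = np.corrcoef(a, b)[0, 1]        # negative = they "take turns"
            packed_peak = (a + b).max()           # capacity needed if co-located
            separate_peaks = a.max() + b.max()    # capacity needed if sized separately
            print(f"{names[i]} + {names[j]}: corr={corr:+.2f}, "
                  f"packed peak {packed_peak} vs separate {separate_peaks}")
    ```

    Sizing for the packed peak instead of the sum of individual peaks is exactly the utilization gain the parent comment is describing.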

  • This page is returning a 500 for me. Perhaps they should use Kubernetes.

  • I wouldn't follow the path that this company took. I am a solo developer, I use Kubernetes on Google Cloud, and I couldn't be happier. Parts of my application run on AWS, taking advantage of what AWS does better, such as SES (Simple Email Service).

    All I had to learn was Docker and Kubernetes. If Kubernetes didn't exist, I would have had to learn a myriad of cloud-specific tools and services, and my application would be permanently wedded to one cloud.

    Thanks to Kubernetes my application can be moved to another cloud in whole or in part. Kubernetes is so well designed, it is the kind of thing you learn just because it is well designed. I am glad I invested the time. The knowledge I acquired is both durable and portable.

  • "No, we don’t use Kubernetes - because we bodged together a homegrown version ourselves for some reason"

    Jokes aside, it sounds like they should just use ECS instead.

  • Interestingly enough, I normally recommend that people avoid Kubernetes if they don't have a real need for it, which most don't.

    This is one of the first cases where I think that maybe Kubernetes would be the right solution, and it's an article about not using it. While there's a lot of information in the article, there might be some underlying reason why this isn't a good fit for Kubernetes.

    One thing that is highlighted very well is the fact that Kubernetes is pretty much just viewed as orchestration now. It's no longer about utilizing your hardware better (in fact it uses more hardware in many cases).

  • I find it awfully sad and depressing how people on the internet pile onto a company that presents an opposing view, without reading into or discussing their choices further.

    Instead it is easier to critique the low-hanging fruit rather than discussing their actual reasons for not using this 'kubernetes' software.

    So is their blog the main product? If not, then the 'their blog gone down lol' quips are irrelevant.

    I found their post rather interesting and didn't suffer any issues the rest of the 'commenters' are facing.

  • Count how many times the words "custom $something" are mentioned in this article and you have a pretty strong case for using Kubernetes.

  • ably ceo: Our website is throwing 500 errors!!!

    ably cto: Go talk to the CMO, I can't help.

    ably ceo: What!!

    ably cto: Remember when you said the website was "Totally under the control of the CMO" and "I should mind my own business"? Well I don't even have a Wordpress login. I literally can't help.

  • We use AWS Elastic Beanstalk with Docker images for our app infrastructure and it has served us well. I think it ends up being similar in that it pushes Docker images onto EC2 instances. It may not be cutting edge, but it affords us all the conveniences of Docker images for deps while not needing the deep knowledge (and team resources) Kubernetes often requires.

  • I'm all for not using Kubernetes (or any other tech) just because, but seeing their website giving 500.. I can't help but feel all the k8s' laughter. :(

    Google Cache doesn't work either https://webcache.googleusercontent.com/search?q=cache:YECd_I...

    Luckily there's an Internet Archive https://web.archive.org/web/20210720134229/https://ably.com/...

  • Using Docker on EC2 but not Kubernetes is like using a car but deciding you will never use the gas or brake pedals. At that point you might as well walk (simpler) or fly (use Kubernetes).

    They have essentially built a semi-mock of Kubernetes without any of the benefits.

  • If you use AWS, just use Fargate. Fargate is the parts of Kubernetes you actually want, without all the unnecessary complexity, and with all the AWS ecosystem as an option. It even autoscales better than Kubernetes. It's cheaper, it's easier, it's better. If you can't use Fargate, use ECS. But for God's sake, don't poorly re-invent your own version of ECS. If you're on AWS, and not using the AWS ecosystem, you're probably wasting time and money.

    And if you eventually need to use Kubernetes, you can always spin up EKS. Just don't rush to ship on the Ever Given when an 18 wheeler works fine.

  • "we would be doing mostly the same things, but in a more complicated way" is a pretty good summary considering their use cases seem to be well covered by autoscaling groups (which, incidentally, are a thing other clouds have)

    It's OK to not use k8s. We should normalize that.

  • This reads as if they don't have any experience with it and decided to roll their own. I had no experience with Kubernetes 3 months ago; we had a Rancher 1 cluster with 30-50 services, and I just migrated it, just me. Ended up on EKS with CNI (pod networking), using the load balancer controller with IP target types and a TargetGroupBinding for ingress, and it's great. Each pod gets its own secondary IP on the EC2 instance automatically. I'm deploying Rancher as a management UI.

    I also now have a k3s cluster at home. The learning curve was insane, and I hated it all for about 8 weeks, but then it all just clicked and it's working great. The arrogance of rolling your own without fully assessing the standard speaks volumes. Candidates figured that out and saw the red flag... Writing your own image bootstrapper... What about all the other features, plus the community and things like Helm charts?

  • Site down. Perhaps they forgot to set autoscaling on the pods.

  • I don't use K8s either, but we at least use an orchestration tool.

    ECS Fargate has been awesome. No reason to add the complexity of K8s/EKS. We're all in on AWS and everything works together.

    But this... you guys re-invented the wheel. You're probably going to find it's not round in certain spots too.

  • Good. Kubernetes is a Swiss Army knife, and I just need a fork and a spoon. Sure, the Swiss Army knife comes with a fold-out fork and a spoon, but then I have to build up institutional knowledge around where they are and how to avoid stabbing myself with the adjacent blades.

  • "How we made our own container orchestration infrastructure over AWS. It doesn't have all Kubernetes features, but hey! we can then train our own employees on how it works.

    And we got to maintain the code all by ourselves too! It might take a bit too long to implement a new feature, but hey! its ours!"

    Really, Kubernetes is complex, but the problem it solves is even more complex.

    If you are OK solving a part of the problem, nice. You just built a competitor to Google. Good luck hiring people who come in already knowing how to operate it.

    Good luck trying to keep it modern and useful too.

    But I totally understand the appeal.

  • I think that Kubernetes has advantages for many small services but a few large services are still worth managing directly on bare machines/VMs.

    Where I disagree with this article is on Kubernetes stability and manageability. The caveat is that GKE is easy to manage and EKS is straightforward but not quite easy. Terraform with a few flags for the google-gke module can manage dozens of clusters with helm_release resources making the clusters production-ready with very little human management overhead. EKS is still manageable but does require a bit more setup per cluster, but it all lives in the automation and can be standardized across clusters.

    Daily autoscaling is one of those things that some people can get away with, but most won't save money with it. For example, prices for reservations/commitments are ~65% of on-demand, so autoscaled on-demand capacity only breaks even against a fleet reserved at peak size once average utilization of the peak machine count drops below ~65%, and it only saves real money well below that. If a service really can scale that low during off-hours, then autoscale aggressively and it's totally worth it. Most services I've seen can't actually achieve that and instead end up ~60% utilized over a whole day (mostly global customer bases). The exception is if you can scale (or run entirely, with loose enough SLOs) into spot or preemptible instances, which should be about as cheap as committed instances at the risk of someday not being available. (A rough sketch of the break-even follows this comment.)
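
    A rough back-of-the-envelope sketch of that break-even, in Python. The only figure carried over from the comment above is that committed pricing is ~65% of on-demand; the unit price and the hourly instance counts are hypothetical.

    ```python
    # Compare a fixed fleet reserved at peak size against autoscaled on-demand capacity.
    # Hypothetical numbers; the only assumption taken from the comment above is that
    # reserved/committed pricing is ~65% of the on-demand price.

    ON_DEMAND = 1.00                 # $/instance-hour (hypothetical unit price)
    RESERVED = 0.65 * ON_DEMAND      # committed price, ~65% of on-demand

    # Hypothetical instance counts per hour over one day (scales down overnight).
    hourly = [4, 4, 4, 4, 4, 6, 8, 10, 12, 12, 12, 12,
              12, 12, 12, 10, 10, 8, 8, 6, 6, 4, 4, 4]

    peak = max(hourly)
    avg = sum(hourly) / len(hourly)

    reserved_cost = peak * len(hourly) * RESERVED   # fleet sized for peak, running 24h
    autoscaled_cost = sum(hourly) * ON_DEMAND       # pay on-demand hour by hour

    print(f"avg/peak utilization: {avg / peak:.0%} (break-even is ~65%)")
    print(f"reserved at peak: ${reserved_cost:.2f}/day")
    print(f"autoscaled:       ${autoscaled_cost:.2f}/day")
    ```

    With this curve the average is about 65% of peak, right at the break-even, so even scaling down to a third of the fleet overnight barely beats simply committing to the peak count, which is the parent comment's point.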

  • The hard thing about Kubernetes is that it makes doing something/anything sound easy. It does, and because it's "easy", people tend to skip understanding how it actually works: that's where the problems occur.

    It _forces_ you to become a "YAML engineer" and to forget the other parts of the system. I was interviewed by a company, and when I replied that the next step I could take was to write some operators for the ops things, they simply rejected me because I'm too experienced, lolz.

  • > This has been asked by current and potential customers, by developers interested in our platform, and by candidates interviewing for roles at Ably. We have even had interesting candidates walk away from job offers citing the fact that we don’t use Kubernetes as the reason!

    I celebrate diversity of opinion on infrastructure but… if I were a CTO/VP of Engineering and I read that line, that would be enough to convince me to use Kubernetes.

  • ECS has continued to be great for us. I haven't run Kubernetes in production, but from my perspective, we have everything we would need from K8S with only a fraction of the effort. I've also been able to do some fun things via the AWS API that may have been challenging or even impossible with K8S (again, I may be naive here)

  • I think the killer feature of Kubernetes is really the infrastructure as code part -- it just makes it very easy to spin services up or down as desired without thinking too hard about it. But as the article alludes to, if you're comfortable with the lock-in, you can get that from your cloud provider with tighter integration.

  • > if you are still deploying to plain EC2 instances, you might as well be feeding punch cards.

    This type of language isn't something that should come out of a company, and it may be a signal that developers refused to offer their services for reasons other than the fact that they just don't use K8s.

  • I think many people new to Kubernetes get intimidated by its perceived complexity. It has so many resources, huge manifests, and a billion tools, with more coming online by the day. I was a huge Kubernetes hater for a while because of this, but I grew to love it and wouldn't recommend anything else now.

    I'm saying this because while their architecture seems reasonable, albeit crazy expensive (though I'd say it's small-scale if they use network CIDRs and tags for service discovery), it also seems like they wrote this without even trying to use Kubernetes. If they did, it isn't expressed clearly by this post.

    For instance, this:

    > Writing YAML files for Kubernetes is not the only way to manage Infrastructure as Code, and in many cases, not even the most appropriate way.

    and this:

    > There is a controller that will automatically create AWS load balancers and point them directly at the right set of pods when an Ingress or Service section is added to the Kubernetes specification for the service. Overall, this would not be more complicated than the way we expose our traffic routing instances now.

    > The hidden downside here, of course, is that this excellent level of integration is completely AWS-specific. For anyone trying to use Kubernetes as a way to go multi-cloud, it is therefore not very helpful.

    Sound like theoretical statements rather than ones driven by experience.

    Few would ever use raw YAMLs to deploy Kubernetes resources. Most would use tools like Helm or Kustomize for this purpose. These tools came online relatively soon after Kubernetes saw growth and are battle-tested.

    One would also know that while ingress controllers _can_ create cloud-provider-specific networking appliances, swapping them out for other ingress controllers is not only easy to do but, in many cases, can be done without affecting other Ingresses (unless they are using controller-specific functionality).

    I'd also ask them to reconsider is how they are using Docker images as a deployment package. They're using Docker images as a replacement for tarballs. This is evidenced by them using EC2 instances to run their services. I can see how they arrived at this (Docker images are just filesystem layers compressed as a gzipped tarball), but because images were meant to be used by containers, dealing with where Docker puts those images and moving things around must be a challenge.

    I would encourage them to try running their services on Docker containers. The lift is pretty small, but the amount of portability they can gain is massive. If containers legitimately won't work for them, then they should try something like Ansible for provisioning their machines.
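
    As a small aside on the "images are just filesystem layers in a tarball" point above, here is a minimal Python sketch that peeks inside an image archive. It assumes the classic layout produced by `docker save` (a top-level manifest.json); the file name is hypothetical.

    ```python
    # Minimal sketch: list the config and layers inside an image archive created
    # with `docker save -o myimage.tar some-image`. Assumes the classic archive
    # layout with a top-level manifest.json; "myimage.tar" is a hypothetical name.
    import json
    import tarfile

    with tarfile.open("myimage.tar") as tar:
        manifest = json.load(tar.extractfile("manifest.json"))
        for entry in manifest:
            print("tags:  ", entry.get("RepoTags"))
            print("config:", entry["Config"])
            for layer in entry["Layers"]:
                member = tar.getmember(layer)
                print(f"layer: {layer} ({member.size} bytes)")
    ```

    Once an image is viewed as config plus layer tarballs, running it as a container rather than unpacking it onto an EC2 host is a fairly small step, which is the portability argument above.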

  • Since it is down, wayback machine: https://web.archive.org/web/20210720134229/https://ably.com/...

  • > We have even had interesting candidates walk away from job offers citing the fact that we don’t use Kubernetes as the reason!

    This is not too surprising. Candidates want to join companies that are perceived to be hip and with it technology-wise in order to further their own resume.

  • I'm accessing this website/blog from Southeast Asia. It's working perfectly.

    Ably is using the Ghost blogging platform (https://ghost.ably.com); I can see the requests in the network console.

  • They are not in a line of business where they might be deplatformed or shut down, or where AWS is a major liability. They also do not require any form of multi-cloud redundancy that is not already served by AWS.

  • That's great, now live with your good/bad design patterns on your own for the life cycle of your product or service; you get none of the shared knowledge and advancement from the k8s community or platform.

  • Scaling is a sexy problem to have. Most places create problems to solve just for the appeal. They're spending angel investors' money, not their own, so prolonging the time to production is in their favor.

  • Off topic but: The chat icon obscures the x to close the cookie banner. Safari on iPhone X.

  • Sure, you can go homegrown and probably be more efficient and cost-savvy, but you can't beat the amount of existing tooling, and new sysops will have to be trained on those custom tools compared to a pure k8s platform...

  • "no, we don't use Kubernetes, we are stuck in AWS ecosystem"

  • Employers: We are looking for an applicant with 3/5/7/10/20 years of experience in the following technologies.

    Also Employers: Some of our applicants turned down our job offer, when we revealed that we don't use specific technologies!

  • I'm honestly curious if this is a prank from ably.com or what the content of the page was.

  • Um....are we getting trolled by some content marketing SEO hack here?

    https://www.sebastianbuza.com/2021/07/20/no-we-dont-use-kube...

    Another pub/sub startup published an almost identical blog post, also today, July 20, 2021.

  • If people put as much effort into learning Kubernetes as they did into writing blog posts about “how complex” it is... we'd all be better off.

  • Typical circle of (IT) life. Start your own "solution" and you end up with a sort-of Kubernetes clone that is better understood by you.

  • Half the comments haven't read the post, and the other half are "should have used kubernetes".

    Funny, isn't it, that when a company writes a post explaining why they don't use a technology, everyone who loves _said technology_ comes along and tells them why they should.

  • Typical not-invented-here syndrome.

    Seems like a "me too, I know more than you" type of project. Looks geared towards cryptos, though.

  • The pissing contests in engineering like "we use X" or "we don't use Y" sound hilarious for some reason. Such discussions focus on the process, but few talk about the results ¯\_(ツ)_/¯