Turboscale
Turboscale is Nalpeiron's term for our highly scalable architecture, which can handle billions of transactions automatically. It is a proprietary design we built so that every part of our infrastructure can grow and shrink on demand.
After 20+ years of handling huge, variable traffic volumes, we invested millions in modern infrastructure that flexes with traffic at any time. Traffic can change dramatically over a day or a month, and this elasticity lets us keep our uptime promise no matter how many millions of end users are connected to our platform.
Unlike competitors that struggle with large traffic volumes, we can absorb an almost unlimited flow of traffic and data at any time, 24/7/365.
One of the key trends we've seen—and in fact pioneered—is the move toward cloud-based software licensing. In our environment, we focus on multi-tenant clusters as our primary go-to-market solution, though we now also offer a single-tenant edition. In both environments, we've built scalability into every aspect of the architecture.
What does this mean in practice? At its core, our architecture is built around Docker containers managed by Kubernetes that scale automatically. As demand increases, we spin up additional Docker workers to handle specific parts of the workload, and Kubernetes management keeps the experience seamless.
We also run an auto-scaling database for both performance and storage: as additional capacity is needed, it scales automatically, spinning up workers to handle database queries. Similarly, our API handlers run in Docker containers that scale up and down as needed, ensuring low-latency access and excellent performance regardless of traffic volume on the multi-tenant cluster.
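As an illustration of what an auto-scaling database tier can look like, the CloudFormation sketch below declares an Aurora cluster with Serverless v2 capacity bounds. The resource name and capacity values are hypothetical, and the document does not state which Aurora scaling mode is actually used; Aurora storage grows automatically regardless of this configuration.

```yaml
# Illustrative CloudFormation fragment (hypothetical names and values):
# an Aurora cluster whose compute capacity scales with load between the
# configured minimum and maximum Aurora Capacity Units (ACUs).
Resources:
  LicensingDbCluster:              # hypothetical resource name
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      ServerlessV2ScalingConfiguration:
        MinCapacity: 0.5           # ACUs held when the cluster is idle
        MaxCapacity: 64            # upper bound under peak load
```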
The user interface—including the dashboard and end-user portal—is built around these same APIs. Whether you have tens of customers or tens of millions, every part of our stack will scale through this turbo-scale approach.
Zentitle automatically scales to demand across the stack:
• Initially, HPA adds new instances (pods) of individual applications (e.g., the Licensing API and Core Service).
• As the whole cluster approaches capacity, Karpenter adds new nodes.
• The reverse happens as demand reduces.
• AWS RDS Aurora is used to auto-scale the database.
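The two-tier cascade above can be sketched numerically. The capacity figures below are made up; the point is the sequence: HPA first sizes the pod count to load using its documented rule (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)), then node capacity is sized to fit the pods, which is Karpenter's job reduced to arithmetic.

```python
import math

def pods_needed(current_pods: int, current_load: float, target_load: float) -> int:
    """HPA's core scaling rule: ceil(current * currentMetric / targetMetric)."""
    return max(1, math.ceil(current_pods * current_load / target_load))

def nodes_needed(pods: int, pods_per_node: int) -> int:
    """Enough nodes to place every pod -- the decision Karpenter automates."""
    return math.ceil(pods / pods_per_node)

# Traffic doubles: average CPU across 4 pods reaches 160% of an 80% target,
# so HPA doubles the pods; with 3 pods fitting per node, 3 nodes are needed.
pods = pods_needed(current_pods=4, current_load=160, target_load=80)   # -> 8
nodes = nodes_needed(pods, pods_per_node=3)                            # -> 3
```

The same formula drives scale-in: when load drops to half the target, the desired replica count halves, and empty nodes can then be removed.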
HPA (Horizontal Pod Autoscaling)
• Observes services for CPU/memory pressure and adjusts the number of Pod instances as needed
• Horizontal scaling means deploying more Pods to respond to increased load. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (e.g., memory or CPU) to the Pods that are already running for the workload.
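A minimal HorizontalPodAutoscaler manifest for one of the services named above might look like the following. The deployment name and thresholds are illustrative, not taken from Zentitle's actual configuration.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: licensing-api            # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: licensing-api
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add pods when average CPU exceeds 70%
```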
Karpenter
• Observes the whole Kubernetes cluster and adds/removes nodes (AWS EC2 virtual machines) as demand requires
• Karpenter automatically launches the compute to handle your cluster's applications.
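For illustration, a Karpenter NodePool along these lines tells Karpenter what compute it may launch; the exact schema varies by Karpenter release (this sketch follows the v1 API), and the names and limits are hypothetical rather than Zentitle's real configuration.

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default                  # hypothetical pool name
spec:
  template:
    spec:
      requirements:              # constrain the EC2 instances Karpenter may pick
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "1000"                  # cap total vCPUs this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized   # remove nodes as load falls
```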