Scaling data ingestion infrastructure with dynamic EKS platform

Adaptavist delivered a dynamic AWS EKS platform enabling a SaaS provider to ingest 125,000 data logs per minute across 30 endpoints with self-service provisioning and optimised costs.

Requirements at a glance

  • Handle tens of thousands of records per second
  • Enable on-demand endpoint provisioning without manual intervention
  • Dynamically provision underlying infrastructure alongside application endpoints
  • Maintain strict multi-tenancy and security boundaries
  • Optimise costs through intelligent, needs-based scaling
Industry: Software ISV
Solution: Event-driven EKS platform with dynamic infrastructure provisioning
Result: Unified, self-service platform replacing fragmented endpoint management
Key metric: 125,000 logs per minute processed across 30 self-provisioned endpoints

Summary

A leading SaaS provider required scalable infrastructure for its new Data Integration Gateway application. The gateway needed to ingest, route, and store high volumes of data whilst enabling customers to provision endpoints on demand. Adaptavist engineered a dynamic EKS platform leveraging Crossplane, Karpenter, and KEDA to deliver self-service capabilities, event-driven autoscaling, and declarative infrastructure management, eliminating manual provisioning overhead whilst maintaining cost efficiency.

The challenge

The organisation faced the complexity of managing a high-traffic data ingestion gateway requiring both application scalability and infrastructure flexibility. Traditional approaches would necessitate maintaining numerous distinct environments, each demanding separate configuration, monitoring, and operational overhead.
The platform needed to support dynamic endpoint provisioning, allowing end-users to create new data ingestion points without engineering intervention. Each endpoint required its own infrastructure stack—including compute, storage, and networking resources—whilst maintaining strict isolation between tenants for security and performance.
Beyond provisioning, the platform required intelligent scaling capabilities that responded to actual workload patterns rather than relying on static capacity planning. The organisation needed to balance performance requirements with cost efficiency, avoiding both resource overprovisioning and performance degradation during traffic spikes.

The solution

Our team engineered a sophisticated EKS-based platform combining GitOps deployment practices, event-driven autoscaling, and declarative infrastructure management to deliver a fully automated, self-service data ingestion gateway.

Foundational platform architecture

We established a robust, highly available core using Amazon EKS, ensuring business continuity through multi-Availability Zone deployment. The platform's architecture provided the resilience and scalability necessary for mission-critical data ingestion workloads whilst maintaining operational simplicity through managed Kubernetes control planes.
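As an illustration of such a foundation, a multi-AZ EKS cluster can be declared with eksctl. The cluster name, region, zones, and node group sizing below are illustrative, not details from the engagement:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ingestion-gateway     # hypothetical cluster name
  region: eu-west-1           # hypothetical region
# Spreading across three Availability Zones gives the
# multi-AZ resilience described above
availabilityZones: ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
managedNodeGroups:
  - name: system              # baseline capacity for platform components
    instanceType: m5.large
    minSize: 2
    maxSize: 4
```

In this pattern, the managed node group carries only the platform's own components; workload capacity is provisioned dynamically, as described below.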

Safe deployment practices

We implemented canary-based deployment strategies using ArgoCD, enabling gradual rollouts with real-time monitoring and rapid rollback capabilities. This GitOps approach ensured deployment consistency whilst minimising risk during application updates, allowing the team to deliver features confidently and frequently.
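Argo CD itself handles the GitOps synchronisation; progressive canary steps of the kind described are typically expressed with its companion project, Argo Rollouts. A minimal sketch, with hypothetical resource names, weights, and pause durations:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: gateway               # hypothetical workload name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: gateway
  strategy:
    canary:
      steps:
        - setWeight: 10       # shift 10% of traffic to the new version
        - pause: {duration: 5m}   # observe metrics before proceeding
        - setWeight: 50
        - pause: {duration: 5m}
        # full promotion follows if no step fails
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: registry.example.com/gateway:v2   # hypothetical image
```

If monitoring detects a regression during a pause step, the rollout can be aborted and traffic returned to the stable version, which is the rapid-rollback behaviour described above.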

Intelligent, multi-dimensional scaling

The platform achieved cost-efficient scalability through dual autoscaling mechanisms. Karpenter dynamically provisioned and deprovisioned EC2 nodes based on actual pod demand, eliminating idle capacity. KEDA (Kubernetes Event-driven Autoscaling) scaled applications based on external metrics and events rather than simplistic CPU and memory thresholds, ensuring responsiveness to real workload patterns.
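On the node side, Karpenter's behaviour is driven by a NodePool definition. The sketch below (names, instance constraints, and limits are illustrative) shows the consolidation settings that remove idle capacity:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: ingestion             # hypothetical pool name
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # prefer cheaper capacity where available
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  disruption:
    # Consolidate nodes that are empty or underutilised,
    # which is what eliminates idle capacity
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
  limits:
    cpu: "200"                # hard cap on total provisioned CPU
```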
This sophisticated scaling approach proved essential for data ingestion workloads where traditional resource-based metrics failed to capture actual demand. Request-based metrics provided accurate scaling signals, maintaining performance during traffic variations whilst minimising unnecessary resource consumption.
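A request-based trigger of the kind described can be expressed as a KEDA ScaledObject. The Prometheus address, query, and threshold below are assumptions for illustration, not the engagement's actual configuration:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: gateway-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: gateway             # the Deployment (or Rollout) to scale
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        # Scale on ingest request rate rather than CPU/memory
        query: sum(rate(http_requests_total{app="gateway"}[2m]))
        threshold: "500"      # target requests/sec per replica
```

KEDA adjusts replica count so that the measured rate divided by the threshold matches the number of replicas, which is how scaling tracks real demand rather than resource pressure.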

Dynamic infrastructure provisioning

Crossplane integration, utilising Terraform providers, enabled developers to manage cloud resources directly through Kubernetes-native manifests. This approach enabled endpoint provisioning to trigger the automatic creation of supporting infrastructure—databases, message queues, and storage buckets—without requiring manual intervention or separate infrastructure workflows.
The declarative model ensured infrastructure remained synchronised with application requirements whilst leveraging Terraform's extensive ecosystem for consistent, version-controlled provisioning across the entire stack.
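One common pattern for this integration is Crossplane's Terraform provider (provider-terraform), whose Workspace resource embeds Terraform configuration in a Kubernetes manifest. A sketch, with an illustrative SQS queue standing in for the per-endpoint databases, queues, and buckets:

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: endpoint-queue        # hypothetical, one per provisioned endpoint
spec:
  forProvider:
    source: Inline
    # Terraform HCL managed through the Kubernetes API;
    # Crossplane reconciles it like any other resource
    module: |
      variable "endpoint_name" {}
      resource "aws_sqs_queue" "endpoint" {
        name = var.endpoint_name
      }
    vars:
      - key: endpoint_name
        value: tenant-a-ingest   # hypothetical tenant endpoint
```

Because the Workspace is just another Kubernetes object, creating an endpoint can create its supporting infrastructure in the same declarative flow, with Crossplane continually reconciling actual state against the manifest.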

Robust multi-tenancy

We established strict isolation by deploying each endpoint within its own dedicated Kubernetes namespace. This architecture provided strong security boundaries, prevented resource contention between tenants, and simplified the application of tenant-specific policies and controls. The namespace-based approach enabled granular resource management whilst maintaining operational simplicity.
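Namespace isolation of this kind is typically enforced with per-tenant quotas and network policies. A sketch, with hypothetical tenant names and limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: endpoint-quota
  namespace: tenant-a         # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "4"         # caps prevent one tenant starving others
    requests.memory: 8Gi
    limits.cpu: "8"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}     # allow traffic only from within the namespace
```

The quota addresses resource contention; the network policy enforces the security boundary between tenants.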

The result and business impact

Our EKS platform implementation replaced fragmented per-endpoint environments with a unified, self-service platform, delivering measurable improvements across operational efficiency, developer productivity, and cost management.
Production-scale data ingestion
The platform successfully supports 30 configured endpoints streaming approximately 125,000 data logs per minute. This production-scale operation demonstrates the platform's capacity to handle demanding data ingestion workloads whilst maintaining reliability and performance consistency.
Self-service capabilities
End-users gained the ability to configure endpoints independently, eliminating the need for manual engineering intervention. This self-service model dramatically reduced provisioning lead times from days to minutes whilst eliminating operational bottlenecks and enabling rapid business experimentation.
Cost optimisation through intelligent provisioning
Strategic implementation of needs-based provisioning through KEDA and Karpenter eliminated instance overprovisioning. The platform's event-driven scaling ensured resources matched actual demand patterns, delivering substantial cost savings compared to traditional static capacity allocation approaches.
Developer focus on business value
By providing a secure, highly available, and fully observable container environment, the platform freed development teams to concentrate exclusively on feature enhancement. Engineering effort shifted from infrastructure management to business-critical capabilities, including rate limiting, advanced routing logic, and data transformation features.
Unified platform economics
The transformation from numerous distinct environments to one consolidated platform delivered strategic advantages beyond immediate cost savings. Standardisation across configuration, monitoring, logging, and security reduced operational complexity whilst improving reliability through consistent practices.
Operational efficiency improved significantly through the use of shared tools and processes. Teams managed a single platform rather than maintaining multiple isolated application environments, reducing cognitive load while improving incident response capabilities through unified observability.
Resource optimisation through Kubernetes features—including resource limits, quotas, and intelligent autoscaling—enabled more efficient capacity utilisation. The platform's consolidation facilitated strategic resource allocation, which was previously impossible with fragmented infrastructure.
Accelerated delivery velocity
Robust CI/CD pipelines targeting the EKS cluster enabled faster deployment and iteration cycles. The GitOps approach ensured changes moved through consistent validation processes whilst maintaining complete audit trails for compliance and troubleshooting.
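In a GitOps flow like this, each deployable unit is described by an Argo CD Application that points the cluster at a Git repository. A sketch, with a hypothetical repository URL and paths:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/gateway.git   # hypothetical repo
    targetRevision: main
    path: deploy              # directory of manifests to sync
  destination:
    server: https://kubernetes.default.svc
    namespace: gateway
  syncPolicy:
    automated:
      prune: true             # remove resources deleted from Git
      selfHeal: true          # revert manual drift back to Git state
```

Because every change lands in Git before it reaches the cluster, the repository history doubles as the audit trail mentioned above.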

Key learnings and best practices

Crossplane and Terraform provider alignment
Establishing effective alignment between Crossplane and Terraform providers for infrastructure provisioning required careful consideration of state management and lifecycle orchestration. Success demanded understanding both technologies' operational models and designing integration patterns that leveraged their respective strengths.
Request-based scaling metrics
Traditional CPU and memory-based scaling proved insufficient for data ingestion workloads. Implementing request-based metrics through KEDA provided accurate scaling signals that reflected actual application demand, ensuring responsiveness whilst avoiding unnecessary resource consumption during idle periods.
Canary deployment metrics design
Defining and exposing appropriate metrics to enable effective canary deployment strategies required close collaboration between the platform and application teams. Success metrics needed to capture both technical performance and business outcomes, enabling confident progressive rollouts with clear success criteria.
Multi-tenancy architecture considerations
Achieving robust multi-tenancy whilst maintaining proper separation of concerns demanded thoughtful namespace design and policy enforcement. The solution struck a balance between security isolation and operational simplicity, ensuring that tenant boundaries remained strong without creating excessive management overhead.

Looking forward

The platform's intelligent autoscaling and declarative infrastructure model position the organisation for continued growth. Future enhancements include advanced observability features for tenant-specific insights, expanded self-service capabilities for infrastructure customisation, and integration with additional AWS services for enhanced data processing capabilities. The established foundation enables seamless scaling as data volumes increase and new use cases emerge.

Ready to achieve cost savings with an intelligently scaled EKS platform designed for your unique workloads?

Contact Adaptavist today to discuss how we can help.