Teleglobal International Deploys Agentic AI-Governed Voice Platform on AWS for Auxy AI

Executive Summary 

Auxy AI builds an always-on AI Voice Agent platform that handles enterprise inbound and outbound calls without human involvement, covering lead qualification, appointment scheduling, and real-time service interactions. 

The platform had a working product but no production-grade infrastructure to run it on. Manual provisioning consumed 3 to 5 days per environment, compliance gaps were blocking enterprise deals, and there was no AI infrastructure for model development. 

Teleglobal International designed and deployed a complete AWS platform with embedded GenAI capabilities. The Transcribe to Bedrock to Polly voice pipeline was live in production from day one. 

  • 87% faster provisioning 
  • 10–15 engineering days recovered per release cycle 
  • 30% sprint capacity freed 
  • 5 CI/CD governance gates 
  • 6 security services active at launch 
  • Amazon Bedrock voice inference pipeline live from day one 
  • Compliance audit prep cut from weeks to hours via automated CloudTrail 
  • 14 AWS services deployed in a single engagement 

About Auxy AI 

Auxy AI provides an always-on AI Voice Agent platform for enterprise use cases. 

The platform handles inbound and outbound calls autonomously, covering: 

  • Lead qualification during voice calls 
  • Automated responses to frequently asked questions 
  • Appointment scheduling and booking 
  • Real-time service interactions without human involvement 

As enterprise adoption grew, the platform needed a secure, scalable AWS foundation capable of meeting the governance standards enterprise contracts demand. 

The Challenge 

Auxy AI had a working voice AI product but no production-grade infrastructure. Four gaps were blocking enterprise growth. 

  1. No Production Infrastructure 

Manual provisioning consumed 3 to 5 days per environment, absorbing around 30% of each sprint before any code reached production. 

  2. Configuration Drift 

Dev, Test, and Production environments diverged over time. This surfaced defects in production that were expensive to diagnose and fix. 

  3. Compliance Blocking Enterprise Deals 

There was no audit logging, encryption baseline, or threat detection in place. Enterprise deals were stalling at security review stage. 

  4. No AI Infrastructure for Model Development 

Any future voice model work would require a complete infrastructure rebuild from scratch. There was no foundation to build on. 

Deployment Approach 

Teleglobal applied a two-phase approach to ensure security was built in, not retrofitted. 

  • Phase 1: Core infrastructure established, hardened, and compliance-aligned before any AI workload was introduced 
  • Phase 2: GenAI layer delivered — Amazon Bedrock integration, the Transcribe to Bedrock to Polly voice pipeline, SageMaker training infrastructure, and CI/CD governance automation 

This sequencing meant GenAI workloads inherited the security posture from day one. 

Model Selection 

Step 1 – Evaluation Criteria 

Teleglobal defined five criteria the chosen model had to meet for Auxy AI’s voice platform: 

  • Real-time inference latency: must support low-latency conversational responses suitable for live voice calls 
  • Native AWS integration: must integrate directly with Amazon Transcribe (speech-to-text) and Amazon Polly (voice synthesis) without additional middleware 
  • Data residency: all inference must remain within ap-south-1, with no voice or call data leaving the AWS boundary 
  • Compliance audit trail: must support CloudTrail-compatible audit logging for enterprise security reviews 
  • Conversational reasoning quality: must handle multi-turn dialogue, intent recognition, and contextual responses for voice agent use cases 

Step 2 – Models Evaluated 

Three options were shortlisted and evaluated against Auxy AI’s requirements: 

  • Amazon Bedrock (Claude Haiku) via private endpoint: managed foundation model, native AWS service 
  • Azure OpenAI Service (GPT-4o mini): externally hosted, Microsoft Azure platform 
  • Google Dialogflow CX: purpose-built voice AI, Google Cloud platform 

| Parameter | Amazon Bedrock (Claude Haiku) ✔ Selected | Azure OpenAI (GPT-4o mini) | Google Dialogflow CX |
|---|---|---|---|
| AWS-native (Transcribe + Polly) | ✔ Direct, no middleware needed | ✘ Requires custom integration layer | ✘ Google Cloud only, not AWS-native |
| Data stays in AWS boundary | ✔ Yes, ap-south-1 end-to-end | ✘ No, data routes to Azure | ✘ No, data routes to Google Cloud |
| CloudTrail audit integration | ✔ Native, automatic | ✘ External, no CloudTrail | ✘ External, no CloudTrail |
| Real-time voice latency | ✔ Optimised for low-latency inference | ⚠ Viable, but cross-cloud adds overhead | ✔ Purpose-built for voice |
| Multi-turn conversational reasoning | ✔ Strong, context-aware responses | ✔ Strong | ⚠ Limited by predefined flow structure |
| Fine-tunable on custom data | ✔ Via SageMaker pipeline | ⚠ Limited customisation options | ✘ No custom model training |
| Predictable cost at scale | ✔ AWS infrastructure pricing | ⚠ Per-token billing, cross-cloud | ⚠ Per-session billing |
| IAM and compliance controls | ✔ Native AWS IAM, KMS, GuardDuty | ✘ External, separate compliance posture | ✘ External, separate compliance posture |

Step 3 – Why Amazon Bedrock Was Selected 

Amazon Bedrock was the only option that met all five criteria without requiring additional infrastructure, middleware, or a separate compliance posture. 

  • Azure OpenAI rejected: all inference routes to Microsoft Azure, directly violating Auxy AI's data residency requirement. There is no native path to CloudTrail audit logging, and integrating with Amazon Transcribe and Polly would require a custom middleware layer. 
  • Google Dialogflow CX rejected: purpose-built for voice but runs on Google Cloud, not AWS. It offers no CloudTrail integration, no SageMaker fine-tuning path, and conversation data would leave the AWS boundary entirely. 
  • Amazon Bedrock selected: native integration with Transcribe and Polly enables the complete voice pipeline (speech-to-text, reasoning, voice synthesis) to run end-to-end within ap-south-1. All audit trails are captured automatically via CloudTrail, and the same SageMaker infrastructure supports future fine-tuning on Auxy AI's proprietary conversation recordings. 

The Solution 

Infrastructure 

  • Three isolated EKS environments: Production (2 nodes), Development, Testing 
  • Kubernetes HPA absorbing voice traffic bursts automatically 
  • Application Load Balancers per environment; AWS WAF at ingress 
  • Certificate Manager handling SSL/TLS across all environments 

GenAI Voice Pipeline 

The voice AI pipeline runs end-to-end within ap-south-1 for data residency compliance: 

  • Amazon Transcribe: real-time speech-to-text from incoming voice calls 
  • Amazon Bedrock (Claude Haiku): conversational reasoning and intent recognition 
  • Amazon Polly: voice synthesis returning natural speech to the caller 
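The reasoning and synthesis stages of this pipeline can be sketched with boto3. This is an illustrative, non-authoritative sketch only: the model ID, region, and Polly voice are assumptions, and the production platform feeds the reasoning step from Amazon Transcribe streaming rather than a pre-transcribed string.

```python
import json


def build_bedrock_request(transcript: str, max_tokens: int = 256) -> dict:
    """Build the Anthropic Messages request body that Bedrock expects for Claude."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": transcript}],
    }


def respond_to_caller(transcript: str) -> bytes:
    """Transcript in (from Amazon Transcribe), synthesised speech out."""
    import boto3  # imported here so the pure request builder stays dependency-free

    region = "ap-south-1"  # all inference stays in-region for data residency
    model_id = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed Haiku model ID

    bedrock = boto3.client("bedrock-runtime", region_name=region)
    polly = boto3.client("polly", region_name=region)

    # 1. Conversational reasoning on Claude Haiku via Bedrock
    resp = bedrock.invoke_model(
        modelId=model_id, body=json.dumps(build_bedrock_request(transcript))
    )
    reply_text = json.loads(resp["body"].read())["content"][0]["text"]

    # 2. Voice synthesis with Polly; the audio stream is returned to the caller
    speech = polly.synthesize_speech(
        Text=reply_text, OutputFormat="mp3", VoiceId="Kajal"  # assumed voice
    )
    return speech["AudioStream"].read()
```

Keeping the request builder pure makes the reasoning step unit-testable without AWS credentials.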

SageMaker is deployed from day one for voice model hosting, training pipelines, and fine-tuning on Auxy AI’s conversation recordings. Zero additional infrastructure required for future model improvement. 
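A future fine-tuning run on those recordings could be submitted roughly as follows. This is a sketch under stated assumptions: the bucket paths, training image URI, IAM role, and instance type are placeholders, not values from the engagement.

```python
def build_training_job(job_name: str, image_uri: str, role_arn: str,
                       data_s3: str, output_s3: str) -> dict:
    """Assemble the request dict for sagemaker.create_training_job()."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": data_s3,  # S3 prefix holding conversation recordings
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {
            "InstanceType": "ml.g5.xlarge",  # assumed instance type
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }


# Submitting it would look like (requires AWS credentials; ARNs are placeholders):
# boto3.client("sagemaker", region_name="ap-south-1").create_training_job(
#     **build_training_job("auxy-voice-ft-001", image, role,
#                          "s3://bucket/recordings/", "s3://bucket/models/"))
```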

Agentic CI/CD Governance 

AWS Step Functions and GitHub Actions orchestrate five release governance gates on every deployment: 

  • Gate 1 Security: vulnerability scan 
  • Gate 2 Compliance: policy validation 
  • Gate 3 Artifact integrity: immutable ECR image push 
  • Gate 4 Deployment: EKS rollout with health checks 
  • Gate 5 Safety: automated rollback on anomaly detection 

Zero manual release management. Compliance evidence generated automatically with every deployment. 
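In spirit, the five-gate sequence behaves like the sketch below. It illustrates the control flow only: the production pipeline is orchestrated by Step Functions and GitHub Actions, and the gate and rollback functions here are hypothetical stand-ins.

```python
from typing import Callable

Gate = Callable[[str], bool]  # takes a release ID, returns pass/fail


def run_release(release_id: str, gates: list[tuple[str, Gate]],
                rollback: Callable[[str], None]) -> bool:
    """Run governance gates in order; any failure triggers automated rollback."""
    for name, gate in gates:
        if not gate(release_id):
            print(f"[{release_id}] gate failed: {name}; rolling back")
            rollback(release_id)
            return False
        print(f"[{release_id}] gate passed: {name}")
    return True


# Stand-in gates mirroring the five production gates
GATES = [
    ("security: vulnerability scan", lambda r: True),
    ("compliance: policy validation", lambda r: True),
    ("artifact: immutable ECR push", lambda r: True),
    ("deploy: EKS rollout + health checks", lambda r: True),
    ("safety: anomaly detection", lambda r: True),
]
```

Because the gates run strictly in order and the first failure rolls the release back, no deployment can reach production without passing every check.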

Database and Storage 

  • Amazon RDS (MySQL) with Multi-AZ failover and point-in-time recovery 
  • ElastiCache for sub-millisecond session caching during live voice calls 
  • Amazon S3 storing conversation recordings and training data, feeding directly into the SageMaker training pipeline 

Security — Six Services Active at Launch 

  • AWS IAM: least-privilege access, zero static credentials at launch 
  • AWS KMS: encryption across all data stores 
  • AWS Secrets Manager: secure credential and config management 
  • AWS WAF: external traffic protection at ingress 
  • Amazon GuardDuty: continuous threat monitoring and detection 
  • AWS CloudTrail: 100% API activity captured across all environments 
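Pulling audit evidence from that CloudTrail record on demand can look roughly like this. `lookup_events` is the real CloudTrail API call; the event name, time window, and summary shape are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone


def summarise_events(events: list[dict]) -> dict:
    """Count CloudTrail events per (event name, user) pair for an audit report."""
    counts: dict = {}
    for e in events:
        key = (e.get("EventName"), e.get("Username", "unknown"))
        counts[key] = counts.get(key, 0) + 1
    return counts


def fetch_audit_evidence(event_name: str, days: int = 7) -> dict:
    """Query CloudTrail for recent occurrences of one API action."""
    import boto3  # imported here so summarise_events stays dependency-free

    ct = boto3.client("cloudtrail", region_name="ap-south-1")
    end = datetime.now(timezone.utc)
    resp = ct.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": event_name}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
    )
    return summarise_events(resp["Events"])
```

A query like this, run per control, is what turns audit preparation from a manual evidence hunt into an on-demand report.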

AWS Services Used 

AI and Voice Pipeline 

  • Amazon Bedrock: Claude Haiku for conversational reasoning 
  • Amazon Transcribe: real-time speech-to-text 
  • Amazon Polly: voice synthesis 
  • Amazon SageMaker: voice model training, hosting, and MLOps 

Container and Compute 

  • Amazon EKS: Kubernetes orchestration across three environments 
  • Amazon ECR: container image registry with immutable image tagging 

Data and Storage 

  • Amazon RDS (MySQL): Multi-AZ relational database with PITR 
  • Amazon ElastiCache: sub-millisecond session caching 
  • Amazon S3: conversation recordings, training data, model artefacts 

Security and Governance 

  • AWS IAM, KMS, Secrets Manager, WAF, GuardDuty, CloudTrail 
  • AWS Certificate Manager: SSL/TLS across all environments 

Monitoring and CI/CD 

  • Amazon CloudWatch: container health, metrics, application logs 
  • AWS Step Functions + GitHub Actions: five-gate CI/CD governance pipeline 

Results 

| Metric | Result |
|---|---|
| Environment provisioning time | 87% reduction (3–5 days to under 4 hours) |
| Engineering days recovered | 10–15 days per release cycle |
| Sprint capacity freed | ~30% freed from infrastructure toil |
| CI/CD governance | 5 automated gates on every deployment, zero manual releases |
| Security at launch | 6 AWS security services active, zero static credentials |
| Compliance audit prep | Weeks reduced to hours via automated CloudTrail |
| Voice pipeline | Transcribe to Bedrock to Polly live from day one |
| Enterprise onboarding | Unblocked; security controls demonstrable on demand |
| AI infrastructure | SageMaker live from day one, voice model training ready |
| AWS services deployed | 14 services in a single engagement |
| Platform availability | 24/7 continuous AI voice operations |

“This engagement changed our operational posture in a way that is directly visible to our enterprise clients. We release with confidence, demonstrate security controls on request, and our infrastructure scales with demand without engineering intervention. The platform Teleglobal delivered is not just stable — it is the foundation we are building our next phase of product development on.” 

— VP of Engineering, Auxy AI

What’s Next 

The platform is live and GenAI-enabled. The roadmap runs on infrastructure already in place: 

  • Voice model fine-tuning on SageMaker using Auxy AI’s proprietary conversation recordings 
  • DORA metrics dashboards for deployment frequency and mean time to recovery 
  • Canary releases with automated rollback and regional AZ expansion for enterprise SLA coverage 
  • Advanced monitoring and analytics for deeper insight into AI voice interaction patterns