The Customer Support Ticketing System was originally built as a single, monolithic application that struggled to handle increased traffic and scale efficiently. To overcome these issues, we migrated it to a microservices architecture, which allowed us to scale individual services independently, improve performance, and simplify ongoing management.
Day-to-Day Work (short illustrative sketches for each tool follow the list):
- Amazon EC2: Hosted the system's core services, letting us scale compute capacity quickly during high-demand periods
- Amazon S3: Stored customer tickets and related files securely, with fast access for support teams
- Amazon RDS: Managed structured data like customer ticket details and user information, ensuring secure and reliable data storage
- Elastic Load Balancing (ELB): Distributed traffic across multiple instances to prevent overload during peak hours, keeping the service responsive for users
- Amazon CloudWatch: Monitored the system's performance and alerted us to any issues, helping us fix problems quickly before they impacted users
- AWS IAM: Controlled who could access the system, ensuring only authorized users could see sensitive data
- Amazon Route 53: Managed the system's DNS records, ensuring customers could access the system without interruptions
- Terraform: Automated the setup of AWS resources, reducing manual errors and ensuring consistent infrastructure across different environments
- Docker: Containerized each part of the system (like ticket management and notifications) so they could be deployed easily and independently
- Kubernetes (Amazon EKS): Orchestrated the containers, scaling them automatically based on demand and keeping them available
- Jenkins: Automated the process of building, testing, and deploying new features, making updates faster and more reliable
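As a minimal sketch of the EC2 item above: launching extra capacity for a service during a high-demand period with boto3. The AMI ID, region, instance type, and tag values are placeholders, not values from the real system.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch up to three instances from a placeholder AMI with the service image baked in.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.medium",         # assumed instance size
    MinCount=1,
    MaxCount=3,                       # extra headroom during peak traffic
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "service", "Value": "ticket-api"}],
    }],
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```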
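For the S3 item: a sketch of storing a ticket attachment and handing a support agent a short-lived download link. The bucket name, key layout, and encryption setting are assumptions.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "support-ticket-attachments"  # hypothetical bucket name

# Store an attachment under a per-ticket prefix, encrypted at rest.
s3.upload_file(
    Filename="screenshot.png",
    Bucket=BUCKET,
    Key="tickets/TCKT-1042/screenshot.png",
    ExtraArgs={"ServerSideEncryption": "AES256"},
)

# Short-lived link so a support agent can open the file quickly.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "tickets/TCKT-1042/screenshot.png"},
    ExpiresIn=300,  # seconds
)
print(url)
```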
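For the RDS item: a sketch of reading ticket rows from the database. The MySQL engine, the PyMySQL driver, the schema, and all connection details are assumptions, since the original does not specify them.

```python
import pymysql

# Hypothetical RDS endpoint and credentials; in practice these come from a secrets store.
conn = pymysql.connect(
    host="ticketing-db.abc123.us-east-1.rds.amazonaws.com",
    user="app_user",
    password="example-password",
    database="ticketing",
)

with conn.cursor() as cur:
    # Assumed table layout: one row per ticket with a customer_id column.
    cur.execute(
        "SELECT id, subject, status FROM tickets WHERE customer_id = %s",
        (42,),
    )
    for ticket_id, subject, status in cur.fetchall():
        print(ticket_id, subject, status)

conn.close()
```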
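For the load-balancing item: a sketch of checking which targets ELB currently considers healthy, via the boto3 elbv2 client. The target group ARN is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group for the ticket-api service behind the load balancer.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/ticket-api/0123456789abcdef"
)

health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])
```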
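For the CloudWatch item: a sketch of the kind of alarm that surfaces problems before users notice them. The alarm name, instance ID, thresholds, and SNS topic ARN are illustrative assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one ticket-api instance stays above 80% for ten minutes.
cloudwatch.put_metric_alarm(
    AlarmName="ticket-api-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```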
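For the IAM item: a sketch of a policy that lets support agents read ticket attachments without being able to delete them. The policy name, bucket, and action list are assumptions.

```python
import json

import boto3

iam = boto3.client("iam")

# Read-only access to the (hypothetical) attachments bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::support-ticket-attachments",
            "arn:aws:s3:::support-ticket-attachments/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="SupportAgentReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```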
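For the Route 53 item: a sketch of pointing the public support hostname at the load balancer. The hosted zone ID, record name, and load-balancer DNS name are placeholders.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone
    ChangeBatch={
        "Comment": "Route support traffic to the ticketing load balancer",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "support.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "ticket-alb-123456.us-east-1.elb.amazonaws.com"},
                ],
            },
        }],
    },
)
```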
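For the Terraform item: the resource definitions themselves live in HCL, so this Python sketch only shows how applying one environment's configuration might be scripted around the Terraform CLI. The per-environment directory layout is an assumption.

```python
import subprocess

def terraform(workdir: str, *args: str) -> None:
    """Run a Terraform command in the given directory and fail loudly on errors."""
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

# Assumed layout: one directory of .tf files per environment (dev, staging, prod).
env_dir = "infra/environments/staging"

terraform(env_dir, "init")
terraform(env_dir, "plan", "-out=tfplan")               # review the planned changes
terraform(env_dir, "apply", "-auto-approve", "tfplan")  # apply exactly what was planned
```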
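For the Docker item: a sketch using the Docker SDK for Python to build one service's image and run it locally before pushing it anywhere. The build path, tag, port, and environment variables are assumptions.

```python
import docker

client = docker.from_env()

# Build the ticket-management service image from its (hypothetical) directory.
image, _build_logs = client.images.build(
    path="services/ticket-management",
    tag="ticket-management:local",
)

# Run it in isolation for a quick smoke test.
container = client.containers.run(
    "ticket-management:local",
    detach=True,
    ports={"8080/tcp": 8080},            # assumed service port
    environment={"DB_HOST": "localhost"},
)
print(container.short_id, container.status)
```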
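For the EKS item: a sketch using the official Kubernetes Python client to inspect the microservice deployments and bump one of them. The namespace and deployment names are assumptions, and in day-to-day operation scaling was automatic (for example via a HorizontalPodAutoscaler) rather than a manual patch like this.

```python
from kubernetes import client, config

# Assumes kubeconfig already points at the EKS cluster (e.g. after `aws eks update-kubeconfig`).
config.load_kube_config()
apps = client.AppsV1Api()

# Show each deployment's ready vs. desired replica counts.
for dep in apps.list_namespaced_deployment(namespace="ticketing").items:
    print(dep.metadata.name, dep.status.ready_replicas, "/", dep.spec.replicas)

# One-off scale-up of a single service.
apps.patch_namespaced_deployment_scale(
    name="ticket-management",
    namespace="ticketing",
    body={"spec": {"replicas": 5}},
)
```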
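For the Jenkins item: a sketch of triggering one service's pipeline from Python with the python-jenkins library. The Jenkins URL, credentials, job name, and build parameter are placeholders.

```python
import jenkins  # python-jenkins package

server = jenkins.Jenkins(
    "https://jenkins.example.com",  # placeholder controller URL
    username="ci-bot",
    password="api-token",
)

# Kick off the build/test/deploy pipeline for the ticket-management service.
server.build_job("ticket-management-pipeline", {"GIT_BRANCH": "main"})

info = server.get_job_info("ticket-management-pipeline")
last_build = info.get("lastBuild")
print("last build:", last_build["number"] if last_build else "none yet")
```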