Company overview:
TraceLink's software solutions and Opus Platform help the pharmaceutical industry digitize its supply chain and enable greater compliance visibility and decision making. This reduces disruptions to the supply of medicines to patients who need them, anywhere in the world.
Founded in 2009 with the simple mission of protecting patients, TraceLink today has 8 offices, over 800 employees, and more than 1,300 customers in over 60 countries around the world. Our expanding product suite continues to protect patients and now also enhances multi-enterprise collaboration through innovative new applications such as MINT.
TraceLink is recognized as an industry leader by Gartner and IDC, and for having a great company culture by Comparably.
Role summary
We're seeking a motivated and passionate Site Reliability Engineering (SRE) leader with strong expertise in programming, distributed systems, and AWS infrastructure and services. In this role, you'll help evolve our SRE team's Kubernetes and service mesh architecture while also supporting the integration of AI workloads, both within Kubernetes and via managed services.
The SRE function plays a critical role in maintaining system visibility, ensuring platform scalability, and enhancing operational efficiency. As part of this, you'll help drive AIOps initiatives, leveraging AI tools and automation to proactively detect, diagnose, and remediate issues, enhancing the reliability and performance of TraceLink's global platform. As an SRE leader, you'll have the opportunity to apply your technical strengths, shape platform reliability strategies, and collaborate closely with engineering teams across the organization. You'll work as part of a globally distributed, inclusive team focused on AWS-based cloud infrastructure.
Key Responsibilities
SRE Leadership:
Guide a team of SREs through weekly sprint planning and execution, helping them stay focused on delivery and long-term goals.
Build a team environment centered on trust, ownership, and continuous learning.
Partner with engineers across Platform and Application product teams to ensure that what is pushed to production is stable, secure, and reliable.
Stay directly involved in technical work, contributing to the codebase and leading by example in solving complex infrastructure challenges.
Core SRE:
Collaborate with development teams, product owners, and stakeholders to define, enforce, and track SLOs and manage error budgets.
Improve system reliability by designing for failure, testing edge cases, and monitoring key metrics.
Boost performance by identifying bottlenecks, optimizing resource usage, and reducing latency across services.
Build scalable systems that handle growth in traffic or data without compromising performance.
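To illustrate the SLO and error-budget work described above, here is a minimal sketch of how a monthly error budget can be derived from an availability SLO. The function names and numbers are illustrative only, not TraceLink's actual tooling:

```python
# Minimal error-budget sketch: given an availability SLO and a time window,
# compute how much downtime the error budget allows and how much remains.
# All names and numbers are illustrative, not TraceLink's actual tooling.

def allowed_downtime_minutes(slo: float, window_minutes: int) -> float:
    """Total error budget for the window, in minutes."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_minutes: int,
                     downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = allowed_downtime_minutes(slo, window_minutes)
    return (budget - downtime_minutes) / budget

THIRTY_DAYS = 30 * 24 * 60  # 43,200 minutes

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
print(round(allowed_downtime_minutes(0.999, THIRTY_DAYS), 1))  # 43.2
# After 20 minutes of downtime, a bit over half the budget remains.
print(round(budget_remaining(0.999, THIRTY_DAYS, 20.0), 2))  # 0.54
```

In practice, teams often gate risky releases on the remaining budget: when it approaches zero, feature work pauses in favor of reliability work.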
AI Ops:
Design and implement scalable deployment strategies optimized for large language models such as LLaMA, Claude, Cohere, and others.
Set up continuous monitoring for model performance, ensuring robust alerting systems are in place to catch anomalies or degradation.
Stay current with advancements in MLOps and Generative AI, proactively introducing innovative practices to strengthen AI infrastructure and delivery.
Monitoring and Alerting:
Proactively identify and resolve issues by leveraging monitoring systems to catch early signals before they impact operations.
Design and maintain alerting mechanisms that are clear, actionable, and tuned to avoid unnecessary noise or alert fatigue.
Continuously improve system observability to enhance visibility, reduce false positives, and support faster incident response.
Apply best practices for alert thresholds and monitoring configurations to ensure reliability and maintain system health.
Incorporate agentic capabilities to monitor and proactively resolve system issues before they impact customers.
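Since the role lists Prometheus among its required tools, the alert-tuning responsibilities above are often expressed as Prometheus alerting rules like the following sketch. The metric names, thresholds, and labels are hypothetical, not TraceLink's actual configuration:

```yaml
# Illustrative Prometheus alerting rule: fire only when the error rate
# stays elevated for 10 minutes, reducing noise from brief spikes.
# Metric names, thresholds, and labels are hypothetical.
groups:
  - name: service-availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{code=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.01
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Error rate above 1% for 10 minutes"
```

The `for: 10m` clause is the main noise-reduction lever here: a transient spike that recovers within the window never pages anyone.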
Cost Management:
Monitor infrastructure usage to identify waste and reduce unnecessary spending.
Optimize resource allocation by using right-sized instances, auto-scaling, and spot instances where appropriate.
Implement cost-aware design practices during architecture and deployment planning.
Track and analyze monthly cloud costs to ensure alignment with budget and forecast.
Collaborate with teams to increase cost visibility and promote ownership of cloud spend.
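The budget-tracking responsibility above can be sketched in a few lines: compare month-to-date spend against a linear burn of the monthly budget. The numbers and function name are hypothetical:

```python
# Illustrative cost-tracking sketch: compare month-to-date cloud spend
# against a linear burn of the monthly budget. Numbers are hypothetical.

def budget_burn_ratio(spend_to_date: float, monthly_budget: float,
                      day_of_month: int, days_in_month: int) -> float:
    """Ratio of actual spend to expected spend at this point in the month.
    A value above 1.0 means spending faster than budgeted."""
    expected = monthly_budget * (day_of_month / days_in_month)
    return spend_to_date / expected

# $6,000 spent by day 10 of a 30-day month with a $15,000 budget:
# expected spend is $5,000, so we are burning 20% ahead of plan.
print(round(budget_burn_ratio(6000, 15000, 10, 30), 2))  # 1.2
```

A ratio like this is a common input to cost-visibility dashboards, flagging teams that are on pace to exceed their forecast before the month closes.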
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
7 years in SRE, DevOps, or cloud infrastructure; 3 years managing SRE/DevOps teams responsible for large-scale, highly available, microservice-based systems.
Deep knowledge of core operating system concepts, networking fundamentals, and systems management.
Strong understanding of cloud-native deployment and management practices, especially in AWS.
Strong expertise with AWS services from both a technical and a cost-optimization perspective.
Hands-on experience with Terraform/OpenTofu, Helm, Docker, Kubernetes, Prometheus, and Istio.
Proficiency in diagnosing and resolving container performance issues using modern tools and techniques.
Hands-on experience with MLOps tools (Kubeflow, MLflow, SageMaker, Vertex AI, or equivalent).
Familiarity with ML concepts: model lifecycle, feature stores, drift detection, and monitoring.
Experience deploying, monitoring, and scaling AI/ML models, including LLM-based and agentic AI applications, in production.
Skilled in modern DevOps/SRE practices, including CI/CD and build-and-release pipelines.
Experience with mature development processes, including source control, security best practices, and automated deployment.
Excellent written and verbal communication skills.
Strong analytical and problem-solving abilities with a bias for proactive issue identification and resolution.
Preferred Qualifications:
Experience managing large-scale ML inference workloads including LLM and agentic AI in production.
Knowledge of distributed training frameworks (TensorFlow, PyTorch).
Hands-on development experience in Python and/or Golang.
Experience managing SRE teams for 24/7 follow-the-sun operations.
Familiarity with service mesh patterns beyond Istio (e.g., Linkerd, Consul).
Experience managing GPU-enabled infrastructure and optimizing model-serving performance.
Background in designing or implementing disaster recovery and business continuity plans.
Prior experience in a regulated or compliance-heavy industry (e.g., healthcare, finance, life sciences).
Please see the TraceLink Privacy Policy for more information on how TraceLink processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise your privacy rights. If you have questions about this privacy notice or need to contact us in connection with your personal data, including any requests to exercise your legal rights referred to at the end of this notice, please contact .
Required Experience:
Senior Manager
TraceLink is the only network creation platform company that builds integrated business ecosystems with multi-enterprise applications - the true foundation for digitalization - delivering customer-centric agility and resiliency for end-to-end supply networks and leveraging the collecti ...