Job Description
We aspire to be the premier research-intensive biopharmaceutical company. We're at the forefront of research to deliver innovative health solutions that advance the prevention and treatment of diseases in people and animals. As a Senior Data Observability Engineer in the Central Data & Analytics Office, you will help shape the foundational capabilities that make enterprise data reliable, measurable, and scalable. Embedded in Core Data & Engineering and working with a global team, you will design and operate observability and optimization capabilities used by product and delivery teams across platforms. Your work enables faster detection of issues, better performance and cost visibility, and continuous improvement across data pipelines and products.
Responsibilities
- Build, test, deploy, and operate core data observability capabilities.
- Participate in all phases of the software development lifecycle for the data observability solution.
- Define and implement metrics, logs, alerts, and signals that make data workloads observable, reliable, and secure.
- Specify platform requirements, standards, and telemetry for cloud, CI/CD, and runtime environments to ensure reliable, secure, and cost-efficient operation of data products.
- Provide L3 production support and act as an engineering subject matter expert to support adoption and troubleshooting.
- Develop engineering guides; document engineering designs, best practices, and runbooks.
- Work within global Agile/Scrum teams; participate in planning, sprint ceremonies, and cross-functional reviews.
- Evaluate and validate new COTS products and features within the CDAO ecosystem.
- Work with COTS product vendors on their solutions, enhancements, and integration into the CDAO ecosystem.
- Define and automate scalable onboarding patterns for new data connection types within the data observability platform.
- Build, maintain, and improve Infrastructure-as-Code modules, GitOps flows, and CI/CD pipelines for repeatable, auditable deployments across environments.
- Prepare for operation and deploy serverless components (Lambda, EventBridge, Kinesis), object/data storage (S3), Glue crawlers, and other ETL/metadata components.
- Implement and deploy components for platform health, reliability, and performance monitoring; build dashboards, alerts, and runbooks that enable iteration on platform service SLOs/SLIs.
- Optimize infrastructure costs and performance through rightsizing, autoscaling, savings plans/commitments, and architecture improvements.
Qualifications
Required
- Proven experience as an AWS platform/infrastructure engineer supporting data workloads.
- Strong Infrastructure-as-Code and GitOps skills: Terraform, Flux, Helm, GitHub (repos and Actions).
- Hands-on experience designing and enforcing IAM roles/policies, VPC/subnet design, security groups, and network ACLs.
- Practical experience with Kubernetes on AWS (including Fargate), container deployments, RBAC, and Helm charts.
- Experience with serverless patterns and services: Lambda, EventBridge, Kinesis.
- Familiarity with S3 and AWS Glue (including Glue crawlers) and how they support data pipelines.
- Experience with infrastructure monitoring and observability (metrics, logs, tracing) and building dashboards/alerts.
- Demonstrated experience optimizing cloud costs and performance at the infrastructure layer.
- Solid skills in Python and SQL for automation, tooling, and supporting data teams.
- Comfortable working in Agile/Scrum environments and collaborating with cross-functional global teams.
Preferred
- BSc in IT, Engineering, Computer Science, or a related field.
- Experience with data observability tooling or frameworks (a strong advantage).
- Hands-on Apache Airflow experience (deployment, DAG troubleshooting, scaling, metadata understanding).
- Knowledge of the Grafana Labs stack (Grafana, Loki, Tempo, Agent) or similar observability ecosystems.
- Experience implementing policy-as-code, security automation, or compliance guardrails.
- Familiarity with SRE practices (SLOs/SLIs, incident response) and platform reliability engineering.
What we offer:
- Exciting work in a great team, global projects, an international environment
- Opportunity to learn and grow professionally within the company globally
- Hybrid working model, flexible role pattern
- Competitive salary & incentive pay
- Pension and health insurance contributions
- Internal reward system and referral scheme
- 5 weeks of annual leave, 5 sick days, 15 days of certified sick leave paid above statutory requirements annually, 40 paid hours annually for volunteering activities, 12 weeks of parental contribution
- Cafeteria for tax-free benefits of your choice (meal vouchers, Lítačka, sport, culture, health, travel, etc.), Multisport Card
- Vodafone, Raiffeisen Bank, Foodora, and other discount programmes
- Up-to-date laptop and iPhone
- Parking in the garage, showers, refreshments, massage chairs, library, music corner
Ready to take up the challenge? Apply now! Know anybody who might be interested? Refer this job!
Required Skills:
Data Engineering, Data Infrastructure, Data Lifecycle, Data Modeling, Data Science, Data Visualization, Data Warehouse Development, Design Applications, Engineering Processes, On-Time Deliveries, Security Analytics, Senior Program Management, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, System Integration, Systems Engineering, Testing
Preferred Skills:
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives, Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position-specific. Please, no phone calls or emails.
Employee Status:
Regular
Relocation:
No relocation
VISA Sponsorship:
No
Travel Requirements:
No Travel Required
Flexible Work Arrangements:
Hybrid
Shift:
1st - Day
Valid Driving License:
No
Hazardous Material(s):
n/a
Job Posting End Date:
04/3/2026
*A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Required Experience:
Senior IC