Salary Not Disclosed
1 Vacancy
About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize, monetize, and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain the adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable a high return on investment.
Infinitive has been named one of the Best Small Firms to Work For by Consulting Magazine seven times, most recently in 2024. Infinitive has also been named a Washington Post Top Workplace, a Washington Business Journal Best Place to Work, and a Virginia Business Best Place to Work.
We are seeking a highly skilled DevOps and Data Engineer to design, deploy, and optimize infrastructure and data pipelines while implementing best-in-class observability practices using New Relic. This role is ideal for someone with a strong background in CI/CD, containerization, cloud infrastructure, and data engineering workflows who can also ensure platform reliability and performance through monitoring and alerting solutions.
DevOps & Infrastructure:
Design and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI.
Manage cloud infrastructure (AWS, Azure, or GCP) using infrastructure-as-code tools like Terraform or CloudFormation.
Automate deployment, scaling, and monitoring of containerized applications (Docker, Kubernetes), as sketched below.
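As a rough illustration of the deployment automation described above, the following Python sketch builds and pushes a container image and then rolls it out to a Kubernetes Deployment. The registry path, Deployment name, and namespace are illustrative assumptions, not details of any specific environment.

    """Minimal deploy helper: build, push, and roll out a container image."""
    import subprocess
    import sys

    IMAGE = "registry.example.com/team/service:{tag}"  # hypothetical registry/repository
    DEPLOYMENT = "service"                             # hypothetical Deployment name
    NAMESPACE = "prod"                                 # hypothetical namespace

    def run(cmd: list[str]) -> None:
        """Run a shell command, echoing it and failing fast on errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def deploy(tag: str) -> None:
        image = IMAGE.format(tag=tag)
        run(["docker", "build", "-t", image, "."])
        run(["docker", "push", image])
        # Update the Deployment's container image and wait for the rollout to complete.
        run(["kubectl", "-n", NAMESPACE, "set", "image",
             f"deployment/{DEPLOYMENT}", f"{DEPLOYMENT}={image}"])
        run(["kubectl", "-n", NAMESPACE, "rollout", "status", f"deployment/{DEPLOYMENT}"])

    if __name__ == "__main__":
        deploy(sys.argv[1] if len(sys.argv) > 1 else "latest")

In practice, logic like this would typically run inside a Jenkins, GitHub Actions, or GitLab CI job rather than by hand.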
Data Engineering:
Build and maintain data ingestion pipelines using Python, PySpark, or Spark (see the sketch after this list).
Integrate data pipelines with cloud data lakes (e.g., S3, GCS) and warehouses (e.g., Snowflake, Redshift, BigQuery).
Ensure data quality, transformation, and scheduling through orchestration tools (e.g., Airflow, Cloud Composer).
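To make the pipeline work concrete, here is a minimal PySpark sketch that reads raw JSON from a cloud data lake, applies basic cleansing, and writes partitioned Parquet back out. The bucket paths and column names (order_id, order_ts, amount) are purely illustrative assumptions.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

    # Read raw landing-zone data (path is hypothetical).
    raw = spark.read.json("s3a://example-raw-bucket/orders/2024-01-01/")

    # Basic quality steps: de-duplicate, cast timestamps, drop invalid rows.
    cleaned = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Write curated, partitioned output for downstream warehouse loads (path is hypothetical).
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3a://example-curated-bucket/orders/"
    )

    spark.stop()

A job like this would normally be scheduled and monitored through an orchestrator such as Airflow or Cloud Composer, as noted above.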
Observability & Monitoring:
Implement and manage observability strategies using New Relic for application performance monitoring, logs, metrics, and alerts (a brief instrumentation sketch follows this list).
Define and track SLAs, SLOs, and error budgets to uphold platform reliability.
Provide actionable insights by analyzing telemetry data from distributed systems.
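For the observability side, the sketch below uses the New Relic Python agent to record a custom metric and a custom event from a background data job. The config file path, task name, and metric names are assumptions chosen for illustration.

    import newrelic.agent

    # Initialize the agent from a local config file (path is an assumption).
    newrelic.agent.initialize("newrelic.ini")
    application = newrelic.agent.register_application(timeout=10.0)

    @newrelic.agent.background_task(name="nightly_pipeline_run", group="DataPipelines")
    def run_pipeline():
        rows_loaded = 12345  # placeholder; a real value would come from the pipeline
        # Custom metric that can back NRQL dashboards and alert conditions.
        newrelic.agent.record_custom_metric("Custom/Pipeline/RowsLoaded", rows_loaded)
        # Custom event for richer querying alongside logs and traces.
        newrelic.agent.record_custom_event("PipelineRun", {"rows_loaded": rows_loaded, "status": "success"})

    if __name__ == "__main__":
        run_pipeline()
        newrelic.agent.shutdown_agent(timeout=10.0)

Telemetry recorded this way can then feed the SLO tracking and alerting described in these responsibilities.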
Collaboration & Operations:
Work closely with software, data, and product teams to ensure smooth integration and high system performance.
Participate in incident response, root cause analysis, and postmortem documentation.
Drive process improvements for deployment automation, cost optimization, and system resilience.
Qualifications:
5 years of experience in DevOps, Site Reliability, or Data Engineering roles.
Hands-on experience with New Relic for observability and incident diagnostics.
Proficient in cloud platforms (AWS, GCP, or Azure), especially for infrastructure and data pipeline setup.
Strong coding/scripting skills in Python, Bash, or Go.
Experience with CI/CD pipelines, Docker, Kubernetes, and IaC tools like Terraform.
Experience with data processing frameworks such as Spark, PySpark, or Kafka.
Familiarity with log aggregation tools and practices (e.g., ELK Stack, Fluentd).
Experience with New Relic integrations for Kubernetes, serverless, and custom instrumentation.
Prior exposure to Databricks, Airflow, or similar platforms.
Knowledge of security best practices and compliance standards (SOC 2, HIPAA, etc.).
Experience with cost monitoring and optimization for observability and cloud spend.
Certifications:
AWS Certified DevOps Engineer or GCP Professional Data Engineer
New Relic Certified Performance Pro or equivalent
Required Experience:
Senior IC
Full-Time