Long Description
About Capgemini
Capgemini is a global business and technology transformation partner helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society.
- Global Presence: 340,000 team members in more than 50 countries
- Heritage: Over 55 years of trusted expertise
- Services: End-to-end solutions from strategy and design to engineering
- Core Capabilities: AI, Generative AI, Cloud, Data, and deep industry expertise
- 2024 Global Revenue: €22.1 billion
Position: Data Streaming Engineer
As a Data Streaming Engineer, you will build and maintain robust, scalable, Kubernetes-based event streaming platforms. You will ensure reliability and efficiency in production and development environments, enabling smooth deployments and accurate event reporting.
Key Responsibilities
1. Kubernetes Infrastructure Management
- Design, provision, and maintain Kubernetes clusters across cloud, on-premises, and hybrid environments
- Monitor cluster performance and optimize resource utilization
- Troubleshoot issues and implement solutions to ensure high availability and stability
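The monitoring duty above can be sketched in a few lines. This is a hypothetical illustration, not Capgemini's tooling: given `kubectl top nodes`-style utilization figures, it flags nodes that exceed CPU or memory thresholds. The node names and threshold values are assumptions chosen for the example.

```python
def flag_hot_nodes(rows, cpu_limit=80, mem_limit=85):
    """Return names of nodes whose CPU or memory utilization (%) exceeds a limit.

    rows: list of (node_name, cpu_percent, mem_percent) tuples, as one might
    collect from the Kubernetes metrics API or `kubectl top nodes`.
    """
    return [name for name, cpu, mem in rows if cpu > cpu_limit or mem > mem_limit]

usage = [
    ("worker-1", 42, 61),
    ("worker-2", 91, 70),  # CPU-bound
    ("worker-3", 55, 93),  # memory-bound
]
print(flag_hot_nodes(usage))  # → ['worker-2', 'worker-3']
```

In practice such checks are usually delegated to Prometheus alerting rules rather than ad-hoc scripts; the sketch only shows the shape of the utilization logic.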
2. Splunk Operations & Engineering
- Manage Splunk infrastructure (indexers, search heads)
- Troubleshoot ingestion issues, latency, and indexing delays
- Develop and maintain SPL queries, saved searches, alerts, and dashboards
- Perform capacity planning, performance tuning, and upgrade planning
- Implement data onboarding strategies and field extractions for new log sources
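Field extraction, the last duty above, boils down to regex-based parsing of raw log lines into named fields, which Splunk configures via props/transforms rules. As a hedged illustration only, here is the same idea expressed in Python; the log format and field names are assumptions, not any specific log source's schema.

```python
import re

# Hypothetical key=value log format; real extractions are defined in Splunk
# props.conf/transforms.conf, but the regex logic is the same.
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<level>[A-Z]+) service=(?P<service>\S+) latency_ms=(?P<latency_ms>\d+)"
)

def extract_fields(line):
    """Return a dict of extracted fields, or an empty dict on no match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else {}

line = "2024-05-01T12:00:00Z WARN service=checkout latency_ms=742"
print(extract_fields(line))
# → {'ts': '2024-05-01T12:00:00Z', 'level': 'WARN', 'service': 'checkout', 'latency_ms': '742'}
```

Named capture groups map directly to Splunk field names, which is why onboarding a new log source largely consists of writing and testing extractions like this one.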
3. Cribl Pipeline Management
- Experience with Cribl (preferred)
- Design, build, and optimize Cribl pipelines for log routing, transformation, filtering, and enrichment
- Integrate Cribl with various data sources and destinations
- Automate pipeline deployments and configuration using CI/CD and GitOps practices
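The routing, filtering, and enrichment duties above can be sketched as plain functions. To be clear, this is not Cribl's configuration syntax (Cribl pipelines are defined as routes and functions in its own UI/YAML); it is a minimal Python illustration of the same event-flow logic, with hypothetical destination names and fields.

```python
def run_pipeline(events):
    """Filter, enrich, and route events to one of two hypothetical destinations."""
    out = {"splunk": [], "archive": []}
    for evt in events:
        if evt.get("level") == "DEBUG":   # filter: drop noisy events at the edge
            continue
        evt = {**evt, "env": "prod"}      # enrich: stamp a static metadata field
        # route: high-severity events to the analytics tier, the rest to cheap storage
        dest = "splunk" if evt["level"] in ("ERROR", "WARN") else "archive"
        out[dest].append(evt)
    return out

events = [
    {"level": "DEBUG", "msg": "heartbeat"},
    {"level": "ERROR", "msg": "payment failed"},
    {"level": "INFO",  "msg": "user login"},
]
routed = run_pipeline(events)
print(len(routed["splunk"]), len(routed["archive"]))  # → 1 1
```

Dropping and rerouting events before they reach the indexers is the main cost lever such pipelines offer, which is why filter-then-route ordering matters.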
4. Production Support
- Participate in incident response and resolution to minimize downtime
- Continuously analyze and improve the performance, reliability, and security of production environments
Qualifications
- Bachelor's degree in Engineering, Computer Science, IT, or a related field
- Minimum 5 years of professional experience
- Excellent English communication skills (verbal and written)
Technical Expertise:
- Kubernetes: Deep understanding of architecture, concepts, and best practices; hands-on experience managing clusters at scale
- Splunk or ELK: Experience with log data platforms, data organization, and analysis
- GitLab: Familiarity with CI/CD features and pipeline scripting
- Cloud Platforms: Experience with AWS, Azure, or GCP
- Troubleshooting: Strong problem-solving and root cause analysis skills
Soft Skills:
- Strong collaboration and communication skills
- Ability to work effectively with cross-functional teams