Total Number of Openings: 1
Chevron is accepting online applications for the position of Data Engineer - Enterprise AI through January 20, 2026, at 11:59 p.m. (Central Time).
Join Chevron's Enterprise AI team to build the next generation of intelligent data solutions that power advanced analytics and AI-driven decision-making. As a Data Engineer, you will apply software engineering principles to design and implement scalable, high-performance data and AI solutions that enable agentic AI systems, machine learning, predictive modeling, and real-time insights across global operations. This role is ideal for engineers passionate about modern data architectures, cloud-native technologies, and applying AI principles to enterprise-scale challenges.
You will deploy and maintain fully automated data transformation pipelines that integrate diverse storage and computation technologies to handle a wide range of data types and volumes. A successful Data Engineer designs data products and pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective, ensuring our data ecosystem is future-proof in a rapidly changing world of AI.
Key Responsibilities:
- Architect and Optimize Data Pipelines:
Design, develop, and maintain robust ETL/ELT pipelines leveraging Databricks (including Databricks Genie for AI-assisted development), Azure Data Factory, Azure Synapse, and Azure Fabric. Architect solutions with a holistic AI foundation, ensuring pipelines and frameworks are built to support agentic AI systems, machine learning, and generative AI at scale.
- Enable AI-Ready Data:
Build modular, reusable data assets and products optimized for AI workloads, ensuring data quality, lineage, governance, and interoperability across multiple AI applications.
- Collaborate Across Disciplines:
Partner with AI delivery teams, including software engineers, AI engineers, and applied scientists, to deliver AI-ready datasets and features that accelerate model development and deployment.
- Performance and Scalability:
Optimize pipelines for big data processing using Spark, Delta Lake, and Databricks-native capabilities, ensuring scalability and reliability for enterprise-scale AI workloads.
- Cloud-Native Engineering:
Implement best practices for CI/CD, infrastructure-as-code, and DevOps using Azure DevOps, Git, and Ansible, while integrating with Databricks workflows for seamless deployment and reuse.
- Innovation and Continuous Learning:
Stay ahead of emerging technologies in data engineering, AI/ML, and cloud ecosystems, leveraging AI tools like Databricks Genie and Agent Bricks, as well as emerging tools and services within Azure AI Foundry and Fabric, to accelerate development and maintain cutting-edge, reusable solutions.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrated high proficiency in programming fundamentals.
- At least 5 years of proven experience as a Data Engineer or in a similar role working with data and ETL processes on cloud-based data platforms (Azure preferred).
- Strong hands-on experience with Databricks (Lakehouse, Delta Lake, Unity Catalog) and Microsoft Azure services (Azure Data Factory, Azure Synapse, Azure Blob Storage, and Azure Data Lake Storage Gen2).
- Strong understanding of data modeling, data governance, and software engineering principles, and how they apply to data engineering (e.g., CI/CD, version control, testing).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Demonstrated learning agility in emerging data and AI tools and services.
- Experience integrating AI/ML pipelines or feature engineering workflows into data platforms.
- Strong experience in Python is preferred, but experience in other languages such as Scala, Java, or C# is also accepted.
- Experience building Spark applications using PySpark.
- Experience with file formats such as Parquet, Delta, and Avro.
- Experience efficiently querying API endpoints as a data source.
- Understanding of the Azure environment and related services, such as subscriptions and resource groups.
- Understanding of Git workflows in software development.
- Experience using Azure DevOps pipelines and repositories to deploy and maintain solutions.
- Understanding of Ansible and how to use it in Azure DevOps pipelines.
Relocation Options:
Relocation may be considered.
International Considerations:
Expatriate assignments will not be considered.
Chevron regrets that it is unable to sponsor employment visas or consider individuals on time-limited visa status for this position.
U.S. Regulatory notice:
Chevron is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religious creed, sex (including pregnancy), sexual orientation, gender identity, gender expression, national origin or ancestry, age, mental or physical disability, medical condition, reproductive health decision-making, military or veteran status, political preference, marital status, citizenship, genetic information, or other characteristics protected by applicable law.
We are committed to providing reasonable accommodations for qualified individuals with disabilities. If you need assistance or an accommodation, please email us at .
Chevron participates in E-Verify in certain locations as required by law.