
Middle Data Engineer

Employer Active

1 Vacancy

Job Location

Houston - USA

Monthly Salary

Salary Not Disclosed

Vacancy

1 Vacancy

Job Description

Req ID: 2591593

AgileEngine is a top-ranking provider of software solutions to Fortune 500, Global 500, and Future 50 companies. Listed on the Inc. 5000 among the fastest-growing US companies, we are always open to talented software, UX, and data experts in the Americas, Europe, and Asia.


If you like a challenging environment where you're working with the best and are encouraged to learn and experiment daily, there's no better place, guaranteed! :)


What you will do


  • Lift and shift ETL pipelines from legacy to new environments (a sketch of such a pipeline follows this list);

  • Monitor data pipelines, identify bottlenecks, and optimize data processing and storage for performance and cost-effectiveness;

  • Analyze sources and build cloud data warehouse and data lake solutions;

  • Collaborate effectively with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders.
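
To make the pipeline work above more concrete, here is a minimal, purely illustrative sketch of the kind of ETL DAG this role would build and monitor. It assumes Apache Airflow 2.x (for example, on Amazon MWAA); the DAG name, task names, and placeholder callables are hypothetical and not taken from this posting.

# Illustrative only: a minimal daily extract/transform/load DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw data from a source system (e.g. Amazon RDS or S3).
    print("extracting source data")


def transform(**context):
    # Placeholder: clean and reshape the extracted data (e.g. with Pandas or Spark).
    print("transforming data")


def load(**context):
    # Placeholder: load the result into the warehouse (e.g. Amazon Redshift).
    print("loading into the warehouse")


with DAG(
    dag_id="example_etl_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in order: extract, then transform, then load.
    extract_task >> transform_task >> load_task

In a real pipeline, the placeholder callables would typically be replaced with operators for the AWS services listed under the must-haves (for example, Glue jobs, EMR steps, or Redshift loads).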


Must-haves


  • 3+ years of professional experience in a Data Engineering role;

  • Proficiency in programming languages commonly used in data engineering, such as Python, SQL, and optionally Scala, for working with data processing frameworks like Spark and libraries like Pandas (see the PySpark sketch after this list);

  • Proficiency in designing, deploying, and managing data pipelines using Apache Airflow for workflow orchestration and scheduling;

  • Ability to design, develop, and optimize ETL processes to move and transform data from various sources into the data warehouse, ensuring data quality, reliability, and efficiency;

  • Knowledge of big data technologies and frameworks, such as Apache Spark, for processing large volumes of data efficiently;

  • Extensive hands-on experience with various AWS services relevant to data engineering, including but not limited to Amazon MWAA, Amazon S3, Amazon RDS, Amazon EMR, AWS Lambda, AWS Glue, Amazon Redshift, AWS Data Pipeline, and Amazon DynamoDB;

  • Deep understanding and practical experience in building and optimizing cloud data warehousing solutions;

  • Ability to monitor data pipelines, identify bottlenecks, and optimize data processing and storage for performance and cost-effectiveness;

  • Excellent communication skills to collaborate effectively with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders;

  • Bachelor's degree in computer science/engineering or another technical field, or equivalent experience.
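
As a purely illustrative complement to the Spark and Python requirements above, the sketch below shows a small PySpark aggregation job. The bucket paths and column names are hypothetical placeholders, not details from this posting.

# Illustrative only: a small PySpark aggregation of the kind the Spark requirement refers to.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw events from a data lake location (a hypothetical S3 prefix).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Filter out incomplete records and aggregate amounts per day.
daily_totals = (
    events.where(F.col("amount").isNotNull())
    .groupBy(F.to_date("event_ts").alias("event_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the result back in a warehouse-friendly layout.
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")

A job like this would typically run on Amazon EMR or AWS Glue and feed a cloud data warehouse such as Amazon Redshift.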


Nice-to-haves


  • Familiarity with the fintech industry: understanding of financial data, regulatory requirements, and business processes specific to the domain;

  • Documentation skills to document data pipelines, architecture designs, and best practices for knowledge sharing and future reference;

  • GCP services relevant to data engineering;

  • Snowflake;

  • OpenSearch / Elasticsearch;

  • Jupyter for analyzing data;

  • Bitbucket and Bamboo;

  • Terraform.

The benefits of joining us

  • Professional growth

Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.


  • Competitive compensation

We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.


  • A selection of exciting projects

Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands.


  • Flextime

Tailor your schedule for an optimal work-life balance, with the option of working from home or going to the office, whatever makes you the happiest and most productive.




Employment Type

Full Time

Disclaimer: Drjobpro.com is only a platform that connects job seekers and employers. Applicants are advised to conduct their own independent research into the credentials of the prospective employer. We always make certain that our clients do not endorse any request for money payments, so we advise against sharing any personal or bank-related information with any third party. If you suspect fraud or malpractice, please contact us via the contact us page.