This is a remote position.
We are seeking a Senior Data Engineer (ETL/ELT) to join our team. You will be responsible for designing, building, and maintaining scalable data infrastructure and pipelines. You will collaborate with cross-functional teams to ensure the availability, reliability, and efficiency of data systems, enabling data-driven decision-making across the organization.
Responsibilities
- Design, develop, and maintain robust ETL/ELT pipelines to process and transform large datasets efficiently.
- Optimize data architecture and storage solutions to support analytics, machine learning, and business intelligence.
- Work with cloud platforms (AWS) to implement scalable data solutions.
- Ensure data quality, integrity, and security across all data pipelines.
- Collaborate with data scientists, analysts, and software engineers to support data-driven initiatives.
- Monitor and troubleshoot data workflows to ensure system performance and reliability.
- Create APIs to provide analytical information to our clients.
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong proficiency in SQL and database technologies (e.g., PostgreSQL, MySQL, Snowflake, BigQuery).
- Experience with data pipeline orchestration tools (e.g., Apache Airflow, Prefect, Dagster).
- Proficiency in programming languages such as Python and Scala.
- Hands-on experience with AWS cloud data services.
- Familiarity with big data processing frameworks like Apache Spark.
- Knowledge of data modeling, warehousing concepts, and distributed computing.
- Experience implementing CI/CD for data pipelines.
- Real-time data processing and streaming architectures (RisingWave, Kafka, Flink).
- Database performance tuning and query optimization.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- ETL/ELT pipeline development and automation.
- Cloud computing and infrastructure management on AWS (nice to have).
Benefits
- Work Location: Remote
- 5-day work week