Help shape a modern cloud-native data platform from the ground up.
We're building a next-generation AWS-based Lakehouse and are looking for a hands-on Senior Data Engineer who thrives at the intersection of architecture and implementation. In this role, you'll take high-level designs and turn them into production-ready ingestion pipelines, Iceberg tables, and data marts that power analytics and downstream data consumers at scale. This project is for a large government agency in the Washington, DC metropolitan area. The position is hybrid, and candidates must be available to work onsite as needed.
You'll work closely with our Principal Data Architect, playing a critical role in translating architectural vision into reliable, performant systems. If you enjoy solving complex data problems, working with modern open table formats, and building platforms that handle large-scale, real-world data, this role is for you.
What You'll Do
- Design and build scalable ETL/ELT pipelines from Oracle and other source systems into an AWS Lakehouse.
- Implement row-level updates using Apache Iceberg MERGE and UPDATE patterns.
- Own the lifecycle of Iceberg tables, including partitioning, schema evolution, compaction, and snapshot management.
- Develop batch and incremental ingestion workflows, including full extracts and CDC-based pipelines.
- Create and maintain processing and data-mart layers that support editing, imputation, and data dissemination.
- Optimize query and catalog performance across the Glue Catalog, Athena, EMR, and Spark.
- Ensure strong data quality, lineage, and governance across the platform.
- Collaborate closely with the Principal Data Architect to operationalize designs and continuously improve the platform.
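The row-level update and CDC duties above boil down to one core pattern: applying a keyed change feed to a table as upserts and deletes, which is what Iceberg's MERGE INTO expresses declaratively in Spark SQL. The sketch below illustrates that merge semantics in plain Python; it is illustrative only, and the change-record shape (`"op"`, `"id"`, `"row"`) is an assumption, not part of this posting.

```python
# Minimal sketch of the upsert/delete semantics that an Iceberg
# MERGE INTO statement applies to a keyed table. Illustrative only:
# the change-record shape ("op", "id", "row") is an assumption.

def apply_cdc(table, changes):
    """Apply a CDC change feed to a table keyed by 'id'.

    table   -- dict mapping id -> row dict (current table state)
    changes -- iterable of records like
               {"op": "upsert", "id": 1, "row": {...}} or
               {"op": "delete", "id": 1}
    Returns the new table state; the input dict is not mutated.
    """
    result = dict(table)
    for change in changes:
        if change["op"] == "upsert":
            # WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT
            result[change["id"]] = change["row"]
        elif change["op"] == "delete":
            # WHEN MATCHED ... THEN DELETE
            result.pop(change["id"], None)
        else:
            raise ValueError(f"unknown op: {change['op']}")
    return result
```

In the Lakehouse itself the same effect is achieved declaratively, e.g. `MERGE INTO target t USING updates u ON t.id = u.id ...` against an Iceberg table in Spark SQL, with the engine handling the row-level file rewrites.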
What We're Looking For
- 3–7 years of hands-on data engineering experience.
- Strong experience building on AWS, including S3, Glue, EMR, Athena, Lambda, and Step Functions.
- Deep practical experience with Apache Iceberg, including:
  - Partitioning, compaction, and schema evolution
  - Row-level operations (MERGE INTO, updates, deletes)
  - Snapshot and table version management
- Advanced SQL skills and strong experience with Spark (PySpark or Scala).
- Must be AWS Certified.
- Proven ability to build and operate pipelines for large-scale (multi-TB) datasets.
- Solid understanding of batch, incremental, and CDC ingestion patterns.
- Experience implementing data quality checks and governance best practices.
- Strong communication skills; able to work well in teams as well as independently.
Nice to Have
- Experience migrating from Oracle or other RDBMS platforms to cloud-native data architectures.
- Exposure to other Lakehouse formats such as Delta Lake or Apache Hudi.
- Familiarity with AI/ML-assisted data cleaning or imputation techniques.
- Experience working with government systems and architectures.
Why This Role
- Build a modern Lakehouse platform using today's best-in-class open technologies.
- Work closely with senior technical leadership and have real influence on design and implementation.
- Solve meaningful data engineering challenges at scale.
- Opportunity to grow as a technical leader while remaining deeply hands-on.
Synectics is an Equal Opportunity Employer.
Required Experience:
Senior IC