Data Engineer (Financial Services Enterprise Data Platform)
Location: Onsite / Hybrid / Remote
Employment Type: Full-Time (W2 Only)
No Corp-to-Corp (C2C)
About the Role
We are seeking a Data Engineer with strong experience in Oracle and Hadoop ecosystems to support large-scale data platforms within a financial services environment. This role focuses on building and optimizing high-performance data pipelines that enable data integration, analytics, and regulatory reporting across enterprise systems.
You will work in a highly regulated environment, ensuring data solutions meet security, governance, and compliance standards.
Key Responsibilities
- Design, develop, and maintain data pipelines using Oracle and Hadoop technologies
- Build and optimize ETL/ELT processes for large-scale structured and unstructured data
- Work with Hadoop ecosystem tools (Hive, Spark, HDFS, Sqoop, etc.)
- Develop and tune SQL/PL/SQL queries and stored procedures, with a focus on performance
- Integrate data from multiple sources, including transactional systems, data warehouses, and external feeds
- Ensure data quality, consistency, and governance across platforms
- Support data migration, transformation, and data modeling activities
- Collaborate with business and analytics teams to deliver data-driven solutions
- Ensure compliance with financial regulations, audit requirements, and data security standards
Required Skills
- 5 years of experience in data engineering / big data development
- Strong expertise in Oracle (SQL, PL/SQL, performance tuning)
- Hands-on experience with the Hadoop ecosystem (Hive, Spark, HDFS, MapReduce)
- Experience building ETL/ELT pipelines and data integration workflows
- Strong knowledge of data modeling and data warehousing concepts
- Experience working in enterprise environments (financial services preferred)
- Understanding of data governance, security, and compliance requirements
Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, GCP)
- Experience with data streaming tools (Kafka, Flink)
- Familiarity with data lake / lakehouse architectures
- Experience with Informatica, Talend, or similar ETL tools
- Exposure to regulatory reporting and financial datasets
Interested Candidates
Please share your updated resume along with:
- Full Name
- Contact Number
- Email ID
- Current Location
- Work Authorization
- Total Years of Experience
- Availability to Start
- Expected Salary