Job Title: Data Engineer (12 Years Experience)
Location: Remote / Any Location within the USA
Job Summary
We are seeking a highly experienced Data Engineer with 12 years of experience developing scalable big data solutions, optimizing distributed data processing, and building analytical reporting dashboards. The ideal candidate will have strong expertise in Apache Spark, Scala, SQL, and Power BI, along with hands-on experience with cloud data platforms.
Key Responsibilities
- Design, develop, and maintain scalable big data pipelines and applications using Apache Spark and Scala.
- Write complex SQL queries for data transformation, analytics, and reporting use cases.
- Develop, enhance, and optimize Power BI dashboards and visual reports for business users.
- Collaborate with cross-functional teams, including analytics, business, and engineering, to integrate data pipelines and reporting systems.
- Ensure data quality, accuracy, security, and compliance across multiple environments.
- Optimize distributed data processing workflows and support performance improvements.
- Troubleshoot system and pipeline failures, resolving performance bottlenecks.
- Contribute to data governance, documentation, and best practices for data engineering solutions.
Required Qualifications
- 12 years of experience as a Data Engineer or in a similar role.
- Strong hands-on proficiency with Apache Spark and Scala.
- Advanced SQL experience for data manipulation and analysis.
- Experience building and maintaining Power BI dashboards and reports.
- Solid understanding of distributed computing technologies (Hadoop, Hive, etc.).
- Proven ability to manage large-scale datasets and optimize ETL/ELT workflows.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- Strong analytical, problem-solving, and communication skills.
Preferred / Highly Desirable Skills
- Expertise in Spark performance tuning and optimization techniques.
- Knowledge of cluster resource management and resolving performance issues.
- Experience with Azure cloud services (Azure Databricks, Azure Data Lake, Synapse Analytics).
- Exposure to AWS or GCP cloud ecosystems.
- Experience working in the financial domain or with ERP systems (SAP, Oracle ERP).
- Understanding of compliance and regulatory frameworks for financial data processing.