Lead Data Engineer – Palantir & PySpark


Job Location:

Jersey, NJ - USA

Monthly Salary: Not Disclosed
Posted on: 14 days ago
Vacancies: 1 Vacancy

Job Summary

Job Title: Lead Data Engineer – Palantir & PySpark

Location: Remote

Job Summary:

We are seeking a highly skilled Data Engineer with hands-on experience in Palantir (Foundry preferred) and PySpark, and exposure to reinsurance or insurance data environments. The ideal candidate will play a key role in building scalable data pipelines, optimizing ETL workflows, and enabling advanced analytics and reporting capabilities. This role requires a strong technical foundation in data engineering combined with an understanding of the reinsurance business domain.

 

Key Responsibilities:

  • Design, develop, and maintain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry.
  • Collaborate with data architects, business analysts, and actuarial teams to understand reinsurance data models and transform complex datasets into usable formats.
  • Build and optimize data ingestion, transformation, and validation processes to support analytical and reporting use cases.
  • Work within the Palantir Foundry platform to design robust workflows, manage datasets, and ensure efficient data lineage and governance.
  • Ensure data security, compliance, and governance in line with industry and client standards.
  • Identify opportunities for automation and process improvement across data systems and integrations.

 

Required Skills & Qualifications:

  • 6–10 years of overall experience in data engineering roles.
  • Strong hands-on expertise in PySpark (DataFrames, RDDs, performance optimization).
  • Proven experience working with Palantir Foundry or similar data integration platforms.
  • Good understanding of reinsurance, including exposure, claims, and policy data structures.
  • Proficiency in SQL and Python, and experience working with large datasets in distributed environments.
  • Experience with cloud platforms (AWS, Azure, or GCP) and related data services (e.g., S3, Snowflake, Databricks).
  • Knowledge of data modeling, metadata management, and data governance frameworks.
  • Familiarity with CI/CD pipelines, version control (Git), and Agile delivery methodologies.

 

Preferred Skills:

  • Experience with data warehousing and reporting modernization projects in the reinsurance domain.
  • Exposure to Palantir ontology design and data operationalization.
  • Working knowledge of APIs, REST services, and event-driven architecture.
  • Understanding of actuarial data flows, submission processes, and underwriting analytics is a plus.

Thanks

Afrah Faiza

Arthur Grand Technologies Inc

Arthur Grand Technologies is an Equal Opportunity Employer (including disability/vets)

Additional Information:

All your information will be kept confidential according to EEO guidelines.


Remote Work:

Yes


Employment Type:

Contract


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala

About Company

Arthur Grand Technologies (www.arthurgrand.com) is in the business of providing staffing and technology consulting services. We have doubled our revenue year over year for the past 5 years. This speaks to the long-lasting relationships and customer satisfaction that we have built in th ...
