Data Engineer (Palantir)

Acunor Inc


Job Location:

Dallas, TX - USA

Monthly Salary: Not Disclosed
Posted on: 14 hours ago
Vacancies: 1 Vacancy

Job Summary

Job Title: Data Engineer
Location: Dallas, TX (on-site 3 days/week)
Responsibilities:
  • Data Integration & ETL: Build and manage scalable data pipelines to ingest data from diverse sources (ERP, CRM, APIs, S3, SQL databases) into Foundry.
  • Ontology Modeling: Define and maintain the Ontology, the platform's semantic layer, which maps technical data to real-world business objects (e.g. Aircraft, Customer, or Invoice).
  • Pipeline Development: Write and optimize data transformations using PySpark, SQL, or Java within Foundry's Code Repositories.
  • Application Building: Develop front-end operational applications and interactive dashboards using low-code/pro-code tools such as Workshop and Slate.
  • AIP Integration: Implement Artificial Intelligence Platform (AIP) features, such as LLM-backed functions and agents, to automate workflows.
  • Data Governance & Security: Configure granular access controls, data health monitors, and lineage tracking to ensure compliance and reliability.
Core Technical Skills
  • Languages: High proficiency in Python (PySpark) and SQL is mandatory. Knowledge of Java, TypeScript, or JavaScript is often required for front-end customization.
  • Big Data: Understanding of distributed computing (Spark), data warehousing concepts, and schema design (Star, Snowflake, etc.).
  • DevOps: Experience with Git-based version control, CI/CD practices, and debugging complex data workflows.
  • Cloud Architecture: Familiarity with AWS, Azure, or GCP environments, where Foundry is typically hosted.
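To illustrate the kind of transform work the Pipeline Development and Data Governance bullets describe, here is a minimal sketch. In Foundry this logic would typically run as a PySpark transform in a Code Repository; plain Python is used here so the shape of the logic is visible without a Spark cluster. All field names (`invoice_id`, `customer_id`, `amount`) are hypothetical examples, not part of the posting.

```python
# Illustrative sketch only: an ETL-style cleaning step that maps raw source
# records (e.g. from CRM/ERP feeds) onto an "Invoice" business object,
# ontology-style. Field names are hypothetical.

def clean_invoices(raw_rows):
    """Drop malformed rows, normalise types, and emit Invoice records."""
    invoices = []
    for row in raw_rows:
        # Data-health check: skip records missing required fields.
        if not row.get("customer_id") or row.get("amount") is None:
            continue
        invoices.append({
            "invoice_id": str(row["invoice_id"]).strip(),
            "customer_id": str(row["customer_id"]).strip(),
            "amount": round(float(row["amount"]), 2),
        })
    return invoices

# Example input mixing a clean record with a malformed one.
raw = [
    {"invoice_id": 101, "customer_id": " C-7 ", "amount": "20.00"},
    {"invoice_id": 102, "customer_id": None, "amount": "5.00"},  # dropped
]
print(clean_invoices(raw))
```

In a real Foundry pipeline the same filter-and-normalise pattern would be expressed over a Spark DataFrame, with data health monitors configured on the output dataset.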

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala