Data Engineer (GCP)

Applaudo Studios


Job Location:

San Salvador - El Salvador

Monthly Salary: Not Disclosed
Posted on: 2 days ago
Vacancies: 1 Vacancy

Job Summary

About You

You are a Data Engineer passionate about building scalable, reliable, and high-performing data solutions that enable analytics, reporting, and data-driven decision-making. You enjoy working with modern cloud-based data platforms and designing robust data architectures that support evolving business needs.

You bring a proactive, autonomous, and detail-oriented mindset, with strong problem-solving skills and a solid technical foundation in data engineering practices. You are comfortable working across batch and streaming environments, optimizing data pipelines, and ensuring data quality, governance, scalability, and performance.

You thrive in collaborative environments, partnering with technical and non-technical stakeholders to deliver accessible, secure, and well-structured data solutions. You are committed to continuous improvement and enjoy contributing to the evolution of scalable cloud-based data platforms and engineering best practices.

  • This position is open to candidates based in El Salvador, Colombia, or Mexico.
  • This is a 4-month project with possible extension up to 6 months.

You bring to Applaudo the following competencies:

  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or a related field.
  • Strong proficiency in SQL (advanced level); this is a fundamental requirement.
  • Strong coding experience with Python for data engineering and pipeline development.
  • Hands-on experience with cloud data platforms, particularly GCP services such as BigQuery, Dataflow, Dataform, Pub/Sub, GCS, Firestore, and Spanner.
  • Proven experience designing, building, and maintaining scalable ETL/ELT pipelines.
  • Strong understanding of data modeling concepts and modern data architectures, including data lakes, data warehouses, and lakehouse solutions.
  • Experience developing batch and streaming data pipelines.
  • Experience with workflow orchestration, version control, CI/CD practices, and infrastructure-as-code approaches.
  • Knowledge of monitoring, logging, debugging, and troubleshooting data systems.
  • Experience with performance tuning, scalability, and cost optimization of cloud data workloads.
  • Familiarity with data governance, data lineage, metadata management, and security/access control practices.
  • Experience supporting analytics and reporting use cases through well-structured and governed datasets.
  • Familiarity with BI and analytics tools such as Looker Studio and the Looker platform is a plus.
  • Understanding of KPI definition, reporting needs, and data consumption patterns is desirable.
  • Strong analytical thinking, troubleshooting, and problem-solving skills.
  • Strong communication and collaboration abilities with both technical and business stakeholders.
  • Highly proactive, autonomous, and self-driven approach to work.
  • English proficiency (B2 or higher).

You will be accountable for the following responsibilities:

  • Design, build, and maintain scalable batch and streaming data pipelines using GCP technologies.
  • Develop ingestion frameworks and integrate data from internal and external sources.
  • Implement and optimize robust ETL/ELT processes, ensuring reliability, scalability, fault tolerance, and performance.
  • Monitor, troubleshoot, and maintain high availability and integrity of data pipelines and workflows.
  • Contribute to the design and evolution of the organization's cloud-based data platform architecture.
  • Implement and optimize data storage and processing solutions using services such as BigQuery, Dataflow, Dataform, Pub/Sub, GCS, Firestore, and Spanner.
  • Ensure scalability, cost efficiency, and performance optimization across data workloads.
  • Support CI/CD practices and infrastructure automation for data engineering workflows.
  • Develop and maintain scalable, reusable, and standardized data models aligned with best practices.
  • Implement data validation, quality checks, and monitoring frameworks to ensure data reliability and governance.
  • Maintain data lineage, metadata, technical documentation, and governance standards.
  • Enforce data governance, security, and access management policies.
  • Provide high-quality, structured, and governed datasets for reporting, analytics, and downstream consumers.
  • Collaborate with analysts, stakeholders, and cross-functional teams to ensure data usability, accessibility, and alignment with business requirements.
  • Support the definition of data contracts, SLAs, and data quality standards.
  • Contribute to the development and optimization of curated datasets and analytical data products.
  • Assist in exploratory analysis and performance optimization of analytical queries.
  • Promote best practices in data engineering, quality, governance, and platform scalability.

Qualifications:

  • Strong experience with SQL and Python for data engineering.
  • Hands-on experience with GCP data services (BigQuery, Dataflow, Pub/Sub, GCS, etc.).
  • Experience building and maintaining scalable ETL/ELT pipelines.
  • Experience with batch and streaming data processing.
  • English proficiency (B2 or higher).

Additional Information:

About Us

We Are Engineered Different.

At Applaudo, talented people design, build, and scale meaningful AI-powered solutions that create real business impact. As an AI-native organization, we collaborate across design, development, cloud, data, and artificial intelligence to turn ideas into scalable products that transform how companies operate, make decisions, and grow.

We are building a high-performance culture grounded in five values: Empowering Excellence, Collaborative Teamwork, Unsolicited Respect, Consistent Transparency, and Efficient Communication. These define how we work, how we support one another, and how we hold ourselves accountable.

Applaudo is a place for people who want to learn fast, take ownership, and work alongside strong teams they are proud to belong to. Joining us means being part of an organization that is evolving intentionally, investing in modern ways of working, and leading AI-native transformation at scale.


Remote Work:

No


Employment Type:

Full-time
