Python Developers

Employer Active

1 Vacancy

Jobs by Experience

8 years

Job Location

McLean - USA

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Do you love a career where you Experience, Grow & Contribute at the same time, while earning at least 10% above the market? If so, we are excited to have crossed paths with you.


We are an IT Solutions Integrator/Consulting Firm helping our clients hire the right professional for an exciting long-term project. Here are a few details.



Requirements

We are seeking a highly skilled and motivated Python Developer with strong expertise in PySpark and AWS to join our data engineering team. The ideal candidate will be responsible for building scalable data pipelines, transforming large volumes of data, and deploying data solutions in cloud environments. You will collaborate with cross-functional teams to design, develop, and implement high-performance, reliable, and scalable data processing systems.


Key Responsibilities:

  • Design, develop, and maintain efficient, reusable, and reliable Python code.

  • Develop scalable data processing pipelines using PySpark for structured and semi-structured data.

  • Build and automate data workflows and ETL pipelines using AWS services such as S3, Glue, Lambda, EMR, and Step Functions.

  • Optimize data processing for performance, scalability, and reliability.

  • Participate in architecture design discussions and contribute to technical decision-making.

  • Integrate with data sources such as RDBMS, NoSQL, and REST APIs.

  • Implement data quality checks, monitoring, and logging for production pipelines.

  • Work closely with data analysts, architects, and DevOps teams to ensure seamless data flow and integration.

  • Perform unit testing, debugging, and performance tuning of code.

  • Maintain documentation for all developed components and processes.
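The responsibilities above emphasize pipelines with data quality checks, monitoring, and logging. As a minimal, standard-library-only sketch of that pattern (the record fields and function names here are illustrative, not from this posting, and a real pipeline would express the same idea in PySpark):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def validate(record):
    """Basic data-quality check: required id present and amount non-negative."""
    return record.get("id") is not None and record.get("amount", -1) >= 0

def transform(records):
    """Keep valid records, normalize the amount, and log what was dropped."""
    good, bad = [], []
    for r in records:
        (good if validate(r) else bad).append(r)
    if bad:
        log.warning("dropped %d invalid record(s)", len(bad))
    return [{"id": r["id"], "amount": round(float(r["amount"]), 2)} for r in good]

rows = [{"id": 1, "amount": 10.567}, {"id": None, "amount": 5}, {"id": 2, "amount": -3}]
clean = transform(rows)
# clean == [{"id": 1, "amount": 10.57}]
```

In a production pipeline the dropped-record count would typically feed a monitoring metric (e.g., CloudWatch) rather than just a log line.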


Required Skills and Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

  • 4 years of experience in Python programming for data engineering or backend development.

  • Strong hands-on experience with PySpark (RDD and DataFrame APIs, Spark SQL, performance tuning).

  • Proficient in using AWS services such as S3, Glue, Lambda, EMR, Athena, and CloudWatch.

  • Good understanding of distributed computing and parallel data processing.

  • Experience working with large-scale datasets and batch/streaming data pipelines.

  • Familiarity with SQL and data modeling concepts.

  • Knowledge of CI/CD tools and source control (e.g., Git, Jenkins).

  • Solid understanding of software engineering best practices and Agile methodologies.
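The distributed-computing bullet above boils down to a map-over-partitions-then-reduce pattern. A minimal standard-library sketch of that idea (Spark would handle partitioning and scheduling itself; the chunk size and function names here are arbitrary illustrations):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(data, size):
    """Split data into fixed-size chunks, loosely mimicking Spark partitions."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(chunk):
    """Per-chunk work; here, a sum of squares stands in for a real transform."""
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, size=4):
    """Map the per-chunk function over chunks concurrently, then reduce."""
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(process_chunk, chunked(data, size)))

total = parallel_sum_squares(list(range(10)))
# sum of squares 0..9 = 285
```

Threads are used here only to keep the sketch self-contained; for CPU-bound work Python's GIL means real gains come from processes or, at scale, a Spark cluster.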


Preferred Qualifications:

  • AWS certification (e.g., AWS Certified Developer or Data Analytics Specialty).

  • Experience with containerization (Docker) and orchestration tools (Kubernetes).

  • Familiarity with data lake and data warehouse concepts (e.g., Redshift, Snowflake).

  • Exposure to Apache Airflow or other workflow orchestration tools.




Benefits



Python/PySpark, AWS

Education

B.E/

Employment Type

Full Time

About Company

Disclaimer: Drjobpro.com is only a platform that connects job seekers and employers. Applicants are advised to conduct their own independent research into the credentials of the prospective employer. We always ensure that our clients do not endorse any request for money payments, so we advise against sharing any personal or bank-related information with any third party. If you suspect fraud or malpractice, please reach out via the contact us page.