A senior (15+ years REQUIRED) Python backend developer with extensive experience in Banking or Capital Markets and with Azure Databricks. Candidates need strong proficiency in Python and Python web frameworks (FastAPI, Pydantic, SQLAlchemy/SQLModel) as well as demonstrated experience building RESTful APIs. They must be proficient in writing and optimizing PySpark jobs/notebooks for ETL and data transformation, and have experience with CI/CD for Databricks notebooks and jobs.
Required Location: Hybrid (Midtown, New York City), 3 days a week.
Job Description:
We are seeking a hands-on Senior Backend Developer with over 15 years of experience specializing in Python to design, develop, and maintain high-performance web applications and data pipelines. The ideal candidate will have deep expertise in building RESTful APIs, working with modern Python frameworks, and developing robust ETL solutions on cloud platforms such as Azure. You will collaborate with cross-functional teams to implement new features and ensure seamless integration of backend components.
Key Responsibilities
Backend Development:
o Design, develop, and maintain scalable and secure backend systems for web applications.
o Build RESTful APIs using Python and modern frameworks (e.g., FastAPI), ensuring robust and maintainable code; a minimal sketch follows this list.
o Collaborate with front-end and DevOps teams to deliver end-to-end solutions.
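For illustration, a minimal sketch of the kind of endpoint this work involves, assuming FastAPI and Pydantic; the Trade model, routes, and in-memory store are hypothetical placeholders, not details of this role's actual systems.

# Minimal FastAPI + Pydantic sketch of a typed REST endpoint.
# All names (Trade, /trades, _TRADES) are illustrative placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="Example Trading API")

class Trade(BaseModel):
    trade_id: int
    symbol: str = Field(min_length=1, max_length=12)
    quantity: int = Field(gt=0)
    price: float = Field(gt=0)

_TRADES: dict[int, Trade] = {}  # in-memory stand-in for a real database

@app.post("/trades", status_code=201)
async def create_trade(trade: Trade) -> Trade:
    if trade.trade_id in _TRADES:
        raise HTTPException(status_code=409, detail="trade already exists")
    _TRADES[trade.trade_id] = trade
    return trade

@app.get("/trades/{trade_id}")
async def get_trade(trade_id: int) -> Trade:
    trade = _TRADES.get(trade_id)
    if trade is None:
        raise HTTPException(status_code=404, detail="trade not found")
    return trade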
Data Engineering & ETL:
o Design, develop, and optimize ETL pipelines for data transformation and loading into databases or data warehouses.
o Automate data quality workflows using PySpark and Databricks to deliver clean, reliable data; an illustrative PySpark sketch follows this list.
o Build and orchestrate scalable ingestion processes using Azure Data Factory (ADF) and Databricks.
o Integrate structured, semi-structured, and unstructured data sources into unified platforms.
o Architect efficient data storage solutions leveraging relational and NoSQL databases for both real-time and historical analytics.
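For illustration only, a minimal PySpark transformation of the kind these pipelines involve; the storage paths and column names are assumptions, and the Delta write presumes a Databricks runtime where Delta Lake is available.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw semi-structured input (the abfss path is a placeholder).
raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/trades/")

# Basic cleansing: drop rows missing keys, normalize types, derive a date column.
clean = (
    raw.dropna(subset=["trade_id", "symbol"])
       .withColumn("price", F.col("price").cast("double"))
       .withColumn("trade_date", F.to_date("trade_ts"))
)

# Write curated output as Delta, partitioned for analytical reads (path is a placeholder).
clean.write.format("delta").mode("overwrite").partitionBy("trade_date").save(
    "abfss://curated@exampleaccount.dfs.core.windows.net/trades/"
)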
DevOps & Cloud Integration:
o Work with Azure services (Functions, Logic Apps, Key Vault, ADF) for data orchestration and automation.
o Implement CI/CD pipelines using Azure DevOps or similar platforms; a hedged sketch follows this list.
o Ensure version control best practices using Git or Azure DevOps.
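As a hedged sketch of release automation, a CI/CD step might trigger a Databricks job through the Jobs REST API; the workspace host, job ID, and environment variable names below are assumptions for illustration, with the token sourced from a pipeline secret (e.g., Key Vault) rather than source control.

import os

import requests

# Trigger a Databricks job from a CI/CD pipeline step (Jobs API 2.1).
host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # injected as a pipeline secret

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": 123},  # hypothetical job ID
    timeout=30,
)
resp.raise_for_status()
print("run_id:", resp.json()["run_id"])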
Required Qualifications
Bachelor's degree in Computer Science, Engineering, or equivalent experience.
10 years of professional experience as a Backend or Full Stack Developer.
Strong proficiency in Python and Python web frameworks (FastAPI, Pydantic, SQLAlchemy/SQLModel).
Demonstrated experience building RESTful APIs.
Advanced SQL skills with a proven track record of optimizing queries and database interactions.
Experience with Azure cloud services, especially Azure Data Factory and Databricks.
Practical knowledge of DevOps, build/release, and CI/CD processes.
Familiarity with version control systems (Git, Azure DevOps).
Excellent communication skills with the ability to thrive in a fast-paced, collaborative environment.
Skills
Python Backend Development:
Strong expertise in Python 3.x with a focus on backend systems.
Experience with Python web frameworks (FastAPI, Flask, Django, or similar).
Data validation and serialization using Pydantic.
ORM experience (SQLAlchemy, SQLModel, or similar).
RESTful API design, implementation, and documentation (OpenAPI/Swagger).
Unit, integration, and end-to-end testing of APIs (pytest, unittest); see the test sketch after this list.
Security best practices (authentication, authorization, API security).
Asynchronous programming with Python (async/await, asyncio).
Performance optimization, caching strategies, and error handling.
Experience with Docker and containerized backend deployments.
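To make the testing expectation concrete, a minimal pytest example against a FastAPI app like the earlier sketch; the app module and payloads are hypothetical.

from fastapi.testclient import TestClient

from app import app  # assumes the illustrative app above lives in app.py

client = TestClient(app)

def test_create_and_fetch_trade():
    payload = {"trade_id": 1, "symbol": "AAPL", "quantity": 100, "price": 189.5}
    assert client.post("/trades", json=payload).status_code == 201
    fetched = client.get("/trades/1")
    assert fetched.status_code == 200
    assert fetched.json()["symbol"] == "AAPL"

def test_rejects_invalid_quantity():
    bad = {"trade_id": 2, "symbol": "AAPL", "quantity": -5, "price": 189.5}
    assert client.post("/trades", json=bad).status_code == 422  # Pydantic validation error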
Databricks & Data Engineering:
Experience with Databricks for large-scale data processing and analytics.
Proficient in writing and optimizing PySpark jobs/notebooks for ETL and data transformation.
Strong understanding of distributed computing concepts.
Working knowledge of data lake architectures and Delta Lake; see the upsert sketch after this list.
Building scalable data pipelines using Azure Data Factory and Databricks.
Automation of data quality checks, monitoring, and logging.
Integration with cloud data sources (Azure Blob Storage, Data Lake Storage, SQL/NoSQL databases).
Data modeling and schema design for analytical workloads.
Experience with CI/CD for Databricks notebooks and jobs.
Knowledge of workspace administration, cluster management, and job orchestration in Databricks.
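As a hedged illustration of Delta Lake work in Databricks, a minimal idempotent upsert; the table paths and trade_id merge key are assumptions, not details from this posting.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Upsert staged records into a curated Delta table (paths are placeholders).
target = DeltaTable.forPath(spark, "/mnt/curated/trades")
updates = spark.read.format("delta").load("/mnt/staging/trades")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.trade_id = u.trade_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)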