Salary Not Disclosed
Hybrid onsite, Dearborn, MI: 2-3 days a week onsite; come September it will be 4 days a week
HackerRank is required.
3 openings
12-month contract.
Skills-Based Assessment: General Coding Proficiency
HACKERRANK CODING IS REQUIRED. You will provide me the email address where the HackerRank link should be sent. The sooner they take the HackerRank and I get the results, the sooner I can talk to them and submit.
Tell him/her to be on the lookout for an email with the following:
Title: Assessment Invitation
From: Magnit Global Hiring Team <>
REVIEW THE JD AND MAKE SURE THE REQUIRED SKILLS (highlighted red) ARE ON THE RESUME; IF NOT, DON'T SEND.
Software Engineer Senior #1030349
Job Description:
Design data solutions in the cloud or on premises using the latest data services, products, technology, and industry best practices
Experience migrating legacy data environments with a focus on performance and reliability
Data Architecture contributions include assessing and understanding data sources, data models and schemas, and data workflows
Ability to assess, understand, and design ETL jobs, data pipelines, and workflows
BI and Data Visualization work includes assessing, understanding, and designing reports, creating dynamic dashboards, and setting up data pipelines in support of dashboards and reports
Data Science focus on designing machine learning / AI applications and MLOps pipelines
Addressing technical inquiries concerning customization, integration, enterprise architecture, and general feature/functionality of data products
Experience in crafting data lakehouse solutions in GCP. This includes relational & vector databases, data warehouses, data lakes, and distributed data systems.
Must have PySpark API processing knowledge utilizing resilient distributed datasets (RDDs) and DataFrames (see the brief sketch after this list)
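For illustration only (this sketch is not part of the posting): the PySpark RDD and DataFrame processing named in the last bullet might look roughly like the following. The file path, column names, and values are all hypothetical.

```python
# Minimal PySpark sketch: DataFrame and RDD processing.
# Path, columns, and values are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# DataFrame API: read a CSV, filter rows, aggregate per day.
df = spark.read.option("header", True).csv("/data/orders.csv")
daily = (
    df.filter(F.col("status") == "shipped")
      .groupBy("order_date")
      .agg(F.count("*").alias("shipped_orders"))
)
daily.show()

# RDD API: the same data as a resilient distributed dataset (RDD).
rdd = df.rdd.map(lambda row: (row["order_date"], 1)).reduceByKey(lambda a, b: a + b)
print(rdd.take(5))

spark.stop()
```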
Skills Required:
Same as listed under the Job Description above.
Skills Preferred:
Ability to write bash, python, and groovy scripts to help configure and administer tools
Experience installing applications on VMs, monitoring performance, and tailing logs on Unix
PostgreSQL database administration skills are preferred
Python experience and experience developing REST APIs (a brief sketch follows this list)
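As a purely illustrative sketch (not from the JD): a minimal Python REST API in the spirit of the preferred skills might look like this. The choice of Flask, the /jobs resource, and the payload shape are all assumptions for the example.

```python
# Minimal REST API sketch using Flask.
# The resource name and payload are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # in-memory store, for illustration only

@app.route("/jobs", methods=["POST"])
def create_job():
    payload = request.get_json()
    job_id = len(jobs) + 1
    jobs[job_id] = payload
    return jsonify({"id": job_id, **payload}), 201

@app.route("/jobs/<int:job_id>", methods=["GET"])
def get_job(job_id):
    if job_id not in jobs:
        return jsonify({"error": "not found"}), 404
    return jsonify(jobs[job_id])

if __name__ == "__main__":
    app.run()
```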
Experience Required:
10 years
Education Required:
Bachelor's degree in Computer Science, Computer Information Systems, or equivalent experience
Education Preferred:
Master's in Data Science preferred
Additional Information:
****HYBRID****
Required Experience:
Senior IC
Contract