HPC Storage Architect
Location: Dallas, TX
Overview
This organization operates at the bleeding edge of technology, backed by dedicated leadership and investment and driven by a clear mission. Its goal is to scale and enhance the high-performance computing (HPC) and cloud infrastructure that supports clients' research, production, and delivery, enabling breakthroughs that shape the industries of tomorrow. Its engineers build critical infrastructure to eliminate friction in scientific research, simulations, analysis, and decision-making, accelerating discovery and driving faster innovation.
As an HPC Storage Solutions Architect, you will design, integrate, and optimize high-performance storage architectures that underpin HPC, AI/ML, and large-scale data-intensive workloads. You will act as a trusted advisor to customers, guiding them through the full solution lifecycle: from requirements discovery and workload analysis, through solution design, proof of concept, and deployment, to workflow optimization and long-term adoption.
This is a customer-facing technical role focused on building scalable, resilient, and efficient storage systems. You will ensure that customer workloads, from simulation and AI training to data pipelines and scientific research, are matched with the right file systems, protocols, and architectures to deliver maximum performance. You will also collaborate closely with product engineering and vendor partners, influencing both platform evolution and the adoption of emerging storage technologies.
Key Responsibilities
Customer Engagement & Advisory
- Serve as the primary storage subject matter expert (SME) for customers deploying or scaling HPC environments.
- Partner with customers to capture workload storage requirements, performance objectives, and capacity-planning needs.
- Lead proof-of-concept and benchmarking initiatives, validating the performance, scalability, and resiliency of storage designs.
- Conduct workflow assessments and storage usage reviews, recommending optimizations to improve throughput, latency, and cost efficiency.
- Represent the team at customer workshops, technical reviews, and industry forums, building strong technical relationships.
- Stay up to date with emerging storage technologies, protocols, and data management practices, advising customers on adoption strategies.
Architecture & Design
- Design and document end-to-end storage architectures, including parallel/distributed file systems (e.g., Lustre, GPFS, Ceph, VAST), object storage, and tiered solutions.
- Implement integration strategies across compute, networking, and orchestration layers to ensure seamless end-to-end performance.
- Develop architecture blueprints, reusable design patterns, and integration guides to standardize HPC storage solutions.
- Work with infrastructure-as-code and automation practices (e.g., Ansible, Terraform) to deliver consistent, repeatable storage environments.
Collaboration & Innovation
- Collaborate with engineering, product, and operations teams to refine storage offerings and influence platform strategy.
- Partner with vendors (e.g., Dell, VAST, HPE, Rubrik) to integrate new features, evaluate emerging technologies, and provide customer-driven feedback into vendor roadmaps.
Required Experience
- Demonstrated experience in storage solution architecture, HPC storage engineering, or large-scale distributed storage design.
- Strong technical expertise in parallel and distributed file systems (Lustre, GPFS, Ceph, VAST) and object storage platforms.
- Hands-on experience with multi-petabyte storage systems, including design, deployment, and scaling.
- Skilled in Linux storage stack tuning, file access protocols and interfaces (NFS, SMB, POSIX), and performance optimization.
- Experience implementing storage automation and infrastructure-as-code practices (e.g., Ansible, Terraform).
- Proven ability to troubleshoot and optimize storage workflows for HPC, AI/ML, or data-intensive workloads.
- Strong customer-facing communication skills, with the ability to explain complex storage architectures to technical and non-technical stakeholders.
Preferred Experience
- Experience delivering HPC or AI/ML workloads on large-scale, high-performance storage environments.
- Familiarity with data protection, backup, and recovery technologies integrated with HPC storage.
- Exposure to multi-vendor storage ecosystems, including collaboration with leading hardware and software providers.
- Experience with workflow optimization for data pipelines, simulation workloads, or scientific computing.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Relevant storage and systems certifications, such as NetApp NCIE, Dell EMC Proven Professional, or Red Hat RHCE, or cloud platform certifications such as AWS Solutions Architect.