Overview
Public Storage is the world's best owner and operator of self-storage facilities, serving millions of customers across 3,000 locations. Public Storage's Data and AI organization operates like a high-velocity startup inside the enterprise: modern cloud stack, rapid iteration, small expert teams, and direct impact on revenue-critical decisions every day. Our platform is built on Google Cloud (BigQuery, Vertex AI, Pub/Sub, Dataflow, Cloud Run, GKE/Terraform), dbt Cloud, Airflow/Cloud Composer, and modern CI/CD practices. We build solutions that drive significant business impact across both digital and physical channels. Engineers on our team work end-to-end: designing systems, shipping production workloads, influencing architecture, and shaping how AI is applied at national scale.
We build for both the short and the long term: we are a dynamic, high-velocity engineering team that moves quickly from idea to production. This is a role for someone who wants to own key parts of the data & ML platform, make an immediate impact, and thrive in an environment where requirements evolve, decisions matter, and results are visible.
You are a passionate full-stack data & ML engineer who loves writing code, building systems, and having fun, spirited debates about the right architecture for a specific use case. In addition to technical skills, we believe in teaching the soft leadership skills you need to advance your career over the long term.
Data Engineering & Pipeline Development (Primary) (60%)
- Architect, build, and maintain batch and streaming pipelines using BigQuery, dbt, Airflow/Cloud Composer, and Pub/Sub
- Define and implement layered data models, semantic layers, and modular pipelines that scale as use cases evolve
- Establish and enforce data-quality, observability, lineage, and schema-governance practices
- Drive efficient BigQuery design (clustering, partitioning, cost awareness), primarily for structured tabular data, and for unstructured data (web logs, call-center transcripts, images/videos, etc.) when the use case requires it
- Leverage ML/DS capabilities in BQML for anomaly detection and disposition
- You will be accountable for delivering reliable, performant pipelines that enable downstream ML and analytics
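To make the data-quality expectations above concrete, here is a toy sketch of the kind of null-rate check a pipeline might enforce; the column names and the 5% threshold are illustrative assumptions, and in practice rules like this would live in dbt tests or a pipeline sensor rather than ad-hoc Python:

```python
from collections import Counter

def null_rates(rows, columns):
    """Fraction of missing (None) values per column across a batch of dict rows."""
    nulls = Counter()
    for row in rows:
        for col in columns:
            if row.get(col) is None:
                nulls[col] += 1
    n = len(rows) or 1  # guard against empty batches
    return {col: nulls[col] / n for col in columns}

def failing_columns(rates, threshold=0.05):
    """Columns whose null rate exceeds the allowed threshold, sorted for stable alerting."""
    return sorted(col for col, rate in rates.items() if rate > threshold)

# Hypothetical batch: "facility_id" and "price" are illustrative column names.
batch = [{"facility_id": 1, "price": 10.0}, {"facility_id": 2, "price": None}]
rates = null_rates(batch, ["facility_id", "price"])
print(failing_columns(rates))  # -> ['price']
```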
ML/AI Platform Engineering (20%)
- Transform prototype notebooks and models into production-grade, versioned, testable Python packages
- Deploy and manage training and inference workflows on GCP (Cloud Run, GKE, Vertex AI) with CI/CD, version tracking, and rollback capabilities
- Evaluate new products from GCP and vendors; build internal toolkits, shared libraries, and pipeline templates that accelerate delivery across teams
- You will enable the ML team to ship faster with fewer failure modes
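The version-tracking and rollback idea above can be sketched as a minimal in-memory registry; this is purely illustrative (version names and placeholder models are assumptions), and a real deployment would persist this state in something like the Vertex AI Model Registry or an artifact store:

```python
from typing import Any, Callable, Dict, List

class ModelRegistry:
    """Toy registry: tracks deployed model versions and supports rollback."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[Any], Any]] = {}
        self._deploy_order: List[str] = []

    def deploy(self, version: str, predict_fn: Callable[[Any], Any]) -> None:
        self._models[version] = predict_fn
        self._deploy_order.append(version)

    @property
    def live_version(self) -> str:
        return self._deploy_order[-1]

    def rollback(self) -> str:
        """Revert to the previously deployed version, if one exists."""
        if len(self._deploy_order) > 1:
            self._deploy_order.pop()
        return self.live_version

    def predict(self, features: Any) -> Any:
        return self._models[self.live_version](features)

registry = ModelRegistry()
registry.deploy("v1", lambda x: x * 2)  # placeholder models
registry.deploy("v2", lambda x: x * 3)
print(registry.predict(10))  # -> 30 (v2 is live)
registry.rollback()
print(registry.predict(10))  # -> 20 (back on v1)
```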
Applied AI & Real-Time Decisioning (20%)
- Support real-time, event-driven inference and streaming feature delivery for mission-critical decisions, including but not limited to real-time recommendation systems, dynamic A/B testing, and agentic AI interfaces
- Contribute to internal LLM-based assistants, retrieval-augmented decision models, and automation agents as the platform evolves
- Implement model monitoring, drift detection, alerting, and performance-tracking frameworks
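One common drift signal behind the monitoring bullet above is the Population Stability Index (PSI), which compares a feature's live distribution to its training baseline. A minimal sketch follows; the equal-width binning and the conventional ~0.2 "investigate" threshold are assumptions, not a prescribed implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Uses equal-width bins over the combined range; bin fractions are floored
    at a tiny epsilon so empty bins do not blow up the log term."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # degenerate case: all values identical

    def frac(sample, i):
        in_bin = sum(
            1 for x in sample
            if lo + i * width <= x < lo + (i + 1) * width
            or (i == bins - 1 and x == hi)  # closed upper edge on the last bin
        )
        return max(in_bin / len(sample), 1e-6)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = list(range(100))
shifted = [x + 50 for x in range(100)]
print(psi(baseline, baseline))       # -> 0.0 (no drift)
print(psi(baseline, shifted) > 0.2)  # -> True (large shift gets flagged)
```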
Cross-Functional Collaboration
- Partner with data scientists and engineers to operationalize models, semantic layers, and pipelines into maintainable production systems
- Work with pricing, digital product, analytics, and business teams to stage rollouts, support experiments, and define metric-driven success
- Participate in architecture reviews, mentor engineers, and drive technical trade-offs with clarity
Qualifications:
- MS in CS and 4 years of experience, or BS in CS and 6 years of experience
- 3 years of hands-on experience building data pipelines in a code-first environment (Python, SQL, dbt)
- At least 1 year of experience with real-time or event-driven systems (Pub/Sub, Dataflow, batch/streaming frameworks)
- At least 2 years owning technical decisions or leading engineering direction
- Demonstrated expert proficiency in:
  - SQL & BigQuery (schema design, query performance, modeling)
  - dbt (semantic modeling, macros, testing frameworks)
  - Airflow/Cloud Composer (DAG patterns, retries, alerting, SLAs)
  - GCP fundamentals (IAM, networking, container deployments)
  - Python (structure, testing, packaging)
- You can explain why you chose a data model, how you improved it, why it mattered, and how you measured it
Preferred Experience
- GCP or AWS cloud experience
- Experience with ML monitoring or platform tooling (MLflow, Evidently, Vertex AI)
- Knowledge of semantic search, vector embeddings, or LLM orchestration (RAG workflows)
- Domain familiarity in pricing, recommendations, forecasting, or large-scale customer analytics
- Awareness of geospatial data / map-based modeling (nice to have)
- Some JavaScript experience (for lightweight UI/prototyping)
Why This Role Will Excite You
- You'll work in a fast-moving environment where decisions aren't put on hold, and you'll see your work drive real business outcomes
- You'll own meaningful parts of the platform, not just small tasks or legacy maintenance
- You'll build end-to-end: from raw data ingestion to ML inference, and finally into production integrations
- You'll learn fast: new challenges, new tools, and new domains every quarter
Additional Information:
Workplace
- One of our value pillars is to work as OneTeam: we believe there is no replacement for in-person collaboration, but we understand the value of some flexibility. Public Storage teammates are expected to work in the office five days each week, with the option to take up to three flexible remote days per month.
- Our office is based in Plano, TX: 2201 K Ave, Plano, TX 75074
Public Storage is an equal opportunity employer and embraces diversity. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or any other protected status. All qualified candidates are encouraged to apply.
**Sponsorship for Work Authorization is not available for this posting. Candidates must be authorized to work in the U.S. without requiring sponsorship now or in the future.**
REF3470M
Remote Work: No
Employment Type: Full-time