About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.
We're looking for infrastructure engineers who thrive at the intersection of data systems, security, and scalability. You'll tackle diverse challenges, ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.
What You'll Work On
Depending on your background and interests, you may focus on areas such as:
- Data Governance & Access Control: Design and implement robust access control systems, ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.
- Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third-party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.
- Cloud Storage & Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.
- Data Platform & Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. Optimize query performance, manage costs, and enable self-service analytics across the organization.
You May Be a Good Fit If You:
- Have 8 years of software engineering experience, with 3 years building data infrastructure, storage systems, or related distributed systems
- Have deep experience with at least one of:
  - Cloud data platforms (BigQuery, Redshift, Snowflake) and orchestration tools (Airflow, dbt)
  - Access control systems, IAM, and authentication/authorization at scale
  - Distributed storage systems, object storage (S3, GCS), and disaster recovery
- Have strong proficiency in programming languages like Python, Go, Java, or similar
- Have experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)
- Can navigate complex technical tradeoffs between performance, cost, security, and maintainability
- Have excellent collaboration skills - you work well with both technical and non-technical stakeholders
- Are comfortable with ambiguity and can independently scope and drive large projects
Strong Candidates May Also Have:
- Experience with security and compliance requirements (ITGC, GDPR, financial controls)
- Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure
- Experience with Kubernetes, containerization, and cloud-native architectures
- Track record of improving data reliability, availability, or cost efficiency at scale
- Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks
- Experience working in fintech, financial services, or highly regulated environments
- Security engineering background with a focus on data protection and access controls
Technologies We Use:
- Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran
- Storage: GCS, S3
- Infrastructure: Terraform, Kubernetes, GCP, AWS
- Languages: Python, Go, SQL
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.
Annual Salary:
$405,000 - $485,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.