Network Engineer, Capacity and Efficiency
San Francisco, CA - USA
Job Summary
About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the team
The Capacity & Efficiency team sits inside Anthropic's Compute organization and owns the cost, utilization, and attribution story for non-accelerator infrastructure: the network, compute, and storage backbone that moves petabytes between training clusters, inference fleets, and object storage across clouds and regions. The scale is real, the spend is large, and the efficiency levers are still mostly unpulled.
We work alongside the Systems Networking team (who build and operate the fabric) and the Observability team (who own the telemetry platform). This role lives at the intersection: you'll use deep networking knowledge and rigorous measurement to figure out where and how bandwidth, latency, and dollars are being used, find optimization opportunities, and land them.
About the role
We're looking for a network engineer who thinks in metrics first. You understand spine-leaf fabrics, BGP, SDN overlays, and cloud interconnect products well enough to build them. You will instrument them, model their cost-per-bit, and squeeze out the inefficiency, while ensuring we can move bits to the right places in the most efficient way. You'll own the observability and efficiency surface for Anthropic's network: from per-flow telemetry on backbone routers to cost attribution that tells a research team exactly what their checkpoint sync is costing.
This is a hands-on IC role. You'll write code (Python, Go), build dashboards, and model capacity. You'll also influence architecture: when the data says a traffic pattern is pathological, you'll be in the room root-causing it and fixing it.
You will be working across three areas: network telemetry, observability, and cost modeling and attribution. We expect you to be strong in at least two and willing to grow into the third. If you're a telemetry-first engineer who's never built a chargeback model, or a traffic engineer who hasn't shipped eBPF probes, apply anyway and tell us which axis you want to grow on.
What you'll do
Build the network observability stack. Design and deploy telemetry pipelines (sFlow/IPFIX, gNMI streaming, eBPF host probes) that turn packet counters into per-flow, per-tenant, per-workload cost and utilization data. Own the SLIs for backbone and DCN fabric health.
Hunt for efficiency. Analyze inter-region traffic patterns, identify hot links and stranded capacity, and quantify the dollar impact. Build the models that tell us whether we should buy more capacity or move the workload.
Own QoS and traffic engineering. Design and operate traffic classification, marking, and shaping across the backbone. Make sure bulk checkpoint transfers don't starve latency-sensitive inference, and that we're not paying premium cross-region rates for traffic that could take the cheap path.
Drive cost attribution. Tie network spend (egress, interconnect ports, transit, optical leases) back to the teams and workloads that generate it. Make network cost a first-class input to capacity planning and workload placement decisions.
Influence decisions you don't own. A large fraction of this role is convincing other teams to act on what your data shows: making the case to research that a traffic pattern needs to change, to finance that an interconnect tranche is worth buying, and to Systems Networking that a QoS policy needs rewriting. You'll partner closely with Systems Networking on fabric architecture and with Observability on telemetry platform integration, but the cost and efficiency wins will come from moving teams that don't report to you.
Automate. Extend our intent-based network configuration systems and write the tooling that turns your efficiency findings into safe, reviewable, and impactful changes.
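The first responsibility above, turning sampled flow records into per-tenant cost and utilization data, can be sketched minimally. This is an illustrative sketch, not our pipeline: the record fields, region names, and per-GB rates are all hypothetical, and a production version would pull rates from provider billing data.

```python
from collections import defaultdict

# Hypothetical per-GB egress rates by (src_region, dst_region) pair.
EGRESS_RATE_PER_GB = {
    ("us-east", "us-west"): 0.02,
    ("us-east", "eu-west"): 0.05,
}

def attribute_egress_cost(flow_records, sampling_rate=4096):
    """Roll sampled flow records up into estimated per-tenant egress cost.

    Each record carries 'tenant', 'src_region', 'dst_region', and
    'bytes' (the sampled byte count). Sampled bytes are scaled by the
    1-in-N sampling rate to estimate true volume, then priced per
    region pair.
    """
    cost_by_tenant = defaultdict(float)
    for rec in flow_records:
        est_bytes = rec["bytes"] * sampling_rate  # invert 1-in-N sampling
        rate = EGRESS_RATE_PER_GB.get((rec["src_region"], rec["dst_region"]), 0.0)
        cost_by_tenant[rec["tenant"]] += est_bytes / 1e9 * rate
    return dict(cost_by_tenant)

flows = [
    {"tenant": "research", "src_region": "us-east", "dst_region": "eu-west", "bytes": 250_000},
    {"tenant": "inference", "src_region": "us-east", "dst_region": "us-west", "bytes": 100_000},
]
print(attribute_egress_cost(flows))
```

The sampling-rate scale-up is the key step: it makes the estimate unbiased for large flows while small flows carry high variance, which is exactly the tradeoff a chargeback model has to own.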
You may be a good fit if you
Have 5 years of experience operating large-scale production networks: data center fabrics (spine-leaf Clos), backbone/WAN, or hyperscaler-adjacent environments.
Are genuinely fluent across the stack: BGP (including policy and communities), ECMP, VXLAN/EVPN or equivalent overlays, QoS (DSCP, queuing, shaping), and L1/optical basics (DWDM, coherent optics, LAGs).
Know at least one major CSP's networking model deeply: AWS (VPC, TGW, Direct Connect, Gateway Load Balancer) or GCP (Shared VPC, Interconnect, Cloud Router, Network Connectivity Center), and understand how their overlays interact with physical underlays.
Have built or operated network telemetry at scale: streaming telemetry (gNMI/OpenConfig), flow export (sFlow, IPFIX, NetFlow), or eBPF-based host-side instrumentation. You can reason about sampling, cardinality, and storage tradeoffs.
Are comfortable writing Python or Go to build tooling: telemetry pipelines, infrastructure-as-code, config management for network devices, and automation that you'll ship to production.
Think quantitatively by default. You reach for a notebook or a Grafana query before you reach for an opinion, and you can turn messy counter data into a defensible cost model.
Communicate crisply. You can explain to a finance partner why a 10% egress reduction matters, and to a network engineer why a specific ECMP imbalance is costing real money.
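The sampling, cardinality, and storage tradeoff mentioned above can be made concrete with a back-of-envelope calculation of the kind this role runs daily. All inputs below (flow rate, sampling rate, record size, retention) are hypothetical illustrations, not our actual numbers:

```python
def flow_table_storage_gb(flows_per_sec, sampling_rate, bytes_per_record, retention_days):
    """Back-of-envelope storage for an exported flow table.

    Sampling 1-in-N cuts the stored record rate by N; the price paid
    is estimation error on small flows, which this sketch does not model.
    """
    sampled_rate = flows_per_sec / sampling_rate
    seconds = retention_days * 86_400
    return sampled_rate * seconds * bytes_per_record / 1e9

# 2M flows/s, 1-in-1024 sampling, 64 B records, 30-day retention
print(round(flow_table_storage_gb(2_000_000, 1024, 64, 30), 1))  # → 324.0 (GB)
```

Halving the sampling rate doubles both the storage bill and the fidelity on small flows, which is why this number belongs in a cost model rather than a config default.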
Strong candidates may also have
SRE experience for large-scale network infrastructure: designing for reliability, defining SLOs/SLIs for network services, capacity planning with error budgets, and incident response for network-impacting outages at scale.
Background on a cloud provider's networking team or a cloud networking product team: building or operating the interconnect backbone or SDN control plane from the provider side, not just consuming it as a customer.
Familiarity with AI/ML infrastructure traffic patterns, such as collective communication (all-reduce, all-gather), checkpoint/weight transfer, and inference serving, and how these stress networks differently from traditional workloads in terms of burst behavior, flow synchronization, and bandwidth symmetry.
Experience with HPC fabrics such as InfiniBand, RoCE v2, lossless Ethernet, or custom high-radix topologies, and an understanding of how job placement, congestion management, and adaptive routing interact at scale.
Background in traffic engineering for large backbones and the operational judgment to know when TE is worth the complexity.
Hands-on time with multi-cloud connectivity: cross-cloud peering, private interconnect products, and the billing models that come with them.
Experience building cost/chargeback systems for shared infrastructure, or FinOps exposure in a large cloud environment.
Representative projects
Build a per-flow cost attribution pipeline that traces every byte of cross-region egress back to the team and workload that generated it
Design QoS policy for the private backbone that prevents bulk checkpoint transfers from starving inference traffic
Model whether it's cheaper to buy an additional 1.6 Tb interconnect tranche or to re-route traffic through existing capacity
Instrument DCN fabric utilization with streaming telemetry and build the Grafana dashboards that become the team's source of truth for network observability
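The buy-versus-reroute project above reduces, at its simplest, to comparing a fixed monthly tranche cost against a metered reroute cost. The sketch below is purely illustrative: the function name, dollar figures, and traffic volumes are invented, and a real model would also account for port/optics amortization, utilization headroom, and latency penalties:

```python
def cheaper_option(tranche_monthly_cost, reroute_gb_per_month, reroute_rate_per_gb):
    """Compare a fixed-cost interconnect tranche against metered rerouting.

    Returns the cheaper option and its monthly cost. All inputs are
    hypothetical; this ignores amortization, headroom, and latency.
    """
    reroute_cost = reroute_gb_per_month * reroute_rate_per_gb
    if tranche_monthly_cost < reroute_cost:
        return ("buy_tranche", tranche_monthly_cost)
    return ("reroute", reroute_cost)

# e.g. an $80k/month tranche vs. rerouting 5 PB/month at $0.02/GB ($100k)
print(cheaper_option(80_000, 5_000_000, 0.02))
```

The interesting work is not this comparison but getting defensible inputs for it, which is where the telemetry and attribution pipelines feed in.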
Why this role, why now
Anthropic's network footprint is growing faster than our ability to reason about it. We're turning up tens of terabits of private backbone capacity, peering across clouds, and moving model weights that keep getting larger. The efficiency opportunities are enormous and largely untouched; this is a chance to build the measurement and optimization layer from the ground up, with real budget impact and direct influence on how Anthropic's infrastructure scales.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role's On Target Earnings (OTE) range, meaning that the range includes both the sales commissions/sales bonuses target and the annual base salary for the role.
Annual Salary:
$320,000 - $405,000 USD
Logistics
Minimum education: Bachelor's degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from official Anthropic email addresses; in some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit our careers page for confirmed position openings.
How were different
We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Required Experience:
IC
About Company
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.