Senior Data Engineering Lead
Little Rock, AR - USA
Job Description:
About the Role
We are building a modern, cloud-native B2B data platform that delivers high-quality data attributes across three output domains: Business Data, Professional Data, and Media/Digital Data. These are production data products used by enterprise clients at scale, and engineering excellence is at the center of everything we do.
You will be stepping into ownership of a new platform initiative and taking the technical lead on where it goes from here. This is not starting from zero: the direction is set, but the architecture decisions, engineering standards, and future roadmap are yours to shape. You will work closely with Product Owners to align on priorities and translate product vision into sound, scalable engineering. You will be hands on the keyboard every day: writing production code, debugging pipeline failures, troubleshooting data quality issues, and resolving process problems as they arise. This is not a role that transitions you into management; contributing code is a core expectation of the job.
The platform is Snowflake-native, and we use the best of what Snowflake offers: Snowpark for Python, Cortex AI, Tasks and Streams, Data Metric Functions, and Streamlit. If you have deep Snowflake experience, you will hit the ground running. If your background is Redshift, BigQuery, or Databricks and you are motivated to go deep on Snowflake, we want to talk.
We are also an AI-First engineering team. Claude, GitHub Copilot, and Cortex Code are standard tools in our daily workflow, not experiments. If building smarter and faster with AI genuinely excites you, you will fit right in here.
Why This Role
Here is what makes this role worth your attention:
- Technical ownership of a new platform initiative: you are not inheriting someone else's architecture or maintaining a legacy system; you are shaping where this platform goes from here
- A role where staying in the code is expected, not managed out of: writing, debugging, and troubleshooting production systems are core daily responsibilities alongside the leadership work
- A real seat at the table: you will work directly with Product Owners to influence the roadmap, not just execute it. Your engineering perspective shapes what gets built and when
- An AI-First culture that is already real: Claude, GitHub Copilot, and Cortex Code are in active daily use today, not on a roadmap
- Genuine remote flexibility within a stable, established enterprise: dentsu has served Fortune 500 clients for over 120 years; this is not a startup that could fold
- A Snowflake environment sophisticated enough to grow your career: Cortex AI, DMFs, Streamlit, native ML, Tasks and Streams, RBAC; you will go deep on a modern, serious stack
- A strong match if you value technical depth, real ownership, and a modern stack, and want a role where your engineering decisions have lasting impact on a product people depend on
What You'll Do
Snowflake-Native Platform Development
Our platform is 100% Snowflake-native. You will build, maintain, and evolve it using the full depth of what Snowflake offers:
- Write and maintain Snowpark for Python stored procedures, UDFs, and UDTFs as the primary pipeline engineering artifacts
- Orchestrate data workflows using Snowflake Tasks, Streams, and event-driven processing patterns
- Design and implement a layered architecture (RAW, Staging, Curated, Product, Audit) with full lineage and observability
- Apply Cortex AI for embedded ML, intelligent anomaly detection, and automated data quality enforcement
- Enforce data quality standards using Data Metric Functions (DMFs) and native Snowflake monitoring
- Build operational and analytical Streamlit applications natively within Snowflake
- Optimize warehouse sizing, clustering keys, query performance, and resource monitors for cost and throughput
- Implement Snowflake RBAC: role design, dynamic data masking, and row-level access policies
- Use Zero-Copy Cloning and Time Travel for environment promotion, rollback, and reproducible testing
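To give a flavor of the Tasks-and-Streams orchestration pattern described above, here is a minimal sketch; all object names (tables, stream, task, warehouse) are illustrative, not the platform's actual objects:

```sql
-- Capture row-level changes landing in the RAW layer (illustrative names).
CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw.orders;

-- A Task that runs only when the stream has new data,
-- promoting changes into the Staging layer.
CREATE OR REPLACE TASK stage_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
  INSERT INTO staging.orders
  SELECT * FROM raw_orders_stream;

-- Tasks are created suspended; resume to activate.
ALTER TASK stage_orders_task RESUME;
```

Consuming the stream inside the Task's DML advances the stream offset, so each batch of changes is processed exactly once.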
AI-First Development
AI-First is not a mindset we are building toward; it is how this team operates today. You will help set the standard for how we use AI well:
- Use Claude (Anthropic) for code generation, review, debugging, architecture documentation, and sprint planning support
- Use GitHub Copilot for in-editor code completion, test generation, PR descriptions, and refactoring
- Use Cortex Code within Snowflake for AI-assisted SQL and Snowpark development without leaving the platform
- Champion AI-first practices across the team: onboard engineers to tooling, define prompting standards, share what works
- Use AI to accelerate test generation, runbook creation, anomaly diagnosis, and code documentation at scale
- Continuously find new opportunities to cut cycle time and raise quality through automation
Data Engineering & Pipeline Development
- Write advanced SQL and Python for complex transformations, MERGE-based incremental loads, and CDC patterns
- Design dimensional data models and schema structures that support the Business, Professional, and Media/Digital data domains
- Build pipelines processing firmographic, professional, and consumer-level attributes with high accuracy and completeness
- Implement professional-to-consumer identity linkage logic using established match rules and identity resolution services
- Debug and troubleshoot pipeline failures, data anomalies, process errors, and performance bottlenecks; this is direct, hands-on work
- Build observable pipelines with alerting, SLA tracking, data quality gates, and automated anomaly escalation
- Apply best practices for metadata management, schema versioning, partitioning, and operational documentation
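A MERGE-based incremental load of the kind listed above might look like this minimal sketch; all table and column names are hypothetical:

```sql
-- Upsert a staged delta batch into the curated layer:
-- update rows that changed, insert rows that are new.
MERGE INTO curated.professionals AS tgt
USING staging.professionals_delta AS src
  ON tgt.professional_id = src.professional_id
WHEN MATCHED AND src.updated_at > tgt.updated_at THEN
  UPDATE SET
    tgt.title      = src.title,
    tgt.company_id = src.company_id,
    tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN
  INSERT (professional_id, title, company_id, updated_at)
  VALUES (src.professional_id, src.title, src.company_id, src.updated_at);
```

The `updated_at` guard on the MATCHED branch makes the load idempotent: replaying the same delta batch leaves the target unchanged.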
Technical Leadership
- Serve as the technical authority on the team: create architecture docs, integration specs, and component-level design diagrams
- Run thorough code reviews that enforce quality, performance, security, and maintainability standards across the codebase
- Roll up your sleeves and debug complex issues alongside the team; stay an active contributor when things get difficult
- Mentor and guide data engineers, including offshore team members, through pairing, structured review, and direct feedback
- Participate in engineering interviews and help evaluate technical candidates
- Set and maintain consistent standards across coding, testing, deployment, and observability
- Translate technical decisions clearly for product managers, stakeholders, and non-technical partners
Product & Partnership
- Collaborate closely with Product Owners to translate product vision into technical workstreams, delivery plans, and engineering roadmaps; your voice shapes what gets built, not just how
- Serve as the engineering counterpart to product leadership, bridging technical constraints, platform capabilities, and business goals in every planning conversation
- Partner with identity and platform teams to understand APIs, matching logic, data structures, and integration requirements
- Ensure engineering decisions consistently support goals around data quality, availability, accuracy, and linkage completeness
What You Bring
The Foundation (Required)
These are the fundamentals we look for in every hire.
- 5 years of active, hands-on data engineering: writing production code, debugging complex pipeline failures, and troubleshooting data quality and process issues, alongside experience leading technical workstreams or a small team
- Strong Python skills: production-grade pipelines, error handling, logging, testing, and code structure
- Strong SQL: complex transformations, window functions, MERGE/upsert patterns, and incremental load strategies
- Experience building layered data architectures (raw, curated, serving) with lineage tracking and pipeline observability
- Regular, active use of AI coding tools (Claude, GitHub Copilot, or similar) as part of your daily engineering practice
- Hands-on experience with CI/CD pipelines and Git-based, version-controlled development workflows
- Strong written and verbal communication; comfortable writing technical documentation and working across product, data science, and platform teams
Cloud Data Platform (Snowflake Strongly Preferred)
Deep Snowflake experience is a significant advantage. Strong candidates from Redshift, BigQuery, or Databricks who are motivated to go deep on Snowflake are encouraged to apply.
- Hands-on production experience with Snowflake (Snowpark for Python, Tasks, Streams, stored procedures, and performance optimization) is strongly preferred
- If your primary platform is Redshift, BigQuery, or Databricks: we value strong cloud data warehousing fundamentals and will invest in Snowflake-native depth with the right candidate
- Familiarity with cloud-native data ingestion patterns: external stages, bulk loading, or streaming ingest
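Bulk loading from an external stage, one of the ingestion patterns mentioned above, can be sketched as follows; the stage, table, and path names are illustrative:

```sql
-- Load newly arrived Parquet files from a cloud storage stage
-- into the RAW layer; files already loaded are skipped automatically.
COPY INTO raw.media_events
FROM @ext_stage/media/events/
FILE_FORMAT = (TYPE = 'PARQUET')
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
ON_ERROR = 'SKIP_FILE';
```

COPY INTO tracks load history per file, so rerunning the statement after new files land picks up only the additions.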
Bonus Points (We'll Invest in the Right Person)
These are things we will help you develop. Having them is a plus not a requirement.
- Snowflake-native ML capabilities: Cortex AI, Data Metric Functions, native Anomaly Detection
- Streamlit in Snowflake for operational dashboards and self-service tooling
- Snowflake security patterns: RBAC design, dynamic data masking, row access policies
- Experience scaling AI-first engineering practices across a team: prompting standards, tooling adoption, workflow documentation
- Snowflake certification (SnowPro Core or SnowPro Advanced: Data Engineer)
- Knowledge of B2B data concepts (firmographic attributes, professional data, entity resolution) is a plus, not a requirement
- Experience with third-party B2B data providers such as D&B, Bombora, or ZoomInfo
- Experience working with distributed or offshore engineering teams
- Familiarity with data privacy, consent frameworks, and governance best practices
What Success Looks Like
- Reliable, production-grade B2B data pipelines delivering Business, Professional, and Media/Digital outputs with high accuracy and SLA adherence
- Your own code is in production: you are an active, consistent contributor to the codebase, engaged directly when issues need to be diagnosed and resolved
- A Snowflake-native codebase (Snowpark stored procedures, Tasks, and Streams) that is well-tested, documented, and maintainable by the full team
- AI tools actively embedded in the team's daily workflow, with measurably faster delivery cycles and higher-quality output
- High accuracy and completeness in professional-to-consumer data linkages across the identity platform
- Strong technical alignment with product and identity leadership on architecture, roadmaps, and delivery goals
- A team that can execute confidently because standards are clear, reviews are thorough, and the architecture makes sense
The annual salary range for this position is $94,000 - $152,662. Placement within the salary range is based on a variety of factors, including relevant experience, knowledge, skills, and other factors permitted by law.
Benefits available with this position include:
- Medical, vision, and dental insurance
- Life insurance
- Short-term and long-term disability insurance
- 401(k)
- Flexible paid time off
- At least 15 paid holidays per year
- Paid sick and safe leave
- Paid parental leave
Dentsu also complies with applicable state and local laws regarding employee leave benefits, including but not limited to providing time off pursuant to the Colorado Healthy Families and Workplaces Act, in accordance with its plans and policies. For further details regarding Dentsu benefits, please visit .
At dentsu, we believe great work happens when we're connected. Our way of working combines flexibility with in-person collaboration to spark ideas and strengthen our teams. Employees who live within a commutable distance of one of our hub offices, currently located in Chicago, metro Detroit, Los Angeles, and New York City, are required and expected to work from the office three days per week (two days per week for employees based in Los Angeles). Dentsu may designate other hub offices at any time. Those who live outside a commutable range may be designated as remote, depending on the role and business needs. Regardless of your work location, we expect our employees to be flexible to meet the needs of our Company and clients, which may include attendance in an office.
#LI-RL1
Location:
USA - Remote - Georgia
Brand:
Merkle
Time Type:
Full time
Contract Type:
Permanent
Dentsu is committed to providing equal employment opportunities to all applicants and employees. We do this without regard to race, color, national origin, sex, sexual orientation, gender identity, age, pregnancy, childbirth or related medical conditions, ancestry, physical or mental disability, marital status, political affiliation, religious practices and observances, citizenship status, genetic information, veteran status, or any other basis protected under applicable federal, state, or local law.
Dentsu is committed to providing reasonable accommodation to, among others, individuals with disabilities and disabled veterans. If you need an accommodation because of a disability to search and apply for a career opportunity with us, please send an e-mail by clicking on the link to let us know the nature of your accommodation request and your contact information. We are here to support you.
Required Experience:
Senior IC
About Company
Dentsu is an integrated growth and transformation partner to the world's leading organizations. Founded in 1901 in Tokyo, Japan, it is now present in approximately 120 countries.