Location: Overland Park, KS / Frisco, TX
Role Details
- Sr. Data Engineer
Location: Bellevue HQ or Overland Park onsite 4 days a week
Bill Rate: Sr Data Engineer
Buy Rate: $67/hr
Work Required
- Lead the architecture, design, and implementation of scalable, modular, and reusable data flow pipelines using Cribl, Apache NiFi, Vector, and other open-source platforms, ensuring consistent ingestion strategies across a complex multi-source telemetry environment.
- Develop platform-agnostic ingestion frameworks and template-driven architectures to enable reusable ingestion patterns supporting a variety of input types (e.g., syslog, Kafka, HTTP, Event Hubs, Blob Storage) and output destinations (e.g., Snowflake, Splunk, ADX, Log Analytics, Anvilogic).
- Spearhead the creation and adoption of a schema normalization strategy leveraging the Open Cybersecurity Schema Framework (OCSF), including field mapping, transformation templates, and schema validation logic, designed to be portable across ingestion platforms.
- Design and implement custom data transformations and enrichments using scripting languages such as Groovy, Python, or JavaScript, while enforcing robust governance and security controls (SSL/TLS, client authentication, input validation, logging).
- Ensure full end-to-end traceability and lineage of data across the ingestion, transformation, and storage lifecycle, including metadata tagging, correlation IDs, and change tracking for forensic and audit readiness.
- Collaborate with observability and platform teams to integrate pipeline-level health monitoring, transformation failure logging, and anomaly detection mechanisms.
- Oversee and validate data integration efforts, ensuring high-fidelity delivery into downstream analytics platforms and data stores with minimal data loss, duplication, or transformation drift.
- Lead technical working sessions to evaluate and recommend best-fit technologies, tools, and practices for managing structured and unstructured security telemetry data at scale.
- Implement data transformation logic, including filtering, enrichment, dynamic routing, and format conversions (e.g., JSON, CSV, XML, Logfmt), to prepare data for downstream analytics platforms (100+ sources of data).
- Contribute to and maintain a centralized documentation repository, including ingestion patterns, transformation libraries, naming standards, schema definitions, data governance procedures, and platform-specific integration details.
- Coordinate with security, analytics, and platform teams to understand use cases and ensure pipeline logic supports threat detection, compliance, and data analytics requirements.
Overview
We are seeking eight Senior Data Engineers to lead efforts in orchestrating and transforming complex security telemetry data flows. These individuals will be responsible for high-level architecture and governance, and for ensuring secure and reliable movement of data between systems, particularly for legacy and non-standard log sources. In scope are 100 data sources, both existing and new, specific to Cyber Security workloads. These tasks will be performed on one or more data ingestion pipelines (Cribl, Vector, NiFi).
- Snowflake Administrator
Work Required
- Review Snowflake architecture design, including virtual warehouse configurations, storage-compute separation, and performance tuning strategies; suggest improvements and, where approved, implement them.
- Implement BDM for new business users and facilitate onboarding.
- Design advanced SQL queries and optimize them for speed and scalability across large datasets.
- Implement and enforce Snowflake security protocols, including role-based access control, encryption, data masking, and compliance standards.
- Ensure cost monitoring is implemented, and provide dashboards and reports.
- Record optimizations and take action to implement best practices.
- Build and maintain data quality monitoring dashboards to identify missing, delayed, malformed, or duplicate events, and proactively address anomalies.
Overview
We need two Snowflake Administrators to lead the development and administration of Snowflake as a core enterprise data platform. These experts will help refine the architecture (where applicable), build high-performance data solutions, manage secure access, and optimize the platform to support scalable analytics and reporting operations across the Cybersecurity Snowflake instance.
- Azure Data Explorer Administrator
Work Required
- Review and validate the Azure Data Explorer (ADX) architecture to ensure scalability, resiliency, and performance. Recommend and implement approved changes to cluster sizing, partitioning strategies, and cache policies.
- Integrate data pipelines such as Vector, Event Hubs, Azure Blob, Cribl, and NiFi, ensuring high throughput and fault tolerance.
- Develop and maintain Kusto Query Language (KQL) functions, materialized views, and time-series optimizations to support advanced querying and SIEM use cases.
- Ensure all data ingestion flows are monitored end-to-end, with alerting and logging for failures, latency issues, or schema mismatches.
- Build and maintain data quality monitoring dashboards to identify missing, delayed, malformed, or duplicate events, and proactively address anomalies.
- Implement and document data normalization practices, including alignment with schema standards such as OCSF where applicable.
- Configure and maintain role-based access control (RBAC), and ensure compliance with corporate data governance and security standards.
- Provide cost visibility and optimization strategies, including usage tracking, retention tuning, and query performance analysis.
Overview
We need two Azure Data Explorer Administrators to ensure ADX is deployed, configured, and optimized as the core log analytics and SIEM data platform. These individuals will be responsible for implementing and tuning ingestion pipelines from multiple sources, optimizing data structures and queries for performance, and establishing robust monitoring for ingestion failures, data anomalies, and operational health. Their expertise will be critical in ensuring the reliability, scalability, and security of ADX in support of a modern, cloud-native SIEM modernization initiative.
- Observability Platform Engineer
Work Required
- Lead the architecture and implementation of a comprehensive observability strategy across the entire SIEM modernization ecosystem, spanning data pipeline layers (Cribl, Vector, NiFi), event transport (Event Hubs), intermediate storage (Blob), and multiple downstream platforms (Splunk, Snowflake, ADX, Log Analytics, Anvilogic).
- Design and build end-to-end telemetry and traceability for data events as they move across platforms, enabling real-time visibility into ingestion, transformation, routing, and storage processes.
- Develop and maintain dashboards and alerting mechanisms to detect:
- Faults and failures (e.g., dropped messages, ingestion lags, retry loops)
- Latency or throughput bottlenecks across pipelines
- Schema mismatches or format errors
- Duplicate, delayed, or missing data
- Data quality anomalies at point of ingestion and final storage
- Instrument each pipeline component (e.g., Cribl workers, Vector agents, NiFi processors) with health and performance metrics using native exporters, APIs, or custom collectors.
- Ensure observability tooling is in place for Azure Event Hubs, including partition health, consumer group lag, and throttling events.
- Monitor Blob storage utilization and access patterns to identify ingest failures, access permission issues, or object lifecycle gaps.
- Implement and enforce correlation IDs or tracing metadata to follow data across systems and detect where in the pipeline an issue originates.
- Integrate monitoring solutions with Grafana, Azure Monitor, and Power BI to support multiple stakeholder needs (technical, operational, and executive-level views).
- Partner closely with Security Engineering, Platform Engineering, and Data Engineering to ensure observability insights are actionable and result in measurable improvements.
- Automate reporting of SLO/SLA adherence for pipeline uptime, data integrity, and ingestion latency.
- Design alert routing and severity classification, ensuring appropriate escalation workflows via systems such as PagerDuty, ServiceNow, or Microsoft Teams.
Overview
We require three Senior Data Engineers to build and operationalize observability capabilities across the SIEM ecosystem. These resources will lead efforts in designing integrated monitoring solutions for tools such as Cribl, Vector, Splunk, Snowflake, ADX, and Log Analytics. Their work will ensure continuous visibility into system health, enabling proactive fault detection and performance management. These resources will leverage Grafana, Power BI, or both for dashboarding.
- Program Manager
Work Required
- Own and drive the creation, standardization, and maintenance of program-level documentation, ensuring that operational processes, workflows, standards, and procedures are comprehensive, up to date, and centrally accessible.
- Establish and enforce process governance across the SIEM modernization effort, identifying gaps and proactively implementing process improvements to ensure operational readiness and program sustainability.
- Ensure effective usage of operational ticketing systems (e.g., JIRA), including configuration, reporting, and workflows that align with the broader delivery and support structure.
- Collaborate with technical leads, security SMEs, and delivery stakeholders to gather knowledge and translate it into scalable documentation such as runbooks, intake processes, decision logs, or policy artifacts.
- Identify policy gaps that impact the SIEM modernization effort and lead initiatives to update or draft new policies, whether TMO-level or program-specific, driving consensus and managing approvals.
- Design and build a SharePoint-based Center of Excellence (CoE) that serves as the centralized hub for knowledge management, containing resources such as services offered, intake forms, operational FAQs, SOPs, SLAs, and policy references.
- Develop and manage end-user enablement materials, including training guides, how-to articles, and FAQ documentation, to support adoption of new processes, platforms, and services across the security organization.
- Proactively support cross-functional program alignment by collaborating with engineering, platform, and security leadership to ensure consistent messaging, shared priorities, and an integrated roadmap.
- Operate with a technical mindset, capable of understanding the nuances of security tooling, data pipelines, and SIEM architecture, to effectively bridge communication between business stakeholders and technical teams.
- Drive execution and accountability by owning status tracking, risk identification, and mitigation strategies, ensuring consistent and measurable program progress.
Overview
We are seeking two highly capable Program Managers to lead operational alignment and program governance for the SIEM Modernization initiative. These individuals will go beyond traditional project coordination roles by driving the creation of scalable processes, managing operational documentation, and ensuring policy and governance frameworks are in place across the initiative. The ideal candidates are technically fluent, proactive, and hands-on, able to synthesize input from engineers, architects, and security SMEs into actionable artifacts and repeatable processes. Their contributions will directly enable consistent execution, improved operational maturity, and sustained success of the SIEM Modernization effort.