Senior Cloud Governance Engineer Job at Openkyber, Georgia

  • Openkyber
  • Georgia

Job Description

W2 only.

Role: Senior Data Pipeline Engineer (long-term, remote, W2 only)
Top 3 skills: Python | Databricks | Spark
Years of experience: senior-level resource

Tech Stack
  • Lives in Databricks/Spark
  • Understands OTEL and telemetry schemas
  • Has built log/metric/trace pipelines
  • Has experience with Cribl/Vector/Kafka
  • Has worked with security/SIEM/SOC data
  • Can automate with Terraform + CI/CD
  • Can partner with security, cloud, and SRE teams
  • Is comfortable designing high-scale architectures

Role in the Manager's Words
The skill set is laid out in the job description. Open to someone with stronger depth in any one area; Python is the non-negotiable. Some experience in distributed systems and in data modeling is desirable. Core data engineering, but at a senior level: executes tasks with minimal oversight.

POSITION SUMMARY
The client is seeking a highly skilled and driven individual contributor to join our enterprise observability and security engineering team. This role focuses on building, scaling, and operationalizing the enterprise Observability Lakehouse that powers threat detection, incident response, and platform visibility across hybrid cloud environments. The ideal candidate will demonstrate deep expertise in Databricks, large-scale telemetry processing, and OTEL-aligned observability architectures. The position requires strong engineering rigor, the ability to design high-volume log, metric, and trace pipelines, and a passion for improving security and reliability through data. A critical aspect of the role is partnering with Security Engineering, SRE, and Cloud teams to ensure telemetry is complete, trustworthy, and actionable.
What we expect of you
  • Build, scale, and maintain enterprise-grade log, metric, and trace pipelines using Databricks, cloud data lakes, and distributed data processing engines.
  • Implement ingestion and transformation workflows using Cribl, Vector, GitHub Actions, Jenkins, or similar technologies.
  • Design and expand an Observability Lakehouse aligned to OpenTelemetry (OTEL) data models and standards.
  • Normalize and model high-volume security and observability data for detection, forensics, and operational intelligence use cases.
  • Develop automated ETL/ELT frameworks, Delta Lake architectures, and data quality checks for unstructured and semi-structured telemetry.
  • Collaborate closely with Security Engineering, SRE, Cloud, and SOC teams to enhance enterprise visibility and improve detection fidelity.
  • Build CI/CD workflows and reusable IaC-driven patterns for pipeline deployment and automation.
  • Troubleshoot performance bottlenecks and drive continuous improvements in reliability, latency, and cost efficiency.
  • Contribute to team knowledge sharing and engineering standards focused on observability, security, and reliability.

REQUIRED QUALIFICATIONS
  • 5+ years of experience building or supporting log, metric, or trace pipelines aligned to OTEL or similar telemetry standards, in a data, security data, or observability engineering role.
  • 5+ years of hands-on experience with Databricks, Spark, or large-scale distributed data processing systems.
  • 5+ years of experience with cloud services across AWS, Azure, or Google Cloud Platform (storage, eventing, compute, or equivalent).
  • 5+ years of experience with SQL and Python in production data environments.

PREFERRED QUALIFICATIONS
  • Experience with Cribl, Vector, Kafka, or similar high-volume ingestion technologies.
  • Background supporting SIEM/SOAR, detection engineering, or threat analytics platforms.
  • Familiarity with Delta Lake, Unity Catalog, metadata management, and lineage tooling.
  • Understanding of enterprise observability platforms (Splunk, Datadog, Elastic, etc.).
  • Knowledge of security governance, auditing, access controls, and sensitive-data handling.
  • Experience with IaC tooling (Terraform, ARM/Bicep, CloudFormation).
  • Familiarity with cloud orchestration technologies (Azure Functions, AWS Lambda, Google Cloud Functions, Logic Apps, Kubernetes-based platforms).
  • Strong communication skills for both deeply technical and executive audiences.
  • Passion for observability, security, continuous learning, and platform-level engineering.
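A core responsibility above is normalizing raw telemetry into an OTEL-aligned schema before it lands in the lakehouse. As a rough illustration of that kind of flattening step (not taken from the client's codebase; the field names and the `normalize_log_record` helper are hypothetical, loosely following the OpenTelemetry logs data model), it might look like:

```python
# Illustrative sketch only: flatten a nested OTEL-style log record
# (resource attributes + log body) into a single flat row suitable for
# writing to a Delta table. Field names loosely follow the OpenTelemetry
# logs data model; this is not the client's actual pipeline code.

def normalize_log_record(record: dict) -> dict:
    """Flatten nested resource/log structure into one flat dict row."""
    resource = record.get("resource", {}).get("attributes", {})
    log = record.get("logRecord", {})
    return {
        "service_name": resource.get("service.name", "unknown"),
        "host_name": resource.get("host.name", "unknown"),
        "timestamp_unix_nano": log.get("timeUnixNano", 0),
        "severity_text": log.get("severityText", "UNSPECIFIED"),
        "body": log.get("body", ""),
    }

raw = {
    "resource": {"attributes": {"service.name": "auth-api",
                                "host.name": "ip-10-0-0-7"}},
    "logRecord": {
        "timeUnixNano": 1700000000000000000,
        "severityText": "ERROR",
        "body": "login failed for user",
    },
}

row = normalize_log_record(raw)
print(row["service_name"], row["severity_text"])  # auth-api ERROR
```

In a Databricks/Spark pipeline the same transformation would typically run as a UDF or a set of column expressions over a streaming DataFrame rather than plain-Python dict handling; the sketch only shows the shape of the normalization.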

Job Tags

Remote work
