Specialist Solutions Architect - Data Engineering & Google Cloud

  • Full Time

FEQ424R303

This role can be remote. 

As a Specialist Solutions Architect (SSA) - Data Engineering, you will guide customers in building big data solutions on Databricks on Google Cloud (GCP) that span a wide range of use cases. This is a customer-facing role in which you will collaborate with and support Solution Architects; it requires hands-on production experience with Apache Spark™ and proficiency with other data technologies. SSAs help customers design and successfully implement workloads, aligning their technical roadmap to expand their use of the Databricks Lakehouse Platform. As a deep subject matter expert reporting to the Specialist Field Engineering Manager, you will further develop your technical skills through mentorship, learning, and internal training programs, and you will establish expertise in a specialty area such as streaming, performance tuning, or a particular industry.
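
For illustration only (not part of the posting): a minimal sketch of the kind of hands-on Spark work described above, a small PySpark batch job that rolls up a Delta table on Databricks. The table and column names (sales_raw, sales_daily, order_ts, amount) are hypothetical placeholders.

```python
# Minimal sketch of a PySpark batch rollup on Databricks (illustrative only).
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

daily = (
    spark.read.table("sales_raw")                        # hypothetical source table
    .groupBy(F.to_date("order_ts").alias("order_date"))  # roll up to one row per day
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the aggregate back as a managed Delta table.
daily.write.format("delta").mode("overwrite").saveAsTable("sales_daily")
```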

Impact you will make:

  • Provide technical leadership to guide strategic customers toward successful implementations of big data projects, from architectural design to data engineering to model deployment.
  • Design production-level data pipelines, including end-to-end load testing and performance optimization (a brief, hedged sketch follows this list).
  • Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows.
  • Support Solution Architects with more advanced aspects of the technical sale, including custom proof of concept content, estimating workload sizing, and custom architectures.
  • Provide tutorials and training to improve community adoption (including hackathons and conference presentations).
  • Contribute to the Databricks Community.
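
As a hedged illustration of the load testing and optimization bullet above: one simple approach is to replay a pipeline at increasing synthetic volumes, timing each run and inspecting the physical plan. The output paths and 1x/10x/100x load steps below are assumptions made for this sketch, not anything from the posting.

```python
# Illustrative load test: run the same aggregation at growing synthetic
# volumes, print the physical plan, and time each write. Paths and the
# load steps are assumptions made for this sketch.
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline-load-test").getOrCreate()

for scale in (1, 10, 100):                  # synthetic load steps
    df = spark.range(scale * 1_000_000).withColumnRenamed("id", "key")
    result = df.groupBy((df.key % 1000).alias("bucket")).count()
    result.explain()                        # inspect the physical plan for regressions
    start = time.time()
    result.write.mode("overwrite").parquet(f"/tmp/load_test/{scale}x")
    print(f"{scale}x load: {time.time() - start:.1f}s")
```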

What we look for:

  • 5+ years of experience in a technical role with expertise in at least one of the following:
    • Software Engineering/Data Engineering: data ingestion; streaming technologies such as Spark Streaming and Kafka; and performance tuning, troubleshooting, and debugging of Spark or other big data solutions (see the streaming sketch after this list).
    • Data Applications Engineering: building data-driven use cases such as risk modeling, fraud detection, and customer lifetime value.
  • Significant experience constructing big data pipelines in Google Cloud.
  • Experience maintaining and expanding production data systems to evolve with complex needs.
  • Deep specialty expertise in at least one of the following areas:
    • Experience scaling big data workloads so they are performant and cost-effective.
    • Experience with development tools for CI/CD (e.g., Jenkins), unit and integration testing, automation and orchestration, REST APIs, BI tools, and SQL interfaces.
    • Experience designing data solutions on cloud infrastructure and services using best practices in cloud security and networking.
    • Experience implementing industry-specific data analytics use cases.
  • Production programming experience in SQL and Python, Scala, or Java.
  • 2 years of professional experience with big data technologies (e.g., Spark, Hadoop, Kafka) and architectures.
  • 2 years of customer-facing experience in a pre-sales or post-sales role.
  • Ability to meet expectations for technical training and role-specific outcomes within six months of hire.
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent work experience.
  • Willingness to travel up to 30% when necessary.
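
To make the streaming requirement above concrete, here is a minimal, hedged sketch of Spark Structured Streaming reading from Kafka and appending to a Delta table. The broker address, topic name, and paths are hypothetical; a production job would add schema parsing, monitoring, and error handling.

```python
# Minimal Structured Streaming sketch: Kafka in, Delta out (illustrative only).
# Broker, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # required for fault tolerance
    .outputMode("append")
    .start("/tmp/tables/events")                              # hypothetical output path
)
```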

Benefits

  • Medical, Dental, and Vision coverage
  • 401(k) Plan
  • FSA, HSA, and Commuter Benefit Plans
  • Equity Awards
  • Flexible Time Off
  • Paid Parental Leave
  • Family Planning benefits
  • Fitness Reimbursement
  • Annual Career Development Fund
  • Reimbursement for Home Office/Work Headphones 
  • Employee Assistance Program (EAP) 
  • Business Travel Accident Insurance
  • Mental Wellness Resources

Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on these factors, Databricks utilizes the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed above. For more information about which range applies to your location, visit the Databricks pay transparency page.