Sr. Data Engineer

Join Cargill as a Sr. Data Engineer and Make a Global Impact

At Cargill, our extensive size and scale provide us with the unique ability to positively impact the world. We are committed to nourishing the world in a safe, responsible, and sustainable manner. As a family-owned company, we deliver food, ingredients, agricultural solutions, and industrial products essential for everyday life. We connect farmers with markets to help them thrive and link customers with ingredients to create meals people love. From eggs to edible oils, salt to skincare, and feed to alternative fuels, our 160,000 colleagues across 70 countries produce essential products that touch billions of lives daily. Join us at Cargill to achieve your higher purpose.

Job Purpose and Impact

The Sr. Data Engineer will design, build, and operate high-performance, data-centric solutions using advanced big data capabilities within our data platform environment. In this pivotal role, you will serve as an expert on data access pathways and techniques, collaborating with analysts within the functional data analytics team. Your responsibilities include designing data structures and pipelines and implementing data transformations, combinations, and aggregations.

Key Accountabilities

  • Work with businesses, application and process owners, and product teams to define requirements and design big data and analytics solutions.
  • Participate in decision-making processes related to architecting solutions.
  • Develop technical solutions using big data and cloud-based technologies, ensuring sustainability and robustness.
  • Perform data modeling, prepare data in databases for analytics tools, and configure and develop data pipelines to optimize data assets.
  • Provide technical support throughout all phases of the solution lifecycle.
  • Build prototypes to test new concepts, contributing ideas and code to improve core software infrastructure, patterns, and standards.
  • Drive the adoption of new technologies and methods within the functional data and analytics team and mentor junior data engineers.
  • Handle complex issues independently with minimal supervision, escalating only the most challenging problems as necessary.
  • Perform other duties as assigned.

Qualifications

Minimum Qualifications
  • Bachelor's degree in a related field or equivalent experience.
  • Minimum of four years of related work experience.
  • Advanced English skills, both oral and written.
Preferred Qualifications
  • Experience with data collection and ingestion tools such as AWS Glue, Kafka Connect, and Flink.
  • Experience managing large, heterogeneous datasets with tools like Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, and Kudu.
  • Experience with transformation and modeling tools, including SQL-based transformation frameworks, orchestration, and quality frameworks like dbt, Apache Nifi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, and Oozie.
  • Experience in Big Data environments using Hadoop and Spark.
  • Experience with cloud platforms such as AWS, GCP, or Azure.
  • Knowledge of streaming, stream integration, or middleware platforms such as Kafka, Flink, JMS, or Kinesis.
  • Strong programming skills in SQL, Python, R, Java, Scala, or equivalent languages.
  • Proficiency in engineering tooling such as Docker, Git, and container orchestration services.
  • Experience with DevOps models, including best practices for code management, continuous integration, and deployment strategies.
  • Understanding of data governance considerations, including quality, privacy, and security implications for data product development and consumption.