What We Offer:
Career Development
Competitive Compensation and Benefits
Pay Transparency
Global Opportunities
Learn More Here: https://www.dematic.com/en-us/about/careers/what-we-offer
Dematic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
The base pay range for this role is estimated to be $65,000.00 - $140,000.00 at the time of posting. Final compensation will be determined by various factors such as work location, education, experience, knowledge and skills.
#LI-DP1
Tasks and Qualifications:
What We Are Looking For:
Responsibilities:
Design, develop, and maintain scalable and efficient data pipelines to extract, transform, and load (ETL) data from various sources into data lakes and data warehouses.
Design and develop microservices to support the data platform.
Collaborate with data scientists, analysts, and cross-functional teams to design data models, database schemas, and data storage solutions.
Implement data integration and data quality processes to ensure the accuracy and reliability of data for analytics and reporting.
Optimize data storage, processing, and querying performance for large-scale datasets.
Enable advanced analytics and machine learning capabilities on the data platform.
Continuously monitor and improve data quality and data governance practices.
Stay up to date with the latest data engineering trends, technologies, and best practices.
Requirements:
Bachelor’s degree in Computer Science, Engineering, or a related field.
5+ years of proven experience in data engineering, data warehousing, and ETL processes.
Proficiency in data engineering tools and technologies such as SQL, Python, Spark, Hadoop, Airflow, Apache Kafka, and Presto.
Solid experience with table formats such as Apache Iceberg or Delta Lake.
Design and development experience with batch and real-time streaming infrastructure and workloads.
Solid experience with data lineage, data quality and data observability.
Solid experience designing and developing microservices and distributed architecture.
Hands-on experience with cloud-based data platforms (e.g., AWS, Azure, GCP), preferably GCP, and with data lakehouse architectures.
Strong experience with container technologies such as Docker and Kubernetes.
Strong understanding of data modeling, data architecture, and data governance principles.
Strong experience with DataOps principles and test automation.
Familiarity with data processing and querying using distributed systems and NoSQL databases.
Ability to optimize and tune data processing and storage for performance and cost-efficiency.
Excellent problem-solving skills and the ability to work on complex data engineering challenges.
Strong communication and collaboration skills to work effectively with cross-functional teams.
Previous experience mentoring and guiding junior data engineers is a plus.
Relevant certifications in data engineering or cloud technologies are desirable.
Nice to Have:
Experience working in a Data Mesh architecture.
Supply Chain domain experience.