Principal Data Engineer at Amgen - Redefine What's Possible
Career Category: Information Systems
Job Description
Imagine With Us:
At Amgen, our mission—to serve patients—drives everything we do. As a global leader in biotechnology, we are innovators collaborating across the world to deliver groundbreaking products to over 10 million patients. Join us for a career that you can be proud of.
Your Role: Principal Data Engineer
In this critical role, you will be a key member of the Coverage & Pricing Intelligence (CPI) product team. The team focuses on improving deal modeling and forecasting technology, and your work will help increase revenue opportunities for Amgen's products in the U.S. market and beyond.
Responsibilities:
Data Infrastructure Development:
- Lead the design, prototype, build, testing, implementation, and DevOps processes for Coverage & Pricing's Deal Modeling, Forecasting, and Analytics Products on GCO's AWS Data Lake, Databricks, Anaplan, and Tableau platforms.
- Provide strategic leadership in Anaplan system architecture design.
- Develop and maintain data warehousing solutions to support analytics and reporting.
- Ensure the robustness, security, and optimization of data infrastructure.
Technical Leadership:
- Provide guidance and mentorship to junior data engineers.
- Lead architectural reviews and ensure best practices in data engineering.
- Collaborate with product owners, data scientists, analysts, and business stakeholders to meet data needs and deliver solutions.
- Document data solutions and develop and maintain technical specifications.
Collaboration and Mentorship:
- Work with multi-functional teams to develop architectural strategies and evaluate emerging technologies.
- Facilitate technical discussions and decision-making processes.
- Support the technology stack of Commercial's Coverage, Contracting, and Pricing products on standard platforms (Model N, SAP, Salesforce, GDNA, Anaplan, AWS Redshift/Alteryx/Tableau).
Data Management:
- Design and maintain scalable ETL pipelines using the platforms described above.
- Ensure data quality, consistency, and reliability through effective data governance practices.
- Monitor and optimize data systems for performance, scalability, and security.
- Apply DevOps practices and build CI/CD pipelines.
Innovation and Improvement:
- Stay current with industry trends and advancements.
- Identify opportunities for process improvements and drive initiatives to enhance development lifecycle efficiency.
Qualifications:
Basic Qualifications (one of the following):
- Doctorate degree and 2 years of experience in Computer Science, Data Engineering, or a related field
- Master's degree and 4 years of experience in Computer Science, Data Engineering, or a related field
- Bachelor's degree and 6 years of experience in Computer Science, Data Engineering, or a related field
- Associate degree and 10 years of experience in Computer Science, Data Engineering, or a related field
- High school diploma / GED and 12 years of experience in Computer Science, Data Engineering, or a related field
Preferred Qualifications:
- Over 5 years of experience in data engineering, focusing on large-scale data systems.
- Over 3 years of experience in designing and implementing complex data pipelines and ETL processes.
- 4+ years of experience in data warehousing, data modeling, and database design using Databricks and the AWS/Azure stack.
- Proficiency in SQL, Python, R, and other programming languages.
- Expertise in big data technologies (e.g., Hadoop, Spark, Kafka).
- Experience with cloud data platforms (e.g., AWS, Azure, Google Cloud).
- Solid understanding of data visualization tools (e.g., Tableau, Power BI).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
- Experience with Agile development and DevOps tooling (Jenkins, JIRA, GitHub).
- Experience with machine learning and analytics frameworks.
- Certification in Agile or Scaled Agile Framework (SAFe).
What You Can Expect from Us: