Join Sanofi as a High Performance Computing Expert
About the Digital R&D Data Products Team
We are an innovative global healthcare company with one purpose: we chase the miracles of science to improve people's lives. Our team, spread across 100 countries, is dedicated to transforming the practice of medicine by turning the impossible into the possible. We offer life-changing treatment options and life-saving vaccine protection to millions of people globally, with sustainability and social responsibility at the heart of our ambitions.
Sanofi has recently launched an ambitious digital transformation program. The Digital R&D Data Product team is a vital part of this journey, accelerating the digitalization of the R&D data landscape by building a state-of-the-art Data Fabric and delivering AI-powered Data Products across R&D functions.
Why Grow with Us?
In this role, you will develop globally scalable solutions crucial for Sanofi's Digital R&D Data products, empowering AI and ML initiatives to generate value across the R&D Value Chain. Specifically, you will:
- Participate in business research, including surveys, workshops, and user interviews, to explore new opportunities.
- Collaborate closely with interdisciplinary teams to understand computational requirements and design tailored HPC solutions.
- Architect, deploy, and manage HPC clusters in Linux/Unix environments, both on-premises and in the AWS Cloud.
- Ensure efficient workload distribution using job schedulers and other workload management tools.
- Provision, configure, and tune servers for optimal resource utilization and scalability.
- Decompose user stories into tasks, estimate story points, and maintain timely updates in Jira.
- Develop and maintain robust scripts to automate tasks and streamline workflows.
- Integrate and optimize scientific applications within the HPC environment, focusing on performance and resource allocation.
- Use DevOps tools and version control systems to ensure efficient, collaborative development practices.
- Write required documentation and follow the validation process with QA/quality experts.
- Support QA during SIT/UAT phases, escalating issues, identifying risks, and flagging decisions that need to be made.
- Ensure software meets all quality, security, and extensibility requirements.
- Conduct peer reviews to ensure the quality, consistency, and rigor of production-level solutions.
- Actively contribute to the Digital HPC community and define leading practices and frameworks.
- Stay updated on the company's standards, industry practices, and emerging technologies.
About You
Key Functional Requirements & Qualifications:
- Experience working with cross-functional teams to solve complex computing problems.
- Ability to learn new data and computing technologies quickly.
- Ability to manage multiple priorities in a fast-paced, constantly evolving environment.
- Strong technical analysis and problem-solving skills related to data and technology solutions.
- Excellent written, verbal, and interpersonal skills for effective communication with peers and leaders.
- Pragmatic and capable of solving complex issues with technical intuition and attention to detail.
- Service-oriented, flexible, and approachable team player.
- Fluent in English; other languages are a plus.
- Experience with pharmaceutical/healthcare industry business processes is a plus.
Key Technical Requirements & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in architecting, deploying, and managing HPC clusters for optimal performance.
- Familiarity with job schedulers and workload management systems (e.g., Sun Grid Engine, SLURM, LSF).
- AWS Cloud certification, demonstrating expertise in cloud resource utilization.
- Proficiency in scripting languages such as Python, R, and Bash for automation and tool development.
- Strong understanding of managing, optimizing, and troubleshooting scientific applications.
- Experience with DevOps tools and version control systems (e.g., Git/GitHub).
- Solid grasp of storage protocols and network fundamentals (e.g., TCP/IP, DNS, firewalls, VLANs, NAS/SAN, NFS/CIFS, AWS EFS/Lustre, or HDFS).