Senior Principal AI Engineer (f/m/d)

Preferred Location: Munich, Germany
Start Date: As soon as possible
Duration: Indefinite

About NXP Semiconductors

NXP Semiconductors N.V. (NASDAQ: NXPI) is a global leader in High Performance Mixed Signal and Standard Product solutions, renowned for its expertise in RF, Analog, Power Management, Interface, Security, and Digital Processing. Our innovations cater to a broad range of sectors, including automotive, identification, wireless infrastructure, lighting, industrial, mobile, consumer, and computing applications. With a presence in over 35 countries and a workforce of over 45,000 employees, we report an annual revenue exceeding $10 billion.

Department Overview

As part of our Chief Technology Office (CTO) organization, you'll join a team of specialists at NXP's AI Competence Center (AICC). Our CTO teams are the cornerstone of product innovation, providing cutting-edge solutions for products that sense, think, connect, and act. These teams collaborate closely with internal and external stakeholders to achieve customer goals, fostering breakthroughs in Edge AI technology.

Role and Responsibilities

Embedded in NXP’s global AI activities, this role partners with business lines to develop and enable disruptive Edge AI solutions. You will join a community working at, and pushing beyond, the state of the art. Your key responsibilities will include:

  • Supporting AI/ML use cases from NXP business lines to maximize task performance and minimize resource cost, while deriving future hardware requirements and necessary software enablement features.
  • Identifying, developing, evaluating, and integrating methods for neural network optimization and deployment, including pruning, knowledge distillation, mixed precision quantization, and compression on resource-constrained inference nodes using NXP-IP and software environments.
  • Incorporating these methods into a larger NXP framework, including data-free optimization of neural networks.
  • Integrating the above methods and neural networks into larger software applications, open-source frameworks, and compelling demonstrators.
  • Keeping track of state-of-the-art advancements in the relevant field.
  • Interacting with NXP application partners in business lines.

Your Profile and Required Competencies

We're seeking candidates with the following qualifications and competencies:

Qualifications

  • University degree, PhD preferred, in Computer Science, Electrical Engineering, or another relevant technical discipline, with significant experience in Machine Learning, Edge AI, and Embedded Systems.
  • 12+ years of experience in software engineering with standard development tools (e.g., Git, Bitbucket), unit test-driven development, and CI/CD platforms.
  • Proven track record of multiple successful AI/ML product releases and their maintenance.

Core Competencies

  • Strong experience with embedded processors, software, and machine learning accelerators.
  • Extensive experience with embedded software architectures, build systems, and version control systems.
  • Comprehensive knowledge of GNU/Linux operating systems, embedded systems, and development boards.

Advanced Skills

  • Flexibility in working with AI frameworks (TensorFlow, PyTorch), preferably through Python and C++ interfaces.
  • Familiarity with setting up and maintaining ML development environments (Jupyter, TensorBoard, ClearML, Docker, etc.).
  • Understanding of AI deployment toolchains, portability, and inference engines (CUDA, TensorRT, TFLite, ONNXRT, etc.).
  • Integration of external software libraries and components, including extending build systems.
  • Solid programming experience in Python, C, C++, and scripting languages on Linux systems.

Additional Competencies

  • Experience with Deep Learning compiler frameworks for just-in-time and ahead-of-time compilation (e.g., MLIR, TVM, ONNXRT, TFLite, TFLite Micro, microTVM).
  • ML-DevOps experience.
  • Experience with distributed compute frameworks (e.g., Ray) and cluster workload managers (e.g., Slurm).
  • Knowledge