Senior Data Engineer II

  • Full Time

Company Description

Redica Systems is a data analytics platform that helps life sciences companies improve their quality and keep pace with evolving regulations. Our proprietary processes transform one of the most comprehensive data sets in the industry, gathered from hundreds of health agencies and unique Freedom of Information Act (FOIA) sourcing, into meaningful answers and insights that reduce regulatory and compliance risk.

Established in 2010, Redica Systems provides services to over 200 clients in Pharma, BioPharma, MedTech, and Food and Cosmetics sectors, including 19 of the top 20 Pharma companies and 9 of the 10 leading MedTech firms. Although our headquarters are in Pleasanton, CA, we operate as a geographically distributed company. More information is available at redica.com.

Job Description

We are seeking a skilled Senior Data Engineer II to join our team as we continue to build a first-of-its-kind quality and regulatory intelligence (QRI) platform for the life sciences sector. The ideal candidate has experience leading and mentoring a team of developers while upholding high quality standards and remaining actively involved in coding.


Core Responsibilities
● Developing a complete understanding of the technical architecture and its sub-systems
● Acting as a leader in an Agile Scrum environment, focused on delivering sustainable, high-performance, scalable, and easily maintainable enterprise solutions
● Assisting engineering managers in prioritizing technical issues
● Actively influencing technical decisions in an area of specialty
● Recommending and validating different approaches to improve data reliability, efficiency, and quality
● Identifying optimal solutions for resolving data quality or consistency problems
● Ensuring the successful delivery of systems to the production environment and assisting operations and support teams in resolving production issues, as required
● Leading data acquisition from diverse sources, intelligent change monitoring, data mapping, transformation, and analysis
● Creating, testing, and maintaining architectures for data storage, databases, processing systems, and microservices
● Integrating several sub-systems or components to provide end-to-end solutions
● Integrating data pipeline with NLP/ML services

About you
● Tech Savvy: You consistently anticipate and adopt innovative techniques in business-building technology, staying current with data advancements and incorporating them into your work processes
● Manages Complexity: You're proficient at synthesizing solutions from complex information by identifying patterns and developing efficient strategies to solve data-centric problems effectively
● Decision Quality: You're known for making sound and timely decisions that propel organizational progress and uphold data integrity
● Collaborator: You're adept at engaging in collaborative problem-solving by leveraging diverse perspectives and devising innovative solutions to accomplish shared goals and data engineering initiatives
● Optimizes Work Processes: You actively look for opportunities to enhance and streamline current work processes for handling data pipelines, ETL (Extract, Transform, Load) processes, and data warehousing
● Drives Results: You're focused on continuously improving performance and surpassing expectations to contribute to overall success and meet data-centered deliverables
● Strategic Mindset: You consistently exhibit a strategic mindset by imagining future possibilities and successfully converting them into groundbreaking data strategies, contributing to the long-term success of the organization
● Engaged: You not only share our values but also possess the essential skills required to excel at Redica.

Qualifications

● 5+ years of senior or lead developer experience with a focus on technical mentorship, system/code architecture, and quality output
● Extensive experience designing and building data pipelines, data APIs, and ETL/ELT processes
● Extensive experience with data modeling and data warehouse concepts
● In-depth, hands-on experience in Python
● Practical experience setting up, configuring, and maintaining SQL and NoSQL databases (MySQL/MariaDB, PostgreSQL, MongoDB, Snowflake)
● Degree in Computer Science, Computer Engineering, or a similar technical field

Bonus Points
● Experience with the data engineering stack within AWS (S3, Lake Formation, Lambda, Fargate, Kinesis Data Streams/Data Firehose, DynamoDB, Neptune DB)
● Experience with event-driven data architectures
● Experience with the ELK stack (Elasticsearch, Logstash, Kibana)

Additional Information

All your information will be kept confidential in accordance with EEO guidelines.