The Senior Databricks Developer will be responsible for implementing and maintaining solutions on the AWS Databricks platform. You will coordinate data requests from various teams and review and approve efficient methods to ingest, extract, transform, and maintain data in a multi-hop model. You will also mentor other developers and help expand the team's knowledge and expertise. You will operate in a fast-paced, high-volume processing environment where quality and attention to detail are essential.
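For orientation, a minimal PySpark sketch of one hop in such a model (raw "bronze" data cleansed into a curated "silver" Delta table) might look like the following; the table names, columns, and cleansing rules are illustrative placeholders, not specifics of this role:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("multi_hop_example").getOrCreate()

# Read the raw landing table (bronze layer); "bronze_events" is a placeholder name.
bronze = spark.read.table("bronze_events")

# Apply basic cleansing/conformance rules to produce the silver layer.
silver = (
    bronze
    .dropDuplicates(["event_id"])                      # de-duplicate on a key
    .filter(F.col("event_ts").isNotNull())             # drop malformed rows
    .withColumn("ingest_date", F.to_date("event_ts"))  # derive a partition column
)

# Persist as a Delta table so downstream (gold) hops consume validated data.
silver.write.format("delta").mode("overwrite").saveAsTable("silver_events")

Each hop persists to its own Delta table, so later layers always build on data that has already been validated.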
PRIMARY RESPONSIBILITIES
• Design and develop high-performance, secure Databricks solutions using Python, Spark, PySpark, Delta tables, UDFs, and Kafka.
• Create high-quality technical documentation, including data mappings, data process descriptions, and operational support guides.
• Translate business requirements into data model design and technical solutions.
• Develop data ingestion pipelines using Python, Spark, and PySpark to support near real-time and batch ingestion processes (see the streaming sketch after this list).
• Maintain data lake and pipeline processes, including troubleshooting issues, performance tuning, and improving data quality.
• Collaborate closely with technical leaders, product managers, and the reporting team to gather functional and system requirements.
• Perform effectively in a fast-paced, agile development environment.
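As a rough illustration of the near real-time half of the ingestion responsibility above, a Kafka-to-Delta pipeline in Spark Structured Streaming could be sketched as follows; the broker address, topic, and checkpoint path are placeholders, and the job assumes the Kafka connector package is available on the cluster:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_ingest_example").getOrCreate()

# Subscribe to a Kafka topic; records arrive as key/value byte pairs.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Keep the payload as a string; schema parsing/enforcement would follow here.
events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Append to a bronze Delta table; checkpointing lets the stream recover
# and avoids duplicate writes after a restart.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .outputMode("append")
    .toTable("bronze_events")
)
query.awaitTermination()

A batch variant of the same flow would use spark.read and DataFrame writes in place of the streaming calls.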
KNOWLEDGE AND SKILL REQUIREMENTS
• A bachelor's degree in Computer Science, Information Systems, or an equivalent field.
• Must have 8+ years of experience developing applications using Python, Spark, PySpark, Java, JUnit, Maven, and their ecosystems.
• Must have 4+ years of hands-on experience with AWS Databricks and related technologies such as MapReduce, Spark, Hive, Parquet, and Avro.
• Solid experience with the end-to-end implementation of DW/BI projects, especially data warehouse and data mart development.
• Extensive hands-on experience with the RDD, DataFrame, and Dataset operations of Spark 3.x (illustrated in the sketch after this list).
• Experience with the design and implementation of ETL/ELT framework for complex warehouses/marts.
• Knowledge of large datasets and experience with performance tuning and troubleshooting.
• AWS cloud analytics experience with Lambda, Athena, S3, EMR, Redshift, and Redshift Spectrum is a plus.
• Must have RDBMS experience: Microsoft SQL Server, Oracle, MySQL.
• Familiarity with the Linux OS.
• Understanding of data architecture, replication, and administration.
• Experience with real-time data ingestion using any streaming tool.
• Strong debugging skills to troubleshoot production issues.
• Comfortable working in a team environment.
• Hands-on experience with shell scripting, Java, and SQL.
• Ability to identify problems and effectively communicate solutions to peers and management.
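By way of illustration only, the RDD-versus-DataFrame fluency listed above amounts to being comfortable expressing the same job in either API; the data and names below are made up:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rdd_vs_dataframe").getOrCreate()
lines = ["a b a", "b c"]  # toy input

# RDD API: functional transformations on distributed collections.
rdd_counts = (
    spark.sparkContext.parallelize(lines)
    .flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)
)
print(sorted(rdd_counts.collect()))  # [('a', 2), ('b', 2), ('c', 1)]

# DataFrame API: declarative operations the Catalyst optimizer can plan.
df_counts = (
    spark.createDataFrame([(l,) for l in lines], ["line"])
    .select(F.explode(F.split("line", " ")).alias("word"))
    .groupBy("word")
    .count()
)
df_counts.show()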
Labcorp is proud to be an Equal Opportunity Employer:
As an EOE/AA employer, Labcorp advocates for diversity and inclusion in the workforce and does not tolerate any form of harassment or discrimination. Our employment decisions are based on the needs of our business and the qualifications of the individual, and we do not discriminate based on race, religion, color, national origin, gender (including pregnancy or other medical conditions/needs), family or parental status, marital, civil union or domestic partnership status, sexual orientation, gender identity, gender expression, personal appearance, age, veteran status, disability, genetic information, or any other legally protected characteristic. We encourage everyone to apply.
For more information about how we collect and store your personal data, please see our Privacy Statement.