At Coinbase, our mission is to increase economic freedom around the world, and we couldn't achieve this without hiring the best people. We are a hardworking group of overachievers who focus on building the future of finance and Web3 for our users worldwide, whether they are trading, storing, staking or using crypto.
We look for a few things in all our hires, regardless of role or team. First, we desire candidates who will thrive in our culture where we default to trust, embrace feedback, and disrupt ourselves. Second, we expect everyone to commit to our mission-focused approach. Finally, we seek individuals who are keen to learn and live crypto and can handle the intensity of our sprint-and-recharge work culture. As a remote-first company, we aim to hire the best talent from around the world.
Are you ready to #LiveCrypto? Here's who we're looking for:
- You radiate positive energy and have an optimistic view of the future.
- You are a lifelong learner and want to become an expert in cutting-edge technologies like DeFi, NFTs, DAOs, and Web3.
- You value direct communication and active listening, and you see feedback and setbacks as opportunities for growth.
- You adapt smoothly to change, since crypto constantly evolves and so do our priorities.
- You possess a "can do" attitude and aren't afraid of owning a problem.
- You want to be part of a successful team and are open to being pushed out of your comfort zone.
The Data Engineering team builds reliable data sources and products for timely and accurate data-driven decision-making across the company. We're at the forefront of data science and business intelligence innovation. Our core offerings:
- Building and maintaining a foundational data layer that serves as the single source of truth across Coinbase.
- Designing and managing robust data pipelines to ensure data quality and timely delivery.
- Pioneering developer tools that bring automation to data science workflows.
- Delivering custom data products to empower users with self-serve capabilities.
Your duties will include data modeling, developing and optimizing data pipelines, integrating systems, building scalable systems, and collaborating across teams.
Job requirements:
- Data Modeling: A solid understanding of data modeling best practices.
- ETL/ELT Processes: Experience in developing ETL/ELT pipelines to process large datasets.
- Apache Airflow: Experience in building, deploying, and optimizing DAGs in Airflow (see the sketch after this list).
- Python and SQL: Proficiency in Python scripting, particularly for data manipulation and integration tasks, and a strong grasp of advanced SQL techniques.
- GitHub: Experience with version control, branching, and collaboration on GitHub.
- Data Visualization: Knowledge of tools like Superset, Looker, or Python visualization libraries (Matplotlib, Seaborn, Plotly, etc.).
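To give a concrete flavor of the Airflow work referenced above, here is a minimal sketch of a daily DAG, assuming Airflow 2.x. The DAG id, task name, and load_events() helper are hypothetical and only illustrate the kind of pipeline the role involves, not an actual Coinbase pipeline.

```python
# Minimal sketch of a daily Airflow DAG (assumes Airflow 2.x).
# Names below (daily_events_load, load_events) are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_events(**context):
    # Placeholder extract/load step; a real task would read from a source
    # system and write to the warehouse for the given execution date.
    print(f"Loading events for {context['ds']}")


with DAG(
    dag_id="daily_events_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    load = PythonOperator(
        task_id="load_events",
        python_callable=load_events,
    )
```

A production DAG would also parameterize connections, add data quality checks, and wire up alerting, but the structure above is the core pattern behind the Airflow and Python/SQL requirements.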
It would be nice if you have experience with marketing data pipelines, third-party tool integrations, and email management platforms; familiarity with Docker, Kubernetes, and cloud platforms like AWS or GCP; knowledge of fundamental DevOps practices; and data governance experience. Some knowledge of ML, AI, or NLP is welcome but not required.
Job ID: P52953
Pay Transparency Notice: Depending on your w