Airbnb was born in 2007 when two hosts welcomed three guests to their San Francisco home, and has since grown to over 4 million hosts who have welcomed more than 1 billion guest arrivals in almost every country across the globe. Every day, hosts offer unique stays and experiences that make it possible for guests to connect with communities in a more authentic way.
About the team
Analytics Engineers build on the data foundation. We are looking for someone with expertise in metric development, data modeling, SQL, Python, and large-scale distributed data processing frameworks such as Presto or Spark. Using these tools, together with first-class internal data tooling, you will transform data warehouse tables into the key data artifacts (for instance, metrics and dashboards) that power high-impact analytic use cases and serve downstream data consumers. As an Analytics Engineer, you will sit at the intersection of data science, product analytics, and data engineering, and collaborate to deliver high-impact outcomes. Data can transform how a company operates, and high data quality and strong tooling are the most important means to that transformation. You will make that happen.
Responsibilities:
Understand data needs by partnering with fellow Analytics Engineers, Data Scientists, Data Engineers, and business partners.
Plan, build, and launch efficient and reliable data models and pipelines in partnership with Data Engineering.
Design and implement metrics and dimensions to enable analysis and predictive modeling.
Design and develop data resources to facilitate self-serve data consumption.
Build tools for auditing, error logging, and data table validation (see the sketch after this list).
Define logging requirements in collaboration with Data Engineering.
Establish and share best practices for developing metrics, dimensions, and data models for analytics use.
Develop and improve data tooling in partnership with Data Platform teams.
Act as a technical expert on data model usage.
Review and approve code changes to certified metric and dimension definitions.
Communicate data model updates and changes across the organization.
Ensure data models are fully documented and that all metrics and dimensions have clear descriptions and metadata.
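To make the auditing and validation responsibility concrete, here is a minimal sketch of a data table check in Python. It uses sqlite3 so the example is self-contained; in practice such checks would run against a warehouse engine like Presto or Spark, and all table and column names below are hypothetical, not Airbnb internals.

```python
# Minimal sketch of a data table audit: row-count and null checks with
# error logging. sqlite3 keeps it runnable; names are hypothetical.
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("table_audit")

def audit_table(conn, table, not_null_cols, min_rows=1):
    """Run basic row-count and null checks; log each failure."""
    failures = []
    rows = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    if rows < min_rows:
        failures.append(f"{table}: expected >= {min_rows} rows, found {rows}")
    for col in not_null_cols:
        nulls = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        if nulls:
            failures.append(f"{table}.{col}: {nulls} NULL values")
    for failure in failures:
        log.error(failure)
    return not failures

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE bookings (id INTEGER, guest_id INTEGER)")
    conn.execute("INSERT INTO bookings VALUES (1, 10), (2, NULL)")
    ok = audit_table(conn, "bookings", ["id", "guest_id"])
    log.info("audit passed" if ok else "audit failed")
```

The pattern scales naturally: each check appends a failure message, and the audit passes only if the list stays empty.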
Minimum qualifications:
Passion for high data quality and for scaling data science work.
Over 6 years of relevant industry experience.
Expertise in SQL and in optimizing queries on distributed systems (e.g., Spark, Presto, Hive).
Experience with schema design and dimensional data modeling (see the sketch after this list).
Proficiency in at least one programming language for data analysis (e.g., Python, R).
Proven ability to thrive in both collaborative and independent work settings.
Detail-oriented, with an eagerness to learn new skills and tools.
Strong influence and relationship management skills.
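As an illustration of dimensional data modeling, the sketch below builds a tiny star schema (one fact table, one dimension table) and computes a metric sliced by a dimension. It again uses sqlite3 for self-containment; the schema and names are hypothetical and far simpler than a production warehouse model.

```python
# Minimal sketch of a star schema and a metric query over it.
# Names are hypothetical; sqlite3 keeps the example runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: one row per listing, descriptive attributes only.
    CREATE TABLE dim_listing (
        listing_key INTEGER PRIMARY KEY,
        market TEXT,
        room_type TEXT
    );
    -- Fact: one row per booking, a foreign key plus additive measures.
    CREATE TABLE fct_booking (
        booking_id INTEGER PRIMARY KEY,
        listing_key INTEGER REFERENCES dim_listing(listing_key),
        booked_date TEXT,
        nights INTEGER,
        gross_revenue REAL
    );
    INSERT INTO dim_listing VALUES (1, 'San Francisco', 'Entire home'),
                                   (2, 'Paris', 'Private room');
    INSERT INTO fct_booking VALUES (100, 1, '2024-01-05', 3, 450.0),
                                   (101, 2, '2024-01-06', 2, 180.0);
""")

# A metric (nights booked, revenue) sliced by a dimension (market).
for row in conn.execute("""
    SELECT d.market, SUM(f.nights) AS nights, SUM(f.gross_revenue) AS revenue
    FROM fct_booking f
    JOIN dim_listing d USING (listing_key)
    GROUP BY d.market
"""):
    print(row)
```

Keeping additive measures on the fact table and descriptive attributes on dimension tables is what lets one model serve many metric-by-dimension cuts.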
Preferred qualifications:
Experience with an ETL framework such as Airflow (see the sketch after this list).
Experience with Python, Scala, and Superset.
Effective storytelling and communication skills: the ability to translate analytical results into clear, concise, and persuasive insights and recommendations for technical and non-technical audiences.
An eye for design when it comes to dashboards and visualization tools.
Familiarity with experimentation and machine learning techniques.
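For the Airflow preference above, here is a minimal sketch of a daily ETL pipeline expressed as an Airflow DAG (Airflow 2.4+ style). The task bodies are stubbed out, and the DAG and table names are hypothetical.

```python
# Minimal sketch of a daily ETL pipeline as an Airflow DAG (2.4+ style).
# Task bodies are placeholders; names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_bookings(**context):
    # Placeholder: pull raw booking events for the run date.
    print("extracting for", context["ds"])

def build_fct_booking(**context):
    # Placeholder: transform raw events into the fct_booking model.
    print("building fct_booking for", context["ds"])

with DAG(
    dag_id="bookings_daily",
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_bookings",
                             python_callable=extract_bookings)
    transform = PythonOperator(task_id="build_fct_booking",
                               python_callable=build_fct_booking)
    extract >> transform  # transform runs only after extraction succeeds
```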