About Cybersyn
Cybersyn is a new data-as-a-service (DaaS) company backed by Sequoia, Coatue, and Snowflake. Our mission is to make the world's economic data accessible to governments, businesses, and entrepreneurs, and to empower a new generation of decision makers. We acquire unique data assets (companies, royalties, data rights) and build sophisticated products on top of them, focused on identifying where consumers and businesses spend their money. Think of Cybersyn as a hybrid of an investment firm and a technology company centered on data: if we succeed, we will transform the traditional market intelligence industry. The prize is large: if we get it right, we have the chance to disrupt an industry worth $100Bs and build a SimCity for the real world.
We have already launched a large number of public datasets on the Snowflake Marketplace, all cleaned, reformatted, and made joinable.
About the role:
Cybersyn is looking for a Data Scientist to tackle the challenges that arise in modernizing the world's economic data. You will join a highly skilled team of fast-moving, product-focused data scientists and engineers who find creative solutions to hard statistical problems and advance our data product vision.
What you will do:
Build advanced data products that answer some of the most complex and intriguing questions about the economy. In practice, this means:
- Prototyping and building data processing pipelines and statistical models in Python/SQL/R that ultimately feed into our technical vision (see the sketch after this list for the flavor of this work).
- Utilizing SQL, Python, dbt, and orchestration tools (e.g. Dagster)
- Collaborating closely with software engineers, analytics engineers, and product managers to realize our roadmap
- Reporting to the Head of Data Science and supporting them in executing our data product vision.
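
To give a concrete, hedged sense of the pipeline work described above, here is a minimal sketch of a Dagster asset graph in Python. It is illustrative only: the asset names (raw_card_spend, daily_merchant_spend) and the inline data are invented for the example, and a real pipeline would read from a warehouse such as Snowflake rather than construct data in code.

```python
import pandas as pd
from dagster import Definitions, asset


@asset
def raw_card_spend() -> pd.DataFrame:
    # Hypothetical source table; stands in for a real warehouse read.
    return pd.DataFrame(
        {
            "merchant": ["acme", "acme", "globex"],
            "date": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-01"]),
            "amount": [120.0, 95.5, 40.0],
        }
    )


@asset
def daily_merchant_spend(raw_card_spend: pd.DataFrame) -> pd.DataFrame:
    # Aggregate raw transactions into a daily, merchant-level spend panel.
    return (
        raw_card_spend.groupby(["merchant", "date"], as_index=False)["amount"]
        .sum()
        .rename(columns={"amount": "total_spend"})
    )


# Registering the assets lets `dagster dev` discover and materialize them.
defs = Definitions(assets=[raw_card_spend, daily_merchant_spend])
```

Dagster infers the dependency between the two assets from the downstream function's parameter name matching the upstream asset, which is how the graph gets wired without any explicit scheduling code.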
Who you are:
- A commercially minded data scientist who can balance technical rigor with fast execution and actionable results.
- At least two years of hands-on experience developing statistical models and data pipelines to make sense of imperfect data.
- A proven track record of carrying pragmatic research projects from conception to completion.
- Prior familiarity with alternative, third-party data is strongly preferred.
- Previous experience in the following fields is a plus: sampling and inference methods, panel data analysis, Bayesian data analysis, time series modeling, data normalization, numerical analysis.
- Proficiency in Python/R and SQL is required; ideally you have worked with cloud data warehouses before (Snowflake, BigQuery, Redshift, etc.)
- A solid understanding of what “clean code” looks like, with experience reviewing pull requests and setting coding standards. Prior experience handling big data is strongly preferred.
- Familiarity with dbt, AWS, and GitHub is a plus, but not strictly necessary.
What you get out of it:
- The opportunity to shape Cybersyn’s early product and technology decisions, and to own statistical methodologies and libraries.
- Access to some of the most compelling economic data in the world, including real-time spending, transaction, and clickstream data from both third-party and first-party sources. Much of our data is not available to any other third party.
- A fast-paced culture with a ton of responsibility and autonomy from day one.