About Appier
Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier's mission is turning AI into ROI by making software intelligent. Appier now has 17 offices across Asia Pacific, Europe and the U.S., and is listed on the Tokyo Stock Exchange (ticker: 4180). Visit www.appier.com for more information.
About the role
Appier's solutions are powered by proprietary deep learning and machine learning technologies that let every business use AI to turn data into business insights and decisions. As a Software Engineer, Data Backend, you will help develop core components of this platform.
Responsibilities
* Design, develop, and maintain RESTful APIs using Python.
* Build and maintain large-scale data warehouses using Trino/Presto and Pinot.
* Design and develop data pipelines using Apache Airflow and Apache Spark.
* Collaborate with cross-functional teams to develop automation tools for daily operations.
* Implement monitoring and alerting systems to ensure system performance and stability.
* Respond to application inquiries promptly and effectively to ensure high client satisfaction.
* Work with cloud platforms such as AWS and GCP to optimize data operations.
* Use Kubernetes (k8s) for container orchestration to deploy and scale applications efficiently.
About you
[Minimum qualifications]
* BS/BA degree in Computer Science.
* 3+ years of experience in building and managing large-scale distributed systems or applications.
* Experience with Kubernetes and Linux/Unix development.
* Experience in managing a data lake or data warehouse.
* Expertise in developing data structures and algorithms on big data platforms.
* Ability to work independently and efficiently in a dynamic environment.
* Ability to thrive in a fast-paced team environment while juggling multiple tasks and projects.
* Desire to make a significant impact on the world through self-driven learning and building.
[Preferred qualifications]
* Open-source contributions are a significant plus (please include your GitHub link).
* Experience with Python and Scala/Java is a plus.
* Experience with Hadoop, Hive, Flink, Presto/Trino and related big data systems is a plus.
* Experience with public clouds such as AWS or GCP is a plus.