Tiki is intensely focused on expanding its product range to offer customers a wider selection. This aligns with the goal behind everything we do: to bring more happiness and convenience to our customers.
As members of the Supply Chain Optimization team, we are responsible for driving core projects that help Tiki fulfill and manage inventory more effectively, speed up delivery, and make the right investment decisions for the company's budget. To be honest, the growth in customer selection brings us new challenges: the more selection, the harder things are to manage and optimize.
Fortunately, our team is constantly iterating and standing together to solve problems. We have found many solutions to these challenges, working with Big Data, Machine Learning, and even Deep Learning.
We know the road, but we're just getting started.
And as solutions arrive, the complexity of our data systems grows as well…
We are looking for a Data Engineer to stand with us and take responsibility for building a platform on a strong architecture. And since we are just at the beginning of the road, you can let your imagination run free. We encourage everyone to dare to try new things and even make some mistakes; after all, it is all part of life and learning.
What you will do:
- Maintain the streaming of data from a variety of sources into the data warehouse.
- Build instrumentation to improve the speed and availability of the streaming process.
- Build and maintain the ETL processes that transform data into the DataMart.
- Create instrumentation (tools, visualizations, monitors, alerts, …) on top of the DataMart to help business and product teams, data scientists, and analysts maximize the power of data.
- Work closely with product owners, data analysts, and data scientists to strive for greater functionality in our data systems.
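To give a feel for the ETL work described above, here is a minimal, purely illustrative sketch: raw order events are extracted, aggregated, and loaded into a mart-style table. All names (`orders`, `daily_sales`, the sample rows) are hypothetical; a real pipeline would read from production sources rather than an in-memory list.

```python
import sqlite3

def extract():
    # In practice this would read from a source database or a message queue.
    return [
        {"order_id": 1, "day": "2023-01-01", "amount": 120.0},
        {"order_id": 2, "day": "2023-01-01", "amount": 80.0},
        {"order_id": 3, "day": "2023-01-02", "amount": 50.0},
    ]

def transform(rows):
    # Aggregate order amounts per day.
    totals = {}
    for row in rows:
        totals[row["day"]] = totals.get(row["day"], 0.0) + row["amount"]
    return sorted(totals.items())

def load(conn, daily_totals):
    # Write the aggregated result into a mart-style table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS daily_sales (day TEXT PRIMARY KEY, total REAL)"
    )
    conn.executemany("INSERT INTO daily_sales VALUES (?, ?)", daily_totals)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract()))
print(conn.execute("SELECT day, total FROM daily_sales ORDER BY day").fetchall())
# prints [('2023-01-01', 200.0), ('2023-01-02', 50.0)]
```

The extract/transform/load split keeps each stage independently testable, which matters once the same pattern runs across many sources.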
What we are looking for:
- A minimum of 2 years of experience with Python (or Java) is required.
- Ability to dive deep into problems, analyze them, and propose end-to-end solutions.
- Working knowledge of message queues, stream processing, and scalable data stores is a plus.
- Experience with data pipeline and workflow management, as well as big data tools such as Airflow, Hadoop, Spark, and Kafka, is a big plus.
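As a toy sketch of the idea behind workflow managers like Airflow, the snippet below runs hypothetical tasks in dependency order using a topological sort; real pipelines would of course use Airflow operators and a scheduler instead.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

results = []

# Hypothetical task names; each "task" just records that it ran.
tasks = {
    "extract": lambda: results.append("extract"),
    "transform": lambda: results.append("transform"),
    "load": lambda: results.append("load"),
}

# Each task maps to the set of tasks it depends on.
deps = {"transform": {"extract"}, "load": {"transform"}}

# static_order() yields tasks so every dependency runs before its dependents.
for name in TopologicalSorter(deps).static_order():
    tasks[name]()

print(results)  # prints ['extract', 'transform', 'load']
```

Modeling a pipeline as a DAG is what lets a scheduler retry, parallelize, and backfill individual tasks safely.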
Why you will want to work here:
- We are constantly iterating! There is no such thing as a final proposal, a fastest API, or a best machine learning model. We design, build, test, ship, optimize, and test again: just a stream of improvements and tests.
- We have a data-driven mindset: every change must be tested to gain insight into its impact on key metrics. It's a long process, but over time we gradually learn and become confident in our approach.
- We love "best practices". Serving important features at high throughput always gives us an itch to research and apply them. Any experiment or optimization is always welcome.
- We are both independent and open. We own our products. Technical problems are discussed internally, but for difficult ones we can ask others for help.