Tiki is intensely focused on expanding its product range to offer customers a wider selection. This fits squarely within the goal of everything we do: bringing more happiness and convenience to our customers.
As the Supply Chain Optimization team, we are responsible for driving core projects that help Tiki fulfill orders and manage inventory more effectively, speed up delivery, and make the right investment decisions with the company's budget. Admittedly, the growth of customer selection is our newest challenge - the more selection, the harder it becomes to manage and optimize everything.
Fortunately, our team is constantly iterating and standing together to solve problems. We have found many solutions to these challenges, working with Big Data, Machine Learning, and even Deep Learning.
We are looking for a Senior Data Engineer to stand together with us and take responsibility for building a well-architected platform. Since we are just at the beginning of the road, you can let your imagination run free. We encourage everyone to dare to try new things and even make some mistakes; after all, it is all part of life and learning.
What you’d contribute:
• Maintain the streaming of data from various sources into the data warehouse.
• Build instrumentation to improve the speed and availability of the streaming process.
• Build and maintain the ETL processes that transform data into the DataMart.
• Create tooling on top of the DataMart (visualizations, monitors, alerts, and more) to help
business and product teams, data scientists, and analysts maximize the power of data.
• Work closely with product owners, data analysts, and data scientists to strive for greater
functionality in our data systems.
What you’d love for the role:
• We are constantly iterating! There is no single best proposal for anything, no fastest API, no best
machine learning model. We design, build, test, ship, optimize, and test again - a continuous
stream of improvements and experiments.
• We have a data-driven mindset: every change must be tested to understand its impact on key
metrics. It's a long process, but over time we gradually learn and become confident in our approach.
• We love "best practices". Serving critical features at high throughput constantly pushes us to research and apply them. Any experiment or optimization is always welcome.
• We are both independent and open. We own our products. Technical problems are
discussed internally first, but for the difficult ones, we reach out for others' help.
What you’d have to succeed in the role:
• A minimum of 2 years of experience with Python (or Java) is required.
• Ability to deep-dive into and analyze problems and propose end-to-end solutions.
• Working knowledge of message queues, stream processing, and scalable data stores is a plus.
• Experience with data pipeline and workflow management, as well as big data tools such as
Airflow, Hadoop, Spark, and Kafka, is a big plus.
What we love to offer:
• Hybrid working
• Attractive package + immediate healthcare insurance
• Full salary paid during the probation period
• Social insurance contributions paid on full salary
• Be coached by experienced & inspirational leaders and managers
• Hands-on exposure to applications of autonomous robots and AI technologies
• Unlimited access to knowledge via the learning library and team knowledge-sharing