Based on the successful software development projects we previously completed for the client, a leading fast-moving consumer goods company, Melon is a proven reference to support their new Big Data and machine-learning endeavor.
Big Data & Machine Learning
Food & Beverage, Fast-moving consumer goods
Azure Databricks (PySpark), Azure Data Factory, Azure Data Lake (gen1, gen2), Scrum Management
Melon has been a long-term technology partner of the customer. We developed a web-based knowledge management system to track, transfer and update the business solutions knowledge accumulated within the company. We delivered a .NET-based scripting tool to shorten the learning curve of new IT associates and make the overall application management services as efficient as possible. We also built an intuitive e-learning solution for the customer's core business solution, with a tailored authoring tool for content refresh. In September 2019, one of our data engineers joined the customer's effort to realize a use case that leverages big data and machine learning to predict diverse sales variables across their vast market footprint. The project enables 'just the right' outlet segmentation and produces tailored assortment and merchandising.
The client needed to augment their development team, since the project had already grown large, with a substantial international team organized in seven sub-units. Melon's track record on the earlier projects made us a proven reference to support this new Big Data and machine-learning endeavor.
Our data engineer gathers, analyzes and models data from various external and internal sources. He enables the ingestion of maps, forecasts, categorizations, Facebook, TripAdvisor and Google ratings, mobile operators' statistics, the company's own surveys and more, and prepares the data so that the predictive and prescriptive models can consume it. The use case is now live, with a robust plan to cover new markets within the customer's footprint.
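The curation step described above boils down to mapping heterogeneous source records onto one common schema before the models see them. The following plain-Python sketch illustrates that pattern; the field names, rating scales and function names are hypothetical assumptions for illustration, not the actual pipeline (which runs on Azure Databricks with PySpark).

```python
# Hypothetical sketch: harmonizing outlet ratings from several external
# sources into one common schema before model consumption.
# Field names and rating scales are assumptions for illustration only.

RATING_SCALES = {
    "facebook": 5.0,      # assumed 0-5 scale
    "tripadvisor": 5.0,   # assumed 0-5 scale
    "google": 5.0,        # assumed 0-5 scale
}

def harmonize_rating(source: str, record: dict) -> dict:
    """Map one raw rating record onto a common schema with a 0-1 score."""
    scale = RATING_SCALES[source]
    return {
        "outlet_id": record["outlet_id"],
        "source": source,
        "score": record["rating"] / scale,   # normalize to a 0-1 range
    }

def ingest(batches: dict) -> list:
    """Curate raw per-source batches into one enriched list of records,
    dropping records with no rating value."""
    curated = []
    for source, records in batches.items():
        for rec in records:
            if rec.get("rating") is not None:
                curated.append(harmonize_rating(source, rec))
    return curated
```

In a real pipeline each `batches` entry would be a DataFrame rather than a list of dicts, but the same harmonize-then-union shape applies.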
Our data engineer had little prior experience with Apache Spark, since it is applied only to larger Big Data projects. For example, it was challenging to take 30 GB of text data and transform it into meaningful Delta tables. This, coupled with the variety of internal and external sources that had to be ingested, curated and enriched, required advanced Python development skills and quick adaptation to a multicultural, complex business environment. A single optimization mistake can cost two days of processing time, losing both time and money, so code quality is as crucial as identifying the 'right' data. The sheer volume of the input is another challenge that demands creativity, focus and precision.
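The text-to-Delta transformation mentioned above follows a common pattern: stream the raw text, clean each row, and write the result partitioned by a key column so downstream queries stay fast. A minimal plain-Python sketch of that pattern is below; in production this was PySpark on Azure Databricks writing partitioned Delta tables, and the delimiter, column names and partition key here are illustrative assumptions.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sketch of the chunked transform pattern: stream a delimited
# text source line by line, clean each row, and bucket rows by a partition
# key (mirroring a Delta write partitioned by that column), so the full
# dataset never has to fit in memory. Delimiter and columns are assumed.

def transform_partitioned(lines, partition_field="market"):
    """Parse delimited lines, drop incomplete rows, and group cleaned
    rows by partition key."""
    partitions = defaultdict(list)
    reader = csv.DictReader(lines, delimiter=";")
    for row in reader:
        key = (row.get(partition_field) or "").strip()
        outlet = (row.get("outlet_id") or "").strip()
        if not key or not outlet:   # skip rows missing required fields
            continue
        row["outlet_id"] = outlet
        partitions[key].append(row)
    return dict(partitions)

# Small in-memory stand-in for a large text file on the data lake.
raw = io.StringIO(
    "outlet_id;market;sales\n"
    "A1 ;BG;120\n"
    ";BG;30\n"
    "B7;RO;95\n"
)
tables = transform_partitioned(raw)
```

With PySpark the same shape would be a `spark.read.csv(...)` followed by a `write.partitionBy(...)`, with Spark handling the distribution across the cluster.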