SAI #17: Patterns for implementing Business Logic in Machine Learning Services.
Patterns for implementing Business Logic in Machine Learning Services, Feature Platforms.
👋 This is Aurimas. I write the weekly SAI Newsletter, where my goal is to present complicated Data-related concepts in a simple and easy-to-digest way. The goal is to help You UpSkill in Data Engineering, MLOps, Machine Learning and Data Science areas.
This week in the Newsletter:
Patterns for implementing Business Logic in Machine Learning Services.
Feature Platforms.
Patterns for implementing Business Logic in Machine Learning Services.
Machine Learning Models usually do not stand on their own. There will be additional business or other processing logic before you feed the data to the Trained Model and after you retrieve the inference results.
You can think of the final Deployable as:
Inference Results = Preprocessing + Business Logic + Machine Learning Model + Post-Processing + Business Logic
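To make this composition concrete, here is a minimal Python sketch of the Deployable as one function. Every helper here is a hypothetical placeholder standing in for your own logic, not any particular library's API:

```python
def preprocess(raw_features):
    # Placeholder: e.g. type casting, scaling, encoding.
    return [float(x) for x in raw_features]

def apply_business_rules(values):
    # Placeholder: e.g. clamp invalid values, enforce domain constraints.
    return [max(v, 0.0) for v in values]

def postprocess(scores, floor=0.5):
    # Placeholder: e.g. thresholding or calibrating Inference Results.
    return [s for s in scores if s >= floor]

def run_inference(raw_features, model):
    features = apply_business_rules(preprocess(raw_features))  # before the Model
    scores = model.predict(features)                           # the ML Model itself
    return apply_business_rules(postprocess(scores))           # after the Model
```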
Single Service Deployment.
The most straightforward way to package this Deployable is to have all of the additional processing logic coupled with the ML Model and deploy it as a single service.
Here is how it would work for a Request-Response type of deployment (a code sketch follows the list):
Backend Service calls the ML Service exposed via gRPC.
ML Service retrieves required Features from a Feature Store. Preprocessing and additional Business Logic are applied to the retrieved Features.
The resulting data is fed to the ML Model.
Inference Results are run through additional Post-Processing and Business Logic.
Results are returned to the Backend Service and can be used in the Product Application.
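A sketch of what this single service could look like, assuming gRPC stubs generated from a hypothetical inference.proto file (the inference_pb2 / inference_pb2_grpc modules and the Predict RPC are assumptions, not a real library's API; the helper functions reuse the placeholders from the sketch above):

```python
from concurrent import futures

import grpc

# Hypothetical modules generated from an inference.proto definition.
import inference_pb2
import inference_pb2_grpc


class MLService(inference_pb2_grpc.InferenceServicer):
    """Single deployable: feature retrieval, Preprocessing, Business
    Logic, the Model and Post-Processing all live in one service."""

    def __init__(self, feature_store, model):
        self.feature_store = feature_store  # e.g. a Feast-style client (assumption)
        self.model = model

    def Predict(self, request, context):
        # 1. Retrieve required Features from the Feature Store.
        features = self.feature_store.get_online_features(request.entity_id)

        # 2. Preprocessing and additional Business Logic on the Features.
        features = apply_business_rules(preprocess(features))

        # 3. Feed the resulting data to the ML Model.
        scores = self.model.predict(features)

        # 4. Post-Processing and Business Logic on the Inference Results.
        scores = apply_business_rules(postprocess(scores))

        # 5. Return the results to the Backend Service.
        return inference_pb2.PredictResponse(scores=scores)


def serve(feature_store, model):
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    inference_pb2_grpc.add_InferenceServicer_to_server(
        MLService(feature_store, model), server
    )
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()
```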
Business Logic decoupled from the Machine Learning Model.
A more complicated way is to have a separate Service that sits in between the Backend Service and the Service exposing the ML Model.
Here is the breakdown of the flow (a sketch of the middle Service follows the list):
Backend Service calls the Service containing Business Logic Rules exposed via gRPC.
Service containing Business Logic Rules calls the ML Service exposed via gRPC.
ML Service retrieves required Features from a Feature Store. Preprocessing is applied to the retrieved Features.
The resulting data is fed to the ML Model.
Inference Results are run through additional Post-Processing.
Results are returned to the Service containing the Business Logic Rules, which are then applied to the Inference Results.
Final results are returned to the Backend Service and can be used in the Product Application.
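A sketch of the middle Service under the same assumptions as before (bizlogic_pb2 / ml_pb2 and their _grpc modules are hypothetical generated stubs, and apply_business_rules remains a placeholder):

```python
import grpc

# Hypothetical modules generated from made-up bizlogic.proto / ml.proto files.
import bizlogic_pb2
import bizlogic_pb2_grpc
import ml_pb2
import ml_pb2_grpc


class BusinessLogicService(bizlogic_pb2_grpc.BusinessLogicServicer):
    """Sits between the Backend Service and the ML Service; the ML
    Service behind it now only preprocesses, predicts and post-processes."""

    def __init__(self, ml_service_address):
        channel = grpc.insecure_channel(ml_service_address)
        self.ml_client = ml_pb2_grpc.MLServiceStub(channel)

    def Predict(self, request, context):
        # Call the ML Service over gRPC; it retrieves Features,
        # preprocesses, runs the Model and post-processes.
        ml_response = self.ml_client.Predict(
            ml_pb2.PredictRequest(entity_id=request.entity_id)
        )

        # Apply the Business Logic Rules to the Inference Results
        # (placeholder for e.g. filtering, eligibility checks, overrides).
        results = apply_business_rules(ml_response.scores)

        # Return the final results to the Backend Service.
        return bizlogic_pb2.PredictResponse(scores=results)
```

One payoff of this split is that the Business Logic Rules can be changed and redeployed without touching the Model container.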
When does this added complexity start to make sense?
Quite often in Real-World situations, Machine Learning Model Deployments are ensembles of multiple models, chained one after another or producing derived results from a mix of Inference Results.
In this case, different teams could, and most likely would, be working on developing different parts of the system. As an example, take a Recommender System, where different teams could be working separately on:
Candidate Retrieval.
Candidate Ranking.
Business Logic.
Decoupling of Business Logic starts to make sense here because each piece of the puzzle can be developed and tested for performance separately (a sketch of such a chain follows below). We will dig deeper into all of this in a separate long-form Newsletter in the future. So stay tuned!
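As an illustration, here is a minimal orchestration sketch of that chain. Every function is a hypothetical stand-in for a service owned by one of those teams, with dummy data and scores:

```python
def retrieve_candidates(user_id):
    # Candidate Retrieval team: narrow the full catalogue down to a
    # manageable candidate set (dummy items for illustration).
    return ["item-%d" % i for i in range(200)]


def rank_candidates(user_id, candidates):
    # Candidate Ranking team: score each candidate (dummy scores here;
    # a real service would call a Ranking Model).
    return sorted(candidates, key=lambda item: hash((user_id, item)))


def apply_business_rules(ranked, blocked=("item-13",)):
    # Business Logic team: e.g. remove blocked or out-of-stock items,
    # boost promoted ones.
    return [item for item in ranked if item not in blocked]


def recommend(user_id, k=10):
    # The chain: Retrieval -> Ranking -> Business Logic, each piece
    # developed and performance-tested by a separate team.
    candidates = retrieve_candidates(user_id)
    ranked = rank_candidates(user_id, candidates)
    return apply_business_rules(ranked)[:k]


print(recommend(user_id=42))
```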
Feature Platforms.