“Our machine learning infrastructure is a great big Frankenstein of one-offs,” said one data scientist at our Seattle Roadshow. Heads nodded. Every time his data-driven organization needs to integrate with a new system, software development teams hardcode dependencies and schedule jobs inside their model deployment system, creating a collection of ad-hoc integrations that runs a very real risk of breaking if either connected system changes.
Machine learning workflows are complex, but connecting all the pieces doesn’t need to be. Data generated by many sources feed many different models. These, in turn, are used by a variety of applications (or even by other models in a pipeline). That’s a lot of moving pieces for any one data scientist to consider. This article will discuss the strategies needed to implement an event-driven architecture.
What Is An Event-Driven Model?
Event-driven programming means that a program's execution flow is determined by events rather than by a predetermined sequence of steps. Whether the event is a key press, a mouse click, or a message arriving from another system, the program reacts as events occur. For example, an application could be programmed to send a notification every time a user clicks a certain button. An event-driven approach can be valuable, but wiring up every reaction by hand can be tedious. Next, we'll discuss some of the options for event-driven architecture.
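The idea can be sketched in a few lines: handlers register interest in named events, and execution is driven by whichever events fire. This is a minimal illustration, not any particular framework's API; the names `on` and `fire` are assumptions made for the example.

```python
# Minimal sketch of event-driven control flow: handlers are registered
# for named events, and the program reacts when those events fire.
from collections import defaultdict
from typing import Callable

_handlers: dict[str, list[Callable]] = defaultdict(list)

def on(event_name: str, handler: Callable) -> None:
    """Register a handler to run whenever event_name fires."""
    _handlers[event_name].append(handler)

def fire(event_name: str, payload=None) -> None:
    """Invoke every handler registered for this event."""
    for handler in _handlers[event_name]:
        handler(payload)

clicks = []
on("button_click", lambda payload: clicks.append(payload))
fire("button_click", {"button": "submit"})
print(clicks)  # [{'button': 'submit'}]
```

Note that the code calling `fire` never knows which handlers run; that inversion of control is what the rest of this article builds on.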
Loosely Coupled, Event-Driven: The Pub/Sub Pattern
Hardcoding every interaction between a model serving platform and its connected systems is far more work than data scientists should have to handle. Loosely coupled, event-driven architectures, such as the publish/subscribe (pub/sub) pattern, are asynchronous, message-oriented notification patterns commonly found in traditional enterprise software. It's time they became more common in machine learning.
In a pub/sub model, one system acts as a publisher, sending messages to a message broker, such as Amazon SNS. Through that message broker, subscriber systems explicitly subscribe to a channel, and the broker forwards and verifies delivery of publisher messages, which can then be used by subscribers as event triggers.
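The roles described above can be sketched with a toy in-memory broker. In production the broker would be a managed service such as Amazon SNS; the `Broker` class here is purely illustrative, as are the channel and message names. It shows the key property that any number of subscribers can react to the same published event.

```python
# In-memory sketch of the pub/sub roles: a publisher sends messages to
# a broker, and the broker forwards them to every explicit subscriber.
from collections import defaultdict
from typing import Callable

class Broker:
    """Illustrative stand-in for a message broker like Amazon SNS."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable) -> None:
        # A subscriber explicitly opts in to a channel.
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: dict) -> None:
        # The broker forwards the message to every subscriber; the
        # publisher never knows who (if anyone) is listening.
        for callback in self._subscribers[channel]:
            callback(message)

broker = Broker()
audit_log, ocr_queue = [], []

# Two independent systems subscribe to the same channel.
broker.subscribe("documents", audit_log.append)
broker.subscribe("documents", ocr_queue.append)

msg = {"event": "document_inserted", "key": "scan-001.pdf"}
broker.publish("documents", msg)
print(audit_log, ocr_queue)
```

A real broker adds the pieces this sketch omits: durable queues, delivery verification, and retries.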
Traditional vs. Pub/Sub Implementation
The following is a simple user story describing a routine process an organization might want to follow as it ingests scanned documents:
As a records manager, when new documents are added to an S3 bucket, I want to run them through an OCR model.
A traditional implementation might look like this:
1. Create a document insertion trigger in the database.
2. Create a custom, hardcoded function to invoke a model run via API, including delivery verification, failure handling, and retry logic.
3. Maintain that function as end-user requirements change.
4. Deprecate that function when it is no longer needed.

Over time, as other business apps want to build onto the document insertion event, a fifth step is added:

5. Repeat steps 2 through 4 for every other application of the trigger.
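Step 2 above might look something like the sketch below. The model API call is stubbed out, and `invoke_ocr_model`, its return shape, and the retry policy are all illustrative assumptions; the point is that the database team ends up owning delivery and retry logic for a downstream team's model.

```python
# Sketch of the traditional, hardcoded approach: a trigger handler that
# calls the model serving API directly and owns its own retry logic.
import time

def invoke_ocr_model(document_key: str) -> dict:
    # Stand-in for a hardcoded HTTP call to the model serving platform.
    return {"status": "ok", "document": document_key}

def on_document_inserted(document_key: str, max_retries: int = 3) -> dict:
    """Hardcoded trigger handler with simple exponential-backoff retries."""
    for attempt in range(max_retries):
        try:
            return invoke_ocr_model(document_key)
        except ConnectionError:
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError(f"OCR run failed after {max_retries} attempts")

result = on_document_inserted("scan-001.pdf")
print(result)  # {'status': 'ok', 'document': 'scan-001.pdf'}
```

Every new consumer of the insertion event means another copy of this function, each with its own failure handling to maintain.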
Pros of traditional implementation:
- Near real-time communication
- No scheduling or communication burden on ML deployment system
Cons of traditional implementation:
- Database development team becomes a service organization.
- Database development team must have knowledge of all downstream use cases involving document ingestion.
- Architectural changes on either side of the exchange could break existing integrations.
The pub/sub implementation might look like this:
1. Create a document insertion trigger in the database.
2. When the trigger fires, send an event notification to a message broker.
At that point, the owners of the model deployment system would do the following:
1. Subscribe to the published event feed.
2. Build a function to invoke a model run via API.
Any other systems wishing to use that event as a trigger would follow the same process independently.
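The same user story, restructured for pub/sub, might look like the sketch below. `Broker` is an illustrative in-memory stand-in for a service like Amazon SNS, and the channel and payload names are assumptions; what matters is that the database side only publishes, and the model deployment team subscribes independently.

```python
# The records-manager user story in pub/sub terms: the database publishes
# a "document_inserted" event; the ML team reacts without any coupling.
from collections import defaultdict

class Broker:
    """Illustrative in-memory stand-in for a broker like Amazon SNS."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subs[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._subs[channel]:
            callback(message)

broker = Broker()
ocr_runs = []

# Model deployment team: subscribe and trigger an OCR run (stubbed here
# as appending to a list) with no knowledge of the database internals.
broker.subscribe("document_inserted",
                 lambda msg: ocr_runs.append(msg["key"]))

# Database side: the insertion trigger just publishes the event.
broker.publish("document_inserted", {"key": "scan-001.pdf"})
print(ocr_runs)  # ['scan-001.pdf']
```

Any additional system that wants to react to document insertion simply adds its own `subscribe` call; the database side never changes.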
Pros of pub/sub implementation:
- Near real-time communication
- No scheduling or communication burden on ML deployment systems
- Database development team can focus on database development
- Downstream development teams can build without dependencies
- Architectural changes far less likely to break existing integrations
The pub/sub pattern’s loose coupling is highly scalable, centralizing the burden of communication and removing it from dependent apps. It is also extremely flexible. By abstracting communications between publishers and subscribers, each side operates independently. This reduces the overhead of integrating any number of systems and allows publishers and subscribers to be swapped at any time with minimal disruption to the rest of the system.
Pub/sub advantages:
- Flexible, durable integration: Upgrades to component systems won’t break communications workflows. As long as a publishing system continues to send messages to the appropriate queue, the receiving systems shouldn’t care how it generates those messages.
- Developer independence: Decoupling publishers and subscribers means teams can iterate at their own paces and ship updates to components without introducing breaking changes.
- Increased performance: Removing messaging from the application allows Ops to dedicate infrastructure to queueing, removing the processing burden from system components.
- Modularity: Because components integrate through an intermediary message queue, any component can be added or swapped at any time without impacting the system.
- Scalability: Since any number of applications can subscribe to a publisher’s feed, multiple systems can react to the same events.
Using Pub/Sub With Algorithmia’s Event Listeners
The Enterprise AI Layer provides configurable event listeners (also called event handlers) so users can trigger actions based on input from pub/sub event-driven database systems. In concert with the AI Layer’s automatic API generation, integrated versioning, and multi-language SDKs, this means your ML infrastructure is able to grow: systems can trigger any version of any model, written in any programming language, trained with any framework. That’s critical as an ML program grows in size and scope and touches every part of a business.