
Understanding Spark Framework for Data Mechanics

Summary

Spark is one of the most well-known frameworks for data processing, whether for batch or streaming, ETL or ML, and at any scale. Because of its popularity, it has been deployed on every kind of platform you can think of. In this episode, Jean-Yves Stephan shares the work that he is doing at Data Mechanics to make Spark sing on Kubernetes. He explains how operating in a cloud-native context simplifies some aspects of running the system while complicating others, how it streamlines the development and experimentation cycle, and how you can get a head start using their pre-built Spark container. This is a great conversation for understanding how new ways of operating a system can have broader impacts on how it is used.
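
The sketch below is a rough illustration of the model discussed in the episode: a PySpark application pointed at a Kubernetes cluster and a pre-built container image through Spark's standard Kubernetes configuration properties. The API server address, namespace, service account, and image name are placeholders rather than Data Mechanics specifics; you would substitute your own cluster details and whichever image (for example, their pre-built one) you want the executors to run.

    # Minimal PySpark sketch: submitting work to a Kubernetes cluster in client mode.
    # The API server URL, namespace, service account, and image are placeholders,
    # and the driver machine must be network-reachable from the executor pods.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("spark-on-k8s-demo")
        # Point Spark at the Kubernetes API server (the k8s:// prefix is required).
        .master("k8s://https://my-cluster.example.com:443")
        # Namespace and service account that the executor pods will run under.
        .config("spark.kubernetes.namespace", "spark-jobs")
        .config("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
        # Container image for the executor pods (use your own registry and tag here).
        .config("spark.kubernetes.container.image", "my-registry.example.com/spark:3.1.1")
        .config("spark.executor.instances", "4")
        .getOrCreate()
    )

    # A trivial job to confirm that executor pods start and do work.
    print(spark.range(1_000_000).selectExpr("sum(id)").collect())
    spark.stop()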

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by giving an overview of what you are building at Data Mechanics and the story behind it?
  • What are the operational characteristics of Spark that make it difficult to run in a cloud-optimized environment?
  • How do you handle retries, state redistribution, etc. when instances get preempted in the middle of a job execution?
    • What are some of the tactics that you have found useful when designing jobs to make them more resilient to interruptions? (A configuration sketch follows this list.)
  • What are the customizations that you have had to make to Spark itself?
  • What are some of the supporting tools that you have built to allow for running Spark in a Kubernetes environment?
  • How is the Data Mechanics platform implemented?
    • How have the goals and design of the platform changed or evolved since you first began working on it?
  • How does running Spark in a container/Kubernetes environment change the ways that you and your customers think about how and where to use it?
    • How does it impact the development workflow for data engineers and data scientists?
  • What are some of the most interesting, unexpected, or challenging lessons that you have learned while building the Data Mechanics product?
  • When is Spark/Data Mechanics the wrong choice?
  • What do you have planned for the future of the platform?
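
For the questions above about preemptions and interruption-resilient job design, the sketch below shows the kind of generic Spark settings that typically come into play when executors run on spot or preemptible nodes: task and stage retry limits, plus graceful decommissioning (available since Spark 3.1) so that shuffle and cached blocks are migrated off a node before it disappears instead of being recomputed. These are standard Spark options with illustrative values, not the specific customizations or recommendations discussed by Data Mechanics.

    # Illustrative settings for making a Spark job more tolerant of preempted executors.
    # Values are examples only; tune the retry limits for your workload.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("preemption-tolerant-job")
        # Let individual tasks be retried several times before the job fails.
        .config("spark.task.maxFailures", "8")
        # Allow extra stage attempts when fetch failures follow a lost node.
        .config("spark.stage.maxConsecutiveAttempts", "8")
        # Spark 3.1+: migrate shuffle and cached RDD blocks off nodes that are
        # being decommissioned, rather than recomputing them after the fact.
        .config("spark.decommission.enabled", "true")
        .config("spark.storage.decommission.enabled", "true")
        .config("spark.storage.decommission.shuffleBlocks.enabled", "true")
        .config("spark.storage.decommission.rddBlocks.enabled", "true")
        .getOrCreate()
    )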

