Summary
The data industry is changing rapidly, and one of the most active areas of growth is the automation of data workflows. Taking cues from the DevOps movement of the past decade, data professionals are orienting around the concept of DataOps. More than just a collection of tools, a proper DataOps approach depends on a number of organizational and conceptual changes. In this episode Kevin Stumpf, CTO of Tecton; Maxime Beauchemin, CEO of Preset; and Lior Gavish, CTO of Monte Carlo, discuss the grand vision and present realities of DataOps. They explain how to think about your data systems in a holistic and maintainable fashion, the security challenges that threaten to derail your efforts, and the power of using metadata as the foundation of everything that you do. If you are wondering how to get control of your data platforms and bring all of your stakeholders onto the same page, then this conversation is for you.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform, it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often take hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage, and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature, which instantly shows how a change in ETL or BI code affects the produced data, both at a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt, and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
- RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
- Your host is Tobias Macey and today I’m interviewing Max Beauchemin, Lior Gavish, and Kevin Stumpf about the real world challenges of embracing DataOps practices and systems, and how to keep things secure as you scale
Interview
- Introduction
- How did you get involved in the area of data management?
- Before we get started, can you each give your definition of what “DataOps” means to you?
- How does this differ from “business as usual” in the data industry?
- What are some of the things that DataOps isn’t (despite what marketers might say)?
- What are the biggest difficulties that you have faced in going from concept to production with a workflow or system intended to power self-serve access for other members of the organization?
- What are the weak points in the current state of the industry, whether technological or social, that contribute to your greatest sense of unease from a security perspective?
- As founders of companies that aim to facilitate adoption of various aspects of DataOps, how are you applying the products that you are building to your own internal systems?
- How does security factor into the design of robust DataOps systems? What are some of the biggest challenges related to security when it comes to putting these systems into production?
- What are the biggest differences between DevOps and DataOps, particularly when it concerns designing distributed systems?
- What areas of the DataOps landscape do you think are ripe for innovation?
- Nowadays, it seems like new DataOps companies are cropping up every day to try to solve some of these problems. Why do you think DataOps is becoming such an important component of the modern data stack?
- There’s been a lot of conversation recently around the “rise of the data engineer” versus other roles in the data ecosystem (e.g., data scientist or data analyst). Why do you think that is?
- What are some of the most valuable lessons that you have learned from working with your customers about how to apply DataOps principles?
- What are some of the most interesting, unexpected, or challenging lessons that you have learned while building your respective platforms and businesses?
- What are the industry trends that you are each keeping an eye on to inform your future product direction?