Improving your project management practices can help big data make an even bigger corporate impact.
Statista predicts that big data and analytics revenues will reach $274.3 billion by 2022. In healthcare alone, projections show the industry could save as much as $300 billion if it could integrate its big data with other systems and business processes.
While these projections on corporate investments and benefits are significant, organizations continue to lag when it comes to effectively managing big data projects.
“It’s pretty well understood that data science is a key driver of innovation, but few organizations know how to consistently turn data science output into business value,” said Nick Elprin, CEO and Co-Founder of Domino Data Lab.
How can companies turn this around? See the four suggestions below.
Infuse project management into big data efforts
The nature of big data and analytics projects is iterative. New information and data types are always coming in, and data scientists are prepared to revise algorithms and queries as new information becomes available. However, that doesn't mean practices from more linear projects can't be adopted as well.
For example, data needs to be cleaned and prepped before it can be used. There should be a repeatable first-step method for doing this, and ideally the job shouldn't fall to very expensive data scientists. Next, once algorithms and applications that use big data are developed, they should be tested and staged before deployment.
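A standardized prep step like this can be owned by data engineers rather than data scientists. As a minimal sketch, here is what a repeatable first-pass cleaning function might look like in Python with pandas; the column names (`customer_id`, `amount`) are hypothetical, stand-ins for whatever keys and measures your own pipeline requires:

```python
import pandas as pd

def prep_records(df: pd.DataFrame) -> pd.DataFrame:
    """First-pass cleaning that runs the same way every time:
    deduplicate, require a usable key, and coerce measures to numeric."""
    cleaned = (
        df.drop_duplicates()                # remove exact duplicate rows
          .dropna(subset=["customer_id"])   # require a usable join key
          .assign(amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"))
    )
    # Drop rows whose amount could not be parsed as a number
    return cleaned.dropna(subset=["amount"])

# Example: one duplicate row, one missing key, one unparseable amount
raw = pd.DataFrame({
    "customer_id": [1, 1, None, 2],
    "amount": ["10.5", "10.5", "3.0", "oops"],
})
clean = prep_records(raw)  # only the first row survives
```

Because the rules are codified, the same cleaning runs identically in development, staging, and production, which is exactly the kind of linear, repeatable step a project manager can schedule and verify.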
The best way to achieve these goals is to add a skilled project manager to the data science team or use project management skills and personnel from IT.
If your data science team is separate from IT, it’s time to blend these two disciplines.
Initially, many organizations started their data science teams as standalone departments in order to pilot test what big data and analytics could deliver. Organizations now know that the big data, artificial intelligence, and machine learning applications they're developing must be integrated with other IT apps and systems to gain maximum value.
In the past, there were arguments for standalone data science departments, data science within IT departments, and interactive project teams of data science and IT. The time has come for data science to either become part of IT or to closely collaborate in projects and deployments with IT. This is the only way to achieve true integration of big data and analytics with other systems and applications throughout the company.
Develop a big data maintenance and monitoring team
Once big data and analytics are deployed in production, they must be continuously monitored and maintained, whether that means the network and hardware infrastructure or assuring that big data performs properly within, and independently of, applications. For example, if a big data source is called by other IT applications as an embedded subroutine, IT needs to ensure that the call works properly and that the proper data is returned. If there is a "break" in the app, IT needs to fix it. Similarly, hardware and network bandwidth and quality of service must be maintained at acceptable levels, again a job for IT.
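Checking that "the proper data is returned" can itself be automated. As a hedged sketch, assuming a hypothetical contract in which the embedded data call must return a `customer_id` and a numeric `score`, a monitoring team might validate each response like this:

```python
from typing import Any

# Hypothetical contract for the embedded data call: these fields must be present
EXPECTED_FIELDS = {"customer_id", "score"}

def check_response(payload: dict[str, Any]) -> list[str]:
    """Return a list of problems with a data-service response.
    An empty list means the call returned what the consuming app expects."""
    problems: list[str] = []
    missing = EXPECTED_FIELDS - payload.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "score" in payload and not isinstance(payload["score"], (int, float)):
        problems.append("score is not numeric")
    return problems

# A monitor would alert IT when the problem list is non-empty
print(check_response({"customer_id": 42, "score": "high"}))
```

Wiring a check like this into the monitoring stack turns "the call works properly" from a manual spot check into an alert IT can act on before downstream apps break.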
Use agile development
Because revising algorithms as data changes is an iterative, continuous process, IT must shift its project management style away from traditional waterfall and toward agile development. Data scientists already understand the concept of iterative revisions to algorithms as data changes. The project manager, whether from IT or the data science team, must learn how to combine the linear flows of traditional IT project management, such as data prep, regression testing, and app maintenance, with agile revisions and insertions of data algorithms as the need to change them arises.