Questions to ask your developers about AI tools

IT leaders need to understand some hard truths about artificial intelligence tools in order to shape AI strategy. Consider these key questions to discuss with your developers.

Artificial Intelligence (AI) tools that just a few years ago would have been found only at the most cutting-edge companies are now becoming commonplace. But while specialized hardware, software, and frameworks may be more mainstream these days, the knowledge and experience to use them effectively have not kept up.

Here are five essential questions to ask your development teams before you finalize your AI strategy.

1. What do you mean when you say you want to use AI tools?

Artificial Intelligence covers a broad range of definitions, algorithms, approaches, tools, and solutions. For example, there are the underlying approaches such as machine learning (both supervised and unsupervised), and rule-based systems. On top of these approaches are packages that provide solutions from image recognition to natural language processing (NLP). Which approaches and tools are best for addressing the problem?

Developers need to choose the right tool for the problem at hand. BERT, GPT, and other complex neural network-based approaches might often be in the headlines, but that doesn’t mean they’re the most appropriate tool for most use cases. The simplest algorithm that solves the problem is typically best, and innovation leaders should be sure their teams are selecting their tools accordingly, not based on hype.
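One concrete way to enforce "simplest tool first" is to measure what a trivial baseline achieves before investing in anything complex. Below is a minimal sketch, using a hypothetical skewed label set (the data and the "routine"/"urgent" labels are invented for illustration), showing why a raw accuracy number can be misleading without a baseline for comparison:

```python
# Sketch: before reaching for a complex model, measure what the
# simplest possible baseline achieves on your labeled data.
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most common label."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical example: 80% of support tickets are "routine".
labels = ["routine"] * 80 + ["urgent"] * 20
baseline = majority_baseline_accuracy(labels)
print(f"Majority baseline accuracy: {baseline:.0%}")  # 80%
```

If a proposed model scores 82 percent on this data, it is barely beating a one-line rule; that context should inform whether a heavyweight neural approach is justified at all.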


2. How well do you understand the problem you want AI to solve?

The goal of AI tools is to mimic human intelligence – so how well do you understand the problem? Is the approach that subject matter experts take to solve the problem well understood? What is the expected accuracy of the subject matter expert? How confident are you that the AI tool can meet or exceed that level of accuracy?

If your developers are not experts in the domain where the AI will be trained – and they probably aren’t – they’ll likely need input from subject matter experts in that field. These experts will need to work alongside data scientists and engineers to craft, tune, and evaluate the models, so it’s important to designate knowledgeable experts the development team can turn to with questions.

Also keep in mind that even experts make mistakes, so don’t expect an AI system trained on a non-trivial problem to be perfect either. AI may be able to compete at or above human levels in chess, Go, and Jeopardy, but these narrow domains are outliers rather than the rule. Be realistic about how well the system can be expected to perform – claims that AI systems will surpass their human counterparts in terms of quality and accuracy rarely turn out to be true.

3. Many AI tools depend on data for machine learning. What data do you have?

Algorithms are only as good as the data they are given. Machine learning models can require massive quantities of data to build accurate statistical models. Depending on the use case and the algorithm, data requirements can range from thousands to millions of examples. Is the data available? Has the quality of the data been checked? Bias inherent in the data is also an issue – can you be sure that the data doesn’t include biases?

Many leaders believe they have massive quantities of valuable untapped data just waiting to be mined by an AI algorithm. And it’s true that most companies maintain logs, transactions, old emails, customer information databases, and so on – but frequently that data is noisy, inconsistent, or unsuited for training an AI system for the task you want to address.

A thorough assessment of the available training data is a prerequisite for any AI endeavor. In some cases, data preparation can take up to 90 percent of the development effort in an AI project, so validating the quality of the data available should be a top priority from the beginning.
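That assessment can start small: a first-pass audit that counts missing required fields and duplicate rows often reveals problems before any model is built. The sketch below is illustrative only; the record shapes and field names are hypothetical, and a real audit would also look at label balance and bias:

```python
# Sketch: a first-pass data quality audit over hypothetical records,
# counting missing required fields and duplicate rows before training.
def audit(records, required_fields):
    """Summarize row count, missing-field counts, and duplicate rows."""
    missing = {f: sum(1 for r in records if not r.get(f))
               for f in required_fields}
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))  # dedupe on full row content
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "missing": missing,
            "duplicates": duplicates}

# Hypothetical training records with known defects.
records = [
    {"id": 1, "text": "reset my password", "label": "account"},
    {"id": 2, "text": "", "label": "billing"},                   # missing text
    {"id": 1, "text": "reset my password", "label": "account"},  # duplicate
]
print(audit(records, ["text", "label"]))
```

Even this crude pass surfaces the kind of noise and inconsistency that, at scale, consumes the bulk of the project effort.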


4. What kind of compute resources will you need?

If you are planning to host your initiative in the data center, have you scoped the compute resources needed? AI tools such as machine learning algorithms can be compute-intensive and may need additional hardware such as GPUs to handle the processing load.


If you are planning to host it in the cloud using resources such as Google’s Cloud ML or Amazon’s Comprehend, have you scoped the costs? What are the infosec issues regarding sending data outside the firewall?

Training and deploying AI models may require hardware resources not typically available at organizations that are new to AI. Cloud providers can facilitate access to the necessary equipment, but they can be expensive, and transferring training data to the cloud may be out of the question if your dataset contains confidential information. Make sure you have not only the compute resources you’ll need, but also clearance to transfer the data to train your AI systems there.

5. How long will it take to get the solution into production – and once it’s there, how will you update the solution?

In addition to best practices for application tools and solution development, AI solutions require model training and testing. What share of the system’s positive predictions must actually be correct (precision)? What share of the true targets must it find – in other words, how many missed targets are acceptable (recall)?
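Precision and recall are simple ratios over the confusion-matrix counts, and it helps to have them pinned down when negotiating acceptance criteria. A minimal computation, with invented counts for illustration:

```python
# Sketch: precision and recall from confusion-matrix counts.
# tp = true positives, fp = false positives, fn = false negatives.
def precision_recall(tp, fp, fn):
    """Precision: share of positive predictions that are correct.
    Recall: share of true targets the system actually found."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical test-set counts: 90 hits, 10 false alarms, 30 misses.
p, r = precision_recall(tp=90, fp=10, fn=30)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.90 recall=0.75
```

Which of the two matters more depends on the cost of a false alarm versus a miss, and that trade-off is a business decision, not a purely technical one.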

Training and testing the model can take weeks, months, or even longer. Once the solution is in production, how will updates be handled? Will the model need to be completely retrained and tested? How will the integrity of the model be ensured once it’s in production? Nuances in live data can change over time so periodic re-evaluation and tuning may be required.
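That periodic re-evaluation can be as simple as scoring the model on a labeled sample of live data at regular intervals and flagging when accuracy drifts below an agreed threshold. A minimal sketch, with hypothetical weekly measurements and an arbitrary 0.85 threshold:

```python
# Sketch: flag evaluation windows where a deployed model's accuracy
# on labeled live samples has drifted below an agreed threshold.
def drifting_windows(window_accuracies, threshold=0.85):
    """Return indices of windows whose accuracy falls below threshold."""
    return [i for i, acc in enumerate(window_accuracies) if acc < threshold]

# Hypothetical weekly accuracy measurements after deployment.
weekly_accuracy = [0.91, 0.90, 0.88, 0.84, 0.79]
flagged = drifting_windows(weekly_accuracy)
print(f"Retraining review needed for windows: {flagged}")  # [3, 4]
```

A downward trend like this is the signal that live data has shifted and the retraining-and-testing cycle needs to run again.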

AI systems will inevitably make mistakes, so you’ll need plans in place to deal with those false predictions when they occur. And just as traditional software deployments still require maintenance and administration after the core development is completed, AI systems also need to be continually evaluated, tuned, and updated. Just because the project has gone live doesn’t mean you should immediately assign the experts who trained the models to a new project.

Setting an AI project up for success can require significant preparation – in terms of engineering effort, but also in embracing the right mindset within the organization. Above all, it’s important to be realistic about what an AI system can achieve. Even if your system is deployed on the fastest processors money can buy, it won’t necessarily outperform – or even match – the accuracy of the human subject matter expert it’s meant to emulate.
