AI is getting very good at writing code

Choosing which programming language to learn is a big commitment for today’s developers, since mastering one takes a long time. That question may become moot, however, once artificial intelligence (AI) models can do the tedious work themselves: understanding a problem description and coding the solution.

Researchers at DeepMind, Google’s AI-focused unit, claim that their AlphaCode system can write solutions to coding problems that score reasonably well in programming competitions against human entrants. These competitions require competitors to understand problems written in natural language and to code efficient algorithms.

In a new, non-peer-reviewed paper, DeepMind researchers explain how AlphaCode achieved an average ranking in the top 54.3% of participants across 10 recent contests, each with more than 5,000 participants, hosted on Codeforces, a popular competitive programming platform.

DeepMind claims that AlphaCode is the first AI code generation system to perform at a competitive level in coding competitions against human developers. The company hopes this research will increase programmers’ productivity and allow non-programmers to express solutions without knowing how to code.

Like the human participants, AlphaCode needed to quickly analyze the description of a task or puzzle and write a program that solves it. This is harder than training a model on GitHub data to solve simple coding tasks: like a human, AlphaCode had to understand a multi-paragraph natural-language problem statement, including background information and a description of the desired solution in terms of its input and output.

To solve such a problem, competitors must first devise an algorithm and then implement it efficiently. To stay within a contest’s time limits, for example, a competitor might choose a faster programming language, such as C++, instead of Python.
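To illustrate the “devise the algorithm, then implement it efficiently” step, here is a toy example in the style of a contest task (this is a hypothetical illustration, not an AlphaCode output): answering many range-sum queries over an array. Re-summing each range costs O(n) per query; precomputing prefix sums answers each query in O(1).

```python
# Toy contest-style task: given an array and many (lo, hi) queries,
# return the sum of nums[lo:hi] for each query. Precomputing prefix
# sums turns each O(n) query into a single O(1) subtraction.

def build_prefix(nums):
    """prefix[i] holds the sum of nums[:i]."""
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix, lo, hi):
    """Sum of nums[lo:hi] in O(1) using the precomputed prefix sums."""
    return prefix[hi] - prefix[lo]

nums = [3, 1, 4, 1, 5, 9, 2, 6]
prefix = build_prefix(nums)
print(range_sum(prefix, 2, 5))  # 4 + 1 + 5 = 10
```

Spotting this kind of algorithmic shortcut from a prose problem statement, and then expressing it in efficient code, is exactly the two-step skill the competitions test.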

AlphaCode’s pre-training dataset contains 715GB of code from GitHub repository files written in C++, C#, Go, Java, JavaScript/TypeScript, Lua, Python, PHP, Ruby, Rust, and Scala. The team then fine-tuned the model on a dataset of competitive programming problems taken from Codeforces and similar sources.

The boost DeepMind gave to AlphaCode comes from large transformer models, the architecture behind OpenAI’s GPT-3 and Google’s BERT language models. DeepMind used a transformer-based language model to generate code, then filtered the output down to a small group of “promising programs” to be submitted for evaluation.

As the AlphaCode team explains on the DeepMind blog, at evaluation time the system generates a massive number of C++ and Python programs for each problem, far more than previous systems did.

It then filters, clusters, and ranks those solutions down to a small set of 10 candidate programs that are submitted for external assessment. This automated pipeline replaces the competitor’s trial-and-error process of debugging, compiling, passing tests, and eventually submitting.
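The filter-then-cluster step described above can be sketched in a few lines. DeepMind has not published this pipeline, so everything below is a hypothetical illustration: candidate programs are modeled as plain functions from an input string to an output string, failures on the problem’s example tests are discarded, and the survivors are grouped by their behavior on extra probe inputs so that only one representative per behavior is submitted.

```python
# Hypothetical sketch (not DeepMind's code) of the described pipeline:
# 1) keep only candidates that pass the problem's example tests,
# 2) cluster survivors by their outputs on extra probe inputs,
# 3) submit one representative from each of the k largest clusters.
from collections import defaultdict

def filter_candidates(candidates, example_tests):
    """Keep only programs that pass every (input, expected_output) example."""
    return [run for run in candidates
            if all(run(inp) == out for inp, out in example_tests)]

def cluster_and_pick(survivors, probe_inputs, k=10):
    """Group programs that behave identically on the probe inputs,
    then pick one representative from each of the k largest groups."""
    clusters = defaultdict(list)
    for run in survivors:
        signature = tuple(run(inp) for inp in probe_inputs)
        clusters[signature].append(run)
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [group[0] for group in ranked[:k]]

# Toy candidates: one correct (sums the numbers), one buggy (takes the max).
good = lambda s: str(sum(map(int, s.split())))
buggy = lambda s: str(max(map(int, s.split())))
examples = [("1 2 3", "6")]
survivors = filter_candidates([good, buggy, good], examples)
picked = cluster_and_pick(survivors, probe_inputs=["4 5", "10 20 30"])
print(len(picked))  # the buggy candidate is filtered out; the two
                    # identical good candidates collapse into one cluster
```

Clustering by behavior rather than by source text matters here: two syntactically different programs that compute the same function waste a submission slot if both are sent.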

DeepMind demonstrates how AlphaCode codes a solution to a given problem here.

DeepMind also considers some potential downsides of the technology. For example, a model could generate code containing exploitable weaknesses, such as “an unintended vulnerability from legacy code or a vulnerability that was deliberately injected into a training set by a malicious actor.”

There is also an environmental cost: training the model required “hundreds of petaflops days” in Google’s data centers. In the long run, however, AI code generation could “lead to systems that can be recursively written and self-improved, and rapidly lead to advanced systems.”

While there is a risk that this kind of automation will reduce demand for developers, DeepMind points to the limits of today’s code-completion tools, which can significantly improve programming productivity but until recently were restricted to single-line suggestions within one language, or to short code snippets.

However, DeepMind emphasizes that the work is by no means a threat to human programmers, and that the system’s problem-solving ability must develop further before it can help humanity. “Our code generation research shows a lot of room for improvement, and even more exciting ideas to help programmers be more productive and open the field to people who aren’t currently writing code,” said the DeepMind researchers.
