Developers can now use generative artificial intelligence (GenAI) to write code more quickly and effectively. But they should proceed cautiously and apply the same care as before.
Although artificial intelligence (AI) has been used in software development since at least 2019, GenAI significantly improves the creation of natural language, images and, more recently, videos and other assets, such as code, said Diego Lo Giudice, vice president and principal analyst at Forrester.
In earlier incarnations of AI, which were mostly used for code testing, machine learning was applied to optimize models for testing procedures, Lo Giudice said. Beyond those use cases, GenAI offers interactive querying, letting developers quickly draw knowledge from what amounts to an expert peer programmer or specialist (such as a business analyst or tester). It can also recommend test cases and solutions.
For the first time, he said, we are seeing notable productivity gains that traditional AI and other technologies have not delivered.
Lo Giudice noted that developers can use AI throughout the entire software development lifecycle, with a specific “TuringBot” at each stage to improve tech stacks and platforms.
TuringBots, a term coined by Forrester, are AI-powered tools that assist engineers in writing, testing, and deploying code. According to the research firm, TuringBots will power a new wave of software creation, helping with every phase of the process, from finding technical documentation to automatically completing code.
“Analyze/plan TuringBots,” for example, can help with the planning and analysis stage of software development, Lo Giudice said, citing OpenAI’s ChatGPT and Atlassian Intelligence as examples. Others go further: Microsoft Sketch2Code can produce working code from hand-drawn user interface sketches, while Google Cloud’s Gemini Advanced can generate designs for microservices and APIs along with their code implementations, he added.
“Coder TuringBots” are now the most common use case for GenAI in software development, Lo Giudice said. These tools generate code from prompts, code context, and comments via autocompletion in widely used integrated development environments (IDEs), and they support common languages such as JavaScript, C++, Python, and Rust.
Generative models are appealing because they can write code in several languages, letting developers submit a prompt and have lines of code generated, modified, or debugged, said Michael Bachman, head of architecture and AI strategy at Boomi. “Essentially all humans interacting with GenAI are quasi and senior developers,” he said.
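The prompt-plus-context flow described above can be illustrated with a small sketch. Everything here (the function name, the prompt wording) is hypothetical; real coding assistants assemble far richer context from open files, symbols, and project metadata:

```python
def build_codegen_prompt(language: str, code_context: str, instruction: str) -> str:
    """Assemble a code-generation prompt from surrounding code and a
    developer instruction. Illustrative only, not any vendor's format."""
    return (
        f"You are a {language} coding assistant.\n"
        f"Existing code:\n{code_context}\n"
        f"Task: {instruction}\n"
        "Return only code, with no explanation."
    )

# Example: asking for a completion of a half-written function.
prompt = build_codegen_prompt(
    "Python",
    "def slugify(title: str) -> str:",
    "complete this function: lowercase the title and replace spaces with hyphens",
)
```

The assembled string would then be sent to a generative model; the value to the developer is that the code context and the comment-style instruction travel with the request.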
The software vendor builds GenAI into several of its offerings, including Boomi AI, which translates natural language requests into actions: it lets developers create the integrations, APIs, and data models that connect data, applications, and processes.
The company also uses GenAI to assist its in-house software developers, who closely review the code it produces.
That, Bachman said, is the crucial part: anyone who bases an application’s entire development on GenAI will most likely be disappointed. Internally, Boomi’s skilled developers use GenAI as a jumping-off point and thoroughly test failure scenarios before deploying code into production, he said.
His team also builds capabilities to meet customers’ “practical AI objectives.” Boomi, for instance, is developing a retrieval system because many of its customers want to search content on their websites, such as catalogs, in natural language rather than with keyword searches.
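To illustrate the idea behind such a retrieval system, here is a toy natural-language search over a catalog using bag-of-words cosine similarity. This is purely illustrative; Boomi’s actual system is not described in the article, and production retrieval typically uses learned embeddings rather than word counts:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy bag-of-words vector; real systems use embedding models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, catalog: list[str]) -> str:
    # Rank catalog entries by similarity to the query, not exact keyword match.
    qv = vectorize(query)
    return max(catalog, key=lambda doc: cosine(qv, vectorize(doc)))

catalog = [
    "red running shoes for men",
    "stainless steel kitchen knife set",
    "wireless noise cancelling headphones",
]
best = search("shoes i can run in", catalog)  # → "red running shoes for men"
```

Even this crude version returns the best match without the query repeating the catalog’s exact wording, which is the gap between keyword search and the natural-language search customers are asking for.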
Lo Giudice added that developers can also use GenAI against security flaws: it can identify weaknesses in AI-generated code and recommend fixes for them.
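A review gate for AI-generated code can start with something far simpler than GenAI: a deny-list scan that flags obviously risky constructs before a tool- or human-led review. This sketch is a plain static check, not GenAI, and the patterns are only examples:

```python
import re

# Hypothetical deny-list; real reviews use proper static-analysis tools
# (and, as Lo Giudice suggests, GenAI itself) plus a human reviewer.
RISKY_PATTERNS = {
    r"\beval\(": "eval() on dynamic input",
    r"shell\s*=\s*True": "subprocess with shell=True",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def flag_risks(generated_code: str) -> list[str]:
    """Return human-readable warnings for risky constructs found in code."""
    return [msg for pat, msg in RISKY_PATTERNS.items()
            if re.search(pat, generated_code)]

warnings = flag_risks(
    "subprocess.run(cmd, shell=True)\nrequests.get(url, verify=False)"
)
```

Each warning is a reason to send the generated snippet back for rework rather than merging it as-is.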
In contrast to traditional coding, a no- or low-code development approach can provide speed, built-in quality, and adaptability, according to John Bratincevic, principal analyst at Forrester.
It also opens development to a broader talent pool, including non-coders and “citizen developers” outside of IT, and comes with an integrated software development lifecycle toolchain, Bratincevic said.
Managing large-scale adoption can be difficult for organizations, he warned, particularly when thousands of citizen developers must be overseen. Pricing, which is usually based on the number of end users, can also be a barrier, he said.
Although GenAI and AI-infused software assistants can help junior workers bridge skill gaps in cybersecurity and other fields, Lo Giudice noted that all of this work still requires an expert’s review.
Bratincevic agreed, emphasizing that anything the platform generates or auto-configures with AI must be reviewed by developers and other staff throughout the software development lifecycle.
We are not yet, and most likely never will be, at the point where we can build software by relying on AI alone, he said.
Security concerns must also be taken into account, said Scott Shaw, Asia-Pacific CTO at Thoughtworks. The tech consultancy regularly evaluates new tools that boost productivity, whether inside the IDE or elsewhere in developers’ work. Although some firms remain hesitant about GenAI, Shaw said Thoughtworks uses it only where it is appropriate for its customers, and only with their cooperation.
“In our experience, secure coding practices are not as well-informed and intuitive in [GenAI-powered] software development tools,” he said. Developers at companies in regulated or data-sensitive environments, for example, may need to follow additional security protocols and guidelines as part of their software delivery processes.
While a coding assistant can quadruple productivity, he noted, developers should consider whether they can sufficiently test the resulting code and meet quality standards along the way.
It is a double-edged sword: organizations should consider integrating GenAI into their coding processes to make their products more secure, but they must also weigh the additional security risks the AI introduces through new attack surfaces and weaknesses.
Because GenAI operates at such scale, Shaw pointed out, everything a company does is amplified, including the hazards: it can generate far more code, and with it far more potential risks.
Know your AI models
Bratincevic noted that while low-code platforms can provide a solid foundation for GenAI TuringBots to assist in software development, businesses must know which large language models (LLMs) are being used and make sure they comply with corporate rules.
GenAI players “vary wildly” in this regard, he said, advising companies that use third-party LLMs such as OpenAI’s ChatGPT to review the licensing agreement and model version.
GenAI features that generate code or component configurations from natural language are still early in their development, he added. Professional developers are unlikely to be impressed, but such features may see greater uptake among citizen developers.
For now, Bratincevic said, pairing GenAI with a well-established low-code platform makes more sense than using a lightweight or unproven platform that merely talks a good AI game.
Although LLMs can do the bulk of the labor-intensive code generation, people are still needed to understand the requirements and supply the context, knowledge, and debugging that ensure correct output, Bachman said.
When using open-source technologies, he added, developers should be cautious about exposing confidential information and intellectual property (IP). To avoid training GenAI models on another company’s IP, they should refrain from feeding in private IP such as financial data and proprietary code. He also advised testing any open-source LLM extensively before deploying it in a live environment.
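One way to act on that caution is a redaction filter that strips apparent secrets from a prompt before it leaves the company. This is a minimal sketch with illustrative patterns only; real data-loss-prevention tooling is far more thorough about what counts as confidential:

```python
import re

# Illustrative patterns only: an email address and an API-key-like token.
REDACTIONS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "<EMAIL>"),
    (re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"), "<API_KEY>"),
]

def redact_prompt(prompt: str) -> str:
    """Replace apparent secrets with placeholders before sending a prompt
    to an external model."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

clean = redact_prompt("Contact jane.doe@example.com, token sk-abcdef1234567890AB")
# → "Contact <EMAIL>, token <API_KEY>"
```

A filter like this does not make prompts safe on its own, but it keeps the most obvious confidential strings out of third-party training data.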
Bachman said he would err on the side of caution when it comes to the models GenAI tools are trained on: for those models to deliver any value, appropriate pipelines must be in place. Without them, he warned, GenAI can create far more problems than it solves.
The technology is still in its infancy and will likely keep evolving, so it is unclear how it will affect jobs in general and software engineering in particular.
AI-driven coding assistants could, for instance, shift which skills are valued. As Shaw joked: will developers be prized for their experience, or for their ability to recall every coding sequence?
For now, he believes GenAI’s greatest promise lies in its ability to summarize information, giving developers a solid knowledge base to deepen their understanding of the business. They can then translate that knowledge into precise instructions, letting computers carry out the assigned tasks and build the features and products that customers want.