Weaving Generative AI into DevSecOps

Software development has historically been a time-intensive and often tedious process that requires extensive tooling and configuration. Generative AI is changing that, offering software engineers the ability to dramatically streamline application development, improve code quality, and deliver more functionality. To reap those benefits, however, they need to put the right guardrails in place.

According to the 2023 State of AI in Software Development report, code creation accounts for only 25% of a developer’s time. The rest is spent on prep work: setting up a development environment, collecting the necessary tools and libraries, establishing version control, and accounting for security issues.

A lot of this work is repetitive and follows set patterns, and humans tend to become distracted and make mistakes, leaving holes in the process. Generative AI can enhance the workflow in many ways: integrating AI into software engineering creates more robust, more secure, higher-quality software, and does so much more quickly than traditional development. For instance, generative AI software can examine a failed build, assess what went wrong, and propose possible solutions, reducing remediation time.

Establish Guardrails for AI

AI is very helpful but, like humans, imperfect: it can overlook key items such as security vulnerabilities. AI solutions can automatically generate tests, but this is less useful for code that already exists. Generative AI typically tests only what it wrote and does not perform sophisticated analysis, leaving the enhancement open to potential problems.

Therefore, dev teams must put checks in place to catch potential vulnerabilities before they make their way into production applications. Organizations need to scope out project parameters and then establish rules, best practices, and guardrails to mitigate risk and meet compliance requirements. This step is complicated and requires input from legal, compliance, and DevSecOps teams. Many companies are engaging in this area for the first time and, as a result, may need help. GitLab and its AI Transparency Center recently released valuable resources on building a transparency-first AI strategy.

After companies understand the potential risks, they need to talk with their AI provider to understand how the solution works. What AI models does it use? What data do those models interact with? Which vector databases does the application access? How large are the large language models (LLMs) being used, and how do they function? That analysis provides a good foundation for understanding where potential security holes may arise.

Another best practice is limiting how many distinct AI tools are used throughout the software development lifecycle across the organization. The more tools in use, the more complexity is introduced, potentially causing security risks, operational issues, and oversight challenges. More tools also mean greater overhead, more difficulty centrally managing what is occurring, and more training for the tech staff.

Put Metrics in Place 

Sometimes, enhanced does not mean better. To truly understand AI’s impact, dev teams need to establish baselines and then measure areas like productivity. Typically, organizations would examine how quickly they move code into production, the four DORA metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service), or the time it takes to remediate bugs.

Those items provide snapshots, not a complete picture. A better option is building out standard workflow measurements inside groups and projects. That way, metrics roll up automatically from teams to business units, and managers can analyze the outputs continuously.
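To make the baseline idea concrete, here is a minimal sketch of computing two of the DORA metrics from deployment records. The data structure and function names are hypothetical illustrations, not the GitLab API; the point is simply that a team can compute the same numbers before and after an AI rollout and compare them.

```python
from datetime import datetime

# Hypothetical deployment records: (commit_time, deploy_time) pairs.
deployments = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 2, 9, 0)),
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 22, 0)),
    (datetime(2024, 1, 5, 8, 0), datetime(2024, 1, 6, 20, 0)),
]

def lead_time_hours(records):
    """Median lead time for changes, in hours (a DORA metric)."""
    hours = sorted((d - c).total_seconds() / 3600 for c, d in records)
    mid = len(hours) // 2
    return hours[mid] if len(hours) % 2 else (hours[mid - 1] + hours[mid]) / 2

def deploys_per_week(records):
    """Deployment frequency, normalized to a 7-day week (a DORA metric)."""
    deploy_times = [d for _, d in records]
    span_days = max((max(deploy_times) - min(deploy_times)).days, 1)
    return len(records) / span_days * 7
```

Recording these two numbers for each team, then re-measuring after introducing AI tooling, turns “did this help?” into a comparison of baselines rather than a matter of opinion.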

Buy or Build? 

However, most software engineers do not want to build such AI monitoring tools themselves. GitLab created an expanding AI DevSecOps platform and toolbox that includes powerful generative AI models and cutting-edge technologies from hyperscale cloud vendors. GitLab Duo delivers a range of features, such as code assistance, a conversational chat assistant, and a vulnerability explainer.

The solution’s benefits extend throughout the software development lifecycle.

Explain Code in Natural Language 

QA testers can use Code Explanation to quickly and easily understand code. For instance, if a merge request (MR) includes code written in Rust with a complex set of methods, a QA tester can highlight the methods and receive a natural-language readout of what the change is trying to do. This feature enables the tester to write test cases more efficiently.

Write Merge Request Descriptions 

GitLab Duo automates the creation of comprehensive descriptions for merge requests, quickly and accurately capturing the essence of an MR’s string of commits. The tool also surfaces tasks that are missing.

Root Cause Analysis of Pipeline Errors

If something breaks, troubleshooting can be difficult. GitLab Duo identifies a possible root cause and a recommended action that can be copied and pasted directly back into a CI job.

Vulnerability Resolution

In the rush to shift security left, engineering teams have had to quickly become security experts. Issues can arise that they’re not familiar with. With generative AI, engineers can access Duo Chat to learn what a vulnerability is, where it is in the code, and even open an automated MR with a possible fix. All of these actions occur within the development window, so no context-switching is needed, saving software engineers time. 

Increase Security and Productivity

By using a tool like GitLab Duo, businesses increase software delivery velocity. They reduce the time required to resolve vulnerabilities and validate merge requests, and they ensure the right reviewers and the right tests are in place. As a result, code review time diminishes and quality increases.

They also gain visibility. Software engineers can view each stage, including dependencies, and the time it takes the development team to get through those stages. Dashboards illustrate that speed, so teams can easily pivot if needed. In essence, they also have a better handle on whether to release software into production.

When used consistently across the software development lifecycle, GitLab Duo can drive a 10x faster cycle time, helping organizations do more with less and allowing employees to spend their time on high-value tasks.

The “Omdia Market Radar: AI-Assisted Software Development, 2023–24” report highlighted GitLab Duo as one of the products the analyst firm considers “suitable for enterprise-grade application development,” noting that its “AI assistance is integrated throughout the SDLC pipeline.”

Software development moves faster and faster, and DevSecOps teams sometimes have trouble keeping pace. Generative AI has the potential to automate different pieces of the development cycle, but businesses need tools to ensure that unintended consequences don’t occur when processes are automated. GitLab Duo offers them a platform that lets them reap generative AI’s potential benefits and avoid its pitfalls.

Full Disclosure: This blog post is sponsored by GitLab.

Software Daily
