Isaac Sacolick
Contributing writer

4 key devsecops skills for the generative AI era

analysis
Jan 01, 20248 mins
Artificial IntelligenceDevopsGenerative AI

High-level skills such as validating and monitoring LLMs and learning the AI stack are good areas for technologists to tackle.

When cloud computing became enterprise-ready, and tools such as continuous integration and continuous delivery, infrastructure as code, and Kubernetes became mainstream, it marked a clear paradigm shift in dev and ops. The work separating dev and ops became devops responsibilities, and collaborative teams shifted from manual work configuring infrastructure, scaling computing environments, and deploying applications to more advanced automation and orchestrated workflows.

Experts believe that generative AI capabilities, copilots, and large language models (LLMs) are ushering in a new era of how developers, data scientists, and engineers will work and innovate. They expect AI to improve productivity, quality, and innovation, but devsecops teams must understand and manage a new set of data, security, and other operational risks. More importantly, CIOs and teams in devsecops, information security, and data science will play important roles in enabling and protecting the organization using generative AI capabilities.

IT leaders must drive an AI responsibility shift

CIOs and IT leaders must prepare their teams and employees for this paradigm shift and how generative AI impacts digital transformation priorities. Nicole Helmer, VP of development and customer success learning at SAP, says training must be a priority. “Companies should prioritize training for developers, and the critical factor in increasing adaptability is to create space for developers to learn, explore, and get hands-on experience with these new AI technologies,” she says.

The shift may be profound and tactical as more IT automation becomes productized, enabling IT to shift to more innovation, architecture, and security responsibilities.

“In light of generative AI, devops teams should deprioritize basic scripting skills for infrastructure provisioning and configuration, low-level monitoring configurations and metrics tracking, and test automation, says Dr. Harrick Vin, chief technology officer of TCS. “Instead, they should focus more on product requirements analysis, acceptance criteria definition, software, and architectural design, all of which require critical thinking, design, strategic goal setting, and creative problem-solving skills.”

Here are four devsecops, data science, and other IT skills to develop for this era of generative AI.

Prompt AIs, but research and validate the response

Prompting is fundamental when working with generative AI tools, including ChatGPT, copilots, and other LLMs. But the more important skill is evaluating results, recognizing hallucinations, and independently validating generative AI’s recommendations.

“Developers, testers, and business analysts should learn how to write prompts [and learn] where generative AI does well and where it falls down,” says David Brooks, SVP and lead evangelist at Copado. “Adopt a ‘trust but verify’ mentality where you actually read all of the generated content to determine if it makes sense.”

Cody De Arkland, director of developer relations at LaunchDarkly, says prompting and validating skills must be applied to experiments with LLMs. “Used correctly, developers can leverage an LLM to enhance their product experimentation by rapidly generating new experiment variations, especially when the prompt is framed around their hypothesis and with the right audience in mind. Learning to catch the gaps in the answers they give and how to take the 90% it gives you and close the gap on the final 10% will make you a much more effective devops practitioner.”

My recommendation to devsecops engineers is to shift problem-solving approaches. Before LLMs, engineers would research, validate, implement, and test solutions. Today, engineers should insert prompting at the start of the process but not lose the remaining steps when experimenting.

Improve LLMs with data engineering

When I asked Akshay Bhushan, partner at Tola Capital, for his pick of an important generative AI skill set, he responded, “Data engineering is becoming the most important skill because we need people to build pipelines to feed data to the model.”

Before LLMs, many organizations focused on building robust data pipelines, improving data quality, enabling citizen data science capabilities, and establishing proactive data governance on structured data. LLMs require an expanded scope of unstructured data, including text, documents, and multimedia to train and enable a broader context. Organizations will need data scientists and data governance specialists to learn new tools to support unstructured data pipelines and develop LLM embeddings, and there will be opportunities for devsecops engineers to integrate applications and automate the underlying infrastructure.

“Generative AI models rely heavily on data for training and evaluation, so data pipeline orchestration skills are essential for cleaning, preprocessing, and transforming data into a format suitable for machine learning,” says Rohit Choudhary, cofounder and CEO of Acceldata. “Visualization skills are also important for understanding data distributions, identifying patterns, and analyzing model performance.”

All technologists will have opportunities to learn new data engineering skills and apply them to growing business needs.

Learn the AI stack from copilots to modelops

Technology platform providers are introducing generative AI capabilities in IDEs, IT service management platforms, and other agile development tools. Copilots that generate code based on developers’ prompts are promising opportunities for developers, but they require evaluating the results for integration, performance, security, and legal considerations.

“AI has ushered in a whole new era of efficiency, but tools like Copilot produce massive amounts of code which are not always accurate,” said Pryon Founder and CEO Igor Jablokov. “Both the devops stack and cybersecurity industry will have to catch up in spotting generative code to ensure no copyright issues and defects are being introduced into the enterprise.”

Organizations with significant intellectual property can create embedding and develop privatized LLMs for prompting and using natural language queries against this data. Examples include searching financial information, developing LLMs on healthcare patient data, or establishing new educational learning tools. Developers and data scientists who want to contribute to developing LLMs have several new technologies to learn.

“The modern devops engineer needs to learn vector databases and the open source stack, such as Hugging Face, Llama, and LangChain,” says Nikolaos Vasiloglou, VP of research machine learning at RelationalAI. “While using giant language models with 100 billion parameters is popular, there is enough evidence that the game might change with fine tuning and composing hundreds of smaller models. Managing the life cycle of these models is another task that is not trivial.”

Lastly, although developing proofs of concept and experimenting is important, the goal should be to deliver production-ready generative AI capabilities, monitor their results, and continuously improve them. The disciplines of MLops and modelops extend from machine learning into generative AI and are required to support the full development and support life cycles.

Kjell Carlsson, head of data science strategy and evangelism at Domino, says, “The ability to operationalize generative AI models and their pipelines is quickly becoming the most valuable skill in AI as it is the largest barrier in driving impact with generative AI.”

Shift-left security and test automation

Experts all state that researching, validating, and testing a generative AI’s responses are critical disciplines, but many IT organizations lack the security and QA test automation staffing, skills, and tools to meet the growing challenges. Developers, operations engineers, and data scientists should invest in these security and test automation skills to help fill these gaps.

“With AI, we can shift security, QA, and observability left in the development life cycle, catch issues earlier, deliver higher-quality code, and give developers rapid feedback,” says Marko Anastasov, cofounder of Semaphore CI/CD. “Legacy skills like manual testing and siloed security may become less important as AI and automation take over more of that work.”

IT must institute continuous testing and security disciplines wherever they insert generative AI capabilities into their workflows, leverage AI-generated code, or experiment with developing LLMs.

“Devops teams should prioritize skills that bridge the gap between generative AI and devops, such as mastering AI-driven threat detection, ensuring the security of automated CI/CD pipelines, and understanding AI-based bug remediation,” says Stephen Magill, VP of product innovation at Sonatype. “Investing in areas that are the biggest pain points for teams, such as the lack of insight into how code was built or code sprawl from producing too much code, is also crucial, while less emphasis can be placed on manual and reactive tasks.”

However, focusing on the security and testing around how IT uses generative AI is insufficient, as many other departments and employees are already experimenting with ChatGPT and other generative AI tools.

David Haber, CEO and cofounder of Lakera, says devops teams must understand AI security. “Develop skills to mitigate common vulnerabilities like prompt injections or training data poisoning, and conduct LLM-oriented red-teaming exercises. Devops teams should implement continuous monitoring and incident response mechanisms to quickly detect emerging threats and respond before they become a companywide problem.”

Will generative AI change the world, or will risks and regulations slow down innovation’s pace? Every major technological advancement comes with new technical opportunities, challenges, and risks. Learning the tools and applying test-driven approaches are key practices for technologists to adapt with generative AI, and there are growing security responsibilities to address as departments look to operationalize AI-enabled capabilities.

Isaac Sacolick
Contributing writer

Isaac Sacolick, President of StarCIO, a digital transformation learning company, guides leaders on adopting the practices needed to lead transformational change in their organizations. He is the author of Digital Trailblazer and the Amazon bestseller Driving Digital and speaks about agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO, a digital transformation influencer, and has over 900 articles published at InfoWorld, CIO.com, his blog Social, Agile, and Transformation, and other sites.

The opinions expressed in this blog are those of Isaac Sacolick and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.

More from this author