Part 1 of this blog series on the AI ripple effect in software development and testing looked at some of the factors behind the rapid growth of AI-led development and testing. This installment digs deeper into the growing need for integration and end-to-end testing, and into going back to basics: building tests from requirements!
Three interconnected movements are contributing to the increased standardization we're seeing across the components of new software.
First, let's look at the surge in low-code development platforms like Salesforce, Bubble, or Retool. These platforms are reshaping how we think about software building. While traditional coding won't be packing its bags any time soon, low-code tools empower developers to stitch together applications by arranging pre-built components in visual interfaces. It's a clever twist that not only streamlines the development process but also provides a consistent, standardized approach to application building.
Second, the use of developer assistance tools like Copilot, which generates code based on patterns learned from public sources, leads to unintended standardization. As developers rely on AI assistants to generate code for specific behaviors, the models' ability to follow best practices and handle edge cases improves over time. The result? Co-authored code that's robust, resilient, and in sync with recognized coding standards.
Third, the increasing integration of third-party services, such as authentication or analytics, into software products contributes to standardization across multiple applications. This adoption of standardized components further drives efficiency and overall quality in software construction.
The increasing level of standardization at the component level is likely to shift the focus of testing efforts. As certain application components become more standardized, the similarities across their usage in different applications increase, and the need for extensive testing decreases. This allows testing experts to allocate more time and resources to test the unique and challenging aspects of their systems, rather than spending excessive time on testing routine functionalities.
Standardization efforts, including the use of frameworks that streamline common boilerplate in software development, also contribute to enhanced security by mitigating common vulnerabilities like SQL injection. This reinforces the importance of integration and end-to-end testing, as the focus shifts from individual component testing to more comprehensive and interconnected testing approaches.
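To make the SQL injection point concrete, here is a minimal sketch in Python using the standard-library `sqlite3` module. The table and query are illustrative only; the point is the pattern that frameworks standardize on: passing user input as a bound parameter rather than concatenating it into the query string.

```python
import sqlite3

# Minimal in-memory database for illustration; the table and its
# contents are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: user input is concatenated directly into SQL, so an
    # input like "' OR '1'='1" changes the meaning of the query.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as
    # data. Frameworks bake this pattern in, which is why their
    # boilerplate mitigates injection by default.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "' OR '1'='1"
print(find_user_unsafe(malicious))  # leaks every row
print(find_user_safe(malicious))    # returns nothing
```

When the framework's query builder only exposes the parameterized form, this whole class of bug becomes hard to write in the first place, which is exactly the component-level standardization the paragraph above describes.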
Overall, the combination of standardization, code generation, and the integration of third-party services will drive the need for increased emphasis on integration and end-to-end testing in the software development and testing process.
We've seen AI models step up and show off their prowess in generating unit tests, either by scrutinizing code or following test-driven development (TDD) practices. These AI-generated unit tests not only ramp up test coverage but also sharpen the quality and effectiveness of tests.
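As a flavor of what AI-generated unit tests look like, here is a hypothetical function together with the kind of edge-case tests an assistant typically proposes when asked to cover it (empty input, punctuation, casing). Both the function and the tests are illustrative, not from any specific tool.

```python
def slugify(title):
    """Turn a title into a lowercase, hyphen-separated slug."""
    # Replace every non-alphanumeric character with a space, then
    # split and rejoin so runs of punctuation collapse cleanly.
    words = "".join(c if c.isalnum() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

# Tests in the style an AI assistant tends to generate: one happy
# path plus the edge cases a human often skips.
def test_basic_title():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_is_stripped():
    assert slugify("AI, Testing & You!") == "ai-testing-you"

def test_empty_input():
    assert slugify("") == ""

test_basic_title()
test_punctuation_is_stripped()
test_empty_input()
```

The empty-input and punctuation cases are exactly the routine coverage that, once automated, frees testers to concentrate on the harder integration-level questions discussed above.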
Improving unit tests translates to better software components. This will steer testing efforts toward the top of the testing pyramid, putting more emphasis on integration and end-to-end testing.
As software development evolves along a spectrum from fully bespoke to fully standardized (or generated using low-code techniques), a parallel spectrum of test types will become more prevalent.
There's a plot twist in the epic of software development: the rebirth of requirements. Once, requirements documents were the VIPs at the development party, setting the tone for every project. However, over time, our industry shifted its focus to communication and collaboration, aiming to prevent business needs from getting distorted as they passed through different stages of the development process.
So, software behavior morphed into a chameleon, donning different colors for different stakeholders: business requirements for some, systems requirements for others, then test cases, scripts, and finally code. This multitude of representations often resulted in project delays and a chaotic game of "Who's got the right version?".
Enter modern AI language models. They've kicked open the door to a whole new realm where computers understand the subtleties of human language. Future AI will harness this power to read narrative-format requirements, automatically generating everything we need, from code to test cases to user manuals. Picture this: requirements reclaiming their throne in the kingdom of development, with AI as their loyal knight.
There will be a renewed emphasis on capturing the intended behavior effectively. AI-powered systems will assist users in describing their needs better by seeking clarifications when certain behaviors are vague or subjective. While this transformation will not happen overnight, we anticipate early signs of this shift in tools like Copilot, where developers express their intent, and eventually extending to testing tools, where test creators increasingly leverage requirements to craft their test cases.
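The clarification-seeking behavior described above can be sketched as a toy heuristic: scan a narrative requirement for vague or subjective wording and turn each hit into a follow-up question. This is purely illustrative; the vague-term list and the function are assumptions, and a real AI system would use a language model rather than keyword matching.

```python
# Illustrative list of vague terms a requirements assistant might
# flag; any real system would learn these rather than hard-code them.
VAGUE_TERMS = {"fast", "quickly", "user-friendly", "intuitive",
               "robust", "as appropriate"}

def clarification_prompts(requirement):
    """Return follow-up questions for any vague wording found."""
    lowered = requirement.lower()
    return [
        f"What does '{term}' mean concretely in this context?"
        for term in sorted(VAGUE_TERMS)
        if term in lowered
    ]

req = "The search page should load quickly and feel intuitive."
for question in clarification_prompts(req):
    print(question)
```

Running this on the sample requirement yields questions about "quickly" and "intuitive", nudging the author toward measurable statements like "results return within 200 ms", which then pass through with no prompts.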
Imagine testers with superhuman capabilities. Sounds exciting, doesn't it? While we can't gaze into a crystal ball and predict the distant future, we have a pretty solid guess about what the next two, five, and even ten years hold. The forecast? The emergence of super professionals, enhanced by AI-powered sidekicks.
We're already seeing this dynamic duo in action. Marketers and AI are joining forces to craft content, while developers and AI are pairing up to spin out code. Now, it's the testers' turn. We foresee a major evolution in the testing field, with AI-powered tools lending testers a hand at every stage of their workflow. It's vital to note that AI is not the new Grim Reaper here, intending to replace testers. Rather, it's like a high-tech pair of glasses helping testers focus better on high-value tasks.
With the assistance of AI, testers will be able to dedicate more time and energy to activities that bring significant value to the testing process. This includes defining testing strategies, deriving comprehensive testing plans, optimizing test coverage, advocating for early quality activities, providing mentorship, and more. By leveraging AI to handle tasks such as maintaining test cases, generating test data, and producing reports, testers will free up valuable time and resources.