Continuous integration

Figure: sketch of a flow diagram for continuous integration

Continuous integration (CI) is the practice of frequently building and testing a software system during its development. It is intended to ensure that code written by programmers is always buildable, runnable and passes automated tests. Developers merge their changes to a shared integration branch, and an automated system builds and tests the result.[1] Often, the automated process runs on each commit or on a schedule such as once a day.

Grady Booch first proposed the term CI in 1991.[2] He did not advocate integrating multiple times a day, but CI later came to include that practice.[3]

History

The earliest known work on continuous integration was the Infuse environment developed by G. E. Kaiser, D. E. Perry, and W. M. Schell.[4]

In 1994, Grady Booch used the phrase continuous integration in Object-Oriented Analysis and Design with Applications (2nd edition)[5] to explain how, when developing using micro processes, "internal releases represent a sort of continuous integration of the system, and exist to force closure of the micro process".

In 1997, Kent Beck and Ron Jeffries invented extreme programming (XP) while on the Chrysler Comprehensive Compensation System project, including continuous integration.[1][self-published source] Beck published about continuous integration in 1998, emphasising the importance of face-to-face communication over technological support.[6] In 1999, Beck elaborated more in his first full book on Extreme Programming.[7] CruiseControl, one of the first open-source CI tools,[8][self-published source] was released in 2001.

In 2010, Timothy Fitz published an article detailing how IMVU's engineering team had built and been using the first practical CI system. While his post was originally met with scepticism, it quickly caught on and found widespread adoption[9] as part of the Lean software development methodology, which was also based on IMVU's experience.

Goal

A stated goal of CI is to run the automated process frequently enough that no intervening window remains between a commit and its build, and that no errors can arise without developers noticing and correcting them immediately.[1] Generally, this means triggering a build on each commit to the repository. Due to processing limitations, multiple changes are sometimes committed between automation runs.

Practices

A server builds from the integration branch frequently, usually after each commit or on a schedule such as once a day.

Once the build completes, all tests should run.[10]

The server may also run other quality control and software quality processes such as static analysis, measuring performance, extracting documentation from the source code, and facilitating manual QA processes.
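
These post-build checks can be illustrated with a short script that runs the test suite and a static-analysis pass, then reports an overall result to the CI server. The following is a minimal sketch, assuming a Python project that uses pytest and flake8 and keeps its sources under src/; the tool choices and layout are illustrative assumptions, not something prescribed by the sources cited here.

```python
"""Minimal sketch of quality checks run after a successful build."""
import subprocess
import sys

# Each step is a (name, command) pair; pytest and flake8 are assumed tools.
STEPS = [
    ("unit tests", ["pytest", "-q"]),
    ("static analysis", ["flake8", "src/"]),
]

def run_quality_checks() -> bool:
    """Run every check and report whether all of them passed."""
    all_ok = True
    for name, cmd in STEPS:
        print(f"Running {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"{name} FAILED")
            all_ok = False
    return all_ok

if __name__ == "__main__":
    # A non-zero exit code tells the CI server that the build is broken.
    sys.exit(0 if run_quality_checks() else 1)
```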

Related practices

This section lists practices, recommended by practitioners, that complement and enhance CI.

Build automation

Build automation is a best practice.[11][12]

Atomic commits

CI requires the version control system to support atomic commits; i.e., all of a developer's related changes are applied as a single, indivisible commit.

Committing changes

When making a code change, a developer creates a branch that is a copy of the current codebase. As other changes are committed to the repository, this copy diverges from the latest version.

The longer development continues on a branch without merging to the integration branch, the greater the risk of multiple integration conflicts[13] and failures when the developer branch is eventually merged back. When developers submit code to the repository, they must first update their code to reflect the changes made to the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.

Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as "merge hell", or "integration hell",[14] where the time it takes to integrate exceeds the time it took to make their original changes.[15]

Testing locally

Proponents of CI suggest that developers should use test-driven development and ensure that all unit tests pass locally before committing to the integration branch, so that one developer's work does not break another developer's copy.

Incomplete features can be disabled before committing, using feature toggles.
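
A feature toggle can be as simple as a flag that keeps the incomplete code path switched off until it is ready. The sketch below is illustrative: the checkout feature, the flag name and the use of an environment variable are all assumptions; real projects often read toggles from a configuration file or a feature-flag service.

```python
import os

def new_checkout_enabled() -> bool:
    """Return True only when the incomplete feature is explicitly enabled.

    ENABLE_NEW_CHECKOUT is a hypothetical flag name used for illustration.
    """
    return os.environ.get("ENABLE_NEW_CHECKOUT", "0") == "1"

def legacy_checkout(cart: list) -> str:
    return f"checked out {len(cart)} items (legacy flow)"

def new_checkout(cart: list) -> str:
    return f"checked out {len(cart)} items (new flow, still incomplete)"

def checkout(cart: list) -> str:
    # The incomplete code path ships with every commit but stays disabled
    # until the toggle is switched on.
    return new_checkout(cart) if new_checkout_enabled() else legacy_checkout(cart)

if __name__ == "__main__":
    print(checkout(["book", "pen"]))
```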

Continuous delivery and continuous deployment

Continuous delivery ensures that the software checked in on the integration branch is always in a state that can be deployed to users, while continuous deployment automates the deployment process.

Continuous delivery and continuous deployment are often performed in conjunction with CI and together form a CI/CD pipeline.

Version control

Proponents of CI recommend storing all files and information needed for building in version control (for Git, in the repository), so that the system can be built from a fresh checkout without requiring additional dependencies.

Martin Fowler recommends that all developers commit to the same integration branch.[16]

Automate the build

Build automation tools script the steps of producing a build so that it can run without manual intervention.

Proponents of CI recommend that a single command should be able to build the entire system.

Automation often extends to the integration itself, and frequently includes deployment into a production-like environment. In many cases, the build script not only compiles binaries but also generates documentation, website pages, statistics and distribution media (such as Debian DEB, Red Hat RPM or Windows MSI files).
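
The single-command recommendation can be pictured as one entry-point script that chains the individual steps, such as compiling, generating documentation and producing distribution media. This is only a sketch; every command below is a placeholder chosen for illustration rather than a prescribed toolchain.

```python
"""Sketch of a single-command build entry point, e.g. `python build.py`."""
import subprocess
import sys

# Placeholder steps: a real project would substitute its own compiler,
# documentation generator and packaging tool.
PIPELINE = [
    ["python", "-m", "compileall", "src"],                  # compile step
    ["python", "-m", "pydoc", "-w", "src"],                 # documentation step
    ["python", "-m", "zipfile", "-c", "dist.zip", "src"],   # packaging step
]

def main() -> int:
    for cmd in PIPELINE:
        print("->", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            return 1  # stop at the first failing step
    return 0

if __name__ == "__main__":
    sys.exit(main())
```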

Everyone commits to the baseline every day

By committing regularly, every committer can reduce the number of conflicting changes. Checking in a week's worth of work runs the risk of conflicting with other features and can be very difficult to resolve. Early, small conflicts in an area of the system cause team members to communicate about the change they are making.[17] Committing all changes at least once a day (once per feature built) is generally considered part of the definition of Continuous Integration. In addition, performing a nightly build is generally recommended.[citation needed] These are lower bounds; the typical frequency is expected to be much higher.

Every commit should be built

The system should build commits to the current working version to verify that they integrate correctly. A common practice is to use automated continuous integration, although this may be done manually. Automated continuous integration employs a continuous integration server or daemon to monitor the revision control system for changes and then automatically run the build process.
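
A minimal CI daemon can be sketched as a loop that polls the revision control system for a new head revision and triggers the build when one appears. The sketch below assumes a local Git clone of the integration branch and a placeholder build.py entry point; both are assumptions made for the example.

```python
"""Sketch of a CI daemon that polls a Git repository for new commits."""
import subprocess
import time

def head_revision() -> str:
    """Return the commit hash currently checked out."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()

def run_build() -> bool:
    """Placeholder for the project's real build-and-test command."""
    return subprocess.run(["python", "build.py"]).returncode == 0

def poll(interval_seconds: int = 60) -> None:
    last_built = None
    while True:
        subprocess.run(["git", "pull", "--ff-only"], check=True)  # fetch new commits
        revision = head_revision()
        if revision != last_built:
            print(f"New revision {revision[:8]} detected; building...")
            print("build OK" if run_build() else "build FAILED")
            last_built = revision
        time.sleep(interval_seconds)

if __name__ == "__main__":
    poll()
```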

Every bug-fix commit should come with a test case

When fixing a bug, it is good practice to also push a test case that reproduces the bug. This prevents the fix from being reverted and the bug from reappearing, which is known as a regression.
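
As a hypothetical illustration, an off-by-one fix might be committed together with a test that reproduces the original failure; if the fix is ever reverted, the test fails again and exposes the regression. The function and the bug below are invented for the example.

```python
def last_n(items: list, n: int) -> list:
    """Return the last n items (an earlier, buggy version dropped one item)."""
    return items[-n:] if n > 0 else []

def test_last_n_includes_final_element():
    # This test reproduces the original bug report; it fails again if the
    # off-by-one fix is ever reverted.
    assert last_n([1, 2, 3, 4], 2) == [3, 4]

if __name__ == "__main__":
    test_last_n_includes_final_element()
    print("regression test passed")
```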

Keep the build fast

The build needs to complete rapidly so that if there is a problem with integration, it is quickly identified.

Test in a clone of the production environment

Testing only in a test environment can lead to failures when the tested system is deployed to the production environment, because the production environment may differ from the test environment in significant ways. However, building a full replica of the production environment is often cost-prohibitive. Instead, the test environment or a separate pre-production environment ("staging") should be built as a scalable version of the production environment that keeps costs down while preserving the composition and nuances of the technology stack. Within these test environments, service virtualisation is commonly used to obtain on-demand access to dependencies (e.g., APIs, third-party applications, services, mainframes) that are beyond the team's control, still evolving, or too complex to configure in a virtual test lab.
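
As an illustration of service virtualisation, a test environment can stand in a lightweight stub for an external dependency the team does not control. The sketch below fakes a hypothetical third-party payment API with a local HTTP server; the endpoint, port and response format are assumptions made for the example.

```python
"""Sketch of a stubbed ("virtualised") third-party service for a test environment."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakePaymentAPI(BaseHTTPRequestHandler):
    """Imitates an external payment API that the team cannot control."""

    def do_POST(self):
        if self.path == "/charge":
            body = json.dumps({"status": "approved", "id": "test-0001"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Tests point their payment-API base URL at http://localhost:8099
    # instead of the real, uncontrolled dependency.
    HTTPServer(("localhost", 8099), FakePaymentAPI).serve_forever()
```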

Make it easy to get the latest deliverables

Making builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. Finding errors earlier can reduce the amount of work necessary to resolve them.

All programmers should start the day by updating the project from the repository. That way, they will all stay up to date.

Everyone can see the results of the latest build

It should be easy to find out whether the build breaks and, if so, who made the relevant change and what that change was.

Automate deployment

Most CI systems allow the running of scripts after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. A further advance in this way of thinking is continuous deployment, which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.[18][19]
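
Such a post-build deployment step is often just another script that the CI system invokes once the build and tests pass. The sketch below copies a built artifact to a test server with rsync; the artifact name, target host and transfer method are placeholder assumptions.

```python
"""Sketch of a post-build hook that deploys the built artifact to a test server."""
import subprocess
import sys

ARTIFACT = "dist.zip"                      # assumed output of the build step
TARGET = "deploy@test-server:/srv/app/"    # hypothetical live test server

def deploy() -> int:
    # Copy the artifact; a real setup would also restart the service remotely.
    return subprocess.run(["rsync", "-avz", ARTIFACT, TARGET]).returncode

if __name__ == "__main__":
    sys.exit(deploy())
```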

Benefits

Continuous integration is intended to produce benefits such as:

  • Helps detect bugs early and facilitate fixing them due to smaller changesets
  • Avoids build chaos that otherwise ensues at release time
  • When a test fails or a bug is found, reverting the codebase to a bug-free baseline entails fewer lost changes
  • Frequent availability of a known-good build for testing, demo, and release
  • Frequent code check-in encourages modular, less complex code[20]

Continuous automated testing can provide further benefits.

Costs

Downsides of continuous integration include:

  • Setup of a build system requires effort.[21]
  • Constructing and maintaining an automated test suite requires effort, although this cost may be offset by the benefits of testing
  • CI may not be valuable if the project is small
  • Value added depends on the quality of tests[22]
  • Tracking deliveries while preserving quality can be difficult
  • Builds can sit in a queue, which limits value.[22]
  • A partial implementation of a feature can be pushed, causing integration tests to fail until the feature is complete.[22]
  • Safety- and mission-critical development assurance standards (e.g., DO-178C, ISO 26262) require rigorous documentation and in-process review that are difficult to achieve using continuous integration

References

  1. ^ a b c Fowler, Martin (1 May 2006). "Continuous Integration". Retrieved 9 January 2014.
  2. ^ Booch, Grady (1991). Object Oriented Design: With Applications. Benjamin Cummings. p. 209. ISBN 9780805300918. Retrieved 18 August 2014.
  3. ^ Beck, K. (1999). "Embracing change with extreme programming". Computer. 32 (10): 70–77. doi:10.1109/2.796139. ISSN 0018-9162.
  4. ^ Kaiser, G. E.; Perry, D. E.; Schell, W. M. (1989). Infuse: fusing integration test management with change management. Proceedings of the Thirteenth Annual International Computer Software & Applications Conference. Orlando, Florida. pp. 552–558. CiteSeerX 10.1.1.101.3770. doi:10.1109/CMPSAC.1989.65147.
  5. ^ Booch, Grady (December 1998). Object-Oriented Analysis and Design with applications (PDF) (2nd ed.). Archived from the original (PDF) on 19 August 2019. Retrieved 2 December 2014.
  6. ^ Beck, Kent (28 March 1998). "Extreme Programming: A Humanistic Discipline of Software Development". Fundamental Approaches to Software Engineering: First International Conference. Vol. 1. Lisbon, Portugal: Springer. p. 4. ISBN 9783540643036.
  7. ^ Beck, Kent (1999). Extreme Programming Explained. Addison-Wesley Professional. p. 97. ISBN 978-0-201-61641-5.
  8. ^ "A Brief History of DevOps, Part III: Automated Testing and Continuous Integration". CircleCI. 1 February 2018. Retrieved 19 May 2018.
  9. ^ Sane, Parth (2021), "A Brief Survey of Current Software Engineering Practices in Continuous Integration and Automated Accessibility Testing", 2021 Sixth International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 130–134, arXiv:2103.00097, doi:10.1109/WiSPNET51692.2021.9419464, ISBN 978-1-6654-4086-8, S2CID 232076320
  10. ^ Radigan, Dan. "Continuous integration". Atlassian Agile Coach.
  11. ^ Brauneis, David (1 January 2010). "[OSLC] Possible new Working Group – Automation". open-services.net Community (Mailing list). Archived from the original on 1 September 2018. Retrieved 16 February 2010.
  12. ^ Taylor, Bradley. "Rails Deployment and Automation with ShadowPuppet and Capistrano". Rails machine (blog). Archived from the original on 2 December 2012. Retrieved 16 February 2010.
  13. ^ Duvall, Paul M. (2007). Continuous Integration. Improving Software Quality and Reducing Risk. Addison-Wesley. ISBN 978-0-321-33638-5.
  14. ^ Cunningham, Ward (5 August 2009). "Integration Hell". WikiWikiWeb. Retrieved 19 September 2009.
  15. ^ "What is Continuous Integration?". Amazon Web Services.
  16. ^ Fowler, Martin. "Practices". Continuous Integration (article). Retrieved 29 November 2015.
  17. ^ "Continuous Integration". Thoughtworks.
  18. ^ Ries, Eric (30 March 2009). "Continuous deployment in 5 easy steps". Radar. O’Reilly. Retrieved 10 January 2013.
  19. ^ Fitz, Timothy (10 February 2009). "Continuous Deployment at IMVU: Doing the impossible fifty times a day". Wordpress. Retrieved 10 January 2013.
  20. ^ Junpeng, Jiang; Zhu, Can; Zhang, Xiaofang (July 2020). "An Empirical Study on the Impact of Code Contributor on Code Smell" (PDF). International Journal of Performability Engineering. 16 (7): 1067–1077. doi:10.23940/ijpe.20.07.p9.10671077. S2CID 222588815.
  21. ^ Laukkanen, Eero (2016). "Problems, causes and solutions when adopting continuous delivery—A systematic literature review". Information and Software Technology. 82: 55–79. doi:10.1016/j.infsof.2016.10.001.
  22. ^ a b c Debbiche, Adam. "Assessing challenges of continuous integration in the context of software requirements breakdown: a case study" (PDF).
