
Versioning with Git Tags and Conventional Commits


In software development, a basic practice is versioning and version control of the software. In many models of development, such as DevSecOps, version control covers far more than the source code: it also includes the infrastructure configuration, test suites, documentation, and many other artifacts. Several DevSecOps maturity models consider version control a basic practice, including the OWASP DevSecOps Maturity Model as well as the SEI Platform Independent Model.

The dominant tool for version control of source code and other human-readable files is git. It is the tool that backs popular source code management platforms, such as GitLab and GitHub. At its most basic, git is excellent at incorporating changes and allowing movement to different versions or revisions of a tracked project. However, one downside is the mechanism git uses to name versions. Git versions, or commit IDs, are SHA-1 hashes. This problem is not unique to git. Many source control tools solve the problem of uniquely identifying one set of changes from any other in a similar way. In Mercurial, another source code management tool, a changeset is identified by a 160-bit identifier.

This means that to refer to a version in git, one may have to specify an ID such as 521747298a3790fde1710f3aa2d03b55020575aa (or the shorter but no less descriptive 52174729). This is not a good way for developers or users to refer to versions of software. Git understands this and so provides tags that allow assignment of human-readable names to these versions. This is an extra step after writing a commit message, and the tag name ideally is based on the changes introduced in the commit. That is duplicated effort and a step that can be missed. This leads to the central question: How can we automate the assignment of versions (via tags)? This blog post explores my work extending the conventional commit paradigm to enable automatic semantic versioning with git tags to streamline the development and deployment of software products. This automation is intended to save development time and prevent issues with manual versioning.
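To make the manual workflow concrete, the following shell commands show the step this post sets out to automate; the tag name and message are illustrative only:

```
# Inspect the opaque commit IDs that git assigns
git rev-parse HEAD          # e.g., 521747298a3790fde1710f3aa2d03b55020575aa
git rev-parse --short HEAD  # e.g., 52174729

# Manually attach a human-readable version to the current commit and publish it
git tag -a 1.4.0 -m "Add tagging example"
git push origin 1.4.0
```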

I recently worked on a project where one template repository was reused in about 100 other repository pipelines. It was important to test and make sure nothing was going to break before pushing out changes on the default branch, which most of the other projects pointed to. However, with so many consumers of the templates, there was inevitably one repository that would break or use the script in a non-conventional way. In several instances, we needed to revert changes on the branch to enable all repositories to pass their continuous integration (CI) checks again. In some cases, failing the CI pipeline would hamper development for the users because passing the script checks was required in their CI pipelines before building and other stages. Consequently, some consumers would create a long-lived branch in the template repository I helped maintain. These long-lived branches are separate versions that do not get all of the same updates as the main line of development. The branches were created so that users did not get all of the changes rolled out on the default branch immediately. Long-lived branches can become stale when they do not receive updates that have been made to the main line of development. These long-lived, stale branches made it difficult to clean up the repository without also possibly breaking CI pipelines. This became a problem because, when reverting the repository to a previous state, I often had to point to a reference, such as HEAD~3, or the hash of the previous commit before the breaking change was integrated into the default branch. The issue was exacerbated by the fact that the repository was not using git tags to denote new versions.

While there are some arguments for using the latest and greatest version of a new software library or module (often referred to as “live at head”), this way of working was not working for this project and user base. We needed better version control in the repository and a way to signal to users whether a change would be breaking before they updated.

Conventional Commits

To get a handle on understanding the changes to the repository, the developers chose to adopt and enforce conventional commits. The conventional commits specification provides rules for creating an explicit commit history on top of commit messages. Also, by breaking up a title and body, the impact of a commit can be more easily deduced from the message (assuming the author understood the change implications). The standard also ties into semantic versioning (more on that in a minute). Finally, by enforcing length requirements, the team hoped to avoid commit messages such as fixed stuff, Working now, and the automatic Updated .gitlab-ci.yml.

For conventional commits, the following structure is imposed:

<type>[optional scope]: <description>

[optional body]

[optional footer(s)]

Where <type> is one of fix, feat, BREAKING CHANGE, or others. For this project we chose slightly different terms. The following regex defines the commit message requirements in the project that inspired this blog post:

^(feature|bugfix|refactor|build|major)/ [a-z ]{20,}(\r\n?|\n)(\r\n?|\n)[a-zA-Z].{20,}$

An example of a conventional commit message is:

feature: Add a new post about git commits

The post explains how to use conventional commits to automatically version a repository

The main motivation behind enforcing conventional commits was to clean up the project's git history. Being able to understand the changes that a new version brings in from the commits alone can speed up code reviews and help when debugging issues or determining when a bug was introduced. It is good practice to commit early and often, though the balance between committing every failed experiment with the code and not cluttering the history has led to many different git strategies. While the project inspiring this blog post makes no recommendations on how often to commit, it does enforce at least a 20-character title and a 20-character body for the commit message. This adherence to conventional commits by the team was foundational to the rest of the work done in the project and described in this blog post. Without the ability to determine what changed and the impact of the change directly from the git history, the effort would have been complicated and potentially pushed toward a less portable solution. Enforcing a 20-character minimum may seem arbitrary and a burden for some smaller changes. However, enforcing this minimum is a way to get to informative commit messages that have real meaning for a human reviewing them. As noted above, this limit can force developers to rework a commit message from ci working to Updated variable X in the ci file to fix build failures with GCC.
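One way the format can be enforced locally is with a commit-msg hook. The following is a minimal sketch (my own illustration, not the project's tooling) that reuses the regex above and assumes GNU grep with PCRE support:

```
#!/bin/sh
# Sketch of a commit-msg hook that rejects messages not matching the project
# regex above. $1 is the path to the commit message file.
pattern='^(feature|bugfix|refactor|build|major)/ [a-z ]{20,}(\r\n?|\n)(\r\n?|\n)[a-zA-Z].{20,}$'

if ! grep -Pzq "$pattern" "$1"; then
    echo "Commit message does not match the conventional commit format." >&2
    exit 1
fi
```

The same check could also run as an early CI job against git log -1 --pretty=%B so that non-conforming messages fail fast.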

Semantic Versioning

As noted, conventional commits tie themselves to the notion of semantic versioning, which semver.org defines as “a simple set of rules and requirements that dictate how version numbers are assigned and incremented.” The standard denotes a version number consisting of MAJOR.MINOR.PATCH, where MAJOR is any change that is incompatible, MINOR is a backward-compatible change with new features, and PATCH is a backward-compatible bug fix. While there are other versioning strategies and some noted issues with semantic versioning, this is the convention the team chose to use. Having versions denoted this way through git tags allows users to see the impact of a change and update to a new version when ready. Conversely, a team could continue to live at head until they run into an issue and then more easily see which versions were available to roll back to.
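In practice, once versions are denoted with semantic version tags, a consumer can pin to or roll back to a specific version with ordinary git commands (the tag name below is illustrative):

```
# See which versions are available, then build against a known-good one
git tag --list
git checkout 2.3.1
```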

COTS Solutions

The idea of automatically updating to a new semantic version when a merge request is accepted is not new. There are tools and automations that provide the same functionality but are generally targeted at a specific CI system, such as GitHub Actions, or a specific language, such as Python. For example, the autosemver Python package is able to extract information from git commits to generate a version. The autosemver capability, however, relies on being set up in a setup.py file. Additionally, that project is not widely used in the Python community. Similarly, there is the semantic-release tool, but it requires Node.js in the build environment, which is less common in some projects and industries. There are also open source GitHub Actions that enable automatic semantic versioning, which is great if the project is hosted on that platform. After evaluating these options, though, it did not seem necessary to introduce Node.js as a dependency, the project was not hosted on GitHub, and the project was not Python-based. Because of these limitations, I decided to implement my own minimum viable product (MVP) for this functionality.

Other Implementations

Having decided against off-the-shelf solutions to the problem of versioning the repo, I next turned to some blog posts on the subject. First, a post by Three Dots Labs helped me identify a solution oriented toward GitLab, similar to my project. That post, however, left it up to the reader how to determine the next tag version. Marc Rooding expanded on the Three Dots Labs post with his own blog post. There he suggests using merge request labels and pulling these from the API to determine the version to bump the repository to. This approach had three drawbacks that I identified. First, it seemed like an additional manual step to add the correct labels to the merge request. Second, it relies on the API to get the labels from the merge request. Finally, it would not work if a hotfix were committed directly to the default branch. While this last point should be disallowed by policy, the pipeline should still be robust should it happen. Given the likelihood of error in the case of commits directly to main, it is even more important that tags are generated for rollback and tracking. Given these factors, I decided to use the conventional commit types from the git history to determine the version update needed.

Implementation

The template repository referenced in the introduction uses GitLab as the CI/CD system. Consequently, I wrote a pipeline job to extract the git history for the default branch after a merge. The pipeline job assumes that either (1) there is a single commit, (2) the commits were squashed and each properly formatted commit message is contained in the squash commit, or (3) a merge commit is generated in the same way (containing all branch commits). This means that the setup proposed here can work with squash-and-merge or rebase-and-fast-forward strategies. It also handles commits made directly to the default branch, should anyone do that. In each case, the assumption is that the commit (whether merge, squash, or normal) still matches the pattern for conventional commits and is written with the correct conventional commit type (major, feature, and so on). The last commit is saved in a variable LAST_COMMIT, and the last tag in the repo in LAST_TAG.

A quick aside on merging strategies. The solution proposed in this blog post assumes that the repository uses a squash-and-merge strategy for integrating changes. There are defensible arguments both for a linear history with all intermediate commits represented and for a cleaner history with only a single commit per version. With a full, linear history one can see the development of each feature and all the trials and errors a developer had along the way. However, one downside is that not every version of the repository represents a working version of the code. With a squash-and-merge strategy, when a merge is performed, all commits in that merge are condensed into a single commit. This means there is a one-to-one relationship between commits on the main branch and branches merged into it. This enables reverting to any one commit and having a version of the software that passed through whatever review process is in place for changes going into the trunk or main branch of the repository. The right strategy should be determined for each project. Many tools that wrap around git, such as GitLab, make either strategy easy with settings and configuration options.

With all of the conventional commit messages since the last merge to main captured, these commit messages are passed to the next_version.py Python script. The logic is fairly simple. The inputs are the current version number and the last commit message. The script simply looks for the presence of "major" or "feature" as the commit type in the message. It works on the basis that if any commit in the branch's history is typed as "major", the script is done and outputs the next major version. If "major" is not found, the script searches for "feature", and if neither is found the merge is assumed to be a patch version. In this way the repo is always updated by at least a patch version.

The logic lives in a Python script because Python was already a dependency in the build environment, and it is clear enough what the script is doing. The same could be rewritten in Bash (e.g., the semver tool), in another scripting language, or as a pipeline of *nix tools.
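As a rough illustration of that portability, the same bump logic could be sketched in Bash along these lines (the function name and sample input are mine, not part of the project):

```
# Sketch: compute the next semantic version from the last tag and the last commit message
next_version() {
    local last_tag="$1" last_commit="$2"
    local major minor patch
    IFS='.' read -r major minor patch <<< "$last_tag"
    case "$last_commit" in
        *major/*)   echo "$((major + 1)).0.0" ;;
        *feature/*) echo "${major}.$((minor + 1)).0" ;;
        *)          echo "${major}.${minor}.$((patch + 1))" ;;
    esac
}

next_version "1.4.2" "feature/ add an example to the post"   # prints 1.5.0
```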

The following code defines a GitLab pipeline with a single stage (release) that has a single job in that stage (tag-release). Rules specify that the job only runs if the commit reference name is the same as the default branch (usually main). The script portion of the job adds curl, git, and Python to the image. Next it gets the last commit via the git log command and stores it in the LAST_COMMIT variable. It does the same with the last tag. The pipeline then uses the next_version.py script to generate the next tag version and finally pushes a tag with the new version using curl and the GitLab API.

```
stages:
  - release

tag-release:
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
  stage: release
  script:
    - apk add curl git python3
    - LAST_COMMIT=$(git log -1 --pretty=%B) # Last commit message
    - LAST_TAG=$(git describe --tags --abbrev=0) # Last tag in the repo
    - NEXT_TAG=$(python3 next_version.py "${LAST_TAG}" "${LAST_COMMIT}")
    - echo Pushing new version tag ${NEXT_TAG}
    - curl -k --request POST --header "PRIVATE-TOKEN:${TAG_TOKEN}" --url "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/repository/tags?tag_name=${NEXT_TAG}&ref=main"
```

The following Python script takes two arguments: the last tag in the repo and the last commit message. The script then determines the type of commit via the if/elif/else statements, increments the last tag to the appropriate next tag, and prints out the next tag to be consumed by the pipeline.

```
import sys

# Inputs: the last tag in the repo (for example, 1.4.2) and the last commit message
last_tag = sys.argv[1]
last_commit = sys.argv[2]
next_tag = ""
brokenup_tag = last_tag.split(".")

# A "major/" commit bumps the major version and resets minor and patch
if "major/" in last_commit:
    major_version = int(brokenup_tag[0])
    next_tag = str(major_version + 1) + ".0.0"

# A "feature/" commit bumps the minor version and resets patch
elif "feature/" in last_commit:
    feature_version = int(brokenup_tag[1])
    next_tag = brokenup_tag[0] + "." + str(feature_version + 1) + ".0"

# Anything else is treated as a patch
else:
    patch_version = int(brokenup_tag[2])
    next_tag = brokenup_tag[0] + "." + brokenup_tag[1] + "." + str(patch_version + 1)

print(next_tag)
```

Finally, the last step is to push the new version to the git repository. As mentioned, this project was hosted in GitLab, which provides an API for git tags in the repo. The NEXT_TAG variable was generated by the Python script, and then curl is used to POST a new tag to the repository's /tags endpoint. Encoded in the URL is the ref to create the tag from. In this case it is main but could be adjusted. The one gotcha here, as stated previously, is that the job runs only on the default branch pipeline after the merge takes place. This ensures the last commit (HEAD) on the default branch (main) is tagged. In the GitLab job above, TAG_TOKEN is a CI variable whose value is a deploy token. This token needs the appropriate permissions to be able to write to the repository.
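For projects that would rather not depend on the platform API, the same step could be performed with plain git commands instead, assuming the job is configured with credentials that allow pushing tags:

```
# Tag the current HEAD of the default branch and publish the tag
git tag "${NEXT_TAG}"
git push origin "${NEXT_TAG}"
```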

Next Steps

Semantic versioning's main motivation is to avoid a situation where a piece of software is in either a state of version lock (the inability to upgrade a package without having to release new versions of every dependent package) or version promiscuity (assuming compatibility with more future versions than is reasonable). Semantic versioning also helps signal to users and avoid issues where an API call is changed or removed and software no longer interoperates. Tracking versions informs users and other software that something has changed. The version number, while helpful, does not tell a user what has changed. The next step, building on both discrete versions and conventional commits, is the ability to condense those changes into a changelog, giving developers and users “a curated, chronologically ordered list of notable changes for each version of a project.” This helps developers and users know what has changed, in addition to the impact.
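With tags and conventional commits in place, much of the raw material for such a changelog is already in the history. For example, the commit titles that went into a release can be listed directly (the tag names below are illustrative):

```
# List the conventional commit titles between two released versions
git log 1.4.0..1.5.0 --pretty=format:%s
```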

Having a way to signal to users when a library or other piece of software has changed is important. Even so, it is not necessary for versioning to be a manual process for developers. There are commercial products and free, open source solutions to this problem, but they may not always be a good fit for a particular development environment. When it comes to security-critical software, such as encryption or authentication, it is a good idea not to roll your own. However, for continuous integration (CI) jobs, commercial off-the-shelf (COTS) solutions are sometimes excessive and bring significant dependencies with them. In this example, with a six-line Bash script and a 15-line Python script, one can implement automatic semantic versioning in a pipeline job that (in the deployment tested) runs in about 10 seconds. This example also shows how the approach can be minimally tied to a specific build or CI system and not dependent on a particular language or runtime (even though Python was used out of convenience).
