GitHub Integration With Subject7

Overview

In this guide, we will walk you through the process of integrating Subject7 with GitHub Actions. It is assumed that you are well-acquainted with GitHub and possess a fundamental understanding of programming, particularly in Python.

Whether you're a seasoned developer or just starting, this document will provide you with clear, step-by-step instructions to successfully set up the integration between Subject7 and GitHub, ensuring a seamless workflow for your projects.

GitHub Actions is a powerful tool for automating your software development and deployment workflows. One common use case is to call a REST API as part of your job. In this article, we'll walk you through the process of creating a GitHub Actions workflow named 'daily-run' using Python to call a REST API. We'll break down each section of the GitHub CI/CD configuration file to explain its purpose.

This workflow consists of three jobs: build, deploy, and run-tests. The build and deploy jobs are responsible for building the application and deploying it to the target server. The real integration happens in the run-tests job: there we use the Subject7 REST API to start the execution of a predefined test set (an Execution Set), which is essentially a collection of test cases. Based on the outcome of this execution, we can decide whether to fail the build or promote it to the next job/step of the pipeline.

Main Subject7 REST API functions: 

1. Start an Execution 

2. Query the results of an execution 

3. Get the artifacts of an execution (video, logs, etc.) 
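For reference, here is a minimal Python sketch of the first two calls, using the same endpoints as the full script shown later in this article (POST /api/v2/executions to start an execution, GET /api/v2/executions/{id} to query its state). The host, API key, project, version, and execution set name are placeholders, and retrieving artifacts is not covered by this sketch:

import requests

HOST = "https://demo.subject7.com"   # placeholder Subject7 instance
API_KEY = "<your-api-key>"           # placeholder; keep real keys in GitHub secrets

# 1. Start an execution of a named execution set
start = requests.post(
    f"{HOST}/api/v2/executions",
    json={"name": "github_testing",
          "configuration": {"projectName": "jira_testing", "versionName": "default"}},
    headers={"X-API-KEY": API_KEY, "Content-Type": "application/json"},
)
execution_id = start.json()["id"]

# 2. Query the state and status of that execution
result = requests.get(
    f"{HOST}/api/v2/executions/{execution_id}",
    headers={"X-API-KEY": API_KEY},
)
print(result.json()["executionState"], result.json()["executionStatus"])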

 

Setting Up The 'daily' Workflow

Here's an example GitHub Actions configuration for a 'daily' workflow, with each section explained below. The workflow is stored in a YAML file that should be added to the repository at the following path: .github/workflows/daily.yml

name: daily
run-name: ${{ github.actor }} running daily
on: [push]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - name: Build with Maven 11
        run: mvn -B package --file pom.xml
  deploy:
    runs-on: self-hosted
    needs: build
    steps:
      - run: ./deploy.sh
  run-tests:
    needs: build
    runs-on: self-hosted
    steps:
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip --disable-pip-version-check install -r .github/tests/requirements.txt
      - name: Run tests
        run: |
          python3 .github/tests/run_tests.py https://demo.subject7.com ${{ secrets.S7_API_KEY }} jira_testing default github_testing false

 

name

Workflow name. This particular workflow is named "daily".

 

run-name

The run name shown in the workflow run history. The run is named dynamically using the username of the GitHub actor who triggered it, showing who is “running daily”.

 

on

Describes the event that starts the workflow. In the example above, the workflow starts on each push - [push]. You can configure any other event, run the workflow on a schedule, or start it only manually.
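For example, since this workflow is called "daily", you might prefer to trigger it on a schedule rather than on every push, and keep the option to start it manually. A possible trigger section (standard GitHub Actions syntax; the cron expression is only an illustration) could look like this:

on:
  schedule:
    - cron: '0 6 * * *'     # every day at 06:00 UTC (example value)
  workflow_dispatch:        # allows starting the workflow manually from the Actions tab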

 

jobs

The workflow comprises three main jobs: build, deploy, and run-tests.

 

runs-on

Each job is set to run on a self-hosted runner rather than a GitHub-hosted one. GitHub also provides its own hosted runners for Linux, Windows, and macOS.
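If you do not have a self-hosted machine, a job can be switched to a GitHub-hosted runner instead, for example:

runs-on: ubuntu-latest   # GitHub-hosted Linux runner instead of self-hosted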

 

needs

Declares that a job has to wait for another job to finish, so those jobs cannot run in parallel. In our case the deploy job waits for the build job, and the run-tests job also waits for the build job.
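If you prefer the tests to start only after the application has actually been deployed, you could make run-tests wait for deploy as well, for example (a fragment of the jobs section):

  run-tests:
    needs: [build, deploy]   # wait for both build and deploy before running tests
    runs-on: self-hosted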

 

steps

Ordered steps of the job that will be executed one by one.

 

build job

This job checks out the codebase using the actions/checkout@v4 action, then sets up JDK 11 with the actions/setup-java@v3 action, configured to use the Temurin distribution and the Maven cache. It then builds the project with Maven.
This is an example of how to build a Java project. GitHub already provides ready-made actions such as checkout and setup-java, so there is no need to install these tools manually with your own scripts on each agent machine. On a self-hosted runner the tools are also cached, so they are not downloaded again on the next workflow run.

 

deploy job

Occurs post-build and executes a custom script (deploy.sh) to deploy the built application. This job will not start until the build job completes successfully.

 

run-tests job

This job also depends on the build job. It sets up Python 3.10 with the actions/setup-python@v3 action, installs the necessary dependencies from a requirements.txt file located in the .github/tests/ directory, and then runs a Python script that executes the tests.

 

The only Python dependency we need is requests, which allows us to make REST calls.

So a requirements.txt file should be added to .github/tests (or any other directory in the project) and contain only the single line below:

requests==2.31.0

Remember that if you change the file location, you also need to update the corresponding path in the run-tests job step.

 

By following these guidelines, you can create a 'daily' workflow in your GitHub Actions that efficiently calls a REST API using Python. This automation can help streamline your development and deployment processes, ensuring consistent and reliable results.

 

Script Launch Parameters

As you can see in the last step of the run-tests job, the Python script takes 6 parameters:

  1. The URL of the Subject7 instance where you want to run tests, useful if you have multiple on-prem installations. If you use only one instance, you can hard-code it in the Python script and drop this parameter. (In the example above, it is https://demo.subject7.com)
  2. The Subject7 API key used in the REST calls. Details are described in the sections below. (In the example above, it is ${{ secrets.S7_API_KEY }})
  3. The name of the Subject7 project that contains the required execution sets. If you don’t use Subject7 projects to isolate different QA teams working on different projects, everything lives in the project named Common; you can either pass that name or hard-code it and drop the parameter. (In the example above, the project name is jira_testing)
  4. The Subject7 project version. If you don’t use Subject7 versioning functionality, the version is called default; again, you can either pass it or hard-code it. (In the example above, the version name is default)
  5. The names of the execution sets you want to run, separated by commas, e.g. set1,set2,set3,set4. (In the example above, we run a single execution set, github_testing)
  6. Whether to fail the run-tests job (and therefore the workflow) if any of the launched executions fails. If set to true, the Python script exits with an error when any execution fails; if false, the execution results are ignored. Ignoring them is useful if you want to run other jobs after run-tests and analyze the results manually once they arrive via the callbacks/notifications of the launched execution sets. If the answer is always the same for you, this parameter can also be hard-coded; a sketch of such a simplification follows this list.
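As mentioned in items 1 and 6, the script can be simplified when some parameters never change. A hypothetical sketch of such a simplification hard-codes the instance URL and the fail-on-error flag in the argument-parsing part of run_tests.py, so the workflow step only passes the remaining four arguments (the values below are illustrative assumptions, not part of the shipped script):

import sys

HOST = "https://demo.subject7.com"   # single on-prem instance, hard-coded (assumption)
ERROR_ON_FAIL = True                 # always fail the job when an execution fails (assumption)

token = sys.argv[1]                      # Subject7 API key
project = sys.argv[2]                    # Subject7 project name
version = sys.argv[3]                    # Subject7 project version
execution_sets = sys.argv[4].split(',')  # comma-separated execution set names

The corresponding call in the run-tests step would then shrink to: python3 .github/tests/run_tests.py ${{ secrets.S7_API_KEY }} jira_testing default github_testing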

Setting up GitHub Secrets and Variables

You may not want to keep some of the 6 parameters above in plain text, either because a parameter is long (like the execution sets list) or because it is sensitive (like the API key).

In this case we can use GitHub’s Secrets and Variables functionality.

To add either, navigate to the repository settings in GitHub, expand the “Secrets and variables” section in the left panel and click “Actions”.

Here you can store the API key as a secret and the execution sets list as a variable.

  1. In the example above, the API key is stored in GitHub secrets under the name S7_API_KEY. As you can see in the Python script call, secrets are injected into a job with the expression template ${{ secrets.SECRET_PARAM_NAME }}, so in the run-tests job it is ${{ secrets.S7_API_KEY }}.
  2. Repository variables are referenced through the vars context. For example, if you create a variable named EXECUTION_SETS_LIST and put the comma-separated execution sets there, you can reference it in the run-tests job as ${{ vars.EXECUTION_SETS_LIST }} (or map it to an environment variable with env: and read it as $EXECUTION_SETS_LIST), as shown in the sketch below.
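Putting both together, the last step of the run-tests job could look like the following sketch, assuming you created a repository variable named EXECUTION_SETS_LIST alongside the S7_API_KEY secret:

      - name: Run tests
        run: |
          python3 .github/tests/run_tests.py https://demo.subject7.com ${{ secrets.S7_API_KEY }} jira_testing default ${{ vars.EXECUTION_SETS_LIST }} false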

Python Script for Managing Test Executions via REST API

This script interacts with the Subject7 REST API and continuously checks the status of test executions. It should be added to the repository as .github/tests/run_tests.py. If you want to change the file location, don’t forget to update the run-tests job accordingly.

import time
import requests
import sys
import datetime


def log_time():
    # Print the current timestamp so each polling round is visible in the job log
    current_time = datetime.datetime.now()
    print("Time now at greenwich meridian is:", current_time)


print("Starting test ...")
log_time()

# Execution states that mean an execution has finished, and statuses that count as a failure
FINAL_STATES = ['COMPLETED', 'ERROR', 'CANCELLED', 'NONE', 'GATEWAY DENIED']
FAIL_STATUSES = ['ERROR', 'FAIL']

# Script arguments (see "Script Launch Parameters" above)
host = sys.argv[1]
token = sys.argv[2]
project = sys.argv[3]
version = sys.argv[4]
execution_sets = sys.argv[5].split(',')
error_on_fail = sys.argv[6].lower() == 'true'

ids = []
execution_failed = False

# Start every requested execution set and remember the returned execution ids
for es in execution_sets:
    print("Running for next test suite: {execution_set}".format(execution_set=es))
    resp = requests.post("{host}/api/v2/executions".format(host=host),
                         json={'name': es, 'configuration': {'projectName': project, 'versionName': version}},
                         headers={'X-API-KEY': token, 'Content-Type': 'application/json'})
    print("Got response \n {resp}".format(resp=resp))
    e_id = resp.json()['id']
    ids.append(e_id)

log_time()

# Poll every 20 seconds until all started executions reach a final state
while True:
    time.sleep(20)
    ids_to_rm = []
    log_time()
    for e_id in ids:
        resp = requests.get("{host}/api/v2/executions/{e_id}".format(host=host, e_id=e_id),
                            headers={'X-API-KEY': token})
        state = resp.json()['executionState']
        print("Status {status} for execution {execution}".format(status=state, execution=e_id))
        if state.upper() in FINAL_STATES:
            ids_to_rm.append(e_id)
            status = resp.json()['executionStatus']
            if error_on_fail is True and status.upper() in FAIL_STATUSES:
                execution_failed = True
    for rm_id in ids_to_rm:
        ids.remove(rm_id)
    if len(ids) == 0:
        break

# Fail the job if requested and at least one execution did not pass
if execution_failed is True:
    print("One or more execution sets has ERROR/FAIL status...")
    sys.exit("Test case execution failed")

The script is intentionally simple and readable. It is just one way to implement such functionality, and you are free to adapt it to your specific needs and preferences.
