
A Modern Guide to Continuous Integration GitLab

Build a robust continuous integration GitLab workflow. Learn to create pipelines, manage runners, secure secrets, and integrate automated security scanning.

Published March 15, 2026 · Updated March 15, 2026


Bringing continuous integration into GitLab is about more than just automation; it’s about making your entire development workflow smarter and more cohesive. By handling builds, tests, and validation directly within your version control system, you eliminate the constant back-and-forth between different tools.

Your repository becomes the true centre of your project, which helps teams deliver better software, faster.

Why GitLab's Approach to CI Is So Effective

In any modern software project, speed and reliability are everything. This is where GitLab's integrated CI/CD really proves its worth. Instead of treating your source code and your CI pipelines as two separate things, GitLab brings them together into one unified platform.

The secret sauce is the .gitlab-ci.yml file. This simple configuration file lives right inside your project repository, alongside your application code. This concept is often called "pipeline as code," and it’s a powerful way to work.

What this means in practice is that every change to your build, test, or deployment process is versioned and reviewed just like any other code change. For teams building on platforms like Supabase or Firebase, this creates an incredibly smooth workflow. A single git push can automatically kick off a whole chain of events:

  • Builds: Compiling your code into a ready-to-deploy package.
  • Tests: Running all your unit, integration, and end-to-end tests to catch regressions.
  • Scans: Checking for security vulnerabilities and ensuring code quality standards are met.
  • Feedback: Instantly letting developers know if their changes passed or failed.
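To make that concrete, here is a minimal sketch of such a pipeline. The stage and job names are illustrative, and the article walks through a fuller version shortly:

```yaml
stages:
  - build
  - test
  - scan

build_app:
  stage: build
  script:
    - npm install                      # prepare the application

run_tests:
  stage: test
  script:
    - npm test                         # unit and integration tests

dependency_scan:
  stage: scan
  script:
    - npm audit --audit-level=high     # fail on high-severity advisories
```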

This immediate feedback is what helps you maintain momentum without sacrificing quality. It’s no surprise that the continuous integration tools market is expected to hit USD 5.36 billion by 2031, as more companies move away from older, high-maintenance CI systems.

By keeping CI/CD inside the version control system, GitLab gets rid of the context switching that burns out developers. It creates a single source of truth that improves collaboration and makes sure everyone is on the same page.

When you look at a CI/CD tools comparison, including GitLab CI, you can see how this integrated approach stands out against standalone tools. It turns the development lifecycle into an efficient, transparent, and automated process. This philosophy also aligns perfectly with modern DevOps, a topic we explore further in our guide on the relationship between security and DevOps.

Building Your First Pipeline in .gitlab-ci.yml

Alright, enough with the theory. Let's get our hands dirty and build a real .gitlab-ci.yml file. We’re going to skip the usual "hello world" stuff and jump straight into a practical pipeline for a web application backend. This one file is the brain behind your entire GitLab CI/CD process, dictating every step from code commit to deployment.

The .gitlab-ci.yml file revolves around two core concepts: stages and jobs. Think of stages as the major phases in your assembly line, like build and test. A job is a specific task that happens within a stage, containing the actual commands you want to run.

The whole process, from a developer's push to a fully checked result, is beautifully simple when you visualise it.

A clear diagram illustrating the GitLab CI process flow from coding to deployment.

As you can see, your code kick-starts a process that GitLab orchestrates based entirely on the pipeline you're about to define.

Defining Your Pipeline Stages

The first thing we need to do is map out our pipeline's stages. The order you list them in is crucial because GitLab runs them sequentially. Jobs in a later stage won't even think about starting until every job in the previous stage has passed.

For a typical backend service, a simple two-stage pipeline is a classic, robust starting point.

```yaml
stages:
  - build
  - test
```

This tells GitLab that all jobs assigned to the build stage must complete successfully before anything in the test stage kicks off. It's a simple guardrail that stops you from wasting time trying to test code that couldn't even compile.

Creating Your First Build Job

With our stages defined, let's create our first job for the build stage. We’ll build a Node.js application. We need to tell the job which Docker image to use, which gives us a clean, predictable environment for every single run.

```yaml
build_app:
  stage: build
  image: node:18-alpine
  script:
    - echo "Starting the build process..."
    - npm install
    - echo "Build complete."
```

Here, our build_app job runs inside a lean node:18-alpine container. The script block is where the action happens—we install our project dependencies with npm install, a standard first step for any Node.js project.

A quick word of advice: I always recommend using specific and lightweight Docker images, like the -alpine variants. They make your jobs start faster and use fewer resources, which is a massive win for keeping your pipelines speedy and efficient. Heavy images can add frustrating delays.

Adding an Automated Test Job

Once the application is built, we need to know if it actually works. This is where our test job comes in. We’ll set it up similarly, using a Node.js image to run our test suite, but this time we’ll assign it to the test stage.

I’m also throwing in one of GitLab’s incredibly useful predefined variables, $CI_COMMIT_REF_NAME. It automatically contains the name of the branch or tag being built, which makes your logs much more informative.

```yaml
run_tests:
  stage: test
  image: node:18-alpine
  script:
    - echo "Running tests on branch $CI_COMMIT_REF_NAME..."
    - npm install   # Needed again for this isolated job
    - npm test
    - echo "Tests finished."
```

With just those two jobs, you’ve created a foundational CI pipeline. Now, every time you push new code, GitLab will automatically build the app and run your tests. This immediate feedback loop is the very essence of continuous integration and is your first line of defence against shipping broken code.

Configuring Runners and Securing Pipeline Secrets

GitLab runners securing pipeline secrets with shared, group, and specific variables stored in a vault.

Your .gitlab-ci.yml file is just a set of instructions; the real work happens on GitLab Runners. These are the agents that actually execute the jobs you’ve defined, and your pipeline's performance and security depend heavily on how you set them up.

Choosing the right type of runner is one of the first big decisions you'll make. GitLab gives you three main options:

  • Shared Runners: These are managed by GitLab and available to everyone on the instance. They're perfect for getting started or for small personal projects where you just need something to run your jobs without any setup. The downside? You're sharing resources, so they can get busy and slow things down.
  • Group Runners: This is a great middle ground for teams. You can provision a set of runners and make them available to all projects within a specific GitLab group, giving you shared but dedicated resources.
  • Specific Runners: These are tied to individual projects and offer the most control. This is what you’ll want for most production apps, especially if you have special requirements like needing macOS for an iOS build or a specific hardware configuration.

For any serious project, particularly one that interacts with a Supabase or Firebase backend, you should really be using your own Specific Runners. You gain full control over the environment, which is a massive win for both performance and security.

Registering and Tagging Your Runners

Spinning up your own runner is pretty simple. You install the GitLab Runner software on a server you manage (it could be a VM, a physical machine, or a container) and then "register" it with your GitLab instance.

During this registration process, you’ll be prompted to add tags. Don't skip this. Tags are what make your runners truly flexible. For instance, you could tag one runner docker-build and another macos-xcode. Inside your .gitlab-ci.yml, you can then direct a job to a specific runner by using its tag, ensuring your specialised jobs always run on the right machine.
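For example, a job can be pinned to a tagged runner like this (the tag name here is the illustrative one from above):

```yaml
build_ios_app:
  stage: build
  tags:
    - macos-xcode   # only runners registered with this tag pick up the job
  script:
    - echo "Running on a dedicated macOS runner"
```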

Protecting Your Pipeline Secrets

Sooner or later, your pipeline will need secrets—API keys, database passwords, or deployment tokens. I can't stress this enough: never, ever hardcode them directly in your .gitlab-ci.yml. That file is part of your repository, and committing secrets to version control is a recipe for disaster.

This is exactly what GitLab’s CI/CD variables are for. You can define these secrets at the project, group, or even instance level, keeping them safely out of your codebase. The real power comes from two specific settings: Protected and Masked.

A Protected variable is only available to jobs running on your protected branches or tags (like main or production). A Masked variable’s value is hidden in job logs, replaced with [MASKED], which stops secrets from being accidentally exposed.
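In a job, these variables are referenced like ordinary environment variables. A minimal sketch, assuming a Protected and Masked variable named DEPLOY_TOKEN and a hypothetical deploy.sh script in your repository:

```yaml
deploy_production:
  stage: deploy
  script:
    # $DEPLOY_TOKEN is defined under Settings > CI/CD > Variables.
    # Its value appears as [MASKED] in the job log.
    - ./deploy.sh --token "$DEPLOY_TOKEN"
  rules:
    # Protected variables are only injected on protected refs like main.
    - if: $CI_COMMIT_BRANCH == "main"
```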

This built-in approach to security is a major reason for GitLab's widespread adoption. In the UK, which makes up about 5% of all GitLab customers, there's a huge emphasis on tools that help with GDPR compliance and provide clear audit trails. This naturally pushes companies towards platforms that bake security directly into the CI/CD workflow. If you're looking for more strategies on this front, our guide on API key management offers some great best practices.

Taming Slow Pipelines with Caching and Artifacts

Let's be honest, slow pipelines are a massive drain on productivity. Nothing kills your flow more than waiting ten minutes just to see if a small commit broke something. That rapid feedback loop is the whole point of continuous integration, and when it slows to a crawl, the benefits start to evaporate.

Thankfully, GitLab gives us two fantastic tools to fight back: caching and artifacts. Mastering these is a game-changer for speeding things up.

Think of caching as a way to stop doing the same heavy lifting over and over. If you have three different jobs that all run npm install, you’re downloading the same dependencies three times. Caching lets you save the node_modules directory after the first run and simply reuse it for the other jobs in the same pipeline. It's a simple change that can shave minutes off your pipeline's execution time.

Artifacts serve a different, but related, purpose. They're how you pass the results of one job on to the next one. Imagine a build job compiles your code into a binary. Instead of having the next test job rebuild everything from scratch, it can just grab the binary—the "artifact"—from the build job and get straight to testing.

Getting Caching Right

The real trick to effective caching is the cache:key. This little setting tells GitLab when to create a fresh cache and when to pull down an existing one. A common pitfall I've seen is using a key that's too generic, which can lead to your jobs running with outdated, stale dependencies.

The best approach is to tie your cache key directly to your project's dependency lock file. For a Node.js project, that means keying off the package-lock.json file. This way, a new cache is only built when you’ve actually changed your dependencies.

```yaml
build_app:
  stage: build
  script:
    - npm install
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
```

With this config, GitLab generates a unique key from the contents of package-lock.json. If that file hasn't been modified since the last pipeline run, GitLab fetches the saved node_modules directory, and the npm install command finishes in seconds.

Passing Work Between Jobs with Artifacts

Now, let's connect our pipeline stages using artifacts. Once our build_app job has installed our dependencies, we need to get that node_modules folder over to our run_tests job. This is where the artifacts block comes in.

```yaml
build_app:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 hour

run_tests:
  stage: test
  script:
    - npm test
```

Here's what happens: at the end of the build_app job, GitLab zips up the node_modules directory and uploads it as an artifact. When the run_tests job starts, it automatically downloads and unpacks that artifact before running any scripts. This gives the test job everything it needs without having to run npm install all over again. Notice the expire_in: 1 hour line—that's a good habit to get into. It automatically cleans up old artifacts so they don't consume storage space forever.
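One refinement worth knowing: by default, a job downloads the artifacts of every job from all earlier stages. The dependencies keyword lets you restrict that to just the jobs you actually need:

```yaml
run_tests:
  stage: test
  dependencies:
    - build_app   # fetch only build_app's artifacts, nothing else
  script:
    - npm test
```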

Integrating Automated Security Scanning into Your Pipeline

A proper CI/CD workflow does more than just build and test features—it builds and tests secure features from the get-go. This is what we mean by "shifting security left." Instead of treating security as a final, manual hurdle, we’re embedding it as an automated, proactive check for every single commit. It’s the heart of DevSecOps, and it’s how your pipeline becomes your first line of defence against vulnerabilities.

Let's make this tangible. For anyone building on platforms like Supabase or Firebase, this is non-negotiable. It's surprisingly easy to misconfigure Row Level Security (RLS) policies or accidentally expose sensitive database functions. These are the exact kinds of mistakes that can lead to catastrophic data leaks, and they are precisely what automated scanning is designed to catch early and often.

Diagram illustrating a security scan process, followed by an app audit using Supabase/Firebase, resulting in pass/fail report.

The diagram above gives you a clear picture of how a specialised tool like AuditYour.App can slot right into your pipeline. It runs its checks and delivers a simple pass-or-fail verdict, turning an abstract security policy into a concrete, unavoidable step.

Adding a Security Stage to Your Pipeline

First things first, we need to tell GitLab about our new security-focused stage. In your .gitlab-ci.yml file, I always recommend placing the security stage right after your test stage but before any deploy stages. This way, you’re only scanning code that has already passed its functional tests, which saves you both time and compute resources.

```yaml
stages:
  - build
  - test
  - security
  - deploy
```

With the stage defined, we can now create the job that will actually run the scan. For this example, we’ll use AuditYour.App, which is conveniently triggered with a simple curl command. You can adapt this same pattern for just about any other CLI-based security tool you might be using.

To get this working, you’ll need to pop over to your GitLab project’s CI/CD settings and configure two variables:

  • AUDITYOURAPP_API_KEY: This is your secret API key for authentication. Make absolutely sure this variable is set to both masked and protected.
  • AUDITYOURAPP_PROJECT_ID: This is the unique identifier for the project you want AuditYour.App to scan.

As you start weaving automated scanning into your pipeline, it's helpful to see how it fits into the bigger picture. This guide to security testing in software development provides some excellent context on modern practices that will complement what you’re building here.

Failing the Pipeline on Critical Findings

Now for the job itself. The configuration below uses a lightweight Alpine image with curl and jq (a command-line JSON processor). We’ll use jq to parse the scanner’s JSON response and check for any high-severity findings. If it finds any, the job fails.

```yaml
supabase_security_scan:
  stage: security
  image:
    name: alpine/curl
    entrypoint: [""]
  before_script:
    - apk add --no-cache jq
  script:
    - |
      scan_result=$(curl -s -X POST "https://api.audityour.app/v1/scans" \
        -H "Authorization: Bearer $AUDITYOURAPP_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{"project_id": "'"$AUDITYOURAPP_PROJECT_ID"'"}')
      echo "Scan initiated. Waiting for results..."
      # In a real-world scenario, you would poll a 'results' endpoint.
      # For this example, we will parse the initial response.
      high_severity_issues=$(echo "$scan_result" | jq '.summary.high')
      if [ "$high_severity_issues" -gt 0 ]; then
        echo "ERROR: Found $high_severity_issues high-severity security issues. Failing pipeline."
        exit 1
      else
        echo "SUCCESS: No high-severity issues found. Proceeding."
      fi
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

This approach of automatically failing the build is a non-negotiable best practice. It creates a powerful incentive for developers to fix security issues immediately, rather than letting them accumulate as technical debt. To dive deeper, you can also check out our comprehensive guide to automated security scanning at https://audityour.app/guides/automated-security-scanning-guide.

Given that GitLab holds about two-thirds of the self-managed Git market, its CI/CD tooling has become a standard for countless development teams, particularly across the UK. This widespread adoption makes it much easier to find well-supported security tools that integrate smoothly into the GitLab ecosystem you’re already familiar with.

Frequently Asked Questions About GitLab CI

As you get your hands dirty with GitLab CI, you'll naturally run into some common questions. I've been there. This section pulls together some practical answers to the hurdles and head-scratchers that often pop up when you're setting things up.

What Is the Difference Between GitLab CI and Jenkins?

The biggest difference comes down to integration. GitLab CI is baked right into GitLab, meaning your .gitlab-ci.yml file lives in the same repository as your code. This creates a single source of truth, which simplifies everything from setup to maintenance.

Jenkins, on the other hand, is a standalone powerhouse. It’s incredibly powerful and customisable, but that flexibility comes at a cost. You have to set it up, manage it separately, and often rely on a whole ecosystem of plugins just to get it talking to your code repository. If you're aiming for a unified, all-in-one platform, GitLab CI just feels more cohesive and requires less administrative heavy lifting.

How Can I Speed Up My GitLab Pipelines?

Slow pipelines are a real drag on productivity. Luckily, a few battle-tested strategies can make a huge difference to your pipeline execution times.

  • Smart Caching: Use the cache directive with a specific key—something tied to your dependency file like package-lock.json. This stops your pipeline from re-downloading the same packages on every single run.
  • Run Jobs in Parallel: Look for independent tasks in your pipeline. If they don't depend on each other, run them in parallel. It's a simple change that can slash your overall runtime.
  • Lean on Artifacts: Don't rebuild things you don't have to. Pass built files, test reports, or other dependencies between stages using artifacts.
  • Optimise Your Docker Images: Smaller is faster. Always choose a specific, slim base image (like node:18-alpine instead of the generic node:latest) to cut down on image pull times.
  • Use Dedicated Runners: For critical pipelines, shared runners can be a bottleneck. Switching to dedicated Specific or Group Runners with guaranteed resources will give you much more predictable and faster performance.
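The parallelism and artifact tips above can be sketched in one config. Jobs in the same stage already run concurrently when runner capacity allows, and the needs keyword lets a later job start as soon as its specific dependency passes instead of waiting for the whole stage (job names are illustrative):

```yaml
lint:
  stage: test
  script:
    - npm run lint   # assumes a "lint" script in package.json

unit_tests:
  stage: test
  script:
    - npm test

deploy_preview:
  stage: deploy
  needs: ["unit_tests"]   # starts as soon as unit_tests passes,
  script:                 # without waiting for lint to finish
    - echo "Deploying preview build"
```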

Can I Use GitLab CI for Mobile App Development?

Absolutely. GitLab CI is a great fit for building, testing, and deploying mobile apps. The key is setting up runners on macOS machines, whether they're your own hardware or a cloud-based service, which you'll need for building and signing iOS apps.

A typical mobile pipeline often has stages to install dependencies (using tools like CocoaPods or Gradle), run unit and UI tests inside simulators, and build the final IPA or APK. You can even take it a step further and automate deployment straight to TestFlight or the Google Play Console. It's also the perfect place to integrate security scanning to catch things like hardcoded API keys before they ever reach the app store.
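As a rough sketch of such an iOS pipeline, with workspace, scheme, and simulator details as placeholders you would replace with your own:

```yaml
stages:
  - prepare
  - test
  - build

install_dependencies:
  stage: prepare
  tags: [macos]   # runs only on a registered macOS runner
  script:
    - pod install

ui_tests:
  stage: test
  tags: [macos]
  script:
    # Workspace, scheme, and destination are placeholders.
    - xcodebuild test -workspace App.xcworkspace -scheme App
        -destination 'platform=iOS Simulator,name=iPhone 15'

build_archive:
  stage: build
  tags: [macos]
  script:
    - xcodebuild archive -workspace App.xcworkspace -scheme App
        -archivePath build/App.xcarchive
```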

What Are Common Mistakes to Avoid in .gitlab-ci.yml?

A few common tripwires can make your pipelines inefficient or, even worse, insecure. One of the biggest mistakes I see is creating huge, monolithic jobs. It's much better to break tasks into smaller, focused jobs. They're easier to debug, and you can often run them in parallel.

Another classic oversight is forgetting to cache dependencies, which leads to painfully slow and expensive pipelines. On the security front, never, ever hardcode secrets like API keys directly in .gitlab-ci.yml. That file is part of your repository. Always use GitLab's protected and masked CI/CD variables to handle sensitive data securely.


Ready to secure your Supabase or Firebase projects automatically? AuditYour.App finds and helps you fix RLS misconfigurations and other critical vulnerabilities directly within your CI pipeline. Get your free scan and upgrade your security grade in minutes at https://audityour.app.
