DevOps: Supercharging your mobile app CI/CD Pipeline with Bitbucket & Expo Application Services

I was tasked with optimizing our CI/CD flow on a recent project following months of incremental changes. We had started the product on Expo Application Services (EAS), migrated to Azure, then migrated back to EAS with an “EAS Production” subscription. Restructuring our CI pipeline for a third time, I approached the job with an eye for even further optimization; conferring with the rest of the team, we identified the major pain points I hoped to resolve:

  • App version and runtime version needed to be set manually before each store release.
  • Our analytics token needed to be set manually before each store release.
  • Store builds had to be uploaded to their respective app stores by hand; we wanted to submit them on-demand from the pipeline.
  • Build versions could “drift” during normal development, and there was no easy way to detect this without hand-checking.
  • Test build download links needed to be scraped manually from EAS results or from the pipeline output to share with the QA team.
  • Over-the-air updates had to be built and published locally; we wanted to do both directly from the pipeline.

Keeping in mind that many of these tasks would require some per-file string manipulation, I set out painfully aware of how flimsy my regex skills are. In the end (after extensive Googling, easily hundreds of test pipeline runs, substantial help from Copilot, and only a few hiccups along the way), I was able to achieve all of our objectives in a convenient and efficient manner. Besides the huge benefit of eliminating the half-dozen manual edits we previously required before store releases, the entire app submission process can now be completed by _any_ team member, regardless of the underlying OS of their work machine (our Windows-based developers, for example, were limited in how they could contribute to iOS App Store releases).

The End Result

The final product was nearly 350 lines of code (including comments), which I’ll share in full (gist link) here before we dive in and break down the individual components:

# This is an expo pipeline configuration
# https://docs.expo.dev/build/building-on-ci/
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
#
# You can specify a cache to speed up build times
# https://support.atlassian.com/bitbucket-cloud/docs/cache-dependencies/
#
# Local build reference:
# https://docs.expo.dev/build-reference/local-builds/
#
# Limitations
#   Some of the options available for cloud builds are not available locally. Limitations you should be aware of:
#   You can only build for a specific platform (option all is disabled).
#   Customizing versions of software is not supported, fields node, yarn, fastlane, cocoapods, ndk, image in eas.json are ignored.
#   Caching is not supported.
#   EAS Secrets are not supported (set them in your environment locally instead).
# You are responsible for making sure that the environment has all necessary tools installed:
#   - Node.js/yarn/npm
#   - fastlane (iOS only)
#   - CocoaPods (iOS only)
#   - Android SDK and NDK

image: node:16.16.0

clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly

definitions:
  caches:
    node: ./node_modules
    yarn: /usr/local/share/.cache/yarn
    sonar: ~/.sonar/cache
  # https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/
  steps:
    - parallel:
        - step: &lint
            name: Lint app
            caches:
              - node
              - yarn
            script:
              - mv .npmrc_config .npmrc
              - yarn
              - yarn lint --quiet
        - step: &test
            name: Test and analyze on Jest
            caches:
              - node
              - yarn
            script:
              - mv .npmrc_config .npmrc
              - mv .test.env .env
              - yarn
              - yarn test:ci
            artifacts:
              - coverage/**
    - step: &doctor
        name: Expo doctor
        script:
          - yarn
          - npx expo-doctor
    - step: &test-sonarcloud
        name: Test and analyze on SonarCloud
        caches:
          - node
          - sonar
        script:
          - pipe: sonarsource/sonarcloud-scan:2.0.0
            variables:
              SONAR_TOKEN: $SONAR_TOKEN
              EXTRA_ARGS: '-Dsonar.javascript.lcov.reportPaths="$BITBUCKET_CLONE_DIR/coverage/lcov.info"'
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.6
            variables:
              SONAR_TOKEN: $SONAR_TOKEN
    - step: &build
        name: Build app
        caches:
          - node
          - yarn
        script:
          - mv .npmrc_config .npmrc
          - yarn
          - npx eas-cli build --platform all --non-interactive --profile preview
    - step: &distribute
        name: Validate & Distribute App
        deployment: qa
        caches:
          - node
          - yarn
        script:
          - apt-get update
          - apt-get -y install jq

          - mv .npmrc_config .npmrc
          - yarn

          # Set the build profile based on deployment config, which allows us to target the right buildProfile from Expo
          - PROFILE=preview
          - if [ "$BITBUCKET_DEPLOYMENT_ENVIRONMENT" != "uat" ]; then PROFILE=production; fi

          - echo "Validating build output and distributing to $PROFILE channel"

          # iOS
          - IOS=$(npx eas-cli build:list --json --limit=1 --platform=ios --buildProfile $PROFILE --non-interactive --status finished)
          - IBU=$(echo $IOS | jq -r '.[0].artifacts.buildUrl')
          - IBV=$(echo $IOS | jq -r '.[0].appBuildVersion')
          - IAV=$(echo $IOS | jq -r '.[0].appVersion')
          - IRV=$(echo $IOS | jq -r '.[0].runtimeVersion')

          # Check if the app version and runtime version match
          - if [ "$IAV" != "$IRV" ]; then echo "iOS runtime version $IRV does not match app version $IAV. This probably means something has changed in-code that the pipeline does not currently handle"; exit 1; fi

          # Android
          - AND=$(npx eas-cli build:list --json --limit=1 --platform=android --buildProfile $PROFILE --non-interactive --status finished)
          - ABU=$(echo $AND | jq -r '.[0].artifacts.buildUrl')
          - ABV=$(echo $AND | jq -r '.[0].appBuildVersion')
          - AAV=$(echo $AND | jq -r '.[0].appVersion')
          - ARV=$(echo $AND | jq -r '.[0].runtimeVersion')

          # Check if the app version and runtime version match
          - if [ "$AAV" != "$ARV" ]; then echo "Android runtime version $ARV does not match app version $AAV. This probably means something has changed in-code that the pipeline does not currently handle"; exit 1; fi

          # Check that iOS and Android build versions match
          - if [ "$IBV" != "$ABV" ]; then echo "iOS Build Version $IBV does not match Android Build Version $ABV. While this has no technical implications, we generally try to keep these numbers unified for tracking purposes. Use \`npx eas-cli build:version:set\` to sync build versions, then re-run the pipeline"; exit 1; fi
          - echo "iOS Build Version $IBV matches Android Build Version $ABV. Pass"

          # Check that iOS and Android app versions match
          - if [ "$IAV" != "$AAV" ]; then echo "iOS App Version $IAV does not match Android App Version $AAV. Use \`npx eas-cli build:version:set\` to sync app versions, then re-run the pipeline"; exit 1; fi
          - echo "iOS App Version $IAV matches Android App Version $AAV. Pass"

          - echo "Distribute app version $IAV build $IBV"

          # iOS AppLive Release
          - curl -u "$BROWSERSTACK_TOKEN" -X POST "https://api-cloud.browserstack.com/app-live/upload" -F "data={\"url\":\"$IBU\"}"
          - echo "ios download url $IBU"

          # Android AppLive Release
          - curl -u "$BROWSERSTACK_TOKEN" -X POST "https://api-cloud.browserstack.com/app-live/upload" -F "data={\"url\":\"$ABU\"}"
          - echo "android download url $ABU"
    - step: &prepare
        name: Prepare app for release
        caches:
          - node
          - yarn
        script:
          - FNF=" not found. Ensure the file exists or is no longer needed and update this pipeline accordingly"
          - mv .npmrc_config .npmrc
          - yarn
          - apt-get install -y sed # sed is usually preinstalled in this image; kept as a safeguard

          # When the build is triggered from a branch, the target name is the branch name. If triggered from a tag, we must
          # use the tag name as the target name. If neither are present, we use the default target name.
          #
          # See: https://support.atlassian.com/bitbucket-cloud/docs/pipeline-start-conditions/#Tags
          - TGT=${BITBUCKET_BRANCH:-$BITBUCKET_TAG}
          - echo "Target name is $TGT"
    
          # Set app version, e.g. 2024.0.1, based on the branch name
          - REV=$(echo "$TGT" | grep -Eo '[0-9]{4}\.[0-9]{1,2}\.[0-9]{1,2}')
          - echo "Set App Version to $REV"

          # Confirm a file exists then replace a string in the file. If the file does not exist, exit with an error.
          # The edits occurs in-place and are not committed back to the repository.
          - if [ ! -e "app.json" ]; then echo "app.json $FNF"; exit 1; fi
          - sed -i "s/0000.0.0/$REV/g" app.json

          - if [ ! -f ios/MyExpoApp/Info.plist ]; then echo "ios/MyExpoApp/Info.plist $FNF"; exit 1; fi
          - sed -i "s/0000.0.0/$REV/g" ios/MyExpoApp/Info.plist

          - if [ ! -f ios/MyExpoApp/Supporting/Expo.plist ]; then echo "ios/MyExpoApp/Supporting/Expo.plist $FNF"; exit 1; fi
          - sed -i "s/0000.0.0/$REV/g" ios/MyExpoApp/Supporting/Expo.plist

          - if [ ! -f android/app/build.gradle ]; then echo "android/app/build.gradle $FNF"; exit 1; fi
          - sed -i "s/0000.0.0/$REV/g" android/app/build.gradle

          - if [ ! -f android/app/src/main/res/values/strings.xml ]; then echo "android/app/src/main/res/values/strings.xml $FNF"; exit 1; fi
          - sed -i "s/0000.0.0/$REV/g" android/app/src/main/res/values/strings.xml

          # Remove dev bundle id (Deprecated)
          # - if [ ! -f android/app/build.gradle ]; then echo "android/app/build.gradle $FNF"; exit 1; fi
          # - sed -i 's/applicationIdSuffix ".dev" \/\/ Remove this for store releases.//g' android/app/build.gradle

          # Set Adobe App ID
          - echo "Set Adobe App ID to $ADOBE_APP_ID_PROD"

          # N.B - Adobe App ID contains a / so we supply sed with an alternate delimiter
          - if [ ! -f android/app/src/main/java/com/MyExpoApp/mobile/MainApplication.java ]; then echo "android/app/src/main/java/com/MyExpoApp/mobile/MainApplication.java $FNF"; exit 1; fi
          - sed -i "s~ADOBE_APP_ID_PLACEHOLDER~$ADOBE_APP_ID_PROD~g" android/app/src/main/java/com/MyExpoApp/mobile/MainApplication.java

          - if [ ! -f ios/AdobeBridge.m ]; then echo "ios/AdobeBridge.m $FNF"; exit 1; fi
          - sed -i "s~ADOBE_APP_ID_PLACEHOLDER~$ADOBE_APP_ID_PROD~g" ios/AdobeBridge.m

          # Build the app
          - echo "Building the app for all platforms:"
          - npx eas-cli build --platform all --non-interactive
    - step: &pingback
        name: Pingback
        script:
          - apt-get update
          - apt-get -y install jq

          - mv .npmrc_config .npmrc
          - yarn

          # Get the id for current PR
          - PRID=$BITBUCKET_PR_ID
          - if [ -z "$PRID" ]; then echo "No PR ID found. Exiting"; exit 0; fi # exit without error

          # Set the endpoint URL
          # https://developer.atlassian.com/cloud/bitbucket/rest/api-group-pullrequests/
          - EPT="https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/$PRID/"

          # Set the build profile based on deployment config, which allows us to target the right buildProfile from Expo
          - PROFILE=preview
          - if [ "$BITBUCKET_DEPLOYMENT_ENVIRONMENT" != "uat" ]; then PROFILE=production; fi

          - echo "Searching for latest builds on $PROFILE channel"

          # Fetch the latest completed build versions for iOS and Android. N.B - it is possible that a new, unrelated build may have
          # finished since this pipeline was triggered, but before the pingback step & thus return the wrong download URL to append.
          # This is a limitation of the current implementation
          
          # iOS
          - IOS=$(npx eas-cli build:list --json --limit=1 --platform=ios --buildProfile $PROFILE --non-interactive --status finished)
          - IBU=$(echo $IOS | jq -r '.[0].artifacts.buildUrl')
          - IBV=$(echo $IOS | jq -r '.[0].appBuildVersion')

          # Android
          - AND=$(npx eas-cli build:list --json --limit=1 --platform=android --buildProfile $PROFILE --non-interactive --status finished)
          - ABU=$(echo $AND | jq -r '.[0].artifacts.buildUrl')
          - ABV=$(echo $AND | jq -r '.[0].appBuildVersion')

          # Append a warning to the PR if the build versions do not match. Technically, this should never happen since we check during the "distribute" phase
          # but I've added this check here also in the event the "distribute" phase is skipped for some reason or if the build:list somehow returns different instances
          - BV_WARN=""
          - if [ "$IBV" != "$ABV" ]; then BV_WARN="> **WARN:** iOS Build Version $IBV does not match Android Build Version $ABV. While this has no technical implications, we generally try to keep these numbers unified for tracking purposes. Use \`npx eas-cli build:version:set\` to sync build versions, then re-run the pipeline."; fi
          
          # Get PR content. Here, jq -r was also spitting out garbage for some reason, so we need to do some fancy string manipulation later
          - OG_CONTENT=$(curl --location $EPT --header "Authorization:Bearer $PINGBACK_TOKEN" --header "Accept:application/json" --header "Content-Type:application/json" | jq '.summary.raw')
          - echo $OG_CONTENT

          # Fancy string manipulation. Remove pingback content, if it exists, from original PR description and any extra quotes which will befuddle cURL later
          - DEV_CONTENT=$(echo $OG_CONTENT | sed 's/§.*//')
          - DEV_CONTENT=$(echo $DEV_CONTENT | tr -d '"')

          # Default content to add to PR description
          - APPEND_CONTENT="§ **my-expo-app-mobile-pipeline:** ($(date '+%Y-%m-%d %r %Z')) \n\n >🍏 ($IBV): [$IBU]($IBU) \n\n >🤖 ($ABV): [$ABU]($ABU)"

          # Concat the PR description, default pingback content, and optional warning (if applicable)
          - NEW_CONTENT="$DEV_CONTENT\n\n\n\n$APPEND_CONTENT\n\n$BV_WARN"

          - echo "$NEW_CONTENT" # check this for extra quotes here if the PUT fails later

          # PUT request to update the PR description with the new content
          - curl --request PUT --url $EPT --header "Authorization:Bearer $PINGBACK_TOKEN" --header "Accept:application/json" --header "Content-Type:application/json" --data "{\"description\":\"$NEW_CONTENT\"}"      
pipelines:
  branches:
    '{main}':
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
      - step: *build
  pull-requests:
    '*chore/*':
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
    'release/*':
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
      - step: *prepare
      - step:
          <<: *distribute
          deployment: uat # for tagging/tracking purposes. These builds cannot be installed via BrowserStack, etc
      - step: *pingback
      - step:
          name: Submit to App Stores
          trigger: manual
          deployment: production
          caches:
            - node
            - yarn
          script:
            - mv .npmrc_config .npmrc
            - yarn
            - npx eas-cli submit --platform all --latest --non-interactive
    '**': # triggers if no other specific pipeline was triggered
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
      - step: *build
      - step: *distribute
      - step: *pingback
  custom: # Pipelines that can only be triggered manually
    updates-preview:
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
      - step:
          name: Update Patch - Preview
          deployment: qa
          caches:
            - node
            - yarn
          script:
            - mv .npmrc_config .npmrc
            - yarn
            - echo "Decrypt .env file from Bitbucket Secrets"
            - (umask 077; echo $ENV | base64 --decode > $BITBUCKET_CLONE_DIR/.env)
            - if [ ! -e "$BITBUCKET_CLONE_DIR/.env" ]; then echo "Missing env file"; exit 1; fi
            - echo "Creating an update patch for channel PREVIEW"
            - COMMIT_MESSAGE=`git log --format=%B -n 1 $BITBUCKET_COMMIT`
            - echo $COMMIT_MESSAGE
            - npx eas-cli update --channel preview --message "$COMMIT_MESSAGE" --non-interactive
    updates-prod:
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
      - step:
          name: Update Patch - Prod
          deployment: production
          caches:
            - node
            - yarn
          script:
            - mv .npmrc_config .npmrc
            - yarn
            - echo "Decrypt .env file from Bitbucket Secrets"
            - (umask 077; echo $ENV | base64 --decode > $BITBUCKET_CLONE_DIR/.env)
            - if [ ! -e "$BITBUCKET_CLONE_DIR/.env" ]; then echo "Missing env file"; exit 1; fi
            - echo "Creating an update patch for channel PRODUCTION"
            - COMMIT_MESSAGE=`git log --format=%B -n 1 $BITBUCKET_COMMIT`
            - echo $COMMIT_MESSAGE
            - npx eas-cli update --channel production --message "$COMMIT_MESSAGE" --non-interactive

Default Pull Request Builds

Let’s take a look at the template for builds triggered by a pull request:

pipelines:
  pull-requests:    
    [...] 
    '**': # triggers if no other specific pipeline was triggered
      - parallel:
          - step: *lint
          - step: *test
          - step: *doctor
      - step: *test-sonarcloud
      - step: *build
      - step: *distribute
      - step: *pingback

First, we indicate that our pipeline should start when a pull request is created on our Bitbucket repository. Our team had a fairly standard branch-naming convention, but by using the generic pattern-matching value '**', we ensure the default behavior for PR builds is always the same. (N.B. – we take slightly different actions when the branch name matches a distinct pattern, such as '*chore/*', which you can review above. I’ll address the slight variations we take for release/* branches later in this post.) Let’s review the major “phases” of the build (which don’t line up neatly with the build “steps”, for technical reasons):

prebuild phase

The prebuild phase consists of whatever checks we need to run before we even start building; this includes executing our linter and Jest test suite alongside Expo Doctor. If any of these steps fail, we stop the pipeline and report the issue, saving us valuable build time and resources. These three steps can run in parallel, since they have no dependencies on one another. For ease of use, we define all of our pre-build steps under the definitions section of our pipeline YAML:

definitions:
  caches:
    node: ./node_modules
    yarn: /usr/local/share/.cache/yarn
    sonar: ~/.sonar/cache
  # https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/
  steps:
    - parallel:
        - step: &lint
            name: Lint app
            caches:
              - node
              - yarn
            script:
              - mv .npmrc_config .npmrc
              - yarn
              - yarn lint --quiet
        - step: &test
            name: Test and analyze on Jest
            caches:
              - node
              - yarn
            script:
              - mv .npmrc_config .npmrc
              - mv .test.env .env
              - yarn
              - yarn test:ci
            artifacts:
              - coverage/**
    - step: &doctor
        name: Expo doctor
        script:
          - yarn
          - npx expo-doctor
    - step: &test-sonarcloud
        name: Test and analyze on SonarCloud
        caches:
          - node
          - sonar
        script:
          - pipe: sonarsource/sonarcloud-scan:2.0.0
            variables:
              SONAR_TOKEN: $SONAR_TOKEN
              EXTRA_ARGS: '-Dsonar.javascript.lcov.reportPaths="$BITBUCKET_CLONE_DIR/coverage/lcov.info"'
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.6
            variables:
              SONAR_TOKEN: $SONAR_TOKEN

Importantly, we keep a .test.env file in our repository to ensure consistent test execution. N.B. – as part of CI, we execute the mv command against .test.env to overwrite the standard .env file for the testing phase. Read more about dotenv for React Native here.

Following the successful execution of all three parallel steps, we can analyze our code with SonarCloud. Because of how SonarCloud evaluates the code, this phase must run in sequence; attempting to execute it in parallel with the other pre-build commands would cause the results of SonarCloud’s evaluation not to be reported (as I learned the hard way)! Our team found SonarCloud’s assessment of “issues” to be somewhat subjective, so we don’t fail the pipeline at this juncture if the software flags anything for review. With that done, the pre-build phase of our CI is complete.

build phase

Now on to the bulk of the pipeline’s runtime: the build phase. For our standard builds, we lean very heavily on what comes out of the box with Expo, such as relying on the automatic build number increment (autoIncrement). You can review the full smorgasbord of options in the official documentation, but for simplicity, here’s what it looks like for our internal releases to the QA team:

    "preview": {
      "channel": "preview",
      "autoIncrement": true,
      "android": {
        "buildType": "apk",
        "image": "latest"
      },
      "distribution": "internal",
      "resourceClass": "large"
    }

Then, we’re just supplying eas-cli with the options to indicate we want this build flavor specifically, like so:

npx eas-cli build --platform all --non-interactive --profile preview

Distribute phase

Here’s where the build starts to get interesting: our QA team uses a combination of physical hardware and remote devices via BrowserStack’s App Live service. Rather than ask our QA team to visit the Expo build URL, download the build artifact, and then upload it manually to App Live, we can leverage BrowserStack’s REST API to submit our builds automatically, combining it with the EAS command-line tools and jq. But first, let’s set up the request to EAS by selecting the build profile we want to query. Using variables Bitbucket provides, we can determine the target profile based on the deployment tag, which reduces the chance our request to EAS yields the “wrong” build.

          # Set the build profile based on deployment config, which allows us to target the right buildProfile from Expo
          - PROFILE=preview
          - if [ ! $BITBUCKET_DEPLOYMENT_ENVIRONMENT = uat ]; then PROFILE=production; fi
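This two-way check can also be written as an explicit mapping. Here’s a hedged sketch (not what the pipeline above runs, just an equivalent shape that leaves room for more environments later):

```shell
# Sketch: map each Bitbucket deployment environment to an EAS build profile.
# Mirrors the pipeline's logic: "preview" only for uat, "production" otherwise.
pick_profile () {
  case "$1" in
    uat) echo "preview" ;;
    *)   echo "production" ;;
  esac
}

PROFILE=$(pick_profile "$BITBUCKET_DEPLOYMENT_ENVIRONMENT")
echo "Using build profile: $PROFILE"
```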

For our purposes, we only care about “not production” and “production” release types, so we default to the “preview” profile and switch to “production” for anything outside the uat environment, but you could easily expand this to set the value based on any number of build scenario and deployment tag combinations. Now we’ll validate some things about the build before we ship it off to QA. Let’s look:

          - echo "Validating build output and distributing to $PROFILE channel"

          # iOS
          - IOS=$(npx eas-cli build:list --json --limit=1 --platform=ios --buildProfile $PROFILE --non-interactive --status finished)
          - IBU=$(echo $IOS | jq -r '.[0].artifacts.buildUrl')
          - IBV=$(echo $IOS | jq -r '.[0].appBuildVersion')
          - IAV=$(echo $IOS | jq -r '.[0].appVersion')
          - IRV=$(echo $IOS | jq -r '.[0].runtimeVersion')

          # Check if the app version and runtime version match
          - if [ "$IAV" != "$IRV" ]; then echo "iOS runtime version $IRV does not match app version $IAV. This probably means something has changed in-code that the pipeline does not currently handle"; exit 1; fi

First, we build our EAS query to list our builds that are 1) for a given OS (iOS or Android), 2) for a specific build profile (in this case, “preview” by default) and 3) specifically “finished” build statuses, so that we don’t accidentally query a newer build in progress while the pipeline is wrapping up.
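You can exercise the jq parsing locally against a mock payload. The JSON below is a trimmed, hypothetical stand-in for what build:list returns, keeping only the fields the pipeline actually reads:

```shell
# Hypothetical, trimmed stand-in for `npx eas-cli build:list --json ...` output
IOS='[{"appVersion":"2024.2.2","appBuildVersion":"123","runtimeVersion":"2024.2.2","artifacts":{"buildUrl":"https://expo.dev/artifacts/example.ipa"}}]'

# Same extraction as the pipeline: -r gives raw (unquoted) values
IBU=$(echo "$IOS" | jq -r '.[0].artifacts.buildUrl')
IBV=$(echo "$IOS" | jq -r '.[0].appBuildVersion')
IAV=$(echo "$IOS" | jq -r '.[0].appVersion')
IRV=$(echo "$IOS" | jq -r '.[0].runtimeVersion')

echo "build $IBV of app version $IAV (runtime $IRV): $IBU"
```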

TIP: I couldn’t find an easy way to query an EAS build specifically associated with a given Bitbucket pipeline run. Being able to link these two separate processes directly would make this phase more robust and more easily scalable to larger teams. As it is currently, there is a chance the build at this phase may result in a build not specifically triggered by the executing pipeline run, depending on concurrency limits and volume of PRs that trigger the pipeline.

Once EAS has returned a JSON object describing our build, we can parse out some interesting facets of it using jq. Using the -r flag to get “raw” (unquoted) output, we can save the build URL, app build version, app version, and runtime version off to discrete variables for later use. After repeating the process for our Android build, we can do some quick validations, such as:

  • Ensuring the runtime version and app version match per OS.
  • Ensuring the iOS build version matches the Android build version.
  • Ensuring the iOS app version matches the Android app version.

These are “nice-to-haves” that ensure our QA team only has to remember “one” build signature when validating a feature or fix, i.e. – Ticket number APP-10098 is fixed in app version 1010.09.09 build 123 on all platforms. Following this validation, we can ship the builds off to BrowserStack via their API, using the $BROWSERSTACK_TOKEN which is set as a repository “secret” and pulled at runtime.
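Condensed into a sketch (the variable values below are hypothetical), the checks boil down to one comparison helper applied four times:

```shell
# Hypothetical values, as if parsed from build:list output for both platforms
IAV="2024.2.2"; IRV="2024.2.2"; IBV="123"
AAV="2024.2.2"; ARV="2024.2.2"; ABV="123"

# Compare two values; print a Pass line or report the mismatch and fail
check () {
  if [ "$2" = "$3" ]; then
    echo "$1: $2 matches $3. Pass"
  else
    echo "$1: $2 does not match $3"
    return 1
  fi
}

check "iOS app/runtime version" "$IAV" "$IRV"
check "Android app/runtime version" "$AAV" "$ARV"
check "iOS/Android build version" "$IBV" "$ABV"
check "iOS/Android app version" "$IAV" "$AAV"
```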

post phase

As part of this distribute phase, we echo the download URL provided by EAS to the pipeline. However, clicking into the build results is inconvenient for developers and QA alike, resulting in lost time “looking up” the build one way or another to test on real hardware. So, why not use the power of Bitbucket’s REST APIs to post the build links directly in the PR that triggered them? Taking a closer look at the “pingback” phase of our build, we can unwrap the Rube Goldberg-esque series of events required to make it happen.

Based on our triggers, the whole process hinges on the PR ID – after all, if there is no PR ID (and therefore no PR), there is nothing to post back to! So we first check for the presence of $BITBUCKET_PR_ID, a default Bitbucket variable that is only set when a pipeline is triggered by a pull request.

          # Get the id for current PR
          - PRID=$BITBUCKET_PR_ID
          - if [ -z "$PRID" ]; then echo "No PR ID found. Exiting"; exit 0; fi # exit without error

We’ll also set the endpoint for our request using a handful of other pipeline variables provided by Bitbucket by default, including the workspace ID and repository slug:

          # Set the endpoint URL
          # https://developer.atlassian.com/cloud/bitbucket/rest/api-group-pullrequests/
          - EPT="https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/$PRID/"

From here, we rely on mechanics similar to those discussed in the Distribute phase to collect some important variables and do some backup validation (again noting the limitation that, in the <30 seconds between the build finishing in the previous phase and this phase running, a new build unrelated to this pipeline run could technically appear). Now it’s time for a string manipulation bonanza! Let’s use cURL to request the endpoint we formed earlier and get the content of the PR as it currently exists, including all the content/details a developer provided when opening the request:

          # Get PR content.
          - OG_CONTENT=$(curl --location $EPT --header "Authorization:Bearer $PINGBACK_TOKEN" --header "Accept:application/json" --header "Content-Type:application/json" | jq '.summary.raw')
          - echo $OG_CONTENT

For some reason, passing -r to jq in this specific case yielded a whole host of unrelated content. It’s unclear to me why this was happening, so I was forced to work around it by following up the request with my own manipulation to ensure I had a raw, clean string at the end – otherwise, the cURL request later would terminate “mid-sentence” and cause the command to fail.
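The quoting problem itself is easy to reproduce with a mock response (the payload below is hypothetical): without -r, jq emits a JSON-encoded string, surrounding quotes included, and those quotes must go before the content can be embedded in another JSON body:

```shell
# Hypothetical, trimmed stand-in for the Bitbucket PR API response
RESPONSE='{"summary":{"raw":"Fixes the login crash"}}'

# Without -r, jq returns a JSON string literal, quotes and all
OG_CONTENT=$(echo "$RESPONSE" | jq '.summary.raw')

# Strip the quotes so they can't terminate the JSON body we hand to cURL later
DEV_CONTENT=$(echo "$OG_CONTENT" | tr -d '"')
echo "$DEV_CONTENT"
```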

          # Fancy string manipulation. Remove pingback content, if it exists, from original PR description and any extra quotes which will befuddle cURL later
          - DEV_CONTENT=$(echo $OG_CONTENT | sed 's/§.*//')
          - DEV_CONTENT=$(echo $DEV_CONTENT | tr -d '"')

N.B. – we first use sed to trim any content after the section symbol § because, later on, we’ll use the section symbol as a marker for content this pipeline has appended to the PR, and we don’t want to duplicate that! Any time the pipeline re-runs, we’ll simply remove the old info appended by the pipeline and replace it with updated values, such as download URLs. Taking our variables from earlier, we can join some values into meaningful content and concatenate the original, developer-provided text with our final output, along with any last-minute warning (which, again, should be unlikely/rare but not impossible).
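The marker trick is what makes the pingback safe to repeat. A quick sketch with a hypothetical description shows both cases: a description that already carries a pingback section is trimmed back to the developer’s text, and a fresh one passes through untouched:

```shell
# Hypothetical PR description that already contains a previous pingback section
DESC='Adds the new settings screen § **my-expo-app-mobile-pipeline:** old links'

# Everything from the § marker onward is pipeline-appended content; drop it
DEV_CONTENT=$(echo "$DESC" | sed 's/§.*//')
echo "$DEV_CONTENT"

# A description with no marker is left alone, so re-runs are always safe
echo 'A fresh description' | sed 's/§.*//'
```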

          # Default content to add to PR description
          - APPEND_CONTENT="§ **my-expo-app-mobile-pipeline:** ($(date '+%Y-%m-%d %r %Z')) \n\n >🍏 ($IBV): [$IBU]($IBU) \n\n >🤖 ($ABV): [$ABU]($ABU)"

          # Concat the PR description, default pingback content, and optional warning (if applicable)
          - NEW_CONTENT="$DEV_CONTENT\n\n\n\n$APPEND_CONTENT\n\n$BV_WARN"

Notice that our appended content includes download URLs for iOS and Android using the variables retrieved from Expo earlier. Lastly, let’s bring it all together in a cURL PUT request to actually update the PR on Bitbucket:

          # PUT request to update the PR description with the new content
          - curl --request PUT --url $EPT --header "Authorization:Bearer $PINGBACK_TOKEN" --header "Accept:application/json" --header "Content-Type:application/json" --data "{\"description\":\"$NEW_CONTENT\"}"      

Pay special attention to how the data is composed here: --data "{\"description\":\"$NEW_CONTENT\"}" – as mentioned earlier, if we have failed to sanitize the content of extraneous quote marks, the PUT request will be malformed and fail. Since this is a non-critical part of our workflow (just a nice-to-have!), I opted not to fail the pipeline at this point; a “bad request” from cURL in Bitbucket does not cause an error by default, so the pipeline proceeds as normal whether the request is successful or not. Your use case may vary.
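If you’d rather not sanitize by hand, a sturdier alternative (not what the pipeline above does, just an option worth knowing) is to let jq build the JSON body, which escapes quotes, newlines, and everything else for you:

```shell
# Hypothetical description containing characters that would break a hand-built body
NEW_CONTENT='Build 123: see the "preview" links below'

# jq -n --arg produces a correctly escaped JSON object, whatever the input
PAYLOAD=$(jq -n --arg d "$NEW_CONTENT" '{description: $d}')
echo "$PAYLOAD"

# The payload could then be sent with:
#   curl --request PUT --url "$EPT" ... --data "$PAYLOAD"
```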

That said, if everything goes according to plan, the PR that triggered the pipeline will now include a special section with all the juicy details from the Expo build:

Default Store Builds

Thanks to some handy tools EAS provides, CI/CD pipelines can automagically submit release candidate builds to the app stores, after which they can be reviewed, edited, and ultimately sent for review or added to test tracks via the Google Play Console and Apple Developer Portal respectively. We capitalized on this by adding two phases to our pipeline to construct proper store builds and make sure they were delivered seamlessly.

Prepare

A particular pain point for our release process was that making a build for the store specifically required us to manually intervene in a few files of our code – inline, hard-coded values needed to be updated specifically from non-prod to prod values. Bear in mind, there are plenty of build services and tools that can do this for you, but if – like us – you’re “rolling your own” for whatever reason, we can rely on the power of grep and sed to get us where we need to go. Here’s the approach we took:

  • First, update all the values that would change between QA and Prod builds to be a consistent value. For example, instead of having a build version inline read 2024.02.02, we set it to a constant 0000.00.00.
  • With that convention established, we can sed our way through the various files that use the pattern and easily replace that content with something meaningful when the pipeline runs.
  • Likewise, establish a known convention for branch names that will spawn a release build – if a branch adheres to the correct convention, we can scrape the value we want from the branch name itself and use that as our variable for substitution.

In practice, a developer would make a new branch, git checkout -b release/2024.02.02-some-release-name and make a PR for that; the pipeline would then find all instances of 0000.00.00 across all necessary files and substitute 2024.02.02 in its place before triggering the build via EAS; this change would exist “in memory” and never be committed back to the repository. Let’s take a look:

          # When the build is triggered from a branch, the target name is the branch name. If triggered from a tag, we must
          # use the tag name as the target name. If neither are present, we use the default target name.
          #
          # See: https://support.atlassian.com/bitbucket-cloud/docs/pipeline-start-conditions/#Tags
          - TGT=${BITBUCKET_BRANCH:-$BITBUCKET_TAG}
          - echo "Target name is $TGT"
    
          # Set app version e.g. 2024.02.02 based on the branch name
          - REV=$(echo "$TGT" | grep -Eo '[0-9]{4}\.[0-9]{1,2}\.[0-9]{1,2}')
          - echo "Set App Version to $REV"

We leverage a Bitbucket default variable to determine the branch name that spawns the build; in this case the trigger is a PR, so we employ BITBUCKET_BRANCH, but other triggers (such as a tag, as mentioned in the comments) would require a different variable. Your mileage may vary. Once we’ve established the branch name, we can grep for a known pattern to determine the correct build name. -Eo allows us to only return the matching pattern – granted, if the branch does _not_ contain the right pattern, the resulting variable assignment will be flawed and likely cause build errors down the line, but validating the final value of REV is left as an exercise to the reader. ¯\_(ツ)_/¯
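For completeness, a minimal version of that exercise might look like the following sketch (the sample branch name is illustrative; in the pipeline TGT comes from the Bitbucket variables as shown above):

```shell
# Hypothetical guard: fail fast when the branch name yields no usable version.
TGT="release/2024.02.02-some-release-name"   # stand-in for $BITBUCKET_BRANCH
REV=$(echo "$TGT" | grep -Eo '[0-9]{4}\.[0-9]{1,2}\.[0-9]{1,2}')
if [ -z "$REV" ]; then echo "Branch '$TGT' contains no version pattern; aborting."; exit 1; fi
echo "Set App Version to $REV"   # → Set App Version to 2024.02.02
```

Failing here costs seconds; failing after EAS has spent twenty minutes building costs considerably more.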

The actual substitution takes place in two steps for each file that needs to be affected:

  • app.json (shared)
  • Info.plist (ios)
  • Expo.plist (ios)
  • app/build.gradle (android)
  • strings.xml (android)

Consider the following:

          # Confirm a file exists then replace a string in the file. If the file does not exist, exit with an error.
          # The edits occurs in-place and are not committed back to the repository.
          - if [ ! -e "app.json" ]; then echo "app.json $FNF"; exit 1; fi
          - test -e app.json && sed -i "s/0000.00.00/$REV/g" app.json

First, we check that the file exists using the ! notation, in this case app.json. If any file in the list is missing, we fail the build and exit the pipeline. Each of these files is required for EAS to make the build, so exiting here saves time over submitting to EAS with incomplete code.

Then, we use a simple sed substitution to replace our constant value with the determined value derived from the branch name. Rinse and repeat as necessary.
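Rather than repeating the check-and-substitute pair once per file, the whole list can be looped over. Here's a self-contained sketch that demonstrates with a single file created on the fly (in the real pipeline the files already exist in the repository, at paths matching your project layout):

```shell
REV="2024.02.02"
FNF="was not found; aborting."

# Demo setup only: create a file containing the placeholder value.
mkdir -p demo && printf '{"version":"0000.00.00"}\n' > demo/app.json

# Loop the existence check and the substitution over every affected file.
for f in demo/app.json; do
  if [ ! -e "$f" ]; then echo "$f $FNF"; exit 1; fi
  sed -i "s/0000\.00\.00/$REV/g" "$f"
done
cat demo/app.json   # → {"version":"2024.02.02"}
```

This keeps the file list in one place, so adding a sixth file later is a one-line change.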

Now, in our case we ran into a small problem for a particular variable – our analytics config, which depended on a value provided by Adobe, required a key that included slashes (/) – sed uses that same slash as its default delimiter, so even providing the appropriate value as a repository variable caused the command to exit early and start reading garbage. Fortunately, sed lets us define a custom delimiter within the normal syntax, so we can just use a character that does not appear in our key variable to complete the substitution.

          # N.B - Adobe App ID contains a / so we supply sed with an alternate delimiter
          - if [ ! -f android/app/src/main/java/com/MyExpoApp/mobile/MainApplication.java ]; then echo "android/app/src/main/java/com/MyExpoApp/mobile/MainApplication.java $FNF"; exit 1; fi
          - sed -i "s~ADOBE_APP_ID_PLACEHOLDER~$ADOBE_APP_ID_PROD~g" android/app/src/main/java/com/MyExpoApp/mobile/MainApplication.java
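To see the alternate delimiter in action, here's a tiny standalone example (the sample ID is made up):

```shell
# Using ~ as the delimiter lets the replacement contain / characters unescaped.
ADOBE_APP_ID_PROD="prod/abc123/launch"
echo 'adobeAppId = "ADOBE_APP_ID_PLACEHOLDER"' \
  | sed "s~ADOBE_APP_ID_PLACEHOLDER~$ADOBE_APP_ID_PROD~g"
# → adobeAppId = "prod/abc123/launch"
```

Any character absent from both the pattern and the replacement works as the delimiter; ~ and | are common choices.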

At this point, our code in memory has all the correct values to make a proper store build and we can submit to EAS as normal to build, indicating we want to use all the production artifacts and configurations set directly in our EAS project:

          # Build the app
          - echo "Building the app for all platforms:"
          - npx eas-cli build --platform all --non-interactive

Submit

Submitting a build to the stores is trivial at this point; after establishing the necessary config in code via EAS, we only need one command to promote our latest build to the store:

          - npx eas-cli submit --platform all --latest --non-interactive

Again, caution here; if another build has somehow completed in the few moments after the previous phase of the pipeline, a different build may be submitted, but --latest should be sufficient for most cases. Otherwise, we could do some fancy dancing with eas-cli build:list and jq again. We also opted to mark this phase of the pipeline as a manual step so that builds weren’t flying all willy-nilly up to the release portals without one last human check. In practice, that meant someone on our team (usually the release “owner” du jour) would need to observe the pipeline results in Bitbucket and press the big launch button.
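If you do need to pin the exact build rather than trusting --latest, a sketch of that fancy dancing might look like this. The JSON below simulates the shape of eas-cli build:list --json output (the real output carries many more fields; verify against your CLI version):

```shell
# Simulated `eas build:list --json` output for demonstration purposes.
BUILDS='[{"id":"abc-123","status":"FINISHED"},{"id":"def-456","status":"FINISHED"}]'

# Pull the most recent finished build's id with jq...
BUILD_ID=$(printf '%s' "$BUILDS" | jq -r '.[0].id')
echo "$BUILD_ID"   # → abc-123

# ...then submit that exact build instead of whatever --latest resolves to:
# npx eas-cli submit --platform ios --id "$BUILD_ID" --non-interactive
```

Pinning by id closes the race window entirely, at the cost of one extra pipeline command.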

And that’s it! At this point Bitbucket can take our builds no further; we need to use the developer portals directly to add builds to test tracks and ultimately release to stores. I hope this post has been informative and inspired you to supercharge your existing pipelines or start some automation of your own. Thanks for reading!