App service timeout limit solution

21.05.2022 - Saturday - 15:49

How to solve the 240-second App Service timeout limit for standard-tier users

While working in Azure we stumbled on a problem that, at the time, we thought could not be solved.
We had our main application running on App Service, and for standard accounts there is a 240-second timeout on responses.
As we grew, we started working with clients whose response times were longer, and it became a big problem: we couldn't serve some big clients because of it.
We could have upgraded to a premium account, but before doing that we wanted to try to find another solution.
The first step was to divide everything up, to understand what solution could be used, or how I could group services to solve this.
---
The first piece is the container that App Service runs - Container Instances can be used instead.
To make it work, what we need is a pipeline that builds the images, and then containers running from those images.
Then we can use Application Gateway. It can be connected to any compute service, in this case a container. We can put the container instance as the backend and define the application gateway with whatever timeout we want.
When I checked (around the start of April 2022), the timeout could be set to at least 10 minutes.
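For reference, the request timeout lives on the gateway's HTTP settings and can be updated from the CLI - a minimal sketch, assuming placeholder resource group, gateway, and HTTP-settings names:

# set the request timeout (in seconds) on the gateway's HTTP settings
az network application-gateway http-settings update -g my-resourcegroup \
  --gateway-name my-appgw -n my-http-settings --timeout 660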
To check it, you can create a little application that waits as long as you want. I created this:

from flask import Flask
from time import sleep

app = Flask(__name__)

@app.route("/")
def hello_world():
    # Wait <n> seconds before answering, to exercise the gateway timeout
    sleep(<n>)
    return "Hello, world!"

Instead of <n>, write however many seconds you want it to wait, and make the application gateway timeout bigger than that.
Run it:
>> export FLASK_APP=<file_name>
>> flask run
For example, I put 600 seconds (10 minutes) in the Flask application and 660 in the application gateway timeout.
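To verify it end to end, you can time a request through the gateway - a quick check, assuming a placeholder gateway IP; -m is curl's own give-up timeout, which just has to be larger than the app's wait:

# should return "Hello, world!" after ~600 seconds instead of timing out
time curl -m 700 http://<appgw_public_ip>/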
---
Pipeline step
Now you should create a pipeline that does everything you want.
The full pipeline is broken down piece by piece below.
Explanation:
pr:
  branches:
    include:
    - staging
    exclude:
    - develop
    - production
-
Run this pipeline when a PR is opened.
branches.include - which branches' PRs will trigger the pipeline.
branches.exclude - which branches' PRs will not trigger the pipeline.
To include all branches, put '*' under include.
resources:
  repositories:
  - repository: repo1
    type: github
    endpoint: endpoint1
    name: name/repo1_name
    trigger: true

  - repository: repo2
    type: github
    endpoint: endpoint1
    name: name/repo2_name
    trigger: true

  containers:
  - container: container-name
    image: container.registry/container-name
    endpoint: ACR-endpoint
-
The resources that I am going to use in the pipeline.
repositories - I give:
The type - in this case it's github.
The endpoint - the service connection that I am using to get this resource.
The trigger - whether a PR on this repo triggers the pipeline.
The containers - if I want to use a container to run steps, this is the container that I am using. I will refer to it in the stage where I want to use it, by the name after container.
variables:
- group: group-name
- name: dockerRegistryServiceConnection
  value: "<docker_registry_service_connection_value>"
- name: imageRepository
  value: "<image_repo_name>"
- name: containerRegistry
  value: "<container_registry>"
# System.Debug: true
-
group - used for a variable group - you can define a variable group through the UI and use it afterwards, so its values won't appear in the pipeline as raw text.
System.Debug - the best way to debug your pipeline; when set to true the run prints much more information.
The other names are self explanatory.
stages:
- stage: Fetch
  condition: eq(variables['Build.Repository.Name'], 'repo1')
  displayName: Fetch, build and upload
  jobs:
  - job: Fetch
    displayName: Fetch and Build
    steps:
    - checkout: repo1
    - task: NodeTool@0
      inputs:
        versionSpec: '14.16.1'

    - script: |
        npm install
      displayName: npm install

    - task: DownloadSecureFile@1
      name: securefile1
      inputs:
        secureFile: "file1.env"

    - script: |
        cp $(securefile1.secureFilePath) $(Build.SourcesDirectory)/.env
    - script: |
        npm run prod
      displayName: npm run prod

    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: build/
        artifactName: staticall

    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: build/static/css
        artifactName: staticcss

    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: build/static/js
        artifactName: staticjs
-
First of all I give the first stage a name: Fetch.
The condition eq(variables['Build.Repository.Name'], 'repo1') checks whether the repository that triggered the run is repo1, so this stage will run only if the PR that started the pipeline came from repo1.

NodeTool is self explanatory - download and cache Node.js, add it to the path, and then use it.
DownloadSecureFile - download a secure file that you uploaded to Azure.

The next script tasks copy the secure file and run npm run prod - some package script that you want npm to run.
Then I wanted to publish files that were built in this build to other parts of the pipeline, so I used PublishPipelineArtifact.

I give it as input targetPath, which is the path that I want published as an artifact,
and artifactName - the name of the artifact that is going to be used in the next stages.

I publish all the static files that were created, plus specific files like the css and js.
- stage: Build
  condition: or(succeeded(), eq(variables['Build.Repository.Name'], 'repo2'))
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    steps:
    - checkout: repo2
    - task: DownloadSecureFile@1
      name: npmrc
      inputs:
        secureFile: ".npmrc"

    - script: sudo cp $(npmrc.secureFilePath) $(Build.Repository.LocalPath)
      displayName: Copy .npmrc file to working directory
    - bash: |
        # Create paths for files if they don't exist
        sudo mkdir -p $(Build.Repository.LocalPath)/static/js
        sudo mkdir -p $(Build.Repository.LocalPath)/static/css
        sudo chown -R vsts:vsts $(Build.Repository.LocalPath)
      displayName: Create all needed directories

    - task: DownloadPipelineArtifact@2
      inputs:
        source: current
        artifact: staticcss
        path: $(Build.Repository.LocalPath)/static/css
      displayName: Download staticcss artifact

    - task: DownloadPipelineArtifact@2
      inputs:
        source: current
        artifact: staticjs
        path: $(Build.Repository.LocalPath)/static/js
      displayName: Download staticjs artifact

    - task: DownloadPipelineArtifact@2
      inputs:
        source: current
        artifact: staticall
        path: $(Build.Repository.LocalPath)/static/
      displayName: Download staticall artifact

    - bash: |
        # DEBUG - see that everything is in place
        echo $(Build.Repository.LocalPath)
        ls $(Build.Repository.LocalPath)
        echo
        echo STATIC
        ls $(Build.Repository.LocalPath)/static
        echo
        echo STATICCSS
        ls $(Build.Repository.LocalPath)/static/css
        echo
        echo STATICJS
        ls $(Build.Repository.LocalPath)/static/js
      displayName: Content and path of all artifacts
      continueOnError: true

    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        containerRegistry: "$(containerRegistry)"
        repository: "$(imageRepository)"
        command: "buildAndPush"
        Dockerfile: "**/Dockerfile"
        tags: "my-container-image"
      condition: succeededOrFailed()
-
Second stage - the build itself.

or(succeeded(), eq(variables['Build.Repository.Name'], 'repo2')) - do this stage in one of two cases: if the previous stage succeeded, or if the repository running the build is repo2.

Checkout and Download - like in the last stage - check out the repo and download the secure .npmrc file for npm.
Move the .npmrc file to the local path where the pipeline is running.
Run a script that creates the directories for the files if they don't exist (you can create whatever paths you need).
Download the artifacts created before with DownloadPipelineArtifact@2.
For input you give:
source - where to take the artifacts from; in this case current means the current pipeline run.
artifact - the name of the artifact you want to download.
path - where you want to download the artifact to.

Download all the artifacts to the places where they are supposed to be -
all static files to $LocalPath/static,
JS files to $LocalPath/static/js,
CSS files to $LocalPath/static/css.

In the next step I want to see that all the files are in the right places,
so I run a script that prints the name of each artifact I am supposed to be looking at and lists the contents of the place where it is supposed to be.

The last step builds an image out of the checked-out repo (with the artifacts placed inside it) and pushes it to the registry that I want, in this case my own ACR.
containerRegistry - the Docker registry service connection to the ACR that I want to use (in Docker@2 this input carries the service connection, so the image lands in that registry).
command - there are a few options for command when using the Docker task; in this case buildAndPush does exactly what it says (see the sketch after this list).
Dockerfile - which Dockerfile to use when building - ** searches through all the directories of the repo that the pipeline is in.
tags - the tag to push the image with after building.
The condition succeededOrFailed() means that this step runs whether the previous step succeeded or failed.
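For intuition, buildAndPush is roughly a docker build followed by a docker push - a sketch using the placeholder registry and repository values from the variables above:

# approximately what the Docker@2 buildAndPush command runs
docker build -f Dockerfile -t <container_registry>/<image_repo_name>:my-container-image .
docker push <container_registry>/<image_repo_name>:my-container-image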
- stage: Run
  condition: succeeded()
  displayName: Activate the App
  jobs:
  - job: StartAndConnect
    displayName: Connect APPGW to container
    steps:
    - task: DownloadSecureFile@1
      name: securefile3
      inputs:
        secureFile: "securefile3.env"

    - task: AzureCLI@2
      displayName: Get container IP
      inputs:
        # Name of the service connection to azure resources
        azureSubscription: azurerm-connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          file1_envfile=$(securefile3.secureFilePath)
          file2_pass=$(registrypass)
          echo CREATING CONTAINER...
          az container create -g my-resourcegroup --name my-container \
            --image container.registry/container-name:my-container-image \
            --registry-username username --registry-password $file2_pass \
            --ip-address Private --location eastus --os-type Linux \
            --command-line "/bin/bash -c 'az login --identity; npm run start-app'" \
            --ports 443 --vnet myvnet --subnet ci-subnet \
            --assign-identity "" --environment-variables $(cat $file1_envfile | xargs)

          ACI_IP=$(az container show -g my-resourcegroup --name my-container \
            --query ipAddress.ip -o tsv)
          echo "info: $ACI_IP"
          echo
          echo CHANGING APPLICATION GATEWAY my-appgw BACKENDPOOL IP
          az network application-gateway address-pool update -g my-resourcegroup \
            --gateway-name my-appgw -n my-bp --servers $ACI_IP
-
Last stage - run the container and connect it to the application gateway.
condition - only if the last stage succeeded (meaning, all stages).
DownloadSecureFile@1 - download the file of environment variables.
AzureCLI@2 - lets you run a script that uses az commands.

To be able to run it I need a service connection that enables access to Azure resources - azurerm-connection in this case.
I give the script type with scriptType - in this case bash.
And then comes the script itself, written after inlineScript:
file1_envfile - the path to the file that contains all the environment variables of the container that will be initialized.
file2_pass - holds registrypass, a secret that is defined in the variable group group-name from the beginning of the file. You use the secrets there according to their hierarchy - this secret is stored directly in the group, so it doesn't have a prefix, only the name.
az container create... - the command to create a container with all the information from above. I take the image that was created in the stage before, using the registry username and password, which can be obtained from within Azure. I use a private IP address so the container will be accessible only from my vnet.
--command-line - start by logging in to Azure, using the identity mentioned in the next parameter. Only then can I run the app with npm.
I use port 443, with myvnet, the subnet that the container instance sits in, the assigned identity inside the quotes (not shown here), and the environment variables on one line, which I get by printing the env file from above and piping it through xargs, which prints all of them on a single line.
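For example, with a hypothetical env file like this, cat piped through xargs collapses it into the single line that --environment-variables expects:

# securefile3.env (made-up contents):
#   DB_HOST=10.0.0.4
#   DB_NAME=prod
cat securefile3.env | xargs   # prints: DB_HOST=10.0.0.4 DB_NAME=prod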
After starting the container, I need its IP in order to connect it to the Application Gateway. So I run:
az container show -g my-resourcegroup --name my-container --query ipAddress.ip -o tsv
This queries only the IP of the container that I created before.

To finish, I need to set the IP that I found as a backend of the existing application gateway. So:
az network application-gateway address-pool update -g my-resourcegroup --gateway-name my-appgw -n my-bp --servers $ACI_IP
This updates the backend address pool named my-bp on the application gateway that I created, with the server (compute resource) $ACI_IP - the IP of the container that I created.
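If you want to confirm the switch took effect, you can read the pool back - same placeholder names as above:

az network application-gateway address-pool show -g my-resourcegroup \
  --gateway-name my-appgw -n my-bp --query backendAddresses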

And That's It!!!