Continuous Deployment of Azure Analysis Services

If you have an Azure Analysis Services scenario with different stages, like Dev/Testing/Production, you can use Azure DevOps to deploy the environment to the different stages. In my demo environment, I'm using one SQL Server with two databases that share the same schema, and one Analysis Services server. The goal is simple: I want to deploy the model to different SSAS cubes, each using a different connection to its source database.

To begin, I have set up a Visual Studio Analysis Services Tabular project and pushed this code to a Git repo in DevOps:

Next, I've created the SQL Server with the two databases on Azure, as well as the Analysis Services server, which allows connections from Azure on the firewall settings page. This is necessary so that DevOps can work with it.

For the next step, I have set up a release pipeline in DevOps.

The artifact refers to the main branch where the code is stored. Then I added two stages to deploy the cube to Dev and Testing. Each stage contains this step:

The add-in can be found in the marketplace. You need to select the "model.bim" file, which contains your cube definition, and store your credentials for the SSAS server as well as for your source. After running the pipeline, you will find the two different cubes on your server:

The connections will automatically point to the correct source databases. After that, you must process your cubes to fill them with data. This can be done with Azure Data Factory, and then you need to set up the permissions. Details for these two tasks can be found here:
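As a rough idea of what the processing step sends to the server: a full refresh can be expressed as a TMSL command like the sketch below (the database name "adventureworks" is just a placeholder taken from later in this post):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "adventureworks" }
    ]
  }
}
```

In Azure Data Factory, such a command can be sent to the Analysis Services management endpoint, which is how the pipeline can process the cube after deployment.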

Power BI – Page Navigation using a Slicer

Many reports have multiple pages, and it is often a problem for business users to navigate to the right page to get the right insights. If you have a report with fewer than 5 pages, it is easy to create an overview page with different buttons for navigation. But in the past, I've seen and worked with reports that include more than 10 pages to cover the different business cases. With so many buttons, you usually don't have enough space on the page, and it is hard to add a description for each one. So, it is a good idea to use a slicer for the navigation. How can this be done? See below 😊

First, I created the report with multiple pages, including an overview page.

For the next step, I created a metadata table with the different page names and a short description for each page. With this information you can provide a page description for the user. This table must be maintained whenever you add new pages.
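Such a metadata table could, for example, be created as a calculated table with DAX (the page names and descriptions here are placeholders; the table name `Table` matches the measure used further down):

```dax
Table =
DATATABLE (
    "Page", STRING,
    "Description", STRING,
    {
        { "Overview", "Landing page with the navigation" },
        { "Sales", "Sales KPIs per region and month" }
    }
)
```

Alternatively, you can enter the same rows manually with "Enter data" in Power BI Desktop.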

Next, I've added a single-select slicer with the page names, and a card with the description of the selected page.

Now, you need a button to navigate to the selected page.

To enable the page navigation, you need a simple DAX measure to gather the selected page from the slicer:

Navigation = SELECTEDVALUE ( 'Table'[Page] )

Now, you can configure the action of the button to use this measure. And that's it: if you select a page from the slicer, you get some information about it, and when you click the button, you navigate to the selected page.

Trigger an Azure Data Factory pipeline by external users

A very simple and common use case: you have a pipeline that fills your data warehouse, processes your cube, and delivers this information to your business users. Part of the data warehouse is fed by Excel files which are created by the business. The pipeline runs once or twice a day, but now there is a requirement that the process owner of these Excel files wants to control this process by triggering the pipeline himself. The reason is very simple: a file contained errors and had to be replaced so that management does not get misleading information.
If you want the user to be able to trigger an ADF pipeline, you have different options: you can give him permissions, or you can trigger the ADF pipeline by an event. I think the latter is more interesting, because the user needs no permissions on the factory.

So, I have created an Azure storage account. This account is only used to store a trigger file, which is created by the user in a container. Next, I created a simple pipeline which should be triggered by the business:

After the run, the pipeline deletes the trigger file from the storage. I use an Azure blob event trigger with these settings:
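For reference, the trigger definition could look roughly like this in JSON (container name, resource group, storage account, and subscription ID are placeholders, not the actual values from my environment):

```json
{
  "name": "BlobTriggerFile",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/triggerfiles/blobs/trigger",
      "ignoreEmptyBlobs": false,
      "events": [ "Microsoft.Storage.BlobCreated" ],
      "scope": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
    }
  }
}
```

The "blobPathBeginsWith" setting is what makes the trigger react only to files whose names start with "trigger".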

So, if I place a file whose name starts with the word "trigger" in the container, ADF will recognize it and run the pipeline.
After uploading a file named "triggeron.txt", the pipeline runs immediately.
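The matching rule itself is trivial and can be sketched like this (a simple illustration of the prefix filter, not the actual ADF implementation):

```python
def is_trigger_blob(blob_name: str, prefix: str = "trigger") -> bool:
    """Mimics the 'blob path begins with' filter of the event trigger."""
    return blob_name.startswith(prefix)

print(is_trigger_blob("triggeron.txt"))  # True  -> pipeline runs
print(is_trigger_blob("report.xlsx"))    # False -> ignored
```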

Now, you only need an easy way for the user to create this file. You could provide a PowerShell script with an access key to the user, or you can use a Power Automate flow. I think the flow is better, because you can share it with the user, and when it runs, it creates the file inside the container. You can also trigger this flow when an email arrives in a particular mailbox, like this flow:
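The PowerShell variant could look like this minimal sketch, using the Az.Storage module (account name, access key, and container name are placeholders you would hand out together with the script):

```powershell
# Connect to the storage account with an access key (placeholders!)
$context = New-AzStorageContext `
    -StorageAccountName "mystorageaccount" `
    -StorageAccountKey  "<access-key>"

# Create an empty local file and upload it as the trigger blob
New-Item -Path ".\triggeron.txt" -ItemType File -Force | Out-Null
Set-AzStorageBlobContent `
    -File ".\triggeron.txt" `
    -Container "triggerfiles" `
    -Blob "triggeron.txt" `
    -Context $context
```

Keep in mind that the access key grants broad rights on the storage account, which is another argument for the shared flow.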

But be aware that if you are using Power Automate, you need a premium connector.

How to add users to roles of an Azure Analysis Services cube with Azure DevOps and PowerShell

To add users to an Azure Analysis Services cube after a deployment, you can do this manually with the management tools or with Visual Studio. But with Continuous Deployment (CI/CD) and different stages, you mostly need to add different users/groups to the roles. So, you need to do this dynamically with some scripts. I found an add-in in the marketplace to do this with DevOps by using a pipeline. But unfortunately, this add-in does not work, and you can only add one user/group per step. So, I decided to do this with PowerShell to add more than one user in one step.
I added my deployment user as an SSAS admin with the portal (you can also do this with Management Studio).

After that, I created this PowerShell script, added it to my repo, and published it to DevOps to use it inside a pipeline. The script connects to my Analysis Services server and adds a user to a role:

param (
    # Credentials are passed in from the pipeline variables
    [string]$ParamUser,
    [string]$ParamPassword
)

$myPassword = $ParamPassword
$myUsername = $ParamUser

$password = ConvertTo-SecureString $myPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($myUsername, $password)
Login-AzAccount -Credential $credential
Import-Module SqlServer -DisableNameChecking

# Server URL and member name are left as placeholders here
Add-RoleMember `
    -Server "asazure://" `
    -MemberName "" `
    -Database "adventureworks" `
    -RoleName "Internet Sales Manager" `
    -Credential $credential

Here you can see the script inside my repo at DevOps:

Next, I created a pipeline, and I added the needed credentials to connect to the SSAS server as variables.

After doing this, I created an Azure PowerShell task inside my pipeline. This task uses the PowerShell script from the repo and pushes the variables as parameters to the script.
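In YAML, such a task could look roughly like this sketch (the service connection name, script path, and variable names are assumptions; in a classic release pipeline you configure the same inputs in the designer):

```yaml
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'my-service-connection'   # ARM service connection
    ScriptType: 'FilePath'
    ScriptPath: '$(Build.SourcesDirectory)/AddRoleMember.ps1'
    ScriptArguments: '-ParamUser $(SsasUser) -ParamPassword $(SsasPassword)'
    azurePowerShellVersion: 'LatestVersion'
```

Marking the password variable as secret in the pipeline keeps it out of the logs.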

After running the pipeline, the user from the script is added to the role.

Continuous Deployment of Power BI Reports with Azure DevOps (Part 2, Live Connection to an OLAP Cube)

This is the second part of the series "Continuous Deployment of Power BI Reports". This article is dedicated to the deployment of reports which use Analysis Services (OLAP) cubes as a data source. Here I'm using a Premium capacity (Premium per user works as well). I've set up a workspace with a deployment pipeline which is responsible for deploying the reports to the other stages. The workspace contains a report which is connected to an SSAS server.

Next, I've configured a deployment rule to modify the data source when the report/dataset is deployed to the other stages. This works well, but when you have a bigger scenario with deployments of other artifacts, like SQL, AS, etc., this must be orchestrated with the other pieces, and a pipeline controlled only by the Power BI service will not fit the requirements in most cases.

So, back to Azure DevOps, where you can manage, configure, and set up your pipelines for the other artifacts. Here, I have created a new pipeline where I'm using the "Power BI Pipeline Tasks" task.

In this task you must configure the connection and refer to your newly created pipeline from above. There it is also possible to choose which stage should be affected.

After the pipeline runs successfully, you can check your connections.

I think it makes sense to use Power BI deployment pipelines for a scenario like this, because you use native Power BI functionality and only orchestrate it from DevOps – but yes, it depends 😊

Continuous Deployment of Power BI Reports with Azure DevOps (Part 1, Direct Query and Import Mode)

In an enterprise environment, Power BI report development and deployment is a little bit different. The reports are mostly not created by business users and self-deployed. They are developed by IT professionals against dependent data sources like SQL Server, Analysis Services, and so on. So, this blog post will cover the deployment with Azure DevOps for data sources which are connected by Direct Query and/or Import Mode. This post handles a scenario without Premium or Premium per user, so I don't use Power BI deployment pipelines.

To begin, I've created a new repository for saving and organizing my Power BI reports.

Next, I cloned this repo to Visual Studio, added a blank solution with my PBIX file, and pushed the new files to DevOps.

In the web portal of DevOps you will find the new project and the Power BI file.

The PBIX file contains a parameter for the data source, and it is used by the query. The result of this query is just the server name. Again, in an enterprise environment you don't have only one data source; mostly you have different stages like Development, Integration/Testing, and a Production environment.
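The query could look like this minimal Power Query (M) sketch, where `server` is the report parameter (the database name "master" and the exact query are assumptions for illustration; the query simply returns the name of the server it connects to):

```m
let
    // "server" is the report parameter holding the data source name
    Source = Sql.Database(server, "master", [Query = "SELECT @@SERVERNAME AS ServerName"])
in
    Source
```

Because the server name comes from a parameter, no query logic has to change between stages.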

So, if you change the server name, the report will automatically connect to the new data source and display, in this case, the actual server name.

I have also created a new, blank workspace in the Power BI portal, and now we need to publish the report with DevOps to this workspace.
Next, I created a new release pipeline (using the blank template):

Next, we need an artifact. This is our Power BI file. I selected my new repo, the project and the PBIX.

For the next step, we need a step/task inside our stage to deploy the report to the newly created workspace. Here I have added the task template from the marketplace, "Power BI Report Tasks".

To configure this task, we need a valid connection with a Pro license. You also need to configure the new workspace name where the report should be deployed.

After saving and creating a new release of this pipeline the report should be deployed.

After the deployment has succeeded, you will find the artifact/the PBIX in the workspace.

But the connection is the same as before. So, we need to add a new task to our pipeline:

Here we need to change the action to modify our parameter, and we need a small JSON snippet:

    {
      "name": "server",
      "newValue": ""
    }

After running the pipeline again, the report will be overwritten, and the connection will be changed because our parameter has received a new value.

The next blog post will be dedicated to changing the source for a live connection and using a Premium capacity with deployment pipelines.