Continuous Deployment of Power BI Reports with Azure DevOps (Part 2, Live Connection to an OLAP Cube)

This is the second part of the series “Continuous Deployment of Power BI Reports”. This article is dedicated to the deployment of reports that use Analysis Services (OLAP) cubes as a data source. Here I’m using a Premium capacity (Premium per user works as well). I’ve set up a workspace with a deployment pipeline that is responsible for deploying the reports to the other stages. The workspace contains a report which is connected to an SSAS instance.

Next, I’ve configured a deployment rule to modify the data source when the report/dataset is deployed to the other stages. This works well, but in a bigger scenario that also deploys other artifacts, like SQL databases, Analysis Services models and so on, everything must be orchestrated together, and a pipeline controlled only by the Power BI service will not fit the requirements in most cases.

So, back to Azure DevOps, where you can manage, configure and set up the pipelines for the other artifacts. Here I have created a new pipeline which uses the “Power BI Pipeline Tasks” task.

In this task you must configure the connection and refer to the deployment pipeline created above. It is also possible to choose which stage should be affected.
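Expressed as a YAML pipeline, such a stage could look roughly like the sketch below. The task reference and input names are assumptions for illustration (the installed extension defines the real ones), as are the connection, pipeline and stage names:

```yaml
# Sketch of a DevOps stage that triggers a Power BI deployment pipeline.
# Task and input names are illustrative - check the extension's docs.
stages:
- stage: PromoteReports
  jobs:
  - job: DeployToTest
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: PowerBIPipelineTasks@1                      # hypothetical task reference
      inputs:
        serviceConnection: 'PowerBI-ServiceConnection'  # connection to the Power BI tenant
        pipelineName: 'Sales-Reports-Pipeline'          # the deployment pipeline from the portal
        stage: 'Test'                                   # which stage of the pipeline to deploy to
```

This way the Power BI promotion becomes just one stage among the other artifact deployments and can be ordered relative to them.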

After the pipeline runs successfully, you can check your connections.

I think it makes sense to use Power BI deployment pipelines for a scenario like this, because you use native Power BI functionality while controlling everything from DevOps – but yes, it depends 😊

Continuous Deployment of Power BI Reports with Azure DevOps (Part 1, Direct Query and Import Mode)

In an enterprise environment, Power BI report development and deployment is a little bit different. The reports are mostly not created by business users and self-deployed; they are developed by IT professionals against dependent data sources like SQL Server, Analysis Services and so on. So, this blog post will cover deployment with Azure DevOps for data sources connected via DirectQuery and/or Import mode. This post handles a scenario without Premium or Premium per user, so I don’t use Power BI deployment pipelines.

To begin, I’ve created a new repository for saving and organizing my Power BI reports.

Next, I cloned this repo in Visual Studio, added a blank solution, added my PBIX file to it and pushed the new files to DevOps.

In the DevOps web portal you will find the new project and the Power BI file.

The PBIX file contains a parameter for the data source, which is used by the query. The result of this query is simply the server name. Again, in an enterprise environment you don’t have just one data source: mostly you have different stages like Development, Integration/Testing and a Production environment.

So, if you change the server name, the report will automatically connect to the new data source and, in this case, display the current server name.

I have also created a new, blank workspace at the Power BI portal and now, we need to publish the report with DevOps to this workspace.
Now I created a new Release Pipeline (using the blank template):

Next, we need an artifact. This is our Power BI file. I selected my new repo, the project and the PBIX.
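In a YAML pipeline, the same artifact selection corresponds to checking out the repository that holds the PBIX, so later tasks can reference it by path. The branch name and file path below are illustrative:

```yaml
# YAML equivalent of the artifact selection: check out the repo
# containing the PBIX so later tasks can pick it up.
trigger:
- main            # illustrative branch name

pool:
  vmImage: 'windows-latest'

steps:
- checkout: self  # the repository with the solution and the PBIX
# later tasks would reference the report via a path such as
# '$(Build.SourcesDirectory)/Reports/MyReport.pbix' (illustrative path)
```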

For the next step we need a task inside our stage to deploy the report to the newly created workspace. Here I have added the task template “Power BI report Tasks” from the marketplace.

To configure this task, we need a valid connection with a Pro license. You also need to configure the workspace name where the report should be deployed.
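A sketch of that publish step in YAML form could look like this. The task reference and input names are assumptions based on the extension (verify them against its documentation), and the connection, workspace and file names are placeholders:

```yaml
# Sketch of the publish step - task and input names are illustrative.
steps:
- task: PowerBIReportTasks@1                            # hypothetical task reference
  inputs:
    connection: 'PowerBI-ProConnection'                 # service connection backed by a Pro license
    action: 'Publish'                                   # illustrative action name
    workspaceName: 'CD-Demo-Workspace'                  # illustrative workspace name
    pbixFile: '$(Build.SourcesDirectory)/MyReport.pbix' # illustrative path to the artifact
```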

After saving and creating a new release of this pipeline the report should be deployed.

After the deployment succeeds, you will find the artifact/the PBIX in the workspace.

But the connection is the same as before. So, we need to add a new task to our pipeline:

Here we need to change the action to modify our parameter, and we need a small JSON snippet:

    {
      "name": "server",
      "newValue": "sql090219.database.windows.net"
    }
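In YAML form, this second task could be sketched as the same task with the action switched to a parameter update, embedding the JSON from above. As before, the task reference, action and input names are illustrative assumptions, not the extension’s documented values:

```yaml
# Sketch of the parameter-update task - names are illustrative.
steps:
- task: PowerBIReportTasks@1               # hypothetical task reference
  inputs:
    connection: 'PowerBI-ProConnection'    # same Pro-licensed service connection
    action: 'UpdateParameters'             # illustrative action name
    workspaceName: 'CD-Demo-Workspace'     # illustrative workspace name
    parameterJson: |
      {
        "name": "server",
        "newValue": "sql090219.database.windows.net"
      }
```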

After running the pipeline again, the report will be overwritten, and the connection will change because our parameter has received a new value.

The next blog post will be dedicated to changing the source for a live connection, using a Premium capacity with deployment pipelines.