Use Azure KeyVault with DevOps to secure your credentials

Most DevOps pipelines use variables to store credentials and other values. With DevOps you have the possibility to hide/encrypt this data, but in an enterprise environment a KeyVault is usually used to keep sensitive data in a central store and to simplify management. This post shows you how to use a KeyVault and consume its secrets in your pipeline. It refers to my last blog post and uses the pipeline created there.

To start, I created an Azure KeyVault and stored a password in it which, in combination with the username, can be used to start a Power BI deployment pipeline.
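If you prefer scripting over the portal, a minimal sketch with the Az PowerShell module could look like this (the vault name, resource group, region, and secret name are assumptions):

# Minimal sketch using the Az PowerShell module (all names are placeholders)
# Requires: Install-Module Az.KeyVault

# Create the KeyVault in an existing resource group
New-AzKeyVault -Name 'kv-pbi-deployment' -ResourceGroupName 'rg-powerbi' -Location 'westeurope'

# Store the password of the deployment account as a secret
$secretValue = ConvertTo-SecureString 'YourStrongPassword' -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'kv-pbi-deployment' -Name 'PBIDeployPassword' -SecretValue $secretValue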

Next, you need to create a service connection to your Azure tenant:

Select Azure Resource Manager

Select service principal

Select your subscription and the Resource Group where the KeyVault is stored

After creating your Service Principal, you can get its name by clicking the Manage link

After clicking on the Service Principal, you will see its name, and you can add the Access Policy in the KeyVault:

Select Get and List so that the secrets and their values can be read

Add your Service Principal which was created by DevOps

Don’t forget to click Save. Next, go back to Azure DevOps and open the Library. Here you can add a new variable group which uses your KeyVault.

You need to click Authorize to enable the connection. Next, go to your pipeline variables and link the newly created variable group.

Inside your tasks you can use these variables. If your passwords contain special characters, put them into single quotes, as shown in the sketch below.
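A minimal sketch of how a PowerShell task could call the deployment script from my last post with a secret from the KeyVault-backed variable group (the user name and the secret name PBIDeployPassword are assumptions):

# Arguments of the PowerShell task (user name and secret name are assumptions).
# $(PBIDeployPassword) is resolved at runtime from the KeyVault-backed variable group;
# the single quotes protect special characters in the password.
.\pipeline.ps1 'deploy@contoso.com' '$(PBIDeployPassword)' 1 DeployPipeline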

By viewing the results of the pipeline run, you can see that the KeyVault is being used.

Continuous Deployment of Power BI Reports with PowerShell and Azure DevOps

For Azure DevOps you can find several add-ons to deploy Power BI content to a workspace and add-ons to trigger a deployment pipeline. But when you need more flexibility to decide when a pipeline in Power BI is triggered and which content you want to deploy, you can use the Power BI REST API by triggering it with PowerShell. For this purpose, I’ve created a script which can start a pipeline.

The script contains four parameters:

The UPN / username who can start the pipeline
The password of that user (if the password contains special characters, you should put it into single quotes)
The stage: 0 = from DEV to Test, 1 = from Test to Prod
The pipeline name

pipeline.ps1 username 'password' 1 DeployPipeline

So, you can call the script for different pipelines and for the different stages.
The script:

[CmdletBinding()]
param (
    [string]$ParamUser,         # UPN / username that starts the pipeline
    [string]$ParamPassword,     # Password of that user
    [int]$ParamstageOrder,      # The order of the source stage: Development (0), Test (1)
    [string]$ParamPipelinename  # The name of the deployment pipeline
)

$myPassword = $ParamPassword
$myUsername = $ParamUser
$stageOrder = $ParamstageOrder   
$pipelineName = $ParamPipelinename  

$moduleName = "MicrosoftPowerBIMgmt.Profile"
$module = Get-Module $moduleName -ListAvailable -ErrorAction SilentlyContinue

if (!$module) 
{
	Install-Module -Name $moduleName -Force -Scope CurrentUser -SkipPublisherCheck 
}


$password = ConvertTo-SecureString $myPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($myUsername, $password)

Login-PowerBIServiceAccount -Credential $credential
                  

# End Parameters =======================================

try { 
    # Get pipelines
    $pipelines = (Invoke-PowerBIRestMethod -Url "pipelines"  -Method Get | ConvertFrom-Json).value

    # Try to find the pipeline by display name
    $pipeline = $pipelines | Where-Object {$_.DisplayName -eq $pipelineName}

    if(!$pipeline) {
        Write-Host "A pipeline with the requested name was not found"
        return
    }

    # Construct the request url and body
    $url = "pipelines/{0}/DeployAll" -f $pipeline.Id

    $body = @{ 
        sourceStageOrder = $stageOrder

    options = @{
        # Allow creating a new artifact in the target stage workspace if needed
        allowCreateArtifact = $true

        # Allow overwriting an existing artifact in the target stage workspace if needed
        allowOverwriteArtifact = $true
    }
    } | ConvertTo-Json

    # Send the request
    $deployResult = Invoke-PowerBIRestMethod -Url $url  -Method Post -Body $body | ConvertFrom-Json

    "Operation ID: {0}" -f $deployResult.id
} catch {
    $errmsg = Resolve-PowerBIError -Last
    $errmsg.Message
}

I moved this script to a DevOps repo:

After that, I created a release pipeline:

The content of each stage:
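As a sketch of what the PowerShell task in each stage runs (the user name and variable names are assumptions), the two stages only differ in the stage order parameter:

# Stage "DEV to Test": source stage order 0
.\pipeline.ps1 '$(PBIDeployUser)' '$(PBIDeployPassword)' 0 DeployPipeline

# Stage "Test to Prod": source stage order 1
.\pipeline.ps1 '$(PBIDeployUser)' '$(PBIDeployPassword)' 1 DeployPipeline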

After running the pipeline, the content of each stage is synchronized:

Add comments to Power BI Visuals and the data by using Analysis Services Cubes

If you use Power BI reports, users can add comments to a report and to a visual. But mostly, the business users want to add comments to the data, e.g., to say “This data is not complete” or something like that. So, the use case for this blog post is to add comments to a report which uses an OLAP cube as its data source.
First, we need a simple data source for the cube. I’ve created two tables. The first one contains the data for the cube, and the second one is used to store the comments:


CREATE TABLE dbo.factData (
	id int not null identity(1,1) primary key
	, category nvarchar(255) not null
	, project nvarchar(255) not null
	, amount int not null
);
GO
INSERT INTO dbo.factData (category, project, amount)
VALUES 
	('Cat A', 'P 1', 1000)
	, ('Cat A', 'P 2', 2000)
	, ('Cat B', 'P 3', 1100)

GO

CREATE TABLE dbo.comments (
	id int not null identity(1,1) primary key
	, project nvarchar(255) not null
	, comment  nvarchar(1000) not null
	, creationdate datetime default GETDATE()
);
GO

INSERT INTO dbo.factData (category, project, amount)
VALUES 
	('Cat A', 'P 4', 1000)
GO

INSERT INTO dbo.comments (project, comment)
VALUES ('P 4', 'a comment ...')

After creating the tables, I added some sample data.
Next, I created an SSAS model like this:

After deploying the cube, I created a Power BI report to display the data:

To show the comments, I added a Tooltip page to show the comments for each project of the first page:

Now, we need a simple way to enter comments. For this purpose, I created a Power App with a SQL connection to the source database:

After creating the app, which points to the comment table, I embedded the app into the Power BI report. Now, a user of the report can add a comment to each project.

Now we need a technique to refresh the cube so the new comments show up. If your cube contains a lot of data, it is not a good idea to refresh everything. But with SSAS you can refresh just a single table, and this can be scheduled with Azure Data Factory every X minutes. This means that when a user adds a comment, he or she has to wait a minute or so to see it. The TMSL command that refreshes only the comments table looks like this:

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "Comments",
        "table": "comments"
      }
    ]
  }
}
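If you want to test this refresh outside of Data Factory, a minimal sketch with the SqlServer PowerShell module could look like this (the server name and credentials are assumptions):

# Minimal sketch: run the TMSL refresh against Azure Analysis Services
# (server name and credentials are assumptions)
Import-Module SqlServer

$tmsl = @'
{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "Comments", "table": "comments" }
    ]
  }
}
'@

$cred = Get-Credential   # an account with process permissions on the model
Invoke-ASCmd -Server 'asazure://westeurope.asazure.windows.net/myssasserver' -Credential $cred -Query $tmsl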

Continuous Deployment of Azure Analysis Services

If you have an Azure Analysis Services scenario and you are using different stages, like Dev/Test/Production, you can use Azure DevOps to deploy the model to the different stages. In my demo environment, I’m using one SQL Server with two databases that share the same schema and one Analysis Services server. The goal is simple: I want to deploy the model to different SSAS cubes which use different connections to the source databases.

To begin, I set up a Visual Studio Analysis Services Tabular project and pushed the code to a Git repo in Azure DevOps:

Next, I’ve created a SQL Server with the two databases on Azure, and an Analysis Services server which allows connections from Azure services on its firewall settings page. This is necessary so that DevOps can work with it.

For the next step, I set up a release pipeline in DevOps:

The artifact refers to the main branch where the code is stored. Then I added two stages to deploy the cube to DEV and Testing. Each stage contains this step:

The add-in can be found in the marketplace (see https://github.com/liprec/vsts-release-aas). You need to select the “model.bim” file, which contains your cube definition, and you need to store your credentials for the SSAS server and also for your data source. After running the pipeline, you will find the two different cubes on your server:

The connections are automatically aligned to the source databases. After that, you must process your cubes to fill them with data. This can be done with Azure Data Factory, and then you need to set up the permissions. Details for these two tasks can be found here:
https://www.flip-design.de/?p=1048
https://www.flip-design.de/?p=996
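If you just want to process the deployed cubes manually instead of using Data Factory, a minimal sketch with the SqlServer PowerShell module could look like this (the server name, database name, and credentials are assumptions):

# Minimal sketch: fully process one of the deployed tabular databases
# (server name, database name, and credentials are assumptions)
Import-Module SqlServer

$cred = Get-Credential   # an account with process permissions on the server
Invoke-ProcessASDatabase -Server 'asazure://westeurope.asazure.windows.net/myssasserver' `
    -DatabaseName 'CubeDev' -RefreshType Full -Credential $cred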