How to add users to roles in an Azure Analysis Services cube with Azure DevOps and PowerShell

To add users to roles in an Azure Analysis Services cube after a deployment, you can do this manually with Management Studio or with Visual Studio. But with Continuous Deployment (CI/CD) and different stages, you usually need to add different users/groups to the roles, so you need to do this dynamically with scripts. I found an add-in in the marketplace to do this with DevOps by using a pipeline. But unfortunately this add-in did not work for me, and it can only add one user/group per step. So, I decided to do this with PowerShell to add more than one user in one step.
I added my deployment user as an SSAS admin via the portal (you can also do this with Management Studio).
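By the way, this step can also be scripted. Here is a minimal sketch with the Az.AnalysisServices module, assuming you are already signed in with Connect-AzAccount; the resource group name and the deployment user below are placeholders:

# Sketch: set the deployment user as server administrator of the Azure AS instance.
# Note: -Administrator takes the complete, comma-separated list of admins.
Import-Module Az.AnalysisServices
Set-AzAnalysisServicesServer `
    -ResourceGroupName "rg-ssas" `
    -Name "ssas001" `
    -Administrator "deployuser@plenz.onmicrosoft.com"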

After that, I created this PowerShell script, added it to my repo, and published it to DevOps to use inside a pipeline. The script connects to my Analysis Services instance and adds a user to a role:


[CmdletBinding()]
param (
    [string] $ParamUser,
    [string] $ParamPassword
)

# Build a credential object from the values passed in by the pipeline
$password = ConvertTo-SecureString $ParamPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($ParamUser, $password)

# Sign in to Azure and load the SqlServer module (it contains the Analysis Services cmdlets)
Login-AzAccount -Credential $credential
Import-Module SqlServer -DisableNameChecking

# Add the user to the role of the deployed model
Add-RoleMember `
    -Server "asazure://westus.asazure.windows.net/ssas001" `
    -MemberName "asadmin@plenz.onmicrosoft.com" `
    -Database "adventureworks" `
    -RoleName "Internet Sales Manager" `
    -Credential $credential

Here you can see the script inside my repo in DevOps.
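And because the whole point was to add more than one user in one step, the call can simply be wrapped in a loop. A small sketch, assuming the members are handed to the script as an additional comma-separated parameter (the UPNs below are placeholders):

# Sketch: $ParamMembers is assumed to be an additional script parameter,
# e.g. "user1@plenz.onmicrosoft.com,user2@plenz.onmicrosoft.com"
foreach ($member in ($ParamMembers -split ",")) {
    Add-RoleMember `
        -Server "asazure://westus.asazure.windows.net/ssas001" `
        -MemberName $member.Trim() `
        -Database "adventureworks" `
        -RoleName "Internet Sales Manager" `
        -Credential $credential
}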

Next, I created a pipeline, and I added the needed credentials to connect to the SSAS server as variables.

After doing this, I created an Azure PowerShell task inside my pipeline. This task uses the PowerShell script from the repo and passes the variables as parameters to the script.
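Effectively, the task runs something like this (the script file name and the variable names are just placeholders for whatever you defined above):

# Sketch of the call the Azure PowerShell task executes.
# $(DeployUser) and $(DeployPassword) are DevOps pipeline variable macros,
# replaced by the agent before the script runs.
.\AddRoleMember.ps1 -ParamUser "$(DeployUser)" -ParamPassword "$(DeployPassword)"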

After running the pipeline, the user from the script is added to the role.

Continuous Deployment of Power BI Reports with Azure DevOps (Part 2, Live Connection to an OLAP Cube)

This is the second part of the series “Continuous Deployment of Power BI Reports”. This article is dedicated to the deployment of reports which use Analysis Services (OLAP) cubes as a data source. Here I’m using a Premium capacity (Premium per user works as well). I’ve set up a workspace with a deployment pipeline which is responsible for deploying the reports to the other stages. The workspace contains a report which is connected to an SSAS instance.

Next, I’ve configured a deployment rule to modify the data source when the report/dataset is deployed to the other stages. This works well, but in a bigger scenario with deployments of other artifacts, like SQL, AS, etc., this must be orchestrated with the other pieces, and a pipeline controlled only by the Power BI service will not fit the requirements in most cases.

So, back to Azure DevOps, where you can manage, configure, and set up your pipelines for the other artifacts. Here I have created a new pipeline which uses the “Power BI Pipeline Tasks” task.

In this task you must configure the connection and refer to your newly created pipeline from above. There it is also possible to choose which stage should be affected.

After the pipeline runs successfully, you can check your connections.

I think it makes sense to use Power BI deployment pipelines for a scenario like this, because you use native functions from Power BI and only control them from DevOps – but yes, it depends 😊

Continuous Deployment of Power BI Reports with Azure DevOps (Part 1, Direct Query and Import Mode)

In an enterprise environment, Power BI report development and deployment is a little bit different. The reports are mostly not created by business users and self-deployed; they are developed by IT professionals against dependent data sources like SQL Server, Analysis Services, and so on. So, this blog post covers the deployment with Azure DevOps and data sources which are connected via DirectQuery and/or Import mode. This post handles a scenario without Premium or Premium per user, so I don’t use Power BI deployment pipelines.

To begin, I’ve created a new repository for saving and organizing my Power BI reports.

Next, I cloned this repo in Visual Studio, added a blank solution with my PBIX file, and published the new files to DevOps.

In the DevOps web portal you will find the new project and the Power BI file.

The PBIX file contains a parameter for the data source, and it is used by the query. The result of this query is simply the server name. Again, in an enterprise environment you do not have just one data source; mostly you have different stages like Development, Integration/Testing, and a Production environment.

So, if you change the server name, the report will automatically connect to the new data source and display, in this case, the current server name.

I have also created a new, blank workspace in the Power BI portal, and now we need to publish the report to this workspace with DevOps.
Now I created a new release pipeline (using the blank template):

Next, we need an artifact. This is our Power BI file. I selected my new repo, the project and the PBIX.

For the next step we need a task inside our stage to deploy the report to the newly created workspace. Here I have added the “Power BI Report Tasks” task from the marketplace.

To configure this task, we need a valid connection with a Pro license. You also need to configure the name of the new workspace where the report should be deployed.

After saving and creating a new release of this pipeline the report should be deployed.

After the deployment has succeeded, you will find the artifact (the PBIX) in the workspace.

But the connection is the same as before. So, we need to add a new task to our pipeline:

Here we need to change the action to modify our parameter, and we need a small JSON snippet:

    {
      "name": "server",
      "newValue": "sql090219.database.windows.net"
    }

After running the pipeline again, the report will be overwritten, and the connection will be changed because our parameter has received a new value.
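If you don’t want to rely on the marketplace task, the same parameter change can also be made with the Power BI REST API from a plain PowerShell task. A minimal sketch, assuming you already have an access token for the Power BI API and know the workspace and dataset IDs (the placeholders below are not real values):

# Sketch: update the "server" parameter of the deployed dataset via the REST API.
# <workspaceId>, <datasetId> and $token are placeholders/assumptions.
$body = @{
    updateDetails = @(
        @{ name = "server"; newValue = "sql090219.database.windows.net" }
    )
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Method Post `
    -Uri "https://api.powerbi.com/v1.0/myorg/groups/<workspaceId>/datasets/<datasetId>/Default.UpdateParameters" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body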

The next blog post will be dedicated to changing the source for a live connection and using a Premium capacity with deployment pipelines.

What can you do with Power BI and perspectives?

Perspectives are very popular when you are using Analysis Services Tabular. With this feature you can create different views of your cube, hide tables, columns, and measures, and make the cube easier to use for each user group. That means, for example, that salespeople do not see measures which are only used for deep-dive analysis of your data. Often I see that organizations create different cubes instead, but mostly with the same underlying data model and the same implementation of each measure. This can also be managed by using OLS, but for business needs, perspectives are easier to handle.
Now, if you want to use perspectives with Power BI, you can create them with the Tabular Editor. For this example, I have created a Power BI file, and with the Tabular Editor I created a perspective named “Sales” and hid the column “Country”.

After publishing the report to Power BI, you cannot use the perspective. If you create a new report based on this dataset, there is no prompt asking which perspective you want to use:

But if you have Premium or Premium per User, you can access the XMLA endpoint:

With this URL you can access the cube and select the perspective or the whole cube. This URL can also be used with Paginated Report Builder or Excel.
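Just to illustrate the idea with a script: because the perspective is exposed like a cube of its own, an MDX query can address it directly. A small sketch against the XMLA endpoint, where the workspace (“Demo”) and the dataset (“SalesModel”) are assumed names:

# Sketch: query the "Sales" perspective through the XMLA endpoint.
Import-Module SqlServer -DisableNameChecking
Invoke-ASCmd `
    -Server "powerbi://api.powerbi.com/v1.0/myorg/Demo" `
    -Database "SalesModel" `
    -Query "SELECT FROM [Sales]" `
    -Credential (Get-Credential)   # an account with access to the workspace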

After selecting the Sales perspective, you see that the column “Country” is not available.

Conclusion: From my point of view, OLS is better if you want to secure your data. But if you only want to create different views of your data for different use cases, I think you should use this feature. But it depends 😉

Paginated Report Visual and exporting data for printouts.

The story is very simple, but it is often needed. In this case, you have a Power BI report and your sales managers need reports which can be used offline. In Germany we like printouts 😊
So, I created a Power BI report which connects to a database and provides insights for each customer. But this report is not designed for printouts; the consumers want to analyze the data and, if needed, generate a printout with the required data.

When you build a Power BI dataset which should be re-used with Paginated Reports, you should build explicit measures, because with Report Builder (RS) you cannot use implicit measures. This report is published to the service, because Paginated Reports can connect to this dataset, so you do not have to rebuild the model. Now I have created a Paginated Report which connects to this data model, as shown below.

Now I created the reports which are designed for printouts and saved them to the service.

Next, I used the Paginated Report visual inside Power BI Desktop and embedded the paginated report with the Company Name field as the parameter. In the properties of this visual I also activated the toolbar (needed for exporting) and the auto-apply of filters.

After publishing this Power BI report again, it looks like this, and your sales managers can analyze the data, export the needed information, or print out the pages:

So, I think it is very useful to re-use your datasets, because your model only exists once, and this makes maintenance a little bit easier.

Process SQL Analysis Services Cubes via the Azure Data Factory

If you work with Analysis Services on Azure, processing a cube works a little differently. On premises you mostly use the SQL Server Agent to do this, but this option is not available on Azure. There you can use Azure Automation, PowerShell, or a Data Factory (ADF). This post is dedicated to the ADF, because mostly you need to process a table, partition, or the whole database right after a data load.

First, we need to grant the ADF permissions on the SSAS cube, so we use its managed identity. To get the needed data, open the properties of the ADF in the Azure portal.

With the tenant ID and the ADF application ID, we can build the credential in the form app:<ADF application ID>@<tenant ID>.

Next, connect to the AS instance via Management Studio, go to the server administrators, and add this user. Via the Azure portal this is not possible, but afterwards you can view the result there.

To check whether a refresh has worked, I added a view to my data source and imported its data into the cube.

You should set the column to a date data type and use the format string G, so you also get the time in the output.

Okay, the infrastructure is ready! Now, we can add a new pipeline inside our ADF with a web activity.

https://eastus.asazure.windows.net/servers/blog001/models/blog/refreshes

This is the schema: https://<region>.asazure.windows.net/servers/<servername>/models/<modelname>/refreshes
For the fifth setting, add the following URL as the resource for the authentication: https://*.asazure.windows.net

The following JSON body, sent with the web activity, will initiate a full process:
{
    "Type": "Full",
    "CommitMode": "transactional",
    "MaxParallelism": 2,
    "RetryCount": 2,
    "Objects": []
}
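By the way, the same call can be tested outside of ADF, for example from PowerShell. A minimal sketch, assuming $token already holds a valid bearer token for the https://*.asazure.windows.net resource:

# Sketch: post the refresh request to the same endpoint the web activity uses.
$body = @{
    Type           = "Full"
    CommitMode     = "transactional"
    MaxParallelism = 2
    RetryCount     = 2
    Objects        = @()
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://eastus.asazure.windows.net/servers/blog001/models/blog/refreshes" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body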
After running the ADF pipeline you should get a full refresh of the cube: