What can you do with Power BI and perspectives?

Perspectives are very popular when you are using Analysis Services Tabular. With this feature you can create different views of your cube, hide tables, columns, and measures, and make the cube easier to use for each user group. That means, for example, that salespeople do not see measures which are only used for deep-dive analysis of your data. I mostly see organizations creating different cubes instead, but usually with the same underlying data model and the same implementation of each measure. This can also be managed with OLS (object-level security), but for pure business needs, perspectives are easier to handle.
Now, if you want to use perspectives with Power BI, you can create them with Tabular Editor. For this example, I have created a Power BI file and used Tabular Editor to create a perspective named "Sales" in which the column "Country" is hidden.

After publishing the report to Power BI, you cannot use the perspective directly. If you create a new report based on this dataset, there is no dialog asking which perspective you want to use:

But if you have Premium or Premium per User, you can access the XMLA endpoint:
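
The endpoint has the following form (the workspace name is just a placeholder):
powerbi://api.powerbi.com/v1.0/myorg/<Workspace Name>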

With this URL you can access the cube and select either the perspective or the whole cube. This URL can also be used with Paginated Report Builder or Excel.

After selecting the Sales perspective, you see that the column "Country" is not available.

Conclusion: From my point of view, OLS is usually the better choice if you want to secure your data. But if you only want to create different views of your data for different use cases, I think you should use this feature. But it depends 😉

Paginated Report Visual and exporting data for printouts

The story is very simple, but it is often needed. In this case, you have a Power BI report and your sales managers need reports which can be used offline. In Germany we like printouts 😊
So, I created a Power BI report which connects to a database and provides insights for each customer. But this report is not designed for printouts; the consumers want to analyze the data and, if needed, generate a printout with the data they need.

When you build a Power BI dataset which should be re-used with Paginated Reports, you should create explicit measures, because with RS (Paginated Reports) you cannot use implicit measures. This report is published to the service, because a Paginated Report can connect to this dataset, so you do not have to rebuild the model. I have now created a Paginated Report which connects to this data model, as shown below.

Now I created the reports which are designed for printouts and saved them to the service.

Next, I used the Paginated Report visual inside Power BI Desktop and embedded the Paginated Report with the Company Name field as the parameter. In the properties of this visual I also activated the toolbar (needed for exporting) and the auto-apply of filters.

After publishing this Power BI report again, it looks like this, and your sales managers can analyze the data, export the needed information, or print out the pages:

So, I think it is very useful to re-use your datasets, because your model only exists once, and this makes maintenance a little bit easier.

Process Azure Analysis Services Cubes via the Azure Data Factory

If you work with Analysis Services on Azure, there is another way to process a cube. On premises you mostly use the SQL Server Agent for this, but this option is not available on Azure. There you can use Azure Automation, PowerShell, or a Data Factory (ADF). This post is dedicated to ADF, because mostly you need to process a table, a partition, or the whole database right after a data load.

First, we need to grant the ADF permissions on the Analysis Services cube. For this we use the managed identity of the Data Factory. To get the needed data, open the properties of the ADF in the Azure portal.

With the tenant ID and the ADF application ID, we can build the credential.
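
For Azure Analysis Services the identity is written in the form app:<ADF application ID>@<tenant ID>; the GUIDs below are only placeholders:
app:00000000-0000-0000-0000-000000000000@11111111-1111-1111-1111-111111111111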

Next, connect to the AS instance via Management Studio, go to the server administrators, and add this user. Via the Azure portal this is not possible, but afterwards you can view the result.

To check whether a refresh has worked, I have added a view to my data source and imported its data into the cube.

You should set the column to a date data type and use the format string G, so that you also get the time in the output.

Okay, the infrastructure is ready! Now, we can add a new pipeline inside our ADF with a web activity.

https://eastus.asazure.windows.net/servers/blog001/models/blog/refreshes

This is the schema: https://<region>.asazure.windows.net/servers/<servername>/models/<modelname>/refreshes
For the fifth point, add the following URL for the authentication: https://*.asazure.windows.net

The following JSON body will initiate a full process:
{
    "Type": "Full",
    "CommitMode": "transactional",
    "MaxParallelism": 2,
    "RetryCount": 2,
    "Objects": []
}
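
If you want to test the same call outside of the ADF, here is a minimal PowerShell sketch of the REST request the web activity sends. It assumes you already have a valid bearer token for the https://*.asazure.windows.net resource in $token; the server and model names are the ones from above.

# Minimal sketch: the same refresh call the web activity makes,
# assuming $token already contains a bearer token for https://*.asazure.windows.net
$body = @{
    Type           = "Full"
    CommitMode     = "transactional"
    MaxParallelism = 2
    RetryCount     = 2
    Objects        = @()
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://eastus.asazure.windows.net/servers/blog001/models/blog/refreshes" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
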
After running the ADF pipeline you should get a full refresh of the cube:



How to make a Power BI report more useful for your users?

In the past I have seen so many reports which are not very handy for the users. The reports had a good data model, cool measures, and good visualizations, but mostly they could only be handled by the authors or product owners. The reason was mostly that other users did not get any help on how to use the reports and/or visuals. So, here are some ideas on how to provide some help to the report consumers:

1. An introduction page
Build up a page which provides some explanation about the report and the story. You can also build up a page like this for every chapter.

It is also very good to provide some screenshots of every page with some explanations and some buttons which take the users to that page. It is also a good idea to provide some links to your data catalog which explains the data and KPIs you use.

2. Use tooltips for providing help

If you have some visualizations which need an explanation, for example a chart with a hierarchy and a forecast, it can be very complex for a business user to use this visual, so it is a good idea to use a tooltip for this.

3. Use page help with bookmarks

You can implement bookmarks to display help for your whole page. To do this, make a screenshot of your page, put this image into PowerPoint or another image tool, describe the visuals, and put this picture back into a bookmark inside Power BI.

So, this can be very helpful for your report users …

…. to be continued.

Why should you create explicit measures?

Some people ask me why they should create measures in Power BI. Yes, you can take every column which contains a number, put it into the values of a visual, and create different calculations on it. My first answer to this question is: if you need a calculation, e.g. Sales Amount without canceled orders, you must filter the canceled orders out on every page or on the whole report. But if you want to compare the canceled orders to the others, it gets a little bit complicated. If you create one measure with and one without the filter, it is much easier.
Another point is when you want to analyze the dataset with Excel via the XMLA endpoint. There it is not possible to put a column which is not a measure into the values. The reason is that the Power BI dataset is an Analysis Services model behind the scenes.
So, I suggest creating a measure for every calculation; it does not matter if it is only a simple SUM() or a complex calculation. If you do so, you will not have trouble when using other tools than Power BI.

Stream Analytics and Power BI

Azure Stream Analytics can be used to stream data to different targets, including Power BI. In this post, I will show how to stream data from a Blob storage to a Power BI dataset. Normally, in this scenario you would get data from IoT devices or something else which pushes data very frequently, but to make this post a little bit easier for me and for you, in case you want to reproduce it, everything can be built with resources which are normally available. Okay, let us start to build up the environment.

First, I create a storage account. Please make sure that you create this account in the same region where you also want to create the Stream Analytics job. This is important to make the setup easier, to save money, and to minimize latency.

Next, we need our data intake container where we store the data which will be imported into our Power BI environment.
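
If you prefer scripting over the portal, a small sketch with Az PowerShell could look like this. The resource group name and the region are assumptions; the account and container names are the ones used later in this post.

# Sketch: create the storage account and the data intake container with Az PowerShell.
# "rg-streaming" and "eastus" are placeholder values, adjust them to your environment.
Connect-AzAccount
New-AzStorageAccount -ResourceGroupName "rg-streaming" -Name "streamingdata0105" `
    -Location "eastus" -SkuName "Standard_LRS"
$ctx = New-AzStorageContext -StorageAccountName "streamingdata0105" -UseConnectedAccount
New-AzStorageContainer -Name "data" -Context $ctx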

To this container I upload a simple CSV file
Category;Date;OrderId;Amount
Trucks;9/5/2021;763;2289

Imagine that your store sends you a file for every sale, and this data should be immediately available in your Power BI report.

Next, we need our Stream Analytics job; you will find this resource in the Azure Marketplace:

Please ensure that you create this job in the same region as the storage account. When creating this job, I leave everything at the defaults. The streaming units default to 3 and can be scaled up as needed after the resource has been created.

After a successful deployment we need to configure the data intake. You can do this by navigating to the inputs, selecting "new", and choosing Blob storage. For the authentication mode I select "connection string". With this option you may need to provide a key, which you can grab from your storage account under access keys. Please also make sure that you configure the data format as CSV with a semicolon as the delimiter.
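
If you do not want to copy the key from the portal, you can also read it with PowerShell. This is just a sketch; the resource group name is an assumption.

# Sketch: read the first access key of the storage account with Az PowerShell.
(Get-AzStorageAccountKey -ResourceGroupName "rg-streaming" -Name "streamingdata0105")[0].Value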

Now we need our output. For this I simply created a new workspace in my Power BI tenant. You can use an existing one, but to keep a better overview for myself, I created a new one. Under outputs, I created a new output of type Power BI.

Going back to the overview, we can edit our query, which selects the data from the input and pushes it to the output (in the simplest case something like SELECT * INTO [youroutput] FROM [yourinput]).

Now everything is ready, and we can start the job. After that, as soon as we provide new data, we get a new dataset in our Power BI workspace which contains the data from the blob storage. To provide new data, you can simply upload a new file. You can check this on the overview page.

By creating a new report in Power BI based on the new dataset, you can check the data. The dataset additionally contains some information about the data file and the import process. I think it is more fun when you automatically get new data, so for this I created a PowerShell script. To configure this script you also need the access key from your blob storage. After that, you can run this script in a loop (see the small sketch after the script).

# Connection data for the storage account (use your own access key and account name)
$accesskey = "f1vxAsiHHWvdEF8lfAExLq66rghWfqzXZoR3ugq9p5qnqDTnIt4DAUvz4luHEJ5w1Gbcg+yy6jT+EEvHJGaXtA=="
$StorageAccountName = "streamingdata0105"
$containerName = "data"

# Build some sample values based on the current time
$date = Get-Date
$category = "Trucks"
$amount = $date.Millisecond

# File name and local path of the generated CSV file
$filename = $category + $date.Hour + $date.Minute + $date.Second + ".csv"
$path = "c:\temp\files\"
$tempfilepath = $path + $filename

# Write the header and one data row (Category;Date;OrderId;Amount) to the file
$output =  @("Category;Date;OrderId;Amount")
$output += @($category + ";" + $date.Day + "/" + $date.Month + "/" + $date.Year + ";" + $amount + ";" + $amount * 3)
$output | Out-File -FilePath $tempfilepath

# Upload the file to the blob container where Stream Analytics picks it up
Connect-AzAccount
$StorageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $accesskey
Set-AzStorageBlobContent -File $tempfilepath -Container $containerName -Blob $filename -Context $StorageContext
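
And here is a small sketch of such a loop. It assumes you have saved the file-generation and upload part of the script above as c:\temp\new-salefile.ps1 (the path is just an assumption) and that Connect-AzAccount has been moved out of that file, so you only authenticate once.

# Sketch: authenticate once, then create and upload a new sales file every 30 seconds.
Connect-AzAccount
while ($true) {
    & "c:\temp\new-salefile.ps1"
    Start-Sleep -Seconds 30
}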