
OK, so we’ve got our semantic model sorted; time to start looking at dashboards. One thing I’d say, at least for Power BI and probably most other visualisation tools, is that developing dashboards/reports and the underlying models is an iterative cycle. Quite often you’ll only realise you need something once you start building the visuals. So, be prepared to make frequent trips between your dashboard and your semantic model for adjustments, tweaks or new additions. Don’t be tempted to just create new calculations in the report; put them where they belong. In some cases, this even means going back and making changes to the dimensional model in the warehouse.
Anyway, let’s get to some visualisations. First of all, for any dashboard you need to know your target audience. In this example I’ve obviously not got anyone, but for argument’s sake, let’s say it’s for someone at a national level looking at the state of the current 18 week referral to treatment (RTT) targets, who wants some rough information about how many patients there are, where they might be, and how the numbers have (or haven’t) changed over the last year.

From this, we’ll try to create a simple dashboard with a small number of visualisations to tell them at a glance what they want to know. Note that I’m not going to spend much time here making it super pretty with backgrounds, branding or other little visual touches, but ideally that would be something you would do given the time. We don’t want to overwhelm the user, so any good dashboard should only contain a relatively small number of visualisations. Think of a car dashboard: it’s got to be something you can glance at and tell almost instantly what you need to know! Obviously in this instance we’re not driving a car, so there’s some relaxation compared to that example, but the same principle applies. If the user has to think or search too much to find what they want to know, your dashboard needs reworking! With that in mind, let’s look at what we might create:
- Heat map – We could use this to show areas of the country where more people are over the 18 week limit, so ‘red’ areas have more people over the limit than ‘green’ areas.
- KPI – A ‘Key Performance Indicator’, usually just a number, perhaps with a ‘RAG’ (Red/Amber/Green) colour associated with it to give a figure and its current state. Sometimes it might also show the trend or trajectory the indicator is following.
- Line chart – A line chart is good for showing change over time for a single measure, or a small number of measures at most, otherwise it can get confusing to read. We could use this to show the number of new referral to treatment periods (i.e. the number of people starting a ‘wait’ for treatment) over time. This way we can see, for example, how many new referrals are coming in each month.
- Bar chart – A bar chart can be useful for comparing a small number of measures to each other over time. We could use this to see how the under/over 18 weeks figures compare over a period of time.
We’ll start by looking at the heat map…

With this visualisation, it’s easy to spot the areas with the highest percentage of patients waiting over the 18 week target. You don’t get exact figures with a heat map (unless you’re using a discrete ‘block’ heat map); instead we’re simply after a quick way to see any ‘hot’ spots at a glance.
One thing to note: the original data contains no ‘location’ information. So, as I alluded to at the top of this post, sometimes you’ll need to revisit your data and amend it. For this I looked for some geographical data for the ‘provider organisations’. As it was a one-off, I just uploaded it manually to the landing lakehouse, fed it through the import process as normal, then added it to the semantic model (see below). This gave me the town/city and postcode; either would work, but I went with the town/city field. This is of course the town or city of the hospital provider organisation, not the patient, but as this data contains no personal patient information, the hospital’s town or city is the best we can do here… Also note that I’m using the ‘period end’ measures, as I only want the data at the latest point in time we’ve got. If I’d used a ‘total’ measure instead, it would aggregate all the data over the entire time range. That would obscure any spikes or other point-in-time values and potentially give us a distorted picture of the measure we’re after…
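To make the ‘period end’ vs ‘total’ distinction concrete, here’s a rough DAX sketch. The table, column and measure names (`'RTT'`, `'Date'`, `Over18WeeksCount`) are hypothetical stand-ins, not the actual model’s names:

```dax
-- A 'total' style measure sums every row in the current filter context,
-- i.e. across all the months in the visual's date range:
Total Over 18 Weeks = SUM ( 'RTT'[Over18WeeksCount] )

-- A 'period end' style measure narrows that to the latest date in context,
-- giving a snapshot of the most recent month rather than a sum over the range:
Period End Over 18 Weeks =
CALCULATE (
    SUM ( 'RTT'[Over18WeeksCount] ),
    LASTDATE ( 'Date'[Date] )
)
```

On the heat map, the snapshot version is what avoids the aggregated-over-time distortion described above.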

The KPI… This one’s pretty straightforward: we take two measures, one being the same calculation but for the equivalent period in the previous year rather than the current year. This lets us quickly see whether we’re doing the same, better or worse than the same period last year.
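One way to build that second measure is with DAX time intelligence. This is just a sketch, assuming a hypothetical base measure name and a proper date table:

```dax
-- [Over 18 Weeks] is the current-period base measure (hypothetical name).
-- Shift its evaluation back one year for the comparison:
Over 18 Weeks LY =
CALCULATE (
    [Over 18 Weeks],
    SAMEPERIODLASTYEAR ( 'Date'[Date] )
)

-- A simple variance you could drive a RAG colour from:
Over 18 Weeks vs LY = [Over 18 Weeks] - [Over 18 Weeks LY]
```

Note that `SAMEPERIODLASTYEAR` wants a contiguous, marked date table; `DATEADD ( 'Date'[Date], -1, YEAR )` behaves equivalently here.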


The line chart, like the others, uses the ‘period end’ measures. However, we’re breaking it down by year and month (see the field list below). One thing I’d never realised isn’t yet possible in Power BI (strangely) is having a dynamic colour for the line. Although in this instance I’d keep a single colour anyway, as the number of new referrals doesn’t need a second visual cue in this chart.


Note the filter on this visual though, since we only want ‘new’ referrals. We need to select only those from the waiting status type, otherwise we’ll get not just new referrals but also the numbers of patients who have stopped waiting, are still waiting, etc…
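The same restriction could instead be baked into a dedicated measure, so the filter can’t be forgotten on any one visual. A sketch, where the dimension, column and value names (`'Waiting Status'`, `StatusType`, "New RTT period") are assumptions about the model:

```dax
-- Only count rows flagged as new RTT periods,
-- regardless of what the visual's own filter pane says:
New RTT Periods =
CALCULATE (
    [Period End Patient Count],  -- hypothetical base measure
    'Waiting Status'[StatusType] = "New RTT period"
)
```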

Finally, the bar chart… You could argue this might be better served as a line chart with two lines; I think there’s an argument for either. We couldn’t use something like a waterfall chart though, which is useful for tracking change between categories over time. In our case we don’t want to compare the two to each other, we just want to see each of the two over time in its own right.


We need to make sure we’re filtering out new RTT periods though, and only including ‘incomplete pathways’ (i.e. patients still waiting). This way we see only the patients who are still waiting, split by whether they’re over or under the 18 week limit!
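Again, this split could be expressed as a pair of measures rather than a visual-level filter. All the names here are hypothetical placeholders for whatever the model actually uses:

```dax
-- Patients still waiting, either side of the 18 week limit:
Still Waiting Under 18 Weeks =
CALCULATE (
    [Period End Patient Count],
    'Waiting Status'[StatusType] = "Incomplete pathway",
    'RTT'[WeeksWaiting] <= 18
)

Still Waiting Over 18 Weeks =
CALCULATE (
    [Period End Patient Count],
    'Waiting Status'[StatusType] = "Incomplete pathway",
    'RTT'[WeeksWaiting] > 18
)
```

Putting both on the bar chart gives the under/over comparison over time directly, with no reliance on anyone remembering the filter.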

Putting this all together, we have our final dashboard. I could maybe add a better background, or change the title or colour palette. However, this is just a brief look, not a full-on deep dive into dashboard design. Think of it more as a starting point and some basics…

There we go… 🫡

And with that, we reach the end of this little walkthrough. I hope this has been useful to someone out there; as with any development, there are things I would change and improve. However, hopefully this gives you a little insight into one of the many ways you could work with Microsoft Fabric and your data.
For reference purposes I’ve copied the entire Git repo for the workspace I used and uploaded it to my GitHub here, if you want to clone it, use it as a base, modify it, change it or whatever in your own Fabric tenant.
I’m already thinking of improvements, changes and upgrades I could do with this process. What would you change or improve?
Looks like they released some sneaky little Fabric updates… 😮
I don’t remember reading about this in any of the Fabric blogs, but it looks like they’ve finally updated Data Factory so it can dynamically specify connection details (as they said they would). You can now add dynamic content for the connection name itself and for the database name! Finally! This makes a fully metadata-driven pipeline process, like the one we’ve been designing, completely viable in Fabric. Before this, you’d have needed individual pipelines per database connection! See below…

Then below you can see the familiar ‘Add dynamic content’ option, so we can specify a database name.
