T-SQL comparing columns in two tables

Just another little T-SQL nugget here which I hope might help others. I recently had to do a quick check between two tables that were both copies of the same original table but had each been modified slightly. I wasn’t interested in the data, just which columns had been added or changed between the two tables. Luckily there is a handy way to do this using the T-SQL set operator EXCEPT. Quite often you’ll see this operator used to compare the result sets of two tables to find rows that exist in one but not the other, a bit like this:-

SELECT * FROM table1

EXCEPT

SELECT * FROM table2

However, this is quite simple to modify so that we can check which columns exist in one table but not the other. We can use the INFORMATION_SCHEMA.COLUMNS view to get the columns from both tables and, in combination with the EXCEPT operator, see where the differences are:-

-- Set the names of the tables we want to check
DECLARE @tableSchema1 VARCHAR(500) = 'dbo'
DECLARE @tableName1 VARCHAR(500) = 'SomeTableName'
DECLARE @tableSchema2 VARCHAR(500) = 'dbo'
DECLARE @tableName2 VARCHAR(500) = 'OtherTableName'

-- Find columns in table 1 that are NOT in table 2
SELECT
[ColumnsInTable1ThatAreNotInTable2] = [COLUMN_NAME]

FROM
[INFORMATION_SCHEMA].[COLUMNS]

WHERE
[TABLE_SCHEMA] = @tableSchema1
AND [TABLE_NAME] = @tableName1

EXCEPT

SELECT
[COLUMN_NAME]

FROM
[INFORMATION_SCHEMA].[COLUMNS]

WHERE
[TABLE_SCHEMA] = @tableSchema2
AND [TABLE_NAME] = @tableName2

ORDER BY
[COLUMN_NAME]

-- Find columns in table 2 that are NOT in table 1
SELECT
[ColumnsInTable2ThatAreNotInTable1] = [COLUMN_NAME]

FROM
[INFORMATION_SCHEMA].[COLUMNS]

WHERE
[TABLE_SCHEMA] = @tableSchema2
AND [TABLE_NAME] = @tableName2

EXCEPT

SELECT
[COLUMN_NAME]

FROM
[INFORMATION_SCHEMA].[COLUMNS]

WHERE
[TABLE_SCHEMA] = @tableSchema1
AND [TABLE_NAME] = @tableName1

ORDER BY
[COLUMN_NAME]
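
The scripts above only compare column names, so a column that exists in both tables but has had its data type altered won’t show up. A hedged tweak (run in the same batch as the variable declarations above) is to include the type details in the comparison as well:-

-- Find columns in table 1 whose name, data type or length
-- don't have an exact match in table 2
SELECT
[COLUMN_NAME]
,[DATA_TYPE]
,[CHARACTER_MAXIMUM_LENGTH]

FROM
[INFORMATION_SCHEMA].[COLUMNS]

WHERE
[TABLE_SCHEMA] = @tableSchema1
AND [TABLE_NAME] = @tableName1

EXCEPT

SELECT
[COLUMN_NAME]
,[DATA_TYPE]
,[CHARACTER_MAXIMUM_LENGTH]

FROM
[INFORMATION_SCHEMA].[COLUMNS]

WHERE
[TABLE_SCHEMA] = @tableSchema2
AND [TABLE_NAME] = @tableName2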

Data Warehousing Snippets – a slightly different ISDATE() function

The T-SQL function ISDATE() is great for determining whether a string is a valid date or not. I imagine you’ve used it many times if you’ve been writing T-SQL for a while. However, there is something that can make it act quite unexpectedly… the language settings!

Eh? The language settings? What has that to do with dates???

Well, it has to do with the various formats that constitute a valid date in different parts of the world. For example, in the U.S.A. dates are often written month first followed by the day, like April 1st. In the U.K. we’re the other way around, so you’d likely see 1st April. Taking this back to T-SQL, consider the string ‘2019-25-01’ – under some language settings this is a perfectly valid date (read as year, day, month). Then we have ‘2019-01-25’, which is perfectly valid under others. The problem is that the ISDATE() function takes the current language into account. Try the T-SQL below and you’ll see which dates are treated as valid. Note that we’re using SET LANGUAGE to explicitly set the language (duh!) for the session:-

DECLARE @invalidDateInUK VARCHAR(10) = '2019-25-01';
DECLARE @validDateInUK VARCHAR(10) = '2019-12-01';  
  
SET LANGUAGE italian;
SELECT 
ISDATE(@invalidDateInUK) -- returns 1: read as year-day-month under Italian settings
,ISDATE(@validDateInUK) -- returns 1

SET LANGUAGE english;
SELECT 
ISDATE(@invalidDateInUK) -- returns 0: month 25 is invalid when read as year-month-day
,ISDATE(@validDateInUK) -- returns 1

Hopefully now you can see what might happen if you don’t know (or don’t set) the language. If, for example, you ran the code below, you would see different results from the ISDATE() function depending on your language setting.

DECLARE @invalidDateInUK VARCHAR(10) = '2019-25-12'; 

SELECT 
ISDATE(@invalidDateInUK)

This of course makes the use of ISDATE() potentially risky if you aren’t explicit about the language settings. Is there any way we can improve or fix this? Well yes, there are several options. Just set the language explicitly, I hear you cry! Yes, in most cases this should suffice, and you should only ever write the least amount of code needed to solve the problem. Unfortunately that isn’t possible in some scenarios – for example, you cannot SET LANGUAGE inside a view. We can get around this by using a function. So how can we improve on ISDATE() itself so that the language doesn’t matter? Let’s try this:-

CREATE FUNCTION [Chronological].[IsValidDate] (
	@year INT
	,@month INT
	,@day INT
)
RETURNS BIT AS
BEGIN
	-- this function checks a specific year, month and day to see
	-- if it's a valid date

	-- first get the year as a varchar
	DECLARE @yearStr VARCHAR(4) = CAST(@year AS VARCHAR(4))

	-- generate a 2 character leading zero month
	DECLARE @monthStr VARCHAR(2) = CASE
		WHEN @month < 10 THEN '0' + CAST(@month AS VARCHAR(1))
		ELSE CAST(@month AS VARCHAR(2))
	END

	-- generate a 2 character leading zero day
	DECLARE @dayStr VARCHAR(2) = CASE
		WHEN @day < 10 THEN '0' + CAST(@day AS VARCHAR(1))
		ELSE CAST(@day AS VARCHAR(2))
	END

	-- now we can use the standard ISDATE() function
	-- note we are using the YYYYMMDD format of the ISO 8601 
	-- standard here as this should prevent problems with 
	-- dates being interpreted differently depending on the
	-- culture or language settings
	RETURN ISDATE(@yearStr + @monthStr + @dayStr)
END

Pretty simple and straightforward – the important part is really the last bit. We can use one of the ISO 8601 formats with the original ISDATE() function to achieve our goal. The preceding code is really only there to generate the string in that specific format. If we explicitly ask for the year, month and day as simple integer parameters then we’re sorted 😉
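
Here’s a quick sanity check, assuming the function above has been created (the ‘Chronological’ schema is created in the fiscal year post further down):-

SELECT
[Chronological].[IsValidDate](2019, 2, 29) AS NotALeapYear -- returns 0
,[Chronological].[IsValidDate](2020, 2, 29) AS IsALeapYear -- returns 1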

Hope you like this one. It’s not something you should use if there’s a simpler solution to the problem, but if you find yourself stuck needing ISDATE() in a view then this, or something like it, may be the solution.
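
One last aside: if you’re on SQL Server 2012 or later, TRY_CONVERT with an explicit style number is another language-independent option that works inside a view, returning NULL rather than an error when the string isn’t a valid date. A small sketch (style 23 should force the yyyy-mm-dd interpretation):-

SELECT
TRY_CONVERT(DATE, '2019-25-01', 23) AS InvalidSoReturnsNull
,TRY_CONVERT(DATE, '2019-12-01', 23) AS ValidSoReturnsTheDate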

Data Warehousing Snippets – Fiscal year function

So, I figured I’d start a little series and post various small (mainly) data warehousing related functions, procedures and other bits of (hopefully) useful T-SQL code. I’ll put these all into a complete Visual Studio SQL project and publish it to our GitHub repository (eventually) if you just want to download the whole thing.

Note that for some/most of the series, I will use specific schemas to categorise each function or procedure. This should lead to a nice clean database with a logical and easy to follow structure.

Since our function today is about dates, I’ve created a schema called ‘Chronological’ to make it clear what this function – and any related functions, procedures, tables etc. – is for. If you want to create this schema, just run the T-SQL script below on your database. Or, if you prefer, alter the function code further down to use another schema, such as ‘dbo’.

CREATE SCHEMA [Chronological]

So, without further ado, the first in the series (and today’s snippet) will be about calculating the fiscal year:-

CREATE FUNCTION [Chronological].[FiscalYear] (
	@date DATE
	,@separator VARCHAR(1)
)
RETURNS VARCHAR(7)
AS
BEGIN
	-- first calculate the starting year: if the month is
	-- before April then the starting year is the previous year,
	-- otherwise it's just the current year
	DECLARE @startingFiscalYear INT = CASE 
		WHEN MONTH(@date) < 4 THEN YEAR(@date) - 1
		ELSE YEAR(@date)
	END	

	-- return our fiscal year, note that we just add 1 to 
	-- the starting year to get the ending year
	RETURN CAST(@startingFiscalYear AS VARCHAR(4)) + @separator + RIGHT(CAST(1 + @startingFiscalYear AS VARCHAR(4)), 2)
END

Note that I’ve used a DATE data type for the first parameter, although this function will also work if you pass a DATETIME or SMALLDATETIME value. The second parameter is the separator you would like to use; it’s likely this will be either a ‘-’ or a ‘/’, as most fiscal years are displayed like 2019-20 or 2019/20. Note too that the code assumes the fiscal year starts in April, UK style.
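
A quick example, assuming the function has been created as above:-

SELECT
[Chronological].[FiscalYear]('2019-03-31', '/') AS LastDayOfFiscalYear -- returns 2018/19
,[Chronological].[FiscalYear]('2019-04-01', '-') AS FirstDayOfFiscalYear -- returns 2019-20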

Anyway, that’s it for today. I’ll try and post regular entries in this series so that eventually you should have a decent amount of infrastructure that you could use for all sorts of projects.

SSIS and Excel drivers 32bit vs 64bit

I recently had to revisit an SSIS package that required some changes to import data from an Excel file. It had been a long time since I’d last done one of those – I still remember the pain now…

If you’ve ever tried to read Excel files using SSIS then it’s likely you’ve come across the old 32-bit vs 64-bit driver problem. In my case, I’ve got 64-bit Office installed, but when developing SSIS packages using SSDT you can only read Excel files if you’ve got the 32-bit driver installed. Don’t get me started on why…

This wouldn’t be a problem, but unfortunately the 32-bit driver will complain if you try to install it, saying something like:-

You cannot install the 32-bit version of Microsoft Access Database Engine as you have 64-bit Office products installed etc…

Well that’s great…

But wait, there is a way to install it without any complaints! Open up a command prompt (usually just type “cmd” and hit Enter) and change directory to wherever you downloaded the driver. Once you’re there, type the name of the file followed by a space and then “/quiet” (at least for the latest version of the drivers as of 2019 – use “/passive” for older versions). So in your command prompt window/console you should have something like:-

C:\Users\SomeUser\Downloads\AccessDatabaseEngine.exe /quiet

Hit Enter and after about 30 seconds it should have installed – note that it will not tell you when it has finished. Open up your SSIS project and try again. Hopefully this time it should work, with no more errors like:-

The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine.

So, assuming that you are developing SSIS packages that need to read Excel files, and you’re on a 64-bit version of Windows, when you download the driver you must choose the 32-bit version, NOT the 64-bit version!

The latest available version of the driver can be downloaded from here: https://www.microsoft.com/en-us/download/details.aspx?id=54920

Developing an Angular 7, Angular Material app with a .NET Core 2.2 API using Visual Studio 2017 (part 1)

After using both Angular and Visual Studio together for quite a while, I figured I’d write a post about getting everything going. Why – aren’t there many tutorials out there that deal with this already? Yes, but I’ve always had to pull together various aspects from many of them and/or end up adding or changing parts to suit what I was trying to do. I’m hoping that other developers have hit the same issues as I have, so this is to help them…

The other issue I have, at least from my point of view, is that quite a lot of Angular development and tutorials are done using an editor such as Visual Studio Code. It’s a fantastic editor and great for pure Angular development, but it’s not a full-on IDE like Visual Studio itself. So whilst developing a pure Angular app using Visual Studio Code is a most excellent experience, it’s unlikely to be an all-encompassing one if your particular project also involves, say, a .NET Core API and likely some kind of database too. Lastly, creating a .NET Core API (IMHO) is a much better experience using full-fat Visual Studio.

This leads me onto my own personal problem: I’m lazy… I would much rather just use one or the other, rather than having to switch between them. In fact, to be honest, I’d love to just stay in Visual Studio for everything and not have to come out and write my T-SQL in SSMS (SQL Server Management Studio) either. Yes, yes, yes, I know you can do that in Visual Studio, but I’ve been using SSMS for years and Visual Studio has never matched it (yet).

I know what you’re thinking…

If you’re switching between Visual Studio and SSMS, why not Visual Studio Code?

Good point, but my personal development routine (at least) involves working on the front end (Angular) and back end (.NET Core) more or less at the same time. The database side of things is usually something that I only flit into occasionally; quite often it already exists or I’ve mostly developed it beforehand.

That’s my excuse and I’m sticking to it…

So let’s get moving. First you’ll need to install Visual Studio 2017 Community edition (or higher) if you’ve not already; you can download it from here. When installing via the Visual Studio installer, you’ll need to add the parts we’ll need for .NET Core development and Node.js, which are listed below (note the blue ticked selections):-

There’s also a really useful Visual Studio extension that comes in handy when developing Angular apps: the ‘Open Command Line’ extension. You can find it here if you want to add it to Visual Studio. This is another great extension by Mads Kristensen, who has a large number of other Visual Studio extensions to his name – be sure to check them out when you’ve got a chance. It basically allows you to open a command prompt by right-clicking anywhere in your Solution Explorer.

Installing Angular 7 with Angular Material

Right, the next part is to get Angular 7 installed. There’s a complete guide to this on the official Angular website here, so I won’t bother repeating it – just come back when you’ve got it installed.

However, before we install Angular Material we need to create our Visual Studio project first. Open Visual Studio and create a new project; when the ‘New Project’ dialog pops up, select the ‘ASP.NET Core Web Application’ (see image below). In this example we’re going to be using C#:-

You’ll then be presented with another dialog asking you which project template to use (see image below). For the purposes of our tutorial we’re going to choose the ‘API’ template – ignore the ‘Angular’ template, as at the time of writing it only uses version 6 of Angular.

The latest stable version of the .NET Core framework is version 2.2, which you’ll notice is selected in one of the drop downs at the top of the dialog. You should also see a button at the bottom right of the dialog which can be used to ‘Change Authentication’, but for our purposes we are not going to use any authentication. At least not for this first tutorial…

OK, you should eventually end up with a .NET Core 2.2 API project with files and folders listed in your ‘Solution Explorer’ window in Visual Studio. Something a bit like this:-

Adding Angular

Now, we’re going to add an Angular app to the project. To do this, we need to open a command prompt and navigate to the location of our project folder. If you installed the ‘Open Command Line’ extension mentioned earlier, just right-click on the project name, go to the ‘Open Command Line’ option and select ‘Default (cmd)’ from the menu. Otherwise just open a command prompt as normal and change directory to your project folder.

Assuming that your command prompt is now open at the location of your project, all we need to do is use the Angular CLI (which, if you followed the install instructions on the Angular site, should be installed) to create an app for us. In this case we’ll call the app ‘Angular7’, so type the following into the command prompt and press Enter:-

ng new Angular7

You should then be asked some questions. We’ll need routing, so choose ‘Y’ for routing – almost any app will end up needing it. We’re also going to use SASS for styles, so choose ‘SASS (.SCSS)’ when it asks you to choose a stylesheet format. If you’ve never heard of, or never used, SASS before don’t worry, we’ll get to that later…

Now we’ve got an Angular app we can add Angular Material to it. If you’ve never heard of it before, it’s a collection of components that follow Google’s Material Design specification, built specifically for use with Angular. This is what we’ll use to build our UI (user interface) in this particular app. In previous apps I’ve built I used Bootstrap, but once I began experimenting with Angular Material I just fell in love with the whole idea, look and feel of it. It’s personal preference of course, and since this is my blog, we’ll use it here 🙂

Adding Angular Material to an Angular app is very easy now. In the command prompt, type the following and press Enter (this will change the directory to the ‘Angular7’ folder that was just created):-

cd Angular7

Now we should be inside our Angular app folder with the command prompt. Let’s type the following into the command prompt and press enter:-

ng add @angular/material

Hopefully you should then see it doing stuff, and eventually it will ask you to choose a prebuilt theme from a list. For our tutorial we’ll just choose the first in the list, so just press Enter. You’ll then be asked if you want to add ‘HammerJS’ for gesture recognition – we don’t really need this for our tutorial, so you can choose Y or N. You’ll then be asked if you want browser animations; we’ll choose Y for this one. Now it goes off and installs and configures everything you need to get going. Previously you had to do most of the configuration manually, but now that’s it.

Wiring Angular up to the Visual Studio project

Now we’ve got our basic Angular app, we need to wire it up to work with Visual Studio. Most of this configuration comes from the standard Angular template in Visual Studio, but there’s a bit of extra configuration that’s usually needed.

To open the project configuration file, just right click on the project name in Visual Studio and choose ‘Edit **** .csproj’ where the **** is the name of your project, see below:-

You should end up with something a bit like this below, see here for more information on what you can do:-

Delete what’s in there for now and copy and paste the code below. I’ve put comments above each part to explain what is going on. Note that some parts are taken straight from the standard Visual Studio Angular template, others are changes or additions I’ve made:-

<Project Sdk="Microsoft.NET.Sdk.Web">
  
  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>

    <!--
    Currently there is a bug with 'InProcess' so for 
    now just use 'OutOfProcess' until it's fixed
    
    See: https://github.com/aspnet/AspNetCore/issues/4206
    -->
    <AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>

    <!-- 
    Set this to the folder name (relative to the project) 
    where your active Angular app is
    -->
    <SpaRoot>Angular7\</SpaRoot>

    <!-- 
    Use this to set the 'base href' in Angular's 'index.html' file, so for example
    if you're using a virtual directory then it would be set to '/myvirtualdirectory'
    or if it's at the site root then just '/'
    -->
    <AngularBaseHref>/</AngularBaseHref>

    <!-- 
    This is needed to stop TypeScript compiling and IDE error highlighting, the
    TSCONFIG.JSON settings alone *DO NOT* (or did not) work! This may be fixed in a 
    future version of Visual Studio as it seems to come and go with updates, so
    this code below could potentially be removed in the future
    
    N.B. the 'TypeScriptBaseUrl' value should be the same as the 'SpaRoot' that
    was set previously above!!!
    -->
    <TypeScriptCompileBlocked>True</TypeScriptCompileBlocked>
    <TypeScriptToolsVersion>3.1</TypeScriptToolsVersion>
    <TypeScriptBaseUrl>Angular7\</TypeScriptBaseUrl>
    <TypeScriptExperimentalDecorators>True</TypeScriptExperimentalDecorators>
    <TypeScriptEmitDecoratorMetadata>True</TypeScriptEmitDecoratorMetadata>

    <!--
    Default is true for IsPackable, so we're going to have to turn it off
    -->
    <IsPackable>false</IsPackable>
    
    <!--
    Exclude various folders
    -->
    <DefaultItemExcludes>$(DefaultItemExcludes);$(SpaRoot)node_modules\**</DefaultItemExcludes>
  </PropertyGroup>

  <!-- 
  In addition, because there is no TSCONFIG.JSON in the 'root' of the 
  Visual Studio project, Visual Studio ignores above settings sometimes, 
  so here we must put some stuff to stop it compiling .TS into .JS files! 
  Again, this seems to come and go with updates to Visual Studio, I leave 
  it here just in case. 
  
  N.B. If some of the fundamental configuration of Angular changes 
  though, you may need to update some of these settings!
  -->
  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <TypeScriptTarget>ES5</TypeScriptTarget>
    <TypeScriptJSXEmit>None</TypeScriptJSXEmit>
    <TypeScriptModuleKind />
    <TypeScriptCompileOnSaveEnabled>False</TypeScriptCompileOnSaveEnabled>
    <TypeScriptNoImplicitAny>False</TypeScriptNoImplicitAny>
    <TypeScriptRemoveComments>False</TypeScriptRemoveComments>
    <TypeScriptOutFile />
    <TypeScriptOutDir />
    <TypeScriptGeneratesDeclarations>False</TypeScriptGeneratesDeclarations>
    <TypeScriptNoEmitOnError>True</TypeScriptNoEmitOnError>
    <TypeScriptSourceMap>True</TypeScriptSourceMap>
    <TypeScriptMapRoot />
    <TypeScriptSourceRoot />
  </PropertyGroup>

  <ItemGroup>
    <!-- 
    Don't publish the SPA source files, but do show 
    them in the project files list 
    -->
    <Content Remove="$(SpaRoot)**" />    
    <None Include="$(SpaRoot)**" Exclude="$(SpaRoot)node_modules\**" />
  </ItemGroup>  

  <ItemGroup>
    <!--
    This includes the basic .NET Core NuGet packages we'll need
    -->
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.2.0" PrivateAssets="All" />
    <PackageReference Include="Microsoft.AspNetCore.SpaServices.Extensions" Version="2.2.0" />
    <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.2.1" />
  </ItemGroup>

  <ItemGroup>
    <!--
    If you're using web.config files on IIS then this is a 
    tool which can transform web.config files depending on
    the project configuration (e.g. Debug, Release etc...)
    
    See these links for more information:-
    https://github.com/nil4/dotnet-transform-xdt#msbuild
    https://msdn.microsoft.com/en-us/library/dd465326.aspx
    
    .NET Core XML Document Transformation usage: 
      dotnet transform-xdt [options]    
    -->
    <DotNetCliToolReference Include="Microsoft.DotNet.Xdt.Tools" Version="2.0.0" />
  </ItemGroup>
  
  <!--
  This code below is taken straight from the standard template for Angular projects
  in Visual Studio. I've modified it a little to pass our 'AngularBaseHref' value
  to the npm build process
  -->
  <Target Name="DebugEnsureNodeEnv" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('$(SpaRoot)node_modules') ">
    <!-- 
    Ensure Node.js is installed, as it's required to run and build the client side
    stuff!
    -->
    <Exec Command="node --version" ContinueOnError="true">
      <Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
    </Exec>
    <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />
    <Message Importance="high" Text="Restoring dependencies using 'npm'. This may take several minutes..." />
    <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  </Target>

  <Target Name="PublishRun" AfterTargets="ComputeFilesToPublish">
    <!-- 
    As part of publishing, ensure the JS resources are freshly built in production mode. Note
    the $(AngularBaseHref) which is passing our build variable value to the NPM script!
    -->
    <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
    <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build -- --prod --base-href=$(AngularBaseHref)" />

    <!-- 
    Include the newly-built files in the publish output 
    -->
    <ItemGroup>
      <DistFiles Include="$(SpaRoot)dist\**; $(SpaRoot)dist-server\**" />
      <DistFiles Include="$(SpaRoot)node_modules\**" Condition="'$(BuildServerSideRenderer)' == 'true'" />
      <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
        <RelativePath>%(DistFiles.Identity)</RelativePath>
        <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
      </ResolvedFileToPublish>
    </ItemGroup>
  </Target>

  <!--
  Use the web.config transformation tool as detailed previously above...
  
  See these links for more information:-
  https://github.com/nil4/dotnet-transform-xdt#msbuild
  https://msdn.microsoft.com/en-us/library/dd465326.aspx
    
  .NET Core XML Document Transformation usage: 
    dotnet transform-xdt [options]
  -->
  <Target Name="ApplyXdtConfigTransform" BeforeTargets="_TransformWebConfig">
    <PropertyGroup>
      <_SourceWebConfig>$(MSBuildThisFileDirectory)Web.config</_SourceWebConfig>
      <_XdtTransform>$(MSBuildThisFileDirectory)Web.$(Configuration).config</_XdtTransform>
      <_TargetWebConfig>$(PublishDir)Web.config</_TargetWebConfig>
    </PropertyGroup>
    <Exec Command="dotnet transform-xdt --xml "$(_SourceWebConfig)" --transform "$(_XdtTransform)" --output "$(_TargetWebConfig)"" Condition="Exists('$(_XdtTransform)')" />
  </Target>

</Project>

The main ones to note are ‘SpaRoot’ – the relative path from the Visual Studio project folder to the Angular app folder – and ‘AngularBaseHref’, which is something I’ve added because we had a server with several apps, each under a virtual directory like:-

OK, that’s enough of that. Let’s move on to the code in our .NET Core API… Open up the ‘Startup.cs’ file in the root of the project. We’ll need to add some code here to fully wire our Angular app up to the rest of the project:-

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SpaServices.AngularCli;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace VisualStudio2017WithAngular7
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

            // We need to specify where the files that Angular has built 
            // are located when the production build is being done
            services.AddSpaStaticFiles(configuration =>
            {
                configuration.RootPath = "Angular7/dist";
            });
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            app.UseHttpsRedirection();

            // We'll need static files and the SPA static files!
            app.UseStaticFiles();
            app.UseSpaStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller}/{action=Index}/{id?}");
            });

            app.UseSpa(spa =>
            {
                // During development this is the folder where the Angular app is located
                spa.Options.SourcePath = "Angular7";

                if (env.IsDevelopment())
                {
                    // You can use either 'start', or if you've configured HMR for
                    // your Angular app then use 'hmr'. Make sure there's a 'hmr' entry
                    // in your 'package.json' file though!
                    spa.UseAngularCliServer(npmScript: "start");
                }
            });
        }
    }
}

Note the last part, ‘spa.UseAngularCliServer(npmScript: “start”);’ – ‘start’ is what most apps use, but later in this tutorial we’ll swap it for ‘hmr’ to use HMR (Hot Module Replacement). This is a cool feature of webpack that lets us change code in our modules and have them swapped out live. In other words, make a change to your code in Visual Studio and the browser gets updated automatically with the new code!

Luckily for us the complex configuration of webpack is handled behind the scenes in Angular 7 apps, so we don’t need to worry about how all that works. Be glad, it’s not for the faint-hearted! Note that this feature is only used during development; it does not get included or configured when the production code is finally built!

One final thing to change and then we should be able to run our app and see something. Under ‘Properties’ in Solution Explorer there’s a file called ‘launchSettings.json’. We need to remove a line from this file, otherwise when we run the app we’ll just get some results spat back at us from the .NET Core API’s example ‘Controller’:-

Open the file and remove the lines that say:-

"launchUrl": "api/values"

There are two entries we need to remove: one is highlighted below, then further down there’s one more. Just delete the whole line (in both places) and save the file:-

OK, I think we’re ready to go. If you start debugging with Visual Studio, the Angular app should get built and eventually (give it a while) you should see the standard Angular template app page show up in whichever browser Visual Studio has been configured to use, like this:-

If you’ve got here then you’ve now successfully got a Visual Studio 2017 project running with Angular 7 and .NET Core 2.2 together. Or sort of… Neither Angular nor .NET Core is doing much here, and we’re not using our .NET Core API yet either. The browser is really only showing results from our Angular code…

OK, that’s our basic setup done (more or less). We’ll move on to using Angular Material and some API functionality in the next part…

SQL Server database in suspect mode

Over the many years I’ve been using the various versions of Microsoft SQL Server, there’s an error that I’ve only ever seen once, a long time ago, and never since. It was the dreaded database in ‘suspect’ mode (see here for a list of database modes).

After a few moments of bewilderment, face pulling and a cry of whaaaat the…? I headed off to the internet, as we all do, to see what on earth it meant – apart from the obvious meaning that something was ‘suspect’ of course…

According to Microsoft, this particular error means:-

At least the primary filegroup is suspect and may be damaged. The database cannot be recovered during startup of SQL Server. The database is unavailable. Additional action by the user is required to resolve the problem.

Oh s**t, what the? Why? This is a bl**dy SQL Server cluster with a SAN – how has this happened?? It was not something I was expecting to see on the setup I was working with at that time, which I thought was pretty decent…
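
As an aside, if you ever need to check which state your databases are in, sys.databases will tell you:-

-- Check the current state of every database on the instance
SELECT
[name] AS DatabaseName
,state_desc AS DatabaseState -- e.g. ONLINE, SUSPECT, EMERGENCY

FROM
sys.databases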

Somehow, and I’m still not sure how, we managed to fix it with no issues, although I’m sure others aren’t as lucky as we were. I was fully expecting that we’d have to restore from the last backup. We tried various fixes, like taking the database offline and online again. I luckily resisted the temptation to detach it, as I’ve since learned that is not a good idea.

At the time we used the sp_resetstatus stored procedure (which now appears to be on the list for deprecation), then set the database to emergency mode, followed by a DBCC CHECKDB command…

-- reset status
EXEC sp_resetstatus 'NAME_OF_SUSPECT_DB';

-- set emergency mode
ALTER DATABASE NAME_OF_SUSPECT_DB SET EMERGENCY;

-- perform integrity check
DBCC CHECKDB(NAME_OF_SUSPECT_DB);

Unfortunately it didn’t end there. We then had to put the database into SINGLE_USER mode and run another CHECKDB with the repair option, lastly setting the database back to MULTI_USER mode:-

-- put into single user mode
ALTER DATABASE NAME_OF_SUSPECT_DB SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- try and repair errors
DBCC CHECKDB(NAME_OF_SUSPECT_DB, REPAIR_ALLOW_DATA_LOSS)

-- back to normal ;-)
ALTER DATABASE NAME_OF_SUSPECT_DB SET MULTI_USER

In our case this fixed the problem and to this day I’ve no idea why it actually happened. It’s something I’ve never seen again and hopefully never will. Take care of your databases and they’ll take care of you 🙂

T-SQL dynamic calendar in a view

A while back I had a problem crop up where I needed to create a calendar but could only do this in a view. At first I thought: sure, no problem, just wheel out the old recursive CTE (common table expression) pattern with some dates, like so:-

;WITH cteDates AS (
	SELECT
	CAST('1900-01-01' AS DATE) AS DateValue

	UNION ALL

	SELECT
	DATEADD(DAY, 1, DateValue)

	FROM
	cteDates

	WHERE
	DateValue <= GETDATE()
)

SELECT
DateValue

FROM
cteDates OPTION (MAXRECURSION 32767) -- limit is 32767!

Note that I’m starting at 1st January 1900 (using the ISO 8601 date format of ‘YYYY-MM-DD‘ – see more about T-SQL dates here) as this calendar needs to link to birth dates. Then we finish at today’s date of course.

There is one small problem however: 32767 is the maximum value that the MAXRECURSION hint will accept (0 would remove the limit entirely, but query hints can’t be used inside a view anyway). If you run this query you’ll soon see the error message:-

The statement terminated. The maximum recursion 32767 has been exhausted before statement completion.

The CTE solution gets us up to ‘1989-09-18’ (18th September 1989), which would be OK if we were still in the eighties. Of course we could use a much later start date and never hit the limit, but if you need to start at 1st January 1900 and go up to the year 2019 then it IS a problem.

How do we get around this?

In a stored procedure this is not a problem; a simple loop would do the trick. However, remember that we need to write our code as a view, so we’re limited by a number of factors – mainly the fact that we can’t use variables (or query hints). Essentially we need another method to dynamically generate rows. Enter the T-SQL VALUES table value constructor, which should be familiar to all SQL developers.

Err… OK, how does that solve the problem??

First of all (if you didn’t know already) we can use VALUES to generate rows of data. However, writing thousands of VALUES rows is obviously not going to be dynamic or practical. We must break it down a little further… What we can do is break a date into its parts, so:-

  • Year
  • Month
  • Day

So if we can generate years, months and days for all the dates we need, that might just work. Something like this – though of course we’d still need over 100 values to cover all the years. In the example I’ve stopped at 1909, but I’m sure you get the idea:-

SELECT
YearValue

FROM (
	VALUES 
	(1900), (1901), (1902), (1903), (1904), (1905), (1906), (1907), (1908), (1909)
) v(YearValue)

So, we must break the years down into something smaller and more manageable. Remember primary school (or elementary school for those folk outside of Britain) when you learned about units, tens, hundreds and thousands?

Oh please no, not maths from school…!

SELECT
Thousands.MilleniumValue
,Hundreds.CenturyValue
,Tens.DecadeValue
,Units.YearValue

,TheYear = Thousands.MilleniumValue 
	+ Hundreds.CenturyValue 
	+ Tens.DecadeValue 
	+ Units.YearValue

FROM (
	VALUES 
	(1000), (2000)
) Thousands(MilleniumValue)

CROSS JOIN (
	VALUES 
	(0), (100), (200), (300), (400), (500), (600), (700), (800), (900)
) Hundreds(CenturyValue)

CROSS JOIN (
	VALUES 
	(0), (10), (20), (30), (40), (50), (60), (70), (80), (90)
) Tens(DecadeValue)

CROSS JOIN (
	VALUES 
	(0), (1), (2), (3), (4), (5), (6), (7), (8), (9)
) Units(YearValue)

ORDER BY
TheYear DESC

Right, things are starting to get interesting. The above script will generate a year sequence using a combination of the units, tens, hundreds and thousands values. Note the CROSS JOINs, which join all values to all other values, so for each ‘unit’ we get all the values for each ‘ten’, ‘hundred’ and ‘thousand’ and so on… You may have noticed that this gives us year values from 1000 to 2999, which is a little more than we want, but we can filter the unwanted years out later, as you’ll see. If we apply the same logic to months and days we end up with:-

SELECT
TheYear = Thousands.MilleniumValue 
	+ Hundreds.CenturyValue 
	+ Tens.DecadeValue 
	+ Units.YearValue

,MonthOfTheYear.MonthValue
,DayOfTheMonth.DayValue

FROM (
	VALUES 
	(1000), (2000)
) Thousands(MilleniumValue)

CROSS JOIN (
	VALUES 
	(0), (100), (200), (300), (400), (500), (600), (700), (800), (900)
) Hundreds(CenturyValue)

CROSS JOIN (
	VALUES 
	(0), (10), (20), (30), (40), (50), (60), (70), (80), (90)
) Tens(DecadeValue)

CROSS JOIN (
	VALUES 
	(0), (1), (2), (3), (4), (5), (6), (7), (8), (9)
) Units(YearValue)

-- 12 months in a year
CROSS JOIN (
	VALUES 
	(1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12)
) MonthOfTheYear(MonthValue)

-- max of 31 days in any month
CROSS JOIN (
	VALUES
	(1), (2), (3), (4), (5), (6), (7), (8), (9), (10)
	,(11), (12), (13), (14), (15), (16), (17), (18), (19), (20)
	,(21), (22), (23), (24), (25), (26), (27), (28), (29), (30)
	,(31)
) DayOfTheMonth(DayValue)

ORDER BY
TheYear DESC

Now we can put it all together and get a reasonably fast and simple query to generate date values, which we can use as the basis for a simple calendar view:-

SELECT
DateValue
,DateNumber
,YearValue
,MonthValue
,DayValue

FROM (
	SELECT	
	CAST(CAST(rd.DateNumber AS VARCHAR(8)) AS DATE) AS DateValue
	,rd.DateNumber
	,rd.YearValue
	,rd.MonthValue
	,rd.DayValue

	FROM (
		SELECT
		-- convert our values into an integer e.g. 19000101
		-- we can use this to convert to a date later and/or a calendar key value
		((((Thousands.MilleniumValue + Hundreds.CenturyValue + Tens.DecadeValue + Units.YearValue) * 100) + MonthOfTheYear.MonthValue) * 100) + DayOfTheMonth.DayValue AS DateNumber

		-- add the units, tens, hundreds and thousands to get our year
		,Thousands.MilleniumValue + Hundreds.CenturyValue + Tens.DecadeValue + Units.YearValue AS YearValue
		
		-- month and day values
		,MonthOfTheYear.MonthValue
		,DayOfTheMonth.DayValue

		FROM (
			VALUES 
			(1000), (2000)
		) Thousands(MilleniumValue)

		CROSS JOIN (
			VALUES 
			(0), (100), (200), (300), (400), (500), (600), (700), (800), (900)
		) Hundreds(CenturyValue)

		CROSS JOIN (
			VALUES 
			(0), (10), (20), (30), (40), (50), (60), (70), (80), (90)
		) Tens(DecadeValue)

		CROSS JOIN (
			VALUES 
			(0), (1), (2), (3), (4), (5), (6), (7), (8), (9)
		) Units(YearValue)

		-- 12 months in a year
		CROSS JOIN (
			VALUES 
			(1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12)
		) MonthOfTheYear(MonthValue)

		-- max of 31 days in any month
		CROSS JOIN (
			VALUES
			(1), (2), (3), (4), (5), (6), (7), (8), (9), (10)
			,(11), (12), (13), (14), (15), (16), (17), (18), (19), (20)
			,(21), (22), (23), (24), (25), (26), (27), (28), (29), (30)
			,(31)
		) DayOfTheMonth(DayValue)

		WHERE
		-- filter out date values we don't need
		((((Thousands.MilleniumValue + Hundreds.CenturyValue + Tens.DecadeValue + Units.YearValue) * 100) + MonthOfTheYear.MonthValue) * 100) + DayOfTheMonth.DayValue 
		BETWEEN 19000101 AND CAST(CONVERT(VARCHAR(8), GETDATE(), 112) AS INT)	
	) rd

	WHERE
	-- only get values where its a valid date!	
	ISDATE(CAST(rd.DateNumber AS VARCHAR(8))) = 1
) d

ORDER BY
d.DateValue DESC

There we go: the basis of a simple calendar, but in a view – no table or stored procedure required. The only thing left to do now is add more fields for different aspects of the date, e.g. fiscal year, fiscal month of the year etc…

P.S. remember to remove the ORDER BY clause if you do make this into a view; either that, or change the SELECT into a SELECT TOP (100) PERCENT WITH TIES (although even then SQL Server doesn’t guarantee the order of the rows returned from the view).
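
As a minimal sketch of that restriction (using a trivial three-row query standing in for the calendar query above, and a made-up view name):-

-- This version fails with "The ORDER BY clause is invalid in views..."
-- CREATE VIEW dbo.OrderedNumbers AS
-- SELECT v.n FROM (VALUES (1), (2), (3)) v(n) ORDER BY v.n

-- This version compiles, although even then SQL Server makes no guarantee
-- about the order the rows come back in when you query the view
CREATE VIEW dbo.OrderedNumbers
AS
SELECT TOP (100) PERCENT WITH TIES
v.n

FROM (
	VALUES (1), (2), (3)
) v(n)

ORDER BY
v.n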

Quick T-SQL functions #2

Here’s another little function for some string parsing which, whilst basic and limited, has its uses. I needed something to parse some very simple text that we were getting from somewhere as part of an import. We needed to get the string values between a ‘start’ and an ‘end’ string. It was never going to change format, so I didn’t need to over-complicate it; the text we were getting was similar to this, for example:-

entry:value1:end…entry:value2:end…entry:value3:end

and so on…

CREATE FUNCTION [dbo].[fnFindStringMatchesBetweenSimpleStartAndEndStrings] (
	@text NVARCHAR(MAX)
	,@startString NVARCHAR(1024)
	,@endString NVARCHAR(1024)
)
RETURNS @t TABLE (
	StringMatch NVARCHAR(MAX)
)
AS
BEGIN
	-- we'll use this to store our current 'start index' position			
	DECLARE @startIndex INT = CHARINDEX(@startString, @text)

	-- and this will of course be the end index position
	DECLARE @endIndex INT = CHARINDEX(@endString, @text, @startIndex + LEN(@startString)) 		
	
	-- now loop through and get the rest, until no more are found
	WHILE @startIndex > 0 AND @endIndex > 0
	BEGIN
		-- save our match			
		INSERT INTO @t (
			StringMatch
		) VALUES (
			SUBSTRING(@text, @startIndex + LEN(@startString), @endIndex - (@startIndex + LEN(@startString)))
		)		

		-- next positions
		SET @startIndex = CHARINDEX(@startString, @text, @endIndex)
		SET @endIndex = CHARINDEX(@endString, @text, @startIndex + LEN(@startString))
	END

	-- send back our matches (if any)
	RETURN
END

So, to use the function to find the ‘value**’ strings in between the ‘entry:’ and ‘:end’ strings, you would use:-

SELECT
StringMatch

FROM
dbo.fnFindStringMatchesBetweenSimpleStartAndEndStrings(N'entry:value1:end...entry:value2:end...entry:value3:end', N'entry:', N':end')

The function just returns a simple table with a single column ‘StringMatch’ containing all the values:-

  • value1
  • value2
  • value3

As I mentioned earlier, this function is simple and limited, so you couldn’t use it to parse text that could have nested ‘start’ and ‘end’ strings, for example. That being said, I am a firm believer in writing the least amount of code that solves the specific problem, and in this particular case extending the function to parse nested strings would have added a small performance hit for no reason – essentially I would be writing redundant code.

What would you do in this case, would you make it more advanced and write the redundant code or not?

T-SQL snippets #1

Here’s another little snippet which comes in handy when you want to know the state of your SQL Server’s memory. Using sys.dm_os_sys_memory it’s super easy…

SELECT
(total_physical_memory_kb / 1024) / 1024 AS TotalPhysicalMemoryInGB
,(available_physical_memory_kb / 1024) / 1024 AS AvailablePhysicalMemoryInGB
,((total_physical_memory_kb / 1024) / 1024) - ((available_physical_memory_kb / 1024) / 1024) AS UsedPhysicalMemoryInGB
,(total_page_file_kb / 1024) / 1024 AS TotalPageFileInGB
,(available_page_file_kb / 1024) / 1024 AS AvailablePageFileInGB
,system_cache_kb / 1024 AS SystemCacheInMB
,kernel_paged_pool_kb / 1024 AS KernelPagedPoolInMB
,kernel_nonpaged_pool_kb / 1024 AS KernelNonpagedPoolInMB
,system_high_memory_signal_state AS SystemHighMemorySignalState
,system_low_memory_signal_state AS SystemLowMemorySignalState

FROM
master.sys.dm_os_sys_memory

Note the divisions by 1024: the underlying values are in kilobytes, so dividing by 1024 gives us a value in megabytes (1024 kilobytes = 1 megabyte), and dividing by 1024 again gives us gigabytes (1024 megabytes = 1 gigabyte).
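
As a quick worked example of the arithmetic:-

-- e.g. a server reporting 16,777,216 KB of physical memory
SELECT
16777216 / 1024 AS InMegaBytes -- 16384
,(16777216 / 1024) / 1024 AS InGigaBytes -- 16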

Note that you’ll find some places where values are given slightly differently and/or rounded, for example… 1000 megabytes = 1 gigabyte. Strictly speaking, the 1024-based figures are the binary (base 2) units, while the 1000-based figures follow the decimal (base 10) SI prefixes. The decimal convention has become an adopted standard, so you’ll commonly find things like hard disk manufacturers listing disk space or drive capacity that way, whereas some operating systems report the 1024-based figure.

Why has this happened? Who knows, but I suspect it’s probably because it’s simpler to remember and calculate with 1000 rather than 1024 🙂

T-SQL stored procedure dependencies in a database

I recently needed to get a list of the tables, views and other stored procedures that every stored procedure in a specific database was referencing. Pretty straightforward – we just need to use some of the system tables/views. There are lots of examples on the web, but here’s my little take on the subject…

SELECT
ss.name AS StoredProcedureSchema
,so.name AS StoredProcedureName

,sed.referenced_database_name AS ReferencedDatabaseName
,sed.referenced_schema_name AS ReferencedSchemaName
,sed.referenced_entity_name AS ReferencedObjectName
,rso.type_desc AS ReferencedObjectType

FROM
sys.sql_expression_dependencies sed

-- join objects so we can get details like the name of 
-- the stored procedure
INNER JOIN
sys.objects so
ON
sed.referencing_id = so.object_id

-- and its schema
INNER JOIN
sys.schemas ss
ON
so.schema_id = ss.schema_id

-- get information about the objects that
-- the stored procedure is referencing
INNER JOIN
sys.objects rso
ON
sed.referenced_id = rso.object_id

-- and their schema as well
INNER JOIN
sys.schemas rss
ON
rso.schema_id = rss.schema_id

WHERE
so.type = 'P' -- just stored procedures

You’ll find that the ‘ReferencedDatabaseName’ will be NULL if your stored procedures are only referencing objects in the same database.
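
You can also flip this around if you only care about one particular object. A hedged sketch – the table name here is just an example:-

-- Find every stored procedure that references a particular table
SELECT
ss.name AS StoredProcedureSchema
,so.name AS StoredProcedureName

FROM
sys.sql_expression_dependencies sed

INNER JOIN
sys.objects so
ON
sed.referencing_id = so.object_id

INNER JOIN
sys.schemas ss
ON
so.schema_id = ss.schema_id

WHERE
so.type = 'P' -- just stored procedures
AND sed.referenced_entity_name = 'SomeTableName'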

There’s quite a lot of other information regarding dependencies that can be gleaned from SQL Server, so stay tuned for more little titbits like this soon… 🙂