Webinar: Unit Testing with SQL Server and tSQLt

I ran into Sebastian Meine at the PASS Summit a few weeks ago and we were talking testing. Sebastian is the founder and developer of tSQLt, which I really like using. We’ve done some teaching together and I’ve delivered a number of sessions on tSQLt at various events, but we wanted to get more people interested in testing code.

I had a session at PASS, which was very well attended. 150+ people came, which was stunning to me. I was expecting to see 20, and afterwards Sebastian and I started talking about what else we could do.

We’ve decided to do a webinar, but one driven by you. We are looking for you to ask questions about code you’d like tested, or which you’re unsure of how to approach. Leave a comment here, or put your question in the webinar registration. The details are:

Unit Testing in SQL Server with tSQLt
https://attendee.gotowebinar.com/register/7623481833734658561
Thurs, Nov 19, 2015 11:00 AM – 12:00 PM EST

Join unit testing experts Steve Jones and Sebastian Meine for this exciting opportunity to learn about unit testing and the tSQLt framework to improve your T-SQL code quality and maintainability. If this day/time is not good for you, register anyway so you receive a link to the recording when it is available.

Using Automated Tests to Raise Code Quality

Abstract

Agile development practices can speed the development of applications and increase the rate at which you can deploy features for your customers. But unless you include a high level of test coverage in your code, these practices can also increase the number of bugs that are released. Databases can be particularly challenging for developers. This session examines database refactorings that introduce bugs, which are detected by automated tests. This approach allows developers to rapidly fix their code before a customer is affected.

Level: 200

Demos

These are the demos shown in this talk.

  • Adding test data inline
  • Adding test data in a procedure
  • Adding test data from a separate set of tables
  • Exclusions to SQL Cop or other tests with Extended Properties
  • Using FakeFunction
  • Using SpyProcedure
  • The boundary issues with multiple requirements for a function
  • 0-1-Some testing
  • Catching dependencies

Downloads

Here are the downloads for the talk.

A tSQLt Mistake – Debugging a Test

While I was working on a test the other day, it kept failing. Not a big surprise, but I couldn’t figure out why. When I looked at tsqlt.testresults, I saw extra rows. Double rows in fact, and that threw me.

This was my Assemble code.

-- Assemble
CREATE TABLE #Expected (
yearnum int
, monthnum TINYINT
, salestotal money
)


INSERT INTO #Expected
( yearnum
, monthnum
, salestotal
)
VALUES
( 2012, 11, 2500.23 )
, ( 2012, 12, 2200.15 )
, ( 2013, 1, 2656.75 )

SELECT *
INTO #actual
FROM #Expected AS e

EXEC tsqlt.FakeTable @TableName = N'MonthlySales', @SchemaName='dbo';

INSERT dbo.MonthlySales
VALUES
( 11, 1000.00)
, ( 11, 1500.23)
, ( 12, 2200.15)
, ( 13, 1000.00)
, ( 13, 1656.00)
, ( 13, 0000.75);

Here was the output (ignoring the failure messages):

[tArticles].[test sum of sales by month for multiple months] failed: (Failure) The calculations are incorrect

|_m_|yearnum|monthnum|salestotal|
+---+-------+--------+----------+
|=  |2012   |11      |2500.2300 |
|=  |2012   |12      |2200.1500 |
|=  |2013   |1       |2656.7500 |
|>  |2013   |1       |2656.7500 |
|>  |2012   |12      |2200.1500 |
|>  |2012   |11      |2500.2300 |

Hmmm. What’s going on? Why don’t the rows match? If I run the query, I see the results I expect. Is it the query or test?

In this case, you read the results as showing that I have three rows that are the same in my expected and actual tables (the @Expected and @Actual parameters in the assert). However, I also have three extra rows in the actual table, which appear to be duplicates.

If I go back to the Assemble, I see a pattern that’s a problem. Some people might see hassles like this as a reason to give up on testing. Others might build a better pattern. I’ll do the latter.

In this pattern I create the expected table and then insert the expected results. Then I create my actual table from the expected one, to keep the schemas the same and avoid repeating code. However, in this case I have a bug.

The bug is that I’m copying the expected results into the actual table. If I asserted at this point, the test would pass. However, I then run the query and insert its results, which happen to be the same as the expected results (my query works), so the comparison sees duplicates. If the query didn’t work, I might spend a lot of time debugging it, but here I can tell my test code itself is buggy.

I have two choices to fix this.

  1. Add a WHERE clause of WHERE 1 = 0 (no rows inserted)
  2. Move the creation of the actual table.

My first thought was to adjust the pattern to this:

CREATE TABLE #Expected (
yearnum int
, monthnum TINYINT
, salestotal money
)

SELECT *
INTO #actual
FROM #Expected AS e


INSERT INTO #Expected
( yearnum
, monthnum
, salestotal
)
VALUES
( 2012, 11, 2500.23 )
, ( 2012, 12, 2200.15 )
, ( 2013, 1, 2656.75 )

I create #actual and #Expected together, so the empty copy is made before any expected results are inserted. Alternatively, I could leave the original order and do this:

CREATE TABLE #Expected (
yearnum int
, monthnum TINYINT
, salestotal money
)


INSERT INTO #Expected
( yearnum
, monthnum
, salestotal
)
VALUES
( 2012, 11, 2500.23 )
, ( 2012, 12, 2200.15 )
, ( 2013, 1, 2656.75 )

SELECT *
INTO #actual
FROM #Expected AS e
WHERE 1 = 0

EXEC tsqlt.FakeTable @TableName = N'MonthlySales', @SchemaName='dbo';

Maybe the best thing is to be careful and do this:

-- Assemble
CREATE TABLE #Expected (
yearnum int
, monthnum TINYINT
, salestotal money
)

SELECT *
INTO #actual
FROM #Expected AS e
WHERE 1 = 0

INSERT INTO #Expected
( yearnum
, monthnum
, salestotal
)
VALUES
( 2012, 11, 2500.23 )
, ( 2012, 12, 2200.15 )
, ( 2013, 1, 2656.75 )

Combining the two ideas keeps the pattern insulated from later refactoring, whether that’s moving code around or adding more code in the middle.

Remember, tests are code. This is why tests should fail first: watching a test go from failing to passing gives you some confidence that it is your code that works and causes the test to pass.

Testing Sum By Month

I’ve been on a testing kick, trying to formalize the ad hoc queries I’ve been running into something that’s easier to track. As a result, when I look to solve a problem, I write a test to verify that what I think will happen actually happens.

The Problem

I saw a post recently where someone wasn’t sure how to get the sum of a series of data items by month, so I decided to help them. They asked for a year number, a month number, and a total, so something like this:

Year   Month   Sales
2012       1   1500.23
2012       2   1480.00
2012       3   1945.00
2015       7   8933.11

They mentioned, however, that they had the sales month stored as an integer. Not as 201201, but as 1, 2, 3, with a base date of Jan 1, 2012. That’s strange, but it’s a good place to write a test.

I like to start with the results, since if I don’t know the results, how can I tell if my query works? Let’s get a test going. I’ll start by creating my expected results. I’ve come to like using temporary tables and limited data. I also like to test some boundaries, so let’s cross a year.

CREATE PROCEDURE [tArticles].[test sum of sales by month for multiple months]
AS
BEGIN
-- Assemble
CREATE TABLE #Expected (
yearnum INT
, monthnum TINYINT
, salestotal NUMERIC(10,2)
)


SELECT *
INTO #actual
FROM #Expected AS e

INSERT INTO #Expected
( yearnum
, monthnum
, salestotal
)
VALUES
( 2012, 11, 2500.23 )
, ( 2012, 12, 2200.15 )
, ( 2013, 1, 2656.75 )

I like to create the actual results table here as well, which allows me to then easily insert into this table from a procedure as well as a query. In this case, I’ll use a query, but I could use insert..exec.
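If I went the procedure route, a minimal sketch might look like this (dbo.GetMonthlySalesSummary is a hypothetical procedure that returns the same three columns):

-- Hypothetical: capture a procedure's result set in #actual via INSERT..EXEC
INSERT INTO #actual
( yearnum, monthnum, salestotal )
EXEC dbo.GetMonthlySalesSummary;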

Once I have results, I need to set up my test data. In this case, I’d probably go grab the rows from a specific period, put them in a temp table, and use Data Compare to get them. Or make them up. It doesn’t matter. I just need the data that allows me to test my query.


EXEC tsqlt.FakeTable @TableName = N'MonthlySales', @SchemaName = N'dbo';

INSERT dbo.MonthlySales
VALUES
( 11, 1000.00)
, ( 11, 1500.23)
, ( 12, 2200.15)
, ( 13, 1000.00)
, ( 13, 1656.00)
, ( 13, 0000.75);

I don’t try to make this hard. I use easy math, giving myself a few cases: one, two, and three rows of data for the months. If I think this isn’t representative, I can add a few more. I don’t try to be difficult; I’m testing a query. If I had rows that might not matter, or I wanted to test whether 0-value rows are ignored, I could do that.
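For example, to check that a zero-value sale doesn’t skew a month, I could add one more row to the faked table; the expected total for 2012/12 would stay 2200.15. This is just an extra case I might add, not part of the original test:

-- Optional extra boundary case: a zero-value sale in an existing month
INSERT dbo.MonthlySales
VALUES ( 12, 0.00 );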

Now I need a query. Something simple: a SUM() with a GROUP BY. However, I also need to turn 11 into 2012/11, so there’s a small algorithm involved.

An easy way to do this is to start with a base date. I’d prefer that to live in a table, but I can do it inline.

INSERT #actual
SELECT yearnum    = DATEPART(YEAR,  DATEADD(MONTH, datenum, '20120101'))
     , monthnum   = DATEPART(MONTH, DATEADD(MONTH, datenum, '20120101'))
     , salestotal = SUM(ms.salesamount)
FROM dbo.MonthlySales AS ms
GROUP BY DATEPART(YEAR,  DATEADD(MONTH, datenum, '20120101'))
       , DATEPART(MONTH, DATEADD(MONTH, datenum, '20120101'))
ORDER BY DATEPART(YEAR,  DATEADD(MONTH, datenum, '20120101'))
       , DATEPART(MONTH, DATEADD(MONTH, datenum, '20120101'));

I’ll insert this data into #actual, which tests my query.

The final step is to assert my tables are equal.

-- Assert
EXEC tsqlt.AssertEqualsTable
@Expected = N'#EXPECTED',
@Actual = N'#actual',
@FailMsg = N'The calculations are incorrect';

The Test

What happens when I execute this test? I can use tSQLt.Run or my SQL Test plugin.
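The T-SQL route is just a call to tSQLt.Run with the test name:

EXEC tSQLt.Run '[tArticles].[test sum of sales by month for multiple months]';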

(screenshot: running the test from SQL Test)

In either case, I’ll get a failure.

(screenshot: the test failure in SQL Test)

When I check the messages, I see the output from tSQLt. In this case, none of my totals seem to match.

(screenshot: the tSQLt failure output showing mismatched totals)

What’s wrong? In my case, I’m adding the integer to the base month, which means a datenum of 1 maps to 2012-02, not 2012-01. I’m a month off. A quick sanity check of the arithmetic confirms it, and then I can adjust the query.
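A quick check with plain DATEADD shows the shift:

-- datenum = 1 with a base of Jan 1, 2012 lands in February
SELECT DATEADD(MONTH, 1, '20120101');  -- 2012-02-01

-- moving the base back one month puts datenum = 1 in January
SELECT DATEADD(MONTH, 1, '20111201');  -- 2012-01-01

With the base moved back to Dec 1, 2011, the adjusted query is: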

-- Act
INSERT #actual
SELECT yearnum    = DATEPART(YEAR,  DATEADD(MONTH, datenum, '20111201'))
     , monthnum   = DATEPART(MONTH, DATEADD(MONTH, datenum, '20111201'))
     , salestotal = SUM(ms.salesamount)
FROM dbo.MonthlySales AS ms
GROUP BY DATEPART(YEAR,  DATEADD(MONTH, datenum, '20111201'))
       , DATEPART(MONTH, DATEADD(MONTH, datenum, '20111201'))
ORDER BY DATEPART(YEAR,  DATEADD(MONTH, datenum, '20111201'))
       , DATEPART(MONTH, DATEADD(MONTH, datenum, '20111201'));
Now when I run my test, it passes.

(screenshot: the passing test)

Why Bother?

This seems trivial, right? What’s the point of this test? After all, I can easily check this with a couple quick queries.

Well, let’s imagine that we decide to move this base date into a table, or that we alter it. We want our queries to continue to work. I can make this test part of an automated routine that runs each time the CI process runs, or each time a developer executes tSQLt.RunAll in this database (shared or populated from a VCS). That protects the query from being broken by refactoring.
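For instance, if the base date moved into a hypothetical dbo.SalesConfig table, the query under test might become something like this, and the existing test would tell me immediately whether the change broke the results:

-- Sketch only: dbo.SalesConfig is an assumed single-row table holding basedate = '20111201'
INSERT #actual
SELECT yearnum    = DATEPART(YEAR,  DATEADD(MONTH, ms.datenum, c.basedate))
     , monthnum   = DATEPART(MONTH, DATEADD(MONTH, ms.datenum, c.basedate))
     , salestotal = SUM(ms.salesamount)
FROM dbo.MonthlySales AS ms
CROSS JOIN dbo.SalesConfig AS c
GROUP BY DATEPART(YEAR,  DATEADD(MONTH, ms.datenum, c.basedate))
       , DATEPART(MONTH, DATEADD(MONTH, ms.datenum, c.basedate));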

More importantly, I can take the expected results and alter them first if needed, say if someone decides to change this to a windowing query. I could plug the new query into the test (or better yet, use a proc and put that call in the test), and if I change the code, I can verify it still works.

Write tests. You need them anyway, so why not formalize them? The code around this query, mocking up test data, is something I do anyway, so it only takes a few more minutes to verify that the code works. I can tune the query, alter indexes, run performance tests, and be sure the code is still running cleanly.

http://www.sqlservercentral.com/Forums/Topic1716471-1292-1.aspx#bm1716535

tSQLt in Azure SQL Database

I was excited to hear about the v12 Azure databases supporting CLR assemblies. Mainly because I’ve been doing testing work with tSQLt and wanted to run tests in an Azure database.

I upgraded a SQL Azure database to v12 and connected in SSMS. My first step was to open the tSQLt install script, tSQLt.class.sql.

(screenshot: the tSQLt.class.sql script open in SSMS)

I had the file open, connected to Azure. That was the easy part. Next I needed to actually send the batch across and compile the code:

(screenshot: executing the script against the Azure database)

My next step was to execute it. However that didn’t work as well as I expected.

(screenshot: the errors returned by the script)

There are multiple errors here, but it’s possible that one error causes the others. I pinged Sebastian Meine, the creator of tSQLt, about the EXTERNAL_ACCESS permission set, and he noted there was only one method that needs it.

So I decided to make a change. First, a search.

(screenshot: searching the tSQLt.class.sql script)

That got me to the assembly installation.

(screenshot: the CREATE ASSEMBLY statement in the script)

I decided to try changing this to something that’s contained inside the database. Since Azure is a bit of a black box, I thought SAFE was the way to go.

(screenshot: the permission set changed to SAFE)
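The edit itself is just changing WITH PERMISSION_SET = EXTERNAL_ACCESS to WITH PERMISSION_SET = SAFE on the CREATE ASSEMBLY statement in the script. Once it’s installed, a quick catalog query confirms what permission set actually took (assuming the assembly name is tSQLtCLR, which is what the script I used creates):

-- Confirm the permission set of the installed tSQLt CLR assembly
SELECT a.name
     , a.permission_set_desc   -- SAFE_ACCESS after the change
FROM sys.assemblies AS a
WHERE a.name = N'tSQLtCLR';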

With that change made, I compiled the entire file again. This was the only change I made.

(screenshot: the script completing successfully)

That’s cool, but does it work? I connected in Object Explorer and then opened SQL Test. I saw my database, and I could create a test.

(screenshot: the database and new test in SQL Test)

However I got an error on the connection and creation of a procedure.

(screenshot: the error from SQL Test)

Even from Object Explorer, I refreshed the procedures, but got this:

(screenshot: the error in Object Explorer)

This appears to be a SQL Server 2014 RTM issue (actually, I was on a CU, but not SP1). It is supposed to be corrected in SP1.

However the procedure was created, and I could alter it.

(screenshot: altering the test procedure)

More importantly, I can execute it.

(screenshot: the test executing and passing)

If I change the code:

(screenshot: the changed test code)

Now it fails.

(screenshot: the failing test)

It appears that tSQLt can now work in v12 databases in Azure, so start adding those unit tests to your database projects.

Unit Testing in Philadelphia

I’m helping teach a pre-conference session on Friday, June 5, 2015 at the Microsoft office in Philadelphia. This is an all-day, paid event that looks at how you can use a framework to write unit tests for your T-SQL code. We’re the day before SQL Saturday #390 in Philadelphia.

I really believe in testing, and am trying to advocate for it in many places. I’ve delivered a testing session that has been well received at quite a few events and this is the first time I’m trying a full day training class.

I am working with Sebastian Meine, the founder of tSQLt, to present the class. We’ve got a busy outline, looking at a variety of ways that you can write tests and use them to find problems in code. Here’s what we’re covering.

  • Introduction to Unit Testing
  • What is tSQLt?
  • Your First Test
  • Executing Tests correctly
  • Effective use of Assertions
  • Separation of Concerns
  • Testing Exceptions
  • Test Case Heuristics
  • Dealing with Test Data
  • Other Types of Testing
  • How Unit Testing fits into your Development Process

At the end of the class, you should have some good ideas on how to build and structure tests in your own environment and be ready to start testing on Monday.

I hope to see you there, and register today if you want to learn more about unit testing in SQL Server.

Refactoring Mistakes Are Why We Write Tests

I wrote a short piece the other day trying to show how one can use tSQLt to test code. It’s a simple test built against a user defined function. It works well and when the test is run, it passes.

Here was my code:

ALTER FUNCTION [dbo].[calculateEstimateOfReadingTime] ( @value VARCHAR(MAX) )
RETURNS INT
AS
BEGIN
    DECLARE @ret AS INT = 1
          , @i   AS INT = 1;

    WHILE @i <= LEN(@value)
    BEGIN
        IF SUBSTRING(@value, @i, 1) = ' '
        BEGIN
            SET @ret = @ret + 1;
        END
        SET @i = @i + 1;
    END

    RETURN @ret / 250;
END

Someone in the comments pointed out that we can refactor this code more efficiently to:

ALTER FUNCTION [dbo].[calculateEstimateOfReadingTime] ( @value VARCHAR(MAX) )
RETURNS INT
AS
BEGIN
    RETURN ( SELECT LEN(@value) - LEN(REPLACE(RTRIM(@value), ' ', '')) + 1 )
END

However when I run the test, I get these results:

(screenshot: the failing test results)

That’s not good, but that’s why we test.

I could easily see someone refactoring the code, finding a more elegant method of rewriting this code and after running some quick tests, they check this in to source control (hopefully) and maybe deploy it to production. Hopefully QA catches this, but wouldn’t we want to notice this in development?

The refactored code misses the divide by 250.
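For what it’s worth, a version of the refactoring that keeps the original behavior (a sketch, not an exhaustive fix) just puts the division back in:

ALTER FUNCTION [dbo].[calculateEstimateOfReadingTime] ( @value VARCHAR(MAX) )
RETURNS INT
AS
BEGIN
    -- word count (spaces + 1) divided by a reading speed of 250 words per minute
    RETURN ( SELECT ( LEN(@value) - LEN(REPLACE(RTRIM(@value), ' ', '')) + 1 ) / 250 )
END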

Write tests and use them to catch mistakes. Simple ones like this slip through at times, and they’re what make deployments really, really stressful.

Why Test Table Metadata with tSQLt

I wrote at SQLServerCentral about using tSQLt to check table metadata. In essence we are testing the API of our table. However, since the table could change, and may need to, what’s the value of having a test fail if the table changes?

In my mind, I don’t necessarily want to have table structure tests for all my tables. After all, developers need to have flexibility to work with and change tables in our applications. If it’s a pain for a developer to change every table, because a test fails and they have to go change the test, that’s an issue.

There’s also the problem of a developer changing a table, changing the test, and then having everything pass, without passing along the information that a schema change was made.

I would limit the metadata API tests to those tables that are important, with the caveat that anytime someone fails a metadata test, they need to inform the team.

But Steve, isn’t every table important?

Yes and no. Certainly all tables should be important to the application in some way, but really many of them are contained in the application. If changes are made, it’s not necessarily a problem to change other objects to catch up to the table change. However, some tables may cross teams or applications and they are an issue.

As an example, I have lots of tables in the SQLServerCentral database.

(screenshot: the list of tables in the SQLServerCentral database)

If the Blogs table or the Articles table changes, then we need to alter stored procedures and possibly ASP.NET code for our application. In fact, in this list, pretty much all of these tables could be changed by a developer without a large impact, assuming they’re going to look at the other objects or code affected.

However, the highlighted table, the Emails table, along with a few others, is important. These tables not only support SQLServerCentral, the web app; they are also called by our emailer process, which is a completely separate application. In essence, these tables are the opposite of a microservice. They’re shared.

If someone wants to change the Emails table, I want to be sure that others are informed. In fact, I might choose to include a note in the test header that various groups need to be informed or that the table affects another application. In that case, before a developer went to change the test, they would at least have a chance of noting that this has far-reaching implications.

(screenshot: a metadata test with a warning note in the header)
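As a rough idea of what such a test can look like, here’s a minimal sketch. It assumes a MetadataTests test class has already been created with tSQLt.NewTestClass, and the column list is a made-up subset for illustration, not the real Emails schema:

CREATE PROCEDURE [MetadataTests].[test Emails table columns]
AS
BEGIN
    -- NOTE: the Emails table is shared with the emailer application.
    -- If this test fails, talk to that team before changing the schema.

    -- Assemble: the columns we expect (hypothetical subset)
    CREATE TABLE #Expected ( ColumnName sysname, DataType sysname );
    INSERT INTO #Expected
    VALUES ( 'EmailID', 'int' ), ( 'Subject', 'varchar' ), ( 'SentDate', 'datetime' );

    -- Act: read the actual columns from the catalog
    SELECT c.COLUMN_NAME AS ColumnName
         , c.DATA_TYPE   AS DataType
    INTO #Actual
    FROM INFORMATION_SCHEMA.COLUMNS AS c
    WHERE c.TABLE_SCHEMA = 'dbo'
      AND c.TABLE_NAME = 'Emails';

    -- Assert
    EXEC tSQLt.AssertEqualsTable @Expected = N'#Expected', @Actual = N'#Actual';
END;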

It’s not a perfect solution, but it does help. The other thing I could do is limit access to metadata tests for various tables/views and merely call these tests in a CI, or other automated, process. That way failures would be public, and a variety of people could be informed, preventing a developer from making changes without a discussion.

As I mentioned, I wouldn’t do this for all tables. In fact, I’d limit this to particularly sensitive tables that might require lots of rework if they were changed. We want to speed development, and ensure code works, not slow developers down.

tSQLt with TRY..CATCH

Someone asked me recently how tSQLt works with TRY..CATCH blocks and the exceptions that we might test for. It works fine, just as it would with other code, but you need to understand that a CATCH block still needs to re-throw an exception for the test to see it.

Here’s a short example. I’ve got this query, which has issues.

SELECT TOP 10
         cs.CustomerID
     ,   cs.LastSale
     ,   cs.Salesman
     ,   CAST(cs.SaleValue AS NUMERIC)
     FROM
         dbo.CustomerSales AS cs;

If I run it, I get this:

Msg 8115, Level 16, State 6, Line 1
Arithmetic overflow error converting varbinary to data type numeric.

The CAST here has issues, but that’s fine. Perhaps it’s a data issue, perhaps something else. I can test for that, but for now, I want to be sure I handle these errors correctly.

Now, I embed that in a TRY..CATCH block.

    BEGIN TRY
        SELECT TOP 10
                cs.CustomerID
            ,   cs.LastSale
            ,   cs.Salesman
            ,   CAST(cs.SaleValue AS NUMERIC)
            FROM
                dbo.CustomerSales AS cs;
    END TRY
    BEGIN CATCH
        SELECT @@ERROR
            ,  'A CASTing Error has occurred.'
        ;

    END CATCH;

If I do this, and in the CATCH block I "handle" the error, I’m not really error handling. I’m error swallowing. Here are my results.

EXEC spGetCommission 12

(screenshot: the swallowed error returned as a result set)

I could log this, or try to return some data with a new query, or maybe alter something that ensures the client gets results, but what I really need to do is return an error, and one I’m aware of.

We could delve into error handling, but I won’t do that here. Instead, I want to be sure the application gets an error, when we have an error. It can then decide what the user does or sees.

If I write this test:

ALTER PROCEDURE [misc procs].[test spGetCommission Exceptions]
AS
BEGIN

-- Assemble
EXEC tsqlt.ExpectException;

-- Act
EXEC dbo.spGetCommission @userid = 0 -- int

-- Assert
END;

Now I can run it, but it fails. I see the failure:

(screenshot: the failing test)

and I see this in the results

(screenshot: the test output)

What I should have is something more like this:

    BEGIN CATCH
        THROW 51001, 'A CASTING Error has occurred.', 1;
    END CATCH;

Then my test should be looking for that message.

ALTER PROCEDURE [misc procs].[test spGetCommission Exceptions]
AS
BEGIN

-- Assemble
EXEC tsqlt.ExpectException
   @ExpectedMessage = 'A CASTING Error has occurred.'
   , @ExpectedErrorNumber = 51001
;

-- Act
EXEC dbo.spGetCommission @userid = 0 -- int

-- Assert
END;

If I do that, things work well. The error is handled, but also re-thrown, and my test passes.

(screenshot: the passing test)

tSQLt – SQLCop – Checking Naming Conventions

I’ve been using tSQLt a bit to do some testing and one of the things I’ve tested is standards for code. I’ve been using a framework on top of tSQLt called SQLCop. These are a series of tests written to look for specific things. One of the items I do check is for sp_ named procedures. I’ve mostly gotten out of the habit of doing this, preferring spProcName, but at times I make a mistake in typing. This catches those simple errors.

Using SQL Cop

You can download the SQLCop tests and install them in your database after you’ve set up tSQLt. If you are using SQL Test, then you also get the SQLCop tests installed when you add the framework to a database. For me, I see the tests in the SSMS plugin.

(screenshot: the SQLCop tests in the SQL Test plugin)

There are a lot of tests, but in this piece, I’ll look at the Stored Procedures Named sp_ test.

If I edit the test, I see it’s fairly simple code. I’ve included it here.

USE [EncryptionPrimer]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [SQLCop].[test Procedures Named SP_]
AS
BEGIN
-- Written by George Mastros
-- February 25, 2012
-- http://sqlcop.lessthandot.com
-- http://blogs.lessthandot.com/index.php/DataMgmt/DBProgramming/MSSQLServer/don-t-start-your-procedures-with-sp_

SET NOCOUNT ON

Declare @Output VarChar(max)
Set @Output = ''

SELECT @Output = @Output + SPECIFIC_SCHEMA + '.' + SPECIFIC_NAME + Char(13) + Char(10)
From INFORMATION_SCHEMA.ROUTINES
Where SPECIFIC_NAME COLLATE SQL_LATIN1_GENERAL_CP1_CI_AI LIKE 'sp[_]%'
And SPECIFIC_NAME COLLATE SQL_LATIN1_GENERAL_CP1_CI_AI NOT LIKE '%diagram%'
AND ROUTINE_SCHEMA <> 'tSQLt'
Order By SPECIFIC_SCHEMA,SPECIFIC_NAME

If @Output > ''
Begin
Set @Output = Char(13) + Char(10)
+ 'For more information: '
+ 'http://blogs.lessthandot.com/index.php/DataMgmt/DBProgramming/MSSQLServer/don-t-start-your-procedures-with-sp_'
+ Char(13) + Char(10)
+ Char(13) + Char(10)
+ @Output
EXEC tSQLt.Fail @Output
End
END;

This code looks at the metadata in the database for any routines (stored procedures) whose names start with sp_. If any results are returned from the query, the IF statement will be true and @Output will be returned as part of the tSQLt.Fail call.

Using the Test

Let’s write a stored procedure. If I do this:


CREATE PROCEDURE spLetsTestThis
AS
BEGIN

SELECT TOP 10
e.EmployeeID
, e.EmpTaxID
, e.FirstName
, e.lastname
, e.lastfour
, e.EmpIDSymKey
, e.EmpIDASymKey
, e.hashpartition
FROM
dbo.Employees AS e;

RETURN 0;
END;

GO

This is a simple procedure. I wrote it, executed it a few times, and made sure it’s what I want. I’ve done basic testing; now let’s check it before I commit it to VCS.

The easy way to execute all the SQLCop tests is to right click them in SQL Test and execute them. I can also use T-SQL to run tests. However since I just want to show this one, I’ll right click it and select "Run Test".

(screenshot: running the single test from SQL Test)

This runs the selected test. I can also run an entire class, or all tests, by clicking in the right spot. In this case, the test passes and I see a green mark.

(screenshot: the passing test with a green mark)
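For reference, the T-SQL route is just tSQLt.Run for a single test or test class, or tSQLt.RunAll for everything:

-- Run just this one SQLCop test
EXEC tSQLt.Run '[SQLCop].[test Procedures Named SP_]';

-- Or run every test in the database
EXEC tSQLt.RunAll;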

Now let’s write a new procedure:

CREATE PROCEDURE sp_GetArticles
AS
SELECT *
FROM dbo.Articles

GO

This is a bad procedure for a variety of reasons, but let’s execute my test. I see it fail, and a red mark appears next to my test.

(screenshot: the failing test with a red mark)

In this case I also get a window from SQL Test popping up with more details. This contains the output from the test, which is also inserted into a table by the tSQLt framework.

(screenshot: the SQL Test window with the failure details)

Note that there is a URL with more information on this particular test. That is a part of the SQL Cop test code above. I could easily replace this with something particular to my environment if I chose.

At this point, I can rename the object, drop and recreate it, etc., to correct the issue. Running this test helps me be sure I’ve gotten good code into the VCS. If I also have it run as part of a CI process, it prevents bad code from other developers from appearing.

Meeting Standards

There are all sorts of SQLCop tests, and I’ll write about more of them, but this is an easy one to implement to prevent a bad practice across a team of developers. Allowing each developer to test locally, as well as having an overall check in some CI process, means that our code quality improves.

If I have other standards, I can even write my own tests to enforce them, which I’ll do in another piece.
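As a quick taste of that, here’s a minimal sketch of a custom check (my own convention, dropped into the SQLCop test class only for convenience) that fails if any user table lacks a primary key:

CREATE PROCEDURE [SQLCop].[test Tables without a Primary Key]
AS
BEGIN
    -- A custom standards check written in the same style as the SQLCop tests
    DECLARE @Output VARCHAR(MAX) = '';

    SELECT @Output = @Output + s.name + '.' + t.name + CHAR(13) + CHAR(10)
    FROM sys.tables AS t
    INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
      AND s.name <> 'tSQLt'
    ORDER BY s.name, t.name;

    IF @Output > ''
        EXEC tSQLt.Fail @Output;
END;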
