Rename a Primary Key–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

Having system-named objects is one of those things that people may debate, but as you move to more automated and scripted processes, they can cause you issues. In the case of a Primary Key (PK), if you do something like this:

CREATE TABLE OrderDetail
    ( OrderID INT IDENTITY(1, 1)
                  PRIMARY KEY
    , OrderDate DATETIME
    );

What you get is something like this:

[Screenshot: Object Explorer showing the system-named primary key on the table]

When you compare that with the same table in another database, what’s the likelihood that you’ll have the PK named PK__OrderDet__D3B9D30C7D677BB4? Probably pretty low.

This means that if you are looking to deploy changes, and perhaps compare the deployment from one database to the next, you’ll think you have different indexes. Most comparison tools will then want to change the index on your target server, which might be using this technique. Or the choice might be something that performs much worse.

What we want to do is get this named the same on all databases. In this case, the easiest thing to do is to rename the constraint on all systems. This is easy with sp_rename, which is better than dropping and rebuilding the index.

I can issue an easy query to do this:

EXEC sp_rename 'PK__OrderDet__D3B9D30C7D677BB4', 'OrderDetail_PK';

When I do this, I see the object is renamed.

[Screenshot: the primary key now named OrderDetail_PK]

This table has over a million rows, and while that's not large, it does take time to rebuild the index. With a rename, the change is to the metadata and takes a split second.
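As a side note, you can avoid this situation entirely for new tables by naming the constraint explicitly when you create it. A sketch, reusing the column definitions above (the table name here is assumed to match the OrderDetail_PK naming):

```sql
CREATE TABLE OrderDetail
    ( OrderID INT IDENTITY(1, 1)
          CONSTRAINT OrderDetail_PK PRIMARY KEY
    , OrderDate DATETIME
    );
```

With an explicit name, every database you deploy to gets the same constraint name, and comparison tools see identical schemas.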


These quick, easy administrative items are great to blog about. As an exercise: if this table had tens of millions of rows, how much longer would the index rebuild take than the rename?

This is a great topic for you to write about (or learn about) and show how you can better administer your SQL Server databases.

Getting Table Change Scripts–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

One of the really basic things I think everyone should understand is how to get scripts from Management Studio (SSMS) and save them. In fact, I've written that everyone should use this button and never actually execute their GUI changes. Capture the script, save it, and automate things.

However, that’s not what this post is about. This post is about how you get a script to look at changes, or better understand how SSMS might implement your changes.

Editing a Table

Let's say that you want to redesign a table, so you edit it in the SSMS Table Designer. Here, you can see I have a small table with a few fields.

[Screenshot: dbo.OrderDetail open in the SSMS Table Designer]

I want to rename the field with incorrect casing as well as insert an OrderDate column in the middle. I have made those changes below.

[Screenshot: the Table Designer after the column rename and the new OrderDate column]

Now, I’m not sure how these changes will be made in SSMS, and I certainly want to be careful in production. We want a script we can examine and approve.

Certainly, I could use something like SQL Compare to generate a script between two databases. That would include transactions, error handling, and more. That's my preferred method. However, since not everyone has SQL Compare (a mistake!), let's just use SSMS.

Instead of saving, I’ll click this button.

[Screenshot: the Generate Change Script toolbar button]

Or I’ll go to this menu item.

[Screenshot: the Table Designer > Generate Change Script menu item]

Once I do that, after a warning, I get a script dialog.

[Screenshot: the Save Change Script dialog]

I can now save the script and then open it back up in SSMS. I can see all the changes that the scripting engine thinks we should make.

[Screenshot: the generated change script open in a query window]

This allows me to learn about one way to make these changes, as well as see things that might concern me, such as poorly named constraints and indexes.


This is a great productivity and learning technique, but also a core thing I’d hope most DBAs knew. You could certainly write about how you use this, or how this might have been helpful in a situation. Showcase your knowledge on this topic with the #SQLNewBlogger hashtag.

Using sp_executesql Parameters –#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I haven't used sp_executesql much. Instead, my habitual way of executing dynamic SQL has been with EXEC(). There are a few differences between these commands, but I had to look at sp_executesql recently and realized I didn't know much about it.

One of the neat things with sp_executesql is that you can pass in parameters. That's pretty cool. I hadn't ever bothered, but if you read the docs, you'll see that if you execute the same code over and over with different parameters, you might get the same execution plan. This can be a performance boost.
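If you want to check plan reuse yourself, one way is to peek into the plan cache after running the parameterized statement a few times. A sketch (the LIKE filter text is just an example; adjust it to your statement):

```sql
SELECT cp.usecounts
     , st.text
  FROM sys.dm_exec_cached_plans AS cp
 CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
 WHERE st.text LIKE N'%INSERT EventLogger%';
```

A usecounts value greater than 1 suggests the same cached plan was reused across executions.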


I'm not going to delve into deep details, but Kimberly Tripp does, so read her post (and then write your own thoughts).

The Code

Here’s some code to demonstrate. I have a simple table with 3 columns to insert. In this case, here’s my insert:

INSERT EventLogger VALUES (@m, @d, @u)
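The post doesn't show the table definition; based on the parameter types used below, a plausible sketch (the column names are my assumptions) is:

```sql
CREATE TABLE EventLogger
    ( ErrorMessage VARCHAR(200)
    , ErrorDate DATETIME
    , ErrorUser VARCHAR(10)
    );
```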

Now, I want to use this over and over, but with different values for the parameters. Obviously I can just do this:

SET @m = 'Error Message'

INSERT EventLogger VALUES (@m, @d, @u)

SET @m = 'New Error Message'

INSERT EventLogger VALUES (@m, @d, @u)

However, imagine that I'm building this INSERT string dynamically because it's more complex. How do I execute this over and over with new values? With EXEC(), I rebuild the string. With sp_executesql, I do this:

DECLARE @cmd NVARCHAR(200);
DECLARE @p NVARCHAR(200);
DECLARE @msg VARCHAR(200) = 'An error occurred';
DECLARE @dt DATETIME = GETDATE();
DECLARE @usr VARCHAR(10) = 'Steve';

SELECT @cmd = N'INSERT EventLogger VALUES (@m, @d, @u)';

SELECT @p = N'@m varchar(200), @d datetime, @u varchar(10)';

EXEC sp_executesql @cmd, @p, @m = @msg, @d = @dt, @u = @usr;

SELECT @msg = 'A new error occurred'
     , @usr = 'Bob';

EXEC sp_executesql @cmd, @p, @m = @msg, @d = @dt, @u = @usr;

SELECT TOP 10 *
  FROM dbo.EventLogger AS el;

Now, I check the table:
[Screenshot: the two rows now in EventLogger]

I thought that was cool.


This isn't a deep post. It's a light look, with a little explanation. I'll do more later. However, I'm hoping this serves as a way to show you how to start investigating a topic. I've spent a bit of time experimenting and learning, and I'm fairly confident I could make more use of sp_executesql now.

You could do this as well, start digging into a topic and then show how you’re learning.

Basic XML Node Query–#SQLNewBlogger


Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I saw a question recently about querying an XML document. Certainly avoid this in the database if you can, but there are times you need to. Rather than link to the post, I wanted to show the basics of how you query a node.

Let's suppose I have an XML document like this:

<Order>
  <OrderID>12</OrderID>
  <Customer Type="Company">
    <CustomerName Type="Individual">Steve</CustomerName>
    <CustomerName Type="Individual">Andy</CustomerName>
  </Customer>
</Order>

Now, I saw someone query this with code like this to get the OrderID.

DECLARE @xml XML;
SET @xml = N'<Order>
  <OrderID>12</OrderID>
  <Customer Type="Company">
    <CustomerName Type="Individual">Steve</CustomerName>
    <CustomerName Type="Individual">Andy</CustomerName>
  </Customer>
</Order>';

SELECT t.b.value('(ORDERID)[1]', 'NVARCHAR(100)') AS MSGID
  FROM @xml.nodes('/Order') t(b);

This doesn’t work.

[Screenshot: the query returns a NULL MSGID]

The reason this doesn't work is that XML is case sensitive, meaning ORDERID != OrderID. The former is in the query, the latter is in the XML document. If I change the query, this works (note I have OrderID below).

[Screenshot: the query returning the OrderID value]
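The corrected query just cases the element name to match the document:

```sql
SELECT t.b.value('(OrderID)[1]', 'NVARCHAR(100)') AS MSGID
  FROM @xml.nodes('/Order') t(b);
```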

This would also apply to the .nodes() call. If I had '/ORDER' there, this also wouldn't work.

[Screenshot: the .nodes('/ORDER') query returns no rows]

The path in the @xml.nodes() call determines the root from which the query operates. I could have this as /Order/Customer if I wanted. In that case, I couldn't access the OrderID, since OrderID isn't below the Customer node.

[Screenshot: the OrderID comes back NULL when rooted at /Order/Customer]

However, from below Customer, I can get to the names.

[Screenshot: the customer names returned from below Customer]
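A sketch of that query, rooted at /Order/Customer so each CustomerName is reachable:

```sql
SELECT t.b.value('(CustomerName)[1]', 'NVARCHAR(100)') AS FirstCustomerName
     , t.b.value('(CustomerName)[2]', 'NVARCHAR(100)') AS SecondCustomerName
  FROM @xml.nodes('/Order/Customer') t(b);
```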

There is a lot more to know about XML, but you can experiment with the various nesting levels by including different paths. I’ll show a few more things in another post.


Querying XML is hard, and can be frustrating as a document's size and complexity grow. However, this is a good way to showcase your skills (or build them): tackle different query questions or challenges and write about them.

Hint: this will also help solidify your XML skills.

Getting the Previous Row Value before SQL Server 2012

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I ran across a post where someone was trying to access the previous value in a table for some criteria. This is a common issue, and one that's very easily solved in SQL Server 2012+ with the windowing functions.

However, what about in SQL Server 2008 R2-?

NOTE: I’m solving this quickly, the way many people do, but this is an inefficient solution. I’ll show that in another post. However, I’m showing how you can describe and solve a problem here. If you need to solve this, look for a temp table solution (or find a later post from me).


It's pretty easy. Let's get some data together. I'll use a sample that's big enough to make the differences easy to see.

CREATE TABLE MyTable
( myid INT
, myvalue INT
);
GO
INSERT MyTable
VALUES (1, 10),
       (1, 20),
       (2, 400),
       (2, 500),
       (2, 600),
       (3, 8000),
       (3, 9000),
       (3, 10000),
       (3, 11000);

Now, what I want is something that returns the previous row, assuming we’re ordering by the ID and value. If there is no previous value, let’s return a zero. Essentially what we want is something like this:

SELECT MyID
     , MyValue
     , MyPrevValue = ISNULL( x, 0)
  FROM …

That’s the pseudocode. Obviously I need to fill in blanks. However, let’s build a test. Why? Well, I can then see my result data, and I can re-run the test over and over as I experiment with the query. It’s not hard, I promise.

EXEC tsqlt.NewTestClass
  @ClassName = N'WindowTests';
GO
CREATE PROCEDURE [WindowTests].[test check the previous row value for MyID]
AS
BEGIN
-- assemble
CREATE TABLE #expected (id INT, myvalue INT, PrevValue INT);

INSERT #expected
VALUES (1, 10   , 0    ),
       (1, 20   , 10   ),
       (2, 400  , 20   ),
       (2, 500  , 400  ),
       (2, 600  , 500  ),
       (3, 8000 , 600  ),
       (3, 9000 , 8000 ),
       (3, 10000, 9000 ),
       (3, 11000, 10000);

SELECT *
  INTO #actual
  FROM #expected AS e
 WHERE 1 = 0;

-- act
INSERT #actual
EXEC dummyquery;

-- assert
EXEC tsqlt.AssertEqualsTable
  @Expected = N'#expected'
, @Actual = N'#actual'
, @FailMsg = N'Incorrect query';
END

If you examine the test, you’ll see that I create a table, insert the results I expect, and then call some procedure. I compare the results of the procedure with the table I built.

That’s it. A simple test, but I’ll let the computer compare the result sets rather than trusting my eyes.

Last thing, I'll build my dummy procedure, which can look like this:

CREATE PROCEDURE dummyquery
AS
BEGIN
SELECT MyID
     , MyValue
     , PrevValue = MyValue
  FROM MyTable;
END

Now I have the outline of what I need. If I run the test now, I’ll get this:

[Screenshot: the tSQLt test fails, listing expected (<) and actual (>) rows]

The test output tells me it has failed, the values in the #expected table (with a <), and the values from my query in the #actual table (with a >).

Now I can debug and work on this.

Solving the Problem

First, I want to order the data and get a number that counts the order. The ROW_NUMBER function does this, and it's available in SQL Server 2005+. I won't go into SQL 2000- solutions because, well, they're more complex, and there should be very few SQL 2000 instances left coming up with new problems.

I can do this with this code:

[Screenshot: the ROW_NUMBER query and its results]
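A sketch of that query, using the names from the sample data above (MyKey is my alias for the counter):

```sql
SELECT MyKey = ROW_NUMBER() OVER (ORDER BY myid, myvalue)
     , myid
     , myvalue
  FROM MyTable;
```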

Note that I have a sequential counter that lets me order every row with an index. Now, I can access the previous row, since I know the MyKey value will be one less than the current row.

With this in mind, let’s turn this into a CTE (removing the previous value). Outside of the CTE, I’m going to self-join the CTE to itself. I’ll use a LEFT JOIN since not every row will have a previous row. In fact, the first row won’t.

The join condition, which you can play with, will be on the outer table’s ID being one less than the first table’s key. You could reverse the math as well, but that’s up to you.

[Screenshot: the CTE self-join query and its results]
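A sketch of that self-join, with OrderedRows as my name for the CTE:

```sql
WITH OrderedRows (MyKey, myid, myvalue)
AS
(
  SELECT ROW_NUMBER() OVER (ORDER BY myid, myvalue)
       , myid
       , myvalue
    FROM MyTable
)
SELECT cur.myid
     , cur.myvalue
     , PrevValue = prev.myvalue   -- NULL on the first row, since there is no previous row
  FROM OrderedRows AS cur
  LEFT JOIN OrderedRows AS prev
    ON prev.MyKey = cur.MyKey - 1;
```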

One last issue. Add an ISNULL to the previous value to return a 0 if there is no match. Now, let’s run the test.

[Screenshot: the test now passes]


This was a slightly longer post, where I tried to explain how I set up the problem and solved it. I included a test, which didn't add much coding time. In fact, the writing took far longer than the coding itself.

This is the type of problem I’d encourage you to solve on your blog. If you want to repeat this, look for a solution with temp tables, as the CTE incurs a lot of reads. This isn’t really what you’d like to do in production code.

LAST_VALUE–The Basics of Framing

I did some work 3-4 years ago, learning about the windowing functions and enjoying them so much that I built a few presentations on them. In learning about them, and trying to understand them, I found some challenges, and it took some experimentation to actually understand how the functions work in small data sets.

I noticed last week that SQLServerCentral had re-run a great piece from Kathi Kellenberger on LAST_VALUE, which is worth the read. There’s a lot in there to understand, so I thought I’d break things down a bit.


The important thing to understand with window functions is that there is a frame at any point in time when the data is being scanned or processed. I’m not sure what the best term to use is.

Let’s look at the same data set Kathi used. For simplicity, I’ll use a few images of her dataset, but I’ll examine the SalesOrderID. I think that can be easier than looking at the amounts.

Here’s the base dataset for two customers, separated by CustomerID and ordered by the OrderDate. I’ve included amount, but it’s really not important.

[Image: the base dataset for two customers, showing CustomerID, OrderDate, SalesOrderID, and amount]

Now, if I do something like query for LAST_VALUE with a partition of CustomerID and ordered by OrderDate, I get this set. The partition divides the set up into the two customer sets. Without an ORDER BY, these sets would exist as the red set and blue set, but in no particular order. The ORDER BY functions as it does in any query, guaranteeing the same order every time.

[Image: the set divided into two partitions, one red and one blue, one per customer]

Now, let’s look at the framing of the partition. I have a few choices, but at any point, I have the current row. So my processing looks like this, with the arrow representing the current row.

[Image: an arrow marking the current row at the first row of the first partition]

The next row is this one:

[Image: the arrow on the second row]

Then this one (the last one for this customer)

[Image: the arrow on the third row, the last one for this customer]

Then we move to the next customer.

[Image: the arrow on the first row of the second customer's partition]

When I look at any row, if I use "current row" in my framing, then I'm looking at, and including, the current row. The rest of my frame depends on what else I have. I could have UNBOUNDED PRECEDING or UNBOUNDED FOLLOWING in there.

If I used UNBOUNDED PRECEDING and CURRENT ROW, I'd have this frame, in green, for the first row. It's slightly offset to show the difference.

[Image: the frame, in green, covering just the first row]

However, if I had CURRENT ROW and UNBOUNDED FOLLOWING, I’d have this frame (in green).

[Image: the frame, in green, covering the entire partition]

In this last case, the frame is the entire partition.

What’s the last value? In the first case, the last part of that frame is the current SalesOrderID (43793). That’s the only row in the frame. In the second frame, the last one is 57418, the last row in the frame, and partition.

What if we move to the next row? Let's look at both frames. First, UNBOUNDED PRECEDING and CURRENT ROW.

[Image: the frame covering the first two rows of the partition]

Now the frame is the first two rows. In this case, the last value is again the current row (51522). Below, we switch to CURRENT ROW and UNBOUNDED FOLLOWING.

[Image: the frame covering the last two rows of the partition]

Now the frame is just the last two rows of the partition and the last value is the same (57418).
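The two frames above can be written side by side in one query. A sketch against the Sales.SalesOrderHeader data Kathi used (the AdventureWorks column names are assumed):

```sql
SELECT soh.CustomerID
     , soh.OrderDate
     , soh.SalesOrderID
     , LastToCurrent = LAST_VALUE(soh.SalesOrderID) OVER
         ( PARTITION BY soh.CustomerID
           ORDER BY soh.OrderDate
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
     , LastToEnd = LAST_VALUE(soh.SalesOrderID) OVER
         ( PARTITION BY soh.CustomerID
           ORDER BY soh.OrderDate
           ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING )
  FROM Sales.SalesOrderHeader AS soh;
```

The LastToCurrent column always echoes the current row's SalesOrderID, while LastToEnd returns the partition's final SalesOrderID for every row.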

There’s a lot more to the window functions, and I certainly would recommend either Kathi’s book (Expert T-SQL Window Functions in SQL Server) or Itzik’s book (Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions). Either one will help. We’ve also got some good articles at SQLServerCentral on windowing functions.

It’s 2016 RLS for T-SQL Tuesday #79

It's T-SQL Tuesday time again. I missed last month, being busy with travel, though I should go ahead and write that post. Maybe that will be next week's task.

In this case, Michael J Swart is hosting this month’s blog party and he asks us to write about something to do with SQL Server 2016. Read the rules at his invitation.

Row Level Security

I've wanted this feature to be easy for a long time. In fact, I've implemented a similar system a few times in different applications, but it's been a cumbersome feature to build, and each developer needs to understand how the system works for it to work well. Even when we once used views to hide our RLS, it was a performance issue.

Microsoft has made things easier with their Row Level Security feature. This was actually released in Azure in 2015, but it's now available in SQL Server 2016 for every on-premises installation as well.

Essentially, for each row, there is some data value that is checked to determine if a user has access. This doesn't mean a join. This doesn't mean you write a lot of code. The implementation is simple and straightforward, and I like it.

Security Predicate Functions

The one piece of code you need is an inline table valued function (iTVF) that returns a 1 for the rows that a user should see. You need to have some way to match up a row with a user, and that can be tricky, but if you identify a row, even in another table, you can use it.

For example, I have this table.

CREATE TABLE OrderHeader
  (   OrderID INT IDENTITY(1, 1)
                  PRIMARY KEY
    , Orderdate DATETIME2(3)
    , CustomerID INT
    , OrderTotal NUMERIC(12, 4)
    , OrderComplete TINYINT
    , SalesPersonID INT
  );

There’s nothing in this table that really helps me identify a user that is logged into the database. However, I do have a mapping in my SalesPeople table.

CREATE TABLE SalesPeople
  (   SalesPersonID INT IDENTITY(1, 1)
                        PRIMARY KEY
    , SalesFirstName VARCHAR(200)
    , SalesLastName VARCHAR(200)
    , username VARCHAR(100)
    , IsManager BIT
  );

Granted, this could mean some change of code, but perhaps you can use a username column like this in a table, query AD or another directory, and map rows to users.

Once I have that mapping, I’m going to create a function. My function will actually look at the SalesPeople table, and map the parameter passed into the function to the value in the table.

CREATE FUNCTION dbo.RLS_SalesPerson_OrderCheck ( @salespersonid INT )
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS [RLS_SalesPerson_OrderCheck_Result]
      FROM dbo.SalesPeople sp
     WHERE ( @salespersonid = sp.SalesPersonID
             OR sp.IsManager = 1
           )
       AND USER_NAME() = sp.username;

In the function, I look at the USER_NAME() function and compare that to a value in the table. This is in addition to checking the SalespersonID column.

I can use a Security Policy to bind this function to my OrderHeader table as shown here:

CREATE SECURITY POLICY dbo.RLS_SalesPeople_Orders_Policy
  ADD FILTER PREDICATE dbo.RLS_SalesPerson_OrderCheck(salespersonid)
  ON dbo.OrderHeader;

This sets the function, passing in a column from the OrderHeader table, which is the column I want evaluated in the function. When I now query the OrderHeader table, I get this:

[Screenshot: a SELECT against OrderHeader returns no rows]

There is data in the table. However, I don’t get rights by default, even as dbo. My USER_NAME() doesn’t match anything in the table, therefore no SalesPersonID matches. However, for other users, it works.

[Screenshot: the same query, run as a user mapped in SalesPeople, returns that user's rows]
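One way to test this from a single session is to impersonate a database user that exists in the SalesPeople table. A sketch (the user name here is an assumption):

```sql
EXECUTE AS USER = 'sjones';          -- a user whose name appears in SalesPeople.username
SELECT * FROM dbo.OrderHeader;       -- only rows this salesperson may see come back
REVERT;
```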

There is a lot more to the RLS feature, but I think it’s pretty cool and it’s something that will be highly used in many applications moving forward, especially those multi-tenant systems.

Go ahead, get the free Developer Edition and play around with RLS.



Using OBJECT_ID–#SQLNewBlogger

One of the things that is needed in quite a few functions is the object_id of a particular table, view, procedure, or function in SQL Server. For example, I was looking at STATS_DATE recently, and it has this definition:

STATS_DATE (object_id, stats_id)

In the past, I’d run something like this:

DECLARE @i INT;
SELECT @i = object_id
  FROM sys.objects
 WHERE name = 'SalesOrderHeader';

Actually, I’d really do this as two batches.

SELECT * FROM sys.objects WHERE name = 'SalesOrderHeader'
SELECT STATS_DATE ( 1266103551 , 2)  

I'd run the first, get the ID, and paste it into the second. However, I've learned that isn't the best way to do this. In fact, when I started doing a lot of encryption testing and research, I started to take advantage of functions like OBJECT_ID.

Now, here’s what I’d do:

SELECT STATS_DATE( OBJECT_ID('Sales.SalesOrderHeader'), 2)

Simple, easy, and I can do this inline. With SQL Prompt, I’m also pretty quick getting this out. Of course, I do need to remember to include the schema, because this won’t work:

SELECT STATS_DATE( OBJECT_ID('SalesOrderHeader'), 2)

Two warnings. First, qualify your objects. In this case, I should have used Sales.SalesOrderHeader to be sure I get the correct object. There are people who use the same object name in multiple schemas (etl.SalesOrderHeader, audit.SalesOrderHeaders, etc.).

Second, the object_id() isn’t guaranteed to be unique across databases. I should have pointed that out.


When I find quick tricks or techniques I use often, I try to make a note and then write about them later. It helps me remember, but it also lets me share things with others.

Perhaps most important, it shows I'm doing and learning things in my career.

Quick Tests for a Function

I was rewriting a poorly performing UDF the other day, wanting to replace it with a better performing one. However, I wanted to be sure that the new function was acting the same externally. In other words, does all my code that calls the function work the same?

It's a no-brainer for me to use tSQLt to do this. I can quickly put together a few tests for my function. In my case, my function was proper-casing a string, so I make a class and add a quick test.

My function is dbo.udfProperCase(@string). This takes a string value and returns a string value. My test needs then only a few variables.
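The function body isn't shown in the post; a minimal loop-based sketch of what such a function might look like (the name and parameter match the post, but the implementation is my assumption):

```sql
CREATE FUNCTION dbo.udfProperCase ( @input VARCHAR(500) )
RETURNS VARCHAR(500)
AS
BEGIN
    DECLARE @result VARCHAR(500) = LOWER(@input);
    DECLARE @i INT = 1;

    WHILE @i <= LEN(@result)
    BEGIN
        -- uppercase the first character and any character following a space
        IF @i = 1 OR SUBSTRING(@result, @i - 1, 1) = ' '
            SET @result = STUFF(@result, @i, 1, UPPER(SUBSTRING(@result, @i, 1)));
        SET @i = @i + 1;
    END;

    RETURN @result;
END
```

Note that a naive version like this would fail the 'steve von jones' case below, which is exactly why having the tests matters.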

DECLARE @i VARCHAR(500) = 'steve'
, @expected VARCHAR(500) = 'Steve'
, @a VARCHAR(500)

These are my input, my expected, and actual values. The rest of the test is simple.

EXEC @a = dbo.udfProperCase @input = @i

EXEC tsqlt.AssertEquals @Expected = @expected, @Actual = @a, @Message = N'single name failure'

This calls the function, gets the return, and the asserts this is equal to the Expected value. I wrap this in a procedure definition. My complete definition is then:

EXEC tsqlt.NewTestClass
  @ClassName = N'StringTests';
GO
CREATE PROC [StringTests].[test propercase single name]
AS
BEGIN
DECLARE @i VARCHAR(500) = 'steve'
      , @expected VARCHAR(500) = 'Steve'
      , @a VARCHAR(500);

-- act
EXEC @a = dbo.udfProperCase @input = @i;

-- assert
EXEC tsqlt.AssertEquals @Expected = @expected, @Actual = @a, @Message = N'single name failure';
END


That test took me about 2 minutes to write. It’s fairly trivial, but this gives me a happy path test. I easily copied this multiple times, changing the input and Expected values.

DECLARE @i VARCHAR(500) = 'steve jones'
, @expected VARCHAR(500) = 'Steve Jones'


DECLARE @i VARCHAR(500) = 'steve von jones'
, @expected VARCHAR(500) = 'Steve von Jones'


DECLARE @i VARCHAR(500) = 'J steve Jones'
, @expected VARCHAR(500) = 'J Steve Jones'

That gives me a few items. However, I also want to look for issues, so I include a few other items.

DECLARE @i VARCHAR(500) = ''
, @expected VARCHAR(500) = ''

as well as

DECLARE @i VARCHAR(500) = NULL
, @expected VARCHAR(500) = NULL

This lets me quickly run a series of tests against my function. While this might not seem like much, they do give me flexibility. If I change the function from a loop to something more like Tony Rogerson’s code, I should get the same results.

That's the power of testing. Not so much that it verifies my code is correct, though it does that. Testing gives me the freedom to change code without worrying that I've subtly broken things. I get a complete test run against new code quickly.

Certainly I could have bugs in code, but I can easily write a new test when I find a bug and include it in my suite of tests to run against the function for the future.

Testing isn’t that hard, and the more you practice writing tests, the better (and faster) you’ll get at it.

Another Recursive CTE–Doing Math

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I showed how to write a simple recursive query recently that calculated an amount of money paid out each day. Now I want to extend this a bit to include a few math formulas.

Note: These aren’t terribly useful, but they are good practice for just writing recursive queries.

Fibonacci Series

I’m sure many of you have dealt with a Fibonacci series at some point in your life. This is a series where the current value is the sum of the two previous values. In other words,

term 3 = term 1 + term 2

term 4 = term 2 + term 3


The series starts with 0, 1 and goes from there. Can we do this in SQL? Sure.

Let’s start by looking at what we need. We need some counter, a current term, and a previous term. That’s 3 columns in our query. We start by building an anchor, which has the first two terms. The counter is n and the two terms are i and j.

-- Anchor
SELECT n = 1
     , i = 0
     , j = 1

Now we add the recursive part. In this case, the counter increases by 1. I only include the counter so I know when to stop; SQL Server's data types have finite sizes, and we can exceed them without a way to stop.

The new first term is calculated as the sum of the current two terms, and the current first term becomes the second term on the next pass, so we move it over. This gives us:

SELECT counter + 1
     , first = first + second
     , second = first

With this, we can then add a WHERE clause to stop. I’ll stop at the first ten terms. Here’s the CTE.

WITH myFib (n, i, j)
AS
(
  -- anchor
  SELECT 'n' = 1
       , 'i' = 0
       , 'j' = 1
  UNION ALL
  -- recursive section
  SELECT n + 1
       , i + j
       , i
    FROM myFib
   WHERE myFib.n < 10
)
SELECT 'Level' = myFib.n
     , 'Fibonacci' = i
  FROM myFib;

If we run this, we see:

[Results: levels 1 through 10, returning 0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

We can extend this by altering the WHERE clause.

A Little Calculus

What about math functions? Have any of you worked with a series in calculus? If so, you might remember something like the alternating series, the partial sums of (-1)^n: 1 - 1 + 1 - 1 + ...

This is a repetitive calculation, and should either converge or diverge. Can we implement this as a recursive CTE? Sure.

This one is really simple. I use the POWER() function in my recursive member to raise -1 to whatever counter I'm using for n. I then use a SUM() across all previous values in the outer query to get the partial sum.

WITH myPartialSum (n, s)
AS
(
  SELECT 'n' = 1
       , 's' = POWER( -1, 0)
  UNION ALL
  SELECT n + 1
       , POWER( -1, n)
    FROM myPartialSum
   WHERE n < 100
)
SELECT myPartialSum.n
     , 'PartialSum' = SUM(s) OVER (ORDER BY n)
  FROM myPartialSum;

Note: This series does not converge, as it alternates to infinity.

Neither of these is terribly useful, but they do allow some practice in writing CTEs that will recurse.


Implement some other series or sequence yourself and explain how it works.