CROSS APPLY vs. Inline Functions

While working on the Advent of Code problems in SQL, I ran across something interesting. Day 4 involves hashing, which is done with the HASHBYTES function in SQL Server. This is a pure computation, and given the problem, there is no good way to solve it without brute force. The problem says to:

  • hash a specific string + an integer.
  • If the leftmost digits are 0 (5 or 6 of them), stop
  • increment the integer
  • repeat

Since a hash doesn’t lend itself to a pattern, you can’t start with 100,000 and determine if the integer you need is higher or lower. Instead you need to work through the integers.

I decided to try this with a tally table and hashing with TOP 1. BTW, TOP 1 makes a huge difference.

However, my structure was to query my tally table like this:

     , HASHBYTES('MD5', 'iwrupvqb' + CONVERT(VARCHAR(15), n))
          FROM cteTally

This was in a second CTE, and in the main query I then use a WHERE clause to filter the list down to the entry with leading zeros. When I ran this, I noticed it was rather slow at first, at least, what I considered slow. I checked with a few other people that had solved the problem, and I found their times were faster than mine.

I wasn’t sure the brute force technique would benefit from a TOP clause, but I added a TOP 1 to the outer query. This made the entire process run much quicker, which is interesting. Apparently the filtering is collapsed across the tally table join with the hash computation and as soon as a valid match is found, this ends the calculations. My average went down by a factor of 10.

However, I wondered if moving the calculation to a join, with CROSS APPLY, would be quicker. I couldn't imagine why, but I decided to try it. I moved the calculation by changing the HASHBYTES call to a SELECT statement in a derived table for the CROSS APPLY and then taking the result of that as part of my column list. This changed my CTE to this:

     , hb.hashvalue
  FROM cteTally
   CROSS APPLY (SELECT HASHBYTES('MD5', 'iwrupvqb' + CONVERT(VARCHAR(15), n))) AS hb(hashvalue)

That resulted in a slightly faster query time. When I added a TOP to this, the times improved slightly over using HASHBYTES in the column list with a TOP. Intuitively this doesn't make sense, as it would seem the same number of function calls need to be completed, but the CROSS APPLY handles them a bit more efficiently. I'm sure someone has a much more in-depth understanding of the query optimizer here, and I won't try to explain things myself. The times are close enough that I suspect some minor optimization from CROSS APPLY.

As a comparison, I also ran a brute force loop, with this code, that calculates the values sequentially until the result is determined. This ought to be equivalent to the TOP 1 results, and we find that it isn't. The tally table solution with CROSS APPLY is much quicker.

DECLARE @i INT = 1, @t INT = 1;
DECLARE @start DATETIME = GETDATE();

WHILE @t = 1
 BEGIN
   IF LEFT( CONVERT(VARCHAR(50), HASHBYTES('MD5', 'iwrupvqb' + CAST(@i AS VARCHAR(10))), 2), 6) = '000000'
     BEGIN
       SELECT @i;
       SELECT @t = 0;
     END
   SELECT @i = @i + 1;
   --IF @i > 10000000
   --  SELECT @t = 0
 END
SELECT starttime = @start
     , seconds = DATEDIFF(SECOND, @start, GETDATE())
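The loop above translates almost directly into other languages. Here's a minimal Python sketch of the same brute-force search, using the standard hashlib module; the secret key and zero count are parameters, since your puzzle input will differ:

```python
import hashlib

def find_suffix(secret, zeros):
    """Brute force: return the lowest integer i such that
    MD5(secret + str(i)) has a hex digest starting with `zeros` zeros."""
    target = "0" * zeros
    i = 0
    while True:
        i += 1
        digest = hashlib.md5((secret + str(i)).encode()).hexdigest()
        if digest.startswith(target):
            return i
```

With the sample key from the puzzle description, find_suffix('abcdef', 5) returns 609043.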

Here's a summary of the code timings (averaged across 5 executions) for the second part of the puzzle, which looks for 6 leading zeros and has a result in the 9 million range.

Query Timings (sec)

  • HASHBYTES in column list, no TOP
  • HASHBYTES in column list, TOP
  • Brute force, WHILE loop

The conclusion I'd draw here is that CROSS APPLY ought to be a tool you keep at the front of your toolbox and use when you must execute a function for each row of a set of tables. This is one of the T-SQL techniques that I never learned early in my career (it wasn't available), and I haven't used much outside of examining execution plans, but it's a join capability I will certainly look to use in the future.

However, if you are using UDFs instead of system functions, I'd certainly recommend you read Adam Machanic's post on Scalar Functions and CROSS APPLY, and perhaps you can change to inline table-valued functions (iTVFs) and get some great performance gains.

Creating a User Without a Login

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

This is one of those simple things that many people should be able to do in order to build in better security for their database code. However I’m sure many people haven’t ever built one. I ran into this recently, and thought it would be a great SQLNewBlogger post.

Creating the User

It’s a bit anticlimactic, but creating a user in a database without a login is simple:

CREATE USER UpdateStatsUser WITHOUT LOGIN;

This creates a new user in my database that is not associated with a login.

I can assign this user permissions, like any other user.

GRANT CONTROL ON dbo.Authors TO UpdateStatsUser;

I could then use these permissions in any other way.

Why would you do this?

There's a short summary from Pinal Dave that explains this well. Essentially you want to give different permissions to a user without using something like an Application Role, which has a name and password that must be managed. Here, you can allow a user to execute a routine as another, more privileged user, without giving the original user additional permissions.


This is a really simple post that took me longer to write than it took to create the user. About 10 minutes. I wouldn't expect most of you to stop here. I'd want a post that shows you understand something about how this user can be used; show me an example of reading or writing a table as a user with this impersonation in action.



Loading a Text File with a Line Feed

Loading text files is a skill that probably every DBA needs. I know that the import wizard is available, but there are times that you might want to automate this without using SSIS. In those cases, it’s nice to know how to load data from a programmatic standpoint.

I had the need to do this recently with a text file that looked normal. When I opened it in my editor, it looked like a normal text file, one column of data.


I thought this would be easy to load, so I created a simple table:

CREATE TABLE MyTable ( teststring VARCHAR(100))

I then ran a simple BULK INSERT command.

BULK insert MyTable
     from 'C:\SampleFiles\input.txt'

And I received this:

Msg 4863, Level 16, State 1, Line 3
Bulk load data conversion error (truncation) for row 1, column 1 (mychar).

That’s not good. I suspected this was because of the format of the file, so I added a row terminator.

BULK insert MyTable
     from 'C:\SampleFiles\input.txt'
with ( ROWTERMINATOR = '\r')

That didn't help. I suspected the terminators were still the problem. I also tried the newline (\n) terminator, and both together, but nothing worked.

Since I was worried about formatting, I decided to look at the file. My first choice here is XVI32, and when I opened the file, I could see that only a line feed (0x0A) was used.

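If you don't have a hex editor handy, a few lines of Python can report which terminator a file uses by reading the raw bytes. A quick sketch (the path is whatever file you're inspecting):

```python
def detect_terminator(path):
    """Read raw bytes and report the line terminator: CRLF, LF, or CR."""
    with open(path, "rb") as f:
        data = f.read()
    if b"\r\n" in data:
        return "CRLF"   # Windows: 0x0D 0x0A
    if b"\n" in data:
        return "LF"     # Unix: 0x0A only
    if b"\r" in data:
        return "CR"     # classic Mac: 0x0D only
    return None
```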

However, I wasn’t sure how to get this in my code.

I tried CHAR(10) in the ROWTERMINATOR, and that didn't work.


I could look to edit the code with XVI32, but that seems odd. However, let’s try that.


In the hex editor, I replaced the \ character with a 0x0A byte and then deleted the r. Once I saved this and reloaded it into SSMS (choosing not to normalize line endings to CRLF), I could run the script.


I suppose I could also enter the character with the ALT key and the number pad, though I couldn't get that to work on my laptop. I need to try it on my desktop, but it's not a great way to code; it's too easy to forget that hidden characters are embedded in the code.

I tried searching a bit and found that SQLDenis had a solution. He used dynamic SQL, but with a little formatting, the code is still easy to read, and this works fine.

DECLARE @cmd NVARCHAR(MAX);
SELECT @cmd = 'BULK insert Mytable
                from ''C:\SampleFiles\input.txt''
                with ( ROWTERMINATOR = ''' + Char(10) + ''')';
EXEC (@cmd);

I executed this and loaded my file just fine.

It’s not often you might need to do this, but it’s a handy little trick for those files that might be formatted from other OSes.

Loading a Text File from T-SQL

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

One of the interesting things I’ve had to work on with the Advent of Code puzzles is loading files into SQL Server. Some of the inputs are large strings, but many are files with lines of code that need to be loaded into SQL Server.

I thought this might be a nice, simple SQLNewBlogger post. Do you know how to load a text file? Certainly the Import/Export wizard can work, but can you quickly load a file from T-SQL itself?

If you can’t, go work that out. If you get stuck, come back or search for help.

Loading a Text File

Obviously you need a place to load the file. I created a table for each puzzle, and here is the table for Day 2.

create table Day2_WrappingPresents
( dimensions varchar(12)
)
Now ordering doesn’t matter for this puzzle, so I have a very simple table. If ordering mattered, I’d have to do this differently.

To load this file, I’ll use the BULK INSERT command. This takes a table as a target, and optionally has a number of parameters.  Since this is a simple load of a simple file with one column of data to a table with one column of data, I can use the defaults.

bulk insert Day2_WrappingPresents
from 'C:\Users\Steve\Documents\GitHub\AdventofCode\Day 2 – Wrapping\input.txt'

In this case, the insert will load all 1000 rows into the table. A simple query shows this works:


Now I can get on with the rest of my puzzle solution.
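As a cross-check from application code, the same one-column load is only a few lines of Python. This sketch uses the built-in sqlite3 module standing in for SQL Server; the table matches the puzzle table, and the file path is whatever you pass in:

```python
import sqlite3

def load_file(db, path):
    """Insert each line of a text file as one row in a one-column table."""
    with open(path) as f:
        rows = [(line.rstrip("\r\n"),) for line in f]
    db.execute("CREATE TABLE IF NOT EXISTS Day2_WrappingPresents (dimensions TEXT)")
    db.executemany("INSERT INTO Day2_WrappingPresents VALUES (?)", rows)
    return len(rows)   # number of rows loaded
```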


This is a great example of a simple thing that we might not need to do often, but we may need to do at times. Knowing how to do this, a simple operation, showcases that you are improving your SQL Server skills. This post took me about 5 minutes to write.

Advent of Code–Day 2

I've continued working along, and while I found Day 2 to be straightforward in Python and PowerShell, I thought it was tricky in SQL. I decided this one was worth a post, since I had to futz around a few times to solve it, and I managed it in a slightly different way than some others.

If you haven’t solved it, then try. Come back here later and compare solutions, but give it a try first.


Solution coming below, so don’t scroll if you don’t need the solution.






But first,







I had a misstep in this one. I loaded the entire list of packages, as separate lines, into separate rows of a single-column table. My challenge to myself was not to use ETL work or updates to break this apart. I wanted a simple solution, since I didn't want to take up extra space in the database.

As a result, I wanted a single query from a single string column that had the package size stored as one value, '2x3x4' as an example.

My first attempt used the Moden splitter function, which seemed to work well. I got three rows for each package. I then used a window function to grab that data, order by the sizes, and then start performing calculations. When I didn't get the right result, I started digging in.

One of the first things I saw was that I had multiple packages with the same sizes. So I had two 22x3x1 packages, and when I used a partition based on the dimensions, I had calculation problems. That's because the window partition doesn't know that three rows are one package and three are another.

I could have fixed this with some other value to capture the package, maybe a row_number even, but I decided not to go down this route.





My Solution

I decided to break this down, and I used a series of CTEs to do this. I haven't gone back to optimize things, or combine CTEs, which is possible, but instead left the CTEs as I wrote them to solve parts of the puzzle. Multiple CTEs are easy, and they help examine the problem in pieces.

My first step was to parse the string. I don’t love this solution as it is limited to a three dimension package, but it does seem to be the easiest way to break down the dimensions of the package. My query looks to find the string positions for:

  • end of the first dimension
  • start of the second dimension
  • start of the third dimension.

This gives me the simple query:

with cteSplit (d, el, sw, sh)
as
( select d = dimensions
       , endlength = charindex('x', dimensions) - 1
       , startwidth = charindex('x', substring(dimensions, charindex('x', dimensions), 20)) + charindex('x', dimensions)
       , startheight = len(dimensions) - charindex('x', reverse(dimensions)) + 2
  from day2_wrappingpresents d
)

Once I had these values, a little math gives me the length, width, and height.

, cteDimensions
as
( select d
       , l = cast(substring(d, 1, el) as int)
       , w = cast(substring(d, sw, sh - sw - 1) as int)
       , h = cast(substring(d, sh, len(d)) as int)
  from cteSplit d
)

Now I’m in business. These two queries were fairly simple, despite all the nested functions. I’ve got integers with the dimensions of each package.

Now the tricky part. I want these ordered. They’re columns, not rows, and I can’t put an ORDER BY in the CTE, so I need to use some comparisons.

, cteOrder
as
( select small = case
                   when l <= w and l <= h then l
                   when w <= l and w <= h then w
                   when h <= l and h <= w then h
                 end
       , middle = case
                   when (l >= w and l <= h) or (l <= w and l >= h) then l
                   when (w >= l and w <= h) or (w <= l and w >= h) then w
                   when (h >= l and h <= w) or (h <= l and h >= w) then h
                 end
       , large = case
                   when l >= w and l >= h then l
                   when w >= l and w >= h then w
                   when h >= l and h >= w then h
                 end
  from cteDimensions
)

Not the prettiest code, and perhaps there are better ways to determine this, but this passed all my tests, and seemed to work.

I could have put the next part in the final query, but I decided to make this a separate CTE to easily read the math. I know some people don’t like lots of CTEs, but in this case, I think they make the query very readable. I should look back at this in six months and see what I think.

, cteFinal
as
( select area = (2 * small * middle) +
                (2 * small * large) +
                (2 * middle * large)
       , slack = (small * middle)
  from cteOrder
)

Now I use a final outer query to sum things up.

select sum(area + slack)
  from cteFinal
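For comparison, the whole calculation collapses to a few lines in a procedural language. A Python sketch, where a simple sort replaces all the CASE logic that orders the dimensions:

```python
def paper_needed(dimensions):
    """Surface area of the box plus slack (area of the smallest side)."""
    small, middle, large = sorted(int(x) for x in dimensions.split("x"))
    area = 2 * (small * middle + small * large + middle * large)
    slack = small * middle
    return area + slack
```

The puzzle's own example, a 2x3x4 package, needs 58 square feet.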

The other thing I noticed here is that when I needed to solve the second part, I only had to change the math in the cteFinal to get the new values. It took longer to re-read the second part than to change the code and solve it.

I looked over how Wayne Sheffield and Andy Warren solved this in T-SQL, and I thought their approaches were interesting. I didn't want to PIVOT or UNPIVOT anywhere, nor did I look at performance here. This runs so quickly, I'm not sure it matters, though I wonder: if we were calculating across 1 million rows, would one be better?

I may look, but for now, I’ll leave that to someone else.

T-SQL Tuesday #74–The Changes

It’s T-SQL Tuesday time, the monthly blog party on the second Tuesday of the month. This is the first T-SQL Tuesday of 2016, and all you have to do to participate is write a blog post. It’s supposed to be today, but write one anytime you can and look for the invitation next month.

I’ve got a list of topics here, but watch the #tsql2sday tag on Twitter.

This month's topic comes from Robert Davis (@SQLSoldier). The topic is Be the Change, and it's a good one.

Quick Changes

I’m going to write about SQLServerCentral here. Years ago we were updating our email system to send a high volume of email in two ways. At the time we’d considered purchasing software from others, but found the cost to be significant at our volumes (5-6 figures a year). Instead we needed to handle emails stored in our SQL Server database in two ways:

  • Thousands of bulk emails sent overnight, as quickly as possible
  • Quick, high priority emails sent in response to actions

These two conflicting requirements meant that a simple queue of emails to send wasn't easy to design around. We also needed to deal with the issues of scaling, so we wanted to have multiple separate machines that could help spread the load. We were building a small .NET process that would run every minute and send a series of emails.

Our design process led us to build priority levels into our table. We couldn't think of more priorities at the time, but we allowed for them with a smallint. Our bulk emails were inserted with a priority of 2, and the registration emails, forum notes, etc., were stored with priority 1.

Once we had a separation of emails, we needed a way to determine what had already been sent. To do this, we used a NULL date for the sending date. This allowed each process to determine when new information had been inserted into the table and needed to be processed.

This worked well for a single machine. The process would:

  • query for xx priority 1 emails
  • send priority 1 emails
  • update sent priority 1 emails with the sent date/time.
  • query for yy priority 2 emails
  • send priority 2 emails
  • update priority 2 emails with sent date/time.

The updates actually occurred for each email sent, so we could easily track the time and order of sends for troubleshooting purposes. We would query a few hundred emails each minute, let's say 500, knowing that was the rate at which we could send emails. We wanted all priority 1 emails to go, so our value for yy would be 500 - xx.

As we worked to scale things out, we also needed to track what items were queried by which client. Our solution here was to add a machine name to the data, which was blank when emails were inserted, but would be updated by a client with its name as it queried rows. Since we were looking to determine which emails to send, we’d update xx rows with the name of a client process and then query back those rows. The query used the sent date of NULL with the client name to get the correct rows.

Using a combination of the date sent, the client name, and the priority, we could easily manage detecting and working with changes to this table and build a high volume queue that worked extremely well on SQL Server 2000, and all versions since.
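The claim-then-query step generalizes beyond SQL Server. Here's a sketch of the idea using Python's built-in sqlite3 standing in for the real system; the table and column names are invented for illustration. Each client stamps a batch of unsent rows with its own name, then selects back only the rows it stamped:

```python
import sqlite3

def claim_batch(db, client, priority, batch_size):
    """Stamp up to batch_size unsent emails of one priority with this
    client's name, then return exactly the rows this client claimed."""
    db.execute(
        """UPDATE emails SET client = ?
           WHERE rowid IN (SELECT rowid FROM emails
                           WHERE sentdate IS NULL AND client = ''
                             AND priority = ?
                           LIMIT ?)""",
        (client, priority, batch_size))
    return db.execute(
        """SELECT rowid, body FROM emails
           WHERE sentdate IS NULL AND client = ?""",
        (client,)).fetchall()
```

Because each client filters on its own name plus the NULL sent date, two clients running at once never pick up each other's rows.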

Advent of Code Day 1, Puzzle B

As I continue through the Advent of Code, albeit slowly, I'm solving each puzzle 3 ways. I worked through 6 of them in Python before moving on to other languages, and this is my chance to catch up with both PowerShell and T-SQL. I likely won't post all solutions, but I was having fun rewriting code, so here are the ways I looked at things.


Note: You should try this on your own. I logged into the Advent of Code site with GitHub and things worked great for me.


Go on, give it a try.


I’ll wait.


Solutions coming.




I started here, using iteration, which is very powerful in Python. In this case, I took advantage of multiple variable assignment in Python to enumerate the string and get each value and index. I then use a comparison to determine whether I add or subtract one. Finally, an IF returns the current position if I hit -1. I should probably have a break in there for efficiency as well.

def calculate_negative(directions):
    start = 0
    for i, c in enumerate(directions):
        if c == '(':
            start += 1
        else:
            start -= 1
        if start == -1:
            return i + 1   # positions in the puzzle are 1-based


This was a bit trickier for me. I wasn’t sure how to work with a string and pull out values. I did some searching and ran across the .ToCharArray function. That doesn’t feel like the best way to do this, but I decided to use it.

The rest of the function is similar and gave me the correct answer, so there you go.

$count = 0
$floor = 0
foreach ($c in $input.ToCharArray())
{
    if ($c -eq '(') {$floor += 1}
    elseif ($c -eq ')') {$floor -= 1}
    $count += 1
    if ($floor -eq -1)
    {
        Write-Host $count
        break
    }
}


A more complex situation here, given that we need to work through a string, calculating a running total. I broke the input up using a tally table and a splitter with SUBSTRING. This was fairly easy, and not complex, as I was just getting individual characters.

Then it was a simple running total of the CTE to get me the totals at each point in time. This worked well, but I needed the first -1 total.

Finally, I used the outer query to get the minimum position out of the result set, which gave me the first place the running total hit -1.

WITH tally (n)
AS
( SELECT n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
  FROM (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)) a(n) -- 10
  CROSS JOIN (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)) b(n) -- x 10 = 100
  CROSS JOIN (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)) c(n) -- x 10 = 1000
  CROSS JOIN (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)) d(n) -- x 10 = 10000
)
, direction (n, d)
AS
( SELECT tally.n
       , d = CASE WHEN SUBSTRING(@input, n, 1) = '(' THEN 1 ELSE -1 END
  FROM tally
  WHERE n <= LEN(@input)
)
, currfloor (n, currentfloor)
AS
( SELECT d.n
       , 'currentfloor' = SUM(d.d) OVER (ORDER BY d.n ROWS UNBOUNDED PRECEDING)
  FROM direction d
)
SELECT MIN(currfloor.n)
FROM currfloor
WHERE currentfloor = -1

Advent of Code Day 1, Puzzle A

I have been working my way through the Advent of Code series over the last few weeks. This is a side project, allowing me to use some Python skills to solve the various puzzles. I also started to use PowerShell to solve the puzzles, mostly because this means I practice new skills.

Note: I urge you to try to work on this yourself, without reading my solutions. It's fun and worth some practice. Once you've done that, feel free to read on.

Go ahead

Write your own.

Here’s the input:

DECLARE @input VARCHAR(MAX) = ‘()(((()))(()()()((((()(((())(()(()((((((()(()(((())))((()(((()))((())(()((()()()()(((())(((((((())))()()(()(()(())(((((()()()((())(((((()()))))()(())(((())(())((((((())())))(()())))()))))()())()())((()()((()()()()(()((((((((()()())((()()(((((()(((())((())(()))()((((()((((((((())()((()())(())((()))())((((()())(((((((((((()()(((((()(()))())(((()(()))())((()(()())())())(()(((())(())())()()(()(()((()))((()))))((((()(((()))))((((()(()(()())())()(((()((((())((((()(((()()(())()()()())((()((((((()((()()))()((()))()(()()((())))(((()(((()))((()((()(()))(((()()(()(()()()))))()()(((()(((())())))))((()(((())()(()(())((()())))((((())))(()(()(()())()((()())))(((()((()(())()()((()((())(()()((())(())()))()))((()(())()))())(((((((()(()()(()(())())))))))(()((((((())((((())((())())(()()))))()(())(()())()())((())(()))))(()))(()((()))()(()((((((()()()()((((((((()(()(())((()()(()()))(())()())()((())))()))()())(((()))(())()(())()))()((()((()(()()())(())()()()((())())))((()()(()()((()(())()()())(((()(()()))))(())))(()(()())()))()()))))))()))))((((((())))())))(()(())())(()())))))(()))()))))))()((()))))()))))(()(()((()())())(()()))))(((())()))())())())(((()(()()))(())()(())(())((((((()()))))((()(()))))))(()))())(((()()(()))()())()()()())))))))))))))(())(()))(()))((()(())(()())(())())(()())(())()()(()())))()()()))(())())()))())())(())((())))))))(())))(())))))()))))((())(()(((()))))(()))()((()(())))(()())(((((()))()())()()))))()))))()))())(()(()()()))()))))))((()))))))))))()((()))((()(())((())()()(()()))()(()))))()()(()))()))(((())))(())()((())(())(()())()())())))))))())))()((())))()))(()))()()))(((((((()))())(()()))(()()(()))()(()((()())()))))))(((()()()())))(())()))()())(()()))()()))))))))(())))()))()()))))))()))()())))()(())(())))))()(())()()(()()))))())((()))))()))))(()(((((()))))))))())))())()(())()()))))(())))())()()())()()())()(()))))()))()))))))))())))((()))()))()))())))()())()()())))())))(()((())()((()))())))))())()(())((())))))))))))())()())(())())())(()
))(()))()))())(()(())())()())()()(()))))(()(())))))))(())))())(())))))))())()()(())())())))(())))))()))()(()())()(()))())())))))()()(()))()))))())))))))))()))))()))))))())()())()()))))()())))())))))))))))()()))))()()(((()))()()(())()))))((()))))(()))(())())))(())()))))))(()))()))))(())())))))()))(()())))))))))))))())))))))))()((()())(()())))))))((()))))(())(())))()(()())())))())())(()()()())))()))))))())))))())()()())))))))))))()()(()))))()())()))((()())(()))))()(()))))))))))()())())(((())(()))))())()))()))()))))))()))))))(()))))()))))()(())))(())))(()))())()()(()()))()))(()()))))))))()))(()))())(()()(()(()())()()))()))))))))(())))))((()()(()))())())))))()))())(()())()()))())))()(()()()()))((())())))())()(()()))()))))))))(()))(())))()))))(()(()())(()))))()())())()))()()))())))))))))))())()))))))()))))))))())))))()))))())(()())))(())()))())())))))()()(()()())(()())))()()))(((()))(()()()))))()))))()))))((())))()((((((()()))))))())))))))))))(((()))))))))))))(())())))))())(()))))))(()))((()))())))()(()((()))()))()))))))))))())()))()(()()))))())))())(())()(()))()))())(()))()))))(()()))()()(())))))()))(())(()(()()))(()()())))))(((()))))))()))))))))))))(())(()))))()())())()()((()()))())))))(()))))())))))))()()()))))))))())))()(((()()))(())))))(((())())))))((()))()(()))(()))))(()())))(()))())))))()))))(())(())))()((()))(())())))()()))()))))))))()))(()()()(()()()(()))())(())()())(((()))(())))))))))(((()())))()()))))))))()(())(()))()((((())(())(()())))()))(((())()()()))((()))(()))())())))())))(()))())()())())(()(())())()()()(())))())(())))(())))(())()))()))(()((()))))))))())(()))))))())(()()))()()))()(()(()())))()()(()((()((((((()))(())))()()()))())()))((()()(()))())((()(()(()))(()()))))()())))()))()())))))))()()((()())(())))()))(()))(())(()))())(()(())))()()))))))(((()(((()()))()(()(())())((()()))()))()))()))()(()()()(()))((()())()(())))()()))(((())()()())(())()((()()()()(()(())(()()))()(((((()())))((())))))(()()()))))(((()(())))()))((()((()(())()(()((())))((()())()(()))((((
)())()()(()))(())(((()((()())()((())()())(((()()))((()((())(()))(()())(()()()))((()))(())(()((()()())((()))(())))(())(())(())))(()())))(((((()(()(((((()())((((()(()())(())(()()(((())((()(((()()(((()()((((((())))())(()((((((()(()))()))()()((()((()))))()(()()(()((()()))))))(((((()(((((())()()()(())())))))))()))((()()(())))(())(()()()())))))(()((((())))))))()()(((()(()(()(()(()())()()()(((((((((()()())()(()))((()()()()()(((((((()())()((())()))((((((()(()(()(()())(((()(((((((()(((())(((((((((())(())())()))((()(()))(((()()())(())(()(()()(((()(())()))())))(())((((((())(()()())()()(((()(((())(()(((())(((((((()(((((((((()))(())(()(()(()))))((()))()(())())())((()(()((()()))((()()((()(())(())(()((())(((())(((()()()((((((()()(())((((())()))))(())((()(()((())))(((((()(()()())())((())())))((())((()((()()((((((())(((()()(()())())(()(()))(()(()))())())()(((((((()(((()(())()()((())((()(()()((()(()()(((((((((((())((())((((((())((()((((()(()((((()(((((((())()((()))))())()((()((((()(()(((()((()())))(())())(((()(((())((((((()(((((((((()()(())))(()(((((()((((()())))((()((()((()(()()(((())((((((((((((()(((())(()(((((()))(()()(()()()()()()((())(((((((())(((((())))))())()(()()(()(()(((()()(((((())(()((()((()(((()()((()((((())()))()((((())(())))()())(((())(())(()()((()(((()()((((((((((()()(()())())(((((((((())((((()))()()((((())(()((((()(((())())(((((((((((()((((())))(())(()(((()(((()((())(((((()((()()(()(()()((((((()((((()((()(()((()(()((((((()))))()()(((((()((()(()(())()))(())(((((((()((((()())(()((()((()(()))())))(())((()))))(((((((()()()())(()))(()()((()())()((()((()()()(()(()()))(()())(())(((((()(((((((((((()((()(((()(((((((()()((((((()(((((()(()((()(((((())((((((()))((((())((()()((())(((())()(((((()()(((((()((()(()(((((((()(((((()((()((()((())(())((())(()))()()))(()()(()(()()(((((((()(((()(((())()(((((()((((((()())((((())()((()((()(()()())(()))((((()()((((((()((()(()(()((((()((()((())((((((()(()(())((((((()((((((((((()((())()))()(()(()(((((()()()))((())))()(()((((((((((((((()(((()((((
()((())((()((()(((()()(()(((()((())(()()())))()(()(()(((((()()(()(()((((()(((((())()(()(()))(((((()()(((()()(())((((((((((((((())((())(((((((((((())()()()(())()(()(()(((((((((())(((()))(()()())(()((((()(())(((((()())(())((((((((())()((((()((((((())(()((()(())(((()((((()))(((((((((()()))((((()(())()()()(())(()((())((()()))()(((())(((((())((((((()()))(((((((((()((((((())))(((((((()((()(()(())))())(()(()))()(((((()())(()))()(()(())(((()))))())()())))(((((()))())()((()(()))))((()()()((((((()))()()((((((((())((()(()(((()(()((())((()())(()((((())(()(((()()()(()(()()))())())((((((((((())())((()))()((())(())(())))())()(()()(())))())(()))(((()(()()(((()(((())))()(((()(())()((((((())()))()))()((((((()(()(((((()())))()))))())()()(((()(((((())((()()(()((()((()(()(()(())))(()()()()((()(())(((()((()))((((()))())(())))())(()))()()()())()))(((()()())()((())))(())(()()()()(()())((()(()()((((())))((()((()(())((()(()((())()(()()(((()())()()())((()))((())(((()()(())))()()))(((()((())()(((((()())(())((())()())())((((((()(()(((((()))(()(‘;

Now solve it.

Your turn.

Here come my solutions, so don’t scroll down until you’re ready.




I've been trying to learn Python, so this was a good chance to practice. Day 1 is fairly simple in Python, since I can do a .count on a string. In my case, I made a function that takes the input and does a count of a particular string.

def calculate_floor(directions):
    print(str(directions.count('(') - directions.count(')')))

This function takes the input and counts the number of open parenthesis, an up, and then subtracts the count of the close parenthesis, the down. I print that, and with the input given on the site, this gives me the correct answer.


I have been trying to improve my PowerShell skills, and this doesn’t really help with lots of the types of things I do in PoSh, but it does help me learn a bit more about the language and practice some code.

In this case, I take a similar approach, which may not be the best. I start by converting the input to an array and then using Where-Object to look for the open parenthesis. I do the same for the close, and then subtract. I don’t know if I could do this in one statement, and didn’t research much. What I found wasn’t helpful, but I was able to solve the puzzle. Perhaps someone will tell me if I can assign two variables at once with different counts.

I did try RegEx, but couldn't get it to work, so I moved on. Perhaps I'll go back to that. Input abbreviated.

$input = '()(((()))(()()()((((()(((())(()(()((((((()(()(((()'

$up = ($input.ToCharArray() | Where-Object {$_ -eq '('} ).Count
$down = ($input.ToCharArray() | Where-Object {$_ -eq ')'} ).Count

Write-Host("Up:" + $up)
Write-Host("Down:" + $down)
Write-Host("Final:" + ($up - $down))


This was interesting to me since some of the puzzles I did in Python (I was 6 ahead when I tried new languages), required some parsing. In this case, I decided that I’d do some replacement. I thought that I wanted to count all open parens. How could I do this?

The easy way is to count the opens by removing the closes, or vice versa. REPLACE() does this, so my up count came from a CTE:

WITH UpFloors (UpCount)
AS
( SELECT LEN(REPLACE(@input, ')', '')) 'UpFloors')

Then I did the same for the closes.

WITH DownFloors (DownCount)
AS
( SELECT LEN(REPLACE(@input, '(', '')) 'DownFloors')

Now I can put them together.

WITH UpFloors (UpCount)
AS
( SELECT LEN(REPLACE(@input, ')', '')) 'UpFloors')
, DownFloors (DownCount)
AS
( SELECT LEN(REPLACE(@input, '(', '')) 'DownFloors')
SELECT UpFloors.UpCount - DownFloors.DownCount
FROM UpFloors, DownFloors

Multiple CTEs – #SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

One of the techniques that I find very handy in solving SQL problems is the CTE. This is much easier to read, for me, than embedding code into a derived table. For example, which of these is easier to decode, or perhaps more importantly, debug?

WITH calendarquarters (qtr, yr, dt)
AS
( SELECT qtr = 'Quarter ' + CAST(c.QtrNum AS VARCHAR(3))
       , yr = 'Our year ' + CAST(YEAR(c.CalDate) AS VARCHAR(4))
       , c.CalDate
    FROM dbo.Calendar AS c
)
SELECT cq.qtr, cq.yr, s.LastSale
FROM calendarquarters cq
INNER JOIN dbo.CustomerSales AS s
  ON cq.dt = s.LastSale

Or this:

SELECT cq.qtr, cq.yr, s.LastSale
FROM (  SELECT qtr = 'Quarter ' + CAST(c.QtrNum AS VARCHAR(3))
             , yr = 'Our year ' + CAST(YEAR(c.CalDate) AS VARCHAR(4))
             , dt = c.CalDate
          FROM dbo.Calendar AS c
) cq
INNER JOIN dbo.CustomerSales AS s
  ON cq.dt = s.LastSale

I’d argue the first becomes easier, especially when I have multiple tables in the join. In this way I can more easily see in the first example I’m joining two tables/views/CTEs together. If I want to know more about the details of one of those items, I can easily look up and see the CTE at the beginning.

However when I want multiple CTEs, how does this work?

That’s pretty easy. It’s actually just listing each CTE, separated by commas. In my case, I wanted to do this:

with upcte
as
( select uplen = len(replace(floorstring, ')', ''))
  from Day1
)
, downcte
as
( select downlen = len(replace(floorstring, '(', ''))
  from Day1
)
select uplen - downlen
from upcte, downcte
That's it. I use the WITH once, and then each CTE stands alone. I could add the column names if I wanted, but here I can easily see I'm querying two numbers from two "tables", CTEs in this case, and performing subtraction. If I want the details, I get that from the previous definitions.
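The multiple-CTE form isn't T-SQL specific, either. As a quick check, the same shape runs against SQLite through Python's built-in sqlite3 (with LENGTH standing in for LEN):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Day1 (floorstring TEXT)")
db.execute("INSERT INTO Day1 VALUES ('()((((')")
# One WITH, then each CTE separated by a comma, exactly as in T-SQL.
result = db.execute("""
    WITH upcte (uplen) AS
      (SELECT LENGTH(REPLACE(floorstring, ')', '')) FROM Day1),
    downcte (downlen) AS
      (SELECT LENGTH(REPLACE(floorstring, '(', '')) FROM Day1)
    SELECT uplen - downlen FROM upcte, downcte
""").fetchone()[0]
# '()((((' has five ups and one down, so the result is 4
```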


A quick post as I used this technique in solving Day 1 of the Advent of Code. This took about 5 minutes to write, and I got to add my own twist to the concept.


A quick one from my first Google result. While I knew how to do this, I double-checked myself with a search:

Multiple CTE in One Select Statement Query

Getting all Yesterday’s Sales, or Finding Midnight Yesterday

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I see questions like this regularly. How do I get all the sales from yesterday? I tried using DATEADD(day, -1, getdate()), but I only get some of the sales.

Many people working with T-SQL know this is an issue. They know that getdate() returns the date and time of this instant (roughly). At the time of this writing, that’s 3:11 pm.


However if I want sales from yesterday, I really want all timestamps from midnight on. So I probably want code that looks like this:

SELECT SUM(ordertotal)
 FROM sales
 WHERE SalesDate >= '20151124 00:00:00'
 AND SalesDate < '20151125 00:00:00'

How do I get the time to be midnight?

The easy answer is one I’ve been using quite a bit lately to answer questions, and I’ve refreshed my knowledge of the datetime trick. I use a combination of DATEADD and DATEDIFF to get to a 0 based datetime.

SELECT DateAdd(Day, Datediff(Day, 0, GetDate()), 0)

In this case, I'll get midnight this morning, or 2015-11-25 00:00:00. This is because I'm using 0 as my base date and looking for the whole days (in DATEDIFF) since that 0 based date. When I add those days back with DATEADD to the same zero based date, I get the correct date, but with a 0 based time. Subtracting one more day, DATEADD(Day, DATEDIFF(Day, 0, GETDATE()) - 1, 0), gives midnight yesterday, or 2015-11-24 00:00:00.

This same technique works to find the first of this month.

SELECT DateAdd(Month, Datediff(Month,0, GetDate()), 0)

You can also use other datetime values to normalize those times.
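Application code has the same problem, and the same floor-to-midnight idea applies. A small Python sketch:

```python
from datetime import datetime, timedelta

def midnight(dt, days_back=0):
    """Floor a datetime to midnight, optionally stepping back whole days."""
    return dt.replace(hour=0, minute=0, second=0, microsecond=0) \
           - timedelta(days=days_back)
```

midnight(now, 1) and midnight(now) give the two bounds for "all of yesterday."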


This was a quick post. I had answered the question and spent less than ten minutes putting this together.