Running Multiple Threads Quickly

Recently I was looking to generate a quick workload against an instance. I wanted to add to the amount of work my system was doing by executing a number of queries simultaneously. There are a variety of ways to do this, but I just wanted a quick set of queries. I’ll show a couple ways to do this and then look at a few other tools in later posts.

The Cumbersome Way

I can open queries in two windows in SSMS. Note that each of these will execute 50 times.

[Screenshot: two query windows in SSMS, each set to execute 50 times]

Now I have two windows, and I can click execute in one, then click to the other, and click execute again. That’s easy. When I do this, I’ll have two threads, each running a query 50 times.
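The 50 executions come from the GO batch separator, which accepts a repeat count in SSMS and SQLCMD. A minimal sketch (the query itself is just a stand-in, not the one from my windows):

```sql
-- GO with a number repeats the preceding batch that many times
SELECT TOP 10 name
  FROM sys.objects
 ORDER BY NEWID();
GO 50
```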

A Better Way

A better way is to use a SQLCMD call with my query in it. In this case, I’ll create a notepad file and add multiple SQLCMD calls in it.

[Screenshot: the .CMD file with multiple SQLCMD calls]

The key here is the “start” at the beginning of each line. This launches a new command window with the program running in it, without waiting for it to finish. In this case, I’ll get 5 windows very quickly, each running my query. My query is in another file:

[Screenshot: the query file]

If each query is set to run multiple times, I’ll have a simple load generated. In my case, I’ll run the .CMD file from the command line, but I could double click it. When I do, I see this:
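For reference, the .CMD file is just a handful of start + SQLCMD lines. This is a sketch, with a made-up server name and file name rather than my actual ones:

```cmd
REM each "start" opens a new command window running SQLCMD against the query file
start sqlcmd -S MyServer -E -i loadquery.sql
start sqlcmd -S MyServer -E -i loadquery.sql
start sqlcmd -S MyServer -E -i loadquery.sql
start sqlcmd -S MyServer -E -i loadquery.sql
start sqlcmd -S MyServer -E -i loadquery.sql
```

The query file (loadquery.sql here) ends with GO 50, so each window repeats its query 50 times.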

[Screenshot: the command windows running the queries]

You can see the window where I started the queries in front. Three of the command windows are in the background, each of them running queries over and over. The output from the query, with all the dashes for spacing between the headers and data, appears in each window.

A Basic Recursive CTE and a Money Lesson

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

When I was six or seven years old, my Mom asked me a question. She asked if I’d rather have $1,000,000 at the end of the month, or a penny on day 1, with the note that each day of the month she’d double what I’d gotten the day before. Doing quick math in my head, $0.01, $0.02, $0.04, etc., I said a million.

Was I right? Let’s build a recursive CTE.

Recursion is an interesting computer science technique that stumps lots of people. When I was learning programming, it seemed that recursion (in Pascal) and pointers (in C), were the weed out topics.

However, they aren’t that bad, and with CTEs, we can write recursion in T-SQL. I won’t cover where this might be used in this post, though I will give you a simple CTE to view.

There are two parts you need: the anchor and the recursive member. These are connected with a UNION ALL. There can be multiple items, but we’ll keep things simple.

I want to first build an anchor, which is the base for my query. In my case, I want to start with the day of the month, which I’ll represent with a [d]. I also need the amount to be paid that day, which is represented with [v]. I’ll include the $1,000,000 as a scalar at the end. My anchor looks like this:

WITH myWealth ( d, v )
AS (

-- anchor, day 1
SELECT
  'd' = 1
, 'v' = CAST( 0.01 AS numeric(38,2))

Now I need to add in the recursive part. In this part, I’ll query the CTE itself, calling myWealth as part of the code. For my query, I want to increment the day by 1 with each call, so I’ll add one to that value.

myWealth.d + 1

For the payment that day, it’s a simple doubling of the previous day. I could do this a few ways: addition or multiplication. I’ll use multiplication since it’s easier to read.

myWealth.d + 1
, myWealth.v * 2

My FROM clause is the CTE itself. However, I need a way to stop the recursion. In my case, I want to stop after 31 days, so I’ll add that.

UPDATE: The original code (<= 31) went to 32 days. This has been corrected to stop at 31 days.

myWealth.d <= 30

Now let’s see it all together, with a little fun at the end for the outer query.

WITH myWealth ( d, v )
AS (
-- anchor, day 1
SELECT
  'd' = 1
, 'v' = CAST(0.01 AS NUMERIC(38, 2))
UNION ALL
-- recursive part, double the previous value, end at one month
SELECT
  myWealth.d + 1
, myWealth.v * 2
FROM myWealth
WHERE myWealth.d <= 30
)
SELECT
  'day' = myWealth.d
, 'payment' = myWealth.v
, 'lump sum' = 1000000
, 'decision' = CASE WHEN myWealth.v < 1000000 THEN 'Good Decision'
                    ELSE 'Bad decision'
               END
FROM myWealth;

When I run this, I get some results:

[Screenshot: the query results]

Did I make a good choice? Let’s look for the last few days of the month.

[Screenshot: results for the last few days of the month]

That $1,000,000 isn’t looking too good. If I added a running total, it would be worse.
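As a quick sanity check without the CTE, the day-31 payment is a penny doubled 30 times:

```sql
-- $0.01 doubled 30 times: 0.01 * 2^30
SELECT CAST(0.01 * POWER(CAST(2 AS BIGINT), 30) AS NUMERIC(38, 2)) AS Day31Payment;
-- returns 10737418.24, more than ten times the lump sum
```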


If you want to try this yourself, add the running total and explain how it works.

Changing Your PASS Credentials

I got an email today from PASS, noting that credentials were changing from username to email. That’s fine. I don’t really care, but I know I got multiple emails to different accounts, so which account is associated with which email?

I clicked the “login details” link in the email and got this:

[Screenshot: the PASS user login page]

Not terribly helpful, but I was at least logged in. If I click my name, I see this:

[Screenshot: my PASS account details]

Some info, including the email, though I’m not sure whether it’s linked to the email I clicked on or based on browser cookies. However, there’s no username here.

If I click the edit profile link, I get more info, but again, no username. There’s no way to tie anything I’ve done in the past back to this account.

[Screenshot: the PASS edit profile page]

I have always used a username to log into the SQLSaturday site, so I went there. On this PASS property, I’ve got my username.

[Screenshot: the SQLSaturday site showing my username]

If I click the username, I go back to the PASS site, to the MySQLSaturday section, but again, no link to this username. However, I now realize which email is related to which username.

Hopefully the others will go dormant soon and I won’t get multiple announcements, connectors, ballots, etc.

The point here isn’t to pick on PASS as much as it is to point out some poor software and communication preferences. Changing to email from username (or vice versa) can be a disruptive change. I’d expect the email would include some information on username and email relation, or at least username since it was sent to a specific email. That would allow me to determine where I might need to contact PASS to update things, or which username was affected for me.

I’d also expect the username to be stored somewhere and visible on the site. Even if this isn’t valid login information, why not just show it? When we migrated SQLServerCentral from one platform to another, we kept some columns in the database that showed legacy information. This information wasn’t really used, but it did help track down a few problems we had with the migration. Having a bit of data is nice, and it doesn’t cost much (at least in most cases).

This wasn’t a smooth process, though it wasn’t too broken for me. I like that PASS sent the communication, and I’m glad the old method still works; I logged in with my username today. I wish there were a bit more consistency between PASS applications, and that they had included a date when usernames will no longer work. I also hope they update their testing (or test plan) with any issues they discover this week, so the problems aren’t repeated.

Changing a Computed Column–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I was working with a computed column the other day and realized I had the wrong definition. In this case, I was performing a large calculation, and the result was bigger than an int can hold. However, the first part of the formula was an int, so the whole expression was implicitly evaluated as an int.

I needed to change the formula, and then realized that plenty of people might not work with computed columns much, and not realize how you alter a computed column.

You don’t.

In fact, you need to drop the column and add it back. In my case, this was what I did. Here was my table:

, StatMonth TINYINT
, StatYear int
, PageVisits INT
, TimeOnSite TIME
, Engagement AS (PageVisits * DATEDIFF(SECOND, CAST('00:00:00' AS TIME), TimeOnSite))

I wanted to cast the PageVisits part of the column to a bigint to solve the issue. I first needed to do this:

ALTER TABLE dbo.SiteStats
DROP COLUMN Engagement

Once that’s done, I can do this:

ALTER TABLE dbo.SiteStats
  ADD Engagement AS (CAST(PageVisits AS BIGINT) * DATEDIFF(SECOND, CAST('00:00:00' AS TIME), TimeOnSite));

Now I have a new definition that works great.

Some of you might realize this could be an issue for columns in the middle of the table, and it is: the re-added column moves to the end. However, you shouldn’t worry about column order. Select the columns explicitly and you can order them any way you want.
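For example, naming the columns gives you any order you like, regardless of where the re-added computed column physically sits (a quick sketch using the table above):

```sql
-- physical column position doesn't matter when columns are named explicitly
SELECT StatYear, StatMonth, PageVisits, TimeOnSite, Engagement
  FROM dbo.SiteStats;
```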


A quick post, five minutes. Even if you had to search for how this works, you could do this in 10-15 minutes, tops. Research, write why you did this and potential issues with your system.

The New Global Dashboard

Redgate recently released SQL Monitor 5.2, which is the latest upgrade to our monitoring/alerting/troubleshooting product for DBAs. This was the big change that the team has spent a lot of time developing and refining. It’s been available for a few weeks as a hidden URL, but with 5.2, this becomes the default main screen for SQL Monitor.

I think it’s a good move forward. In general, I don’t like things physically moving around in applications, as I get used to them being in a certain spot, where I expect them. However, in this case, it makes sense.

In a dashboard for monitoring and alerting, you want to know what’s broken. Having a list of 20 servers at the top, and 1 broken one potentially “beneath the fold” (in newspaper parlance) and requiring scrolling would be bad. As a DBA, I’d want to see those items that are problematic. When I look at the site, I see:

[Screenshot: the Global Dashboard]

The cluster has a long running query, which is an active, high priority alert. If I were to clear this, along with the other active alerts, this “card” would move to the end, and the sm-cluster2 item would take the top left spot.

Note there are options to configure what is a high or low priority, and even pin specific servers at the top, but the general behavior is to let you know what’s broken now.

There’s one other cool feature in this. If I have a high-level alert, like a machine unreachable, and the alert clears itself (the machine reboots), I may see the machine as “green” on the dashboard when I log in. This is because current alerts are shown, not historical ones. I can still get the historical data, but the intention is to make this a responsive tool for right now, not last night.

I think this is a great change, and I’m excited to see how well it works in practice as customers roll this out.

SQL Nexus

I was lucky enough to be accepted to speak at SQL Nexus in Copenhagen, and I attended the event a few weeks ago. This is the new Nordic SQL conference that seems to be replacing the SQL Rally. I’d never been to Denmark, but the trip was easy, and I had no problem getting from the airport to the area downtown where the event was held. Surprisingly (for me), everyone I met spoke English, which was nice, considering I didn’t even know how to pronounce many words, including the name of the train station I was trying to get to.

The speaker dinner, appropriately held on the water, was quite enjoyable. I’ve had some good ones, but I think this was the best of them all.

[Photo: the speaker dinner on the water]

The event was held in a movie theater, which I think is very interesting. The spaces were large, theater-style rooms with big screens. Our computers were hooked to the projectors, which worked well. You can see the keynote in the IMAX room below.

[Photo: the keynote in the IMAX room]

The keynote was interesting, from Joseph Sirosh (Microsoft) and Troels Petersen (a physicist at the Niels Bohr Institute). Joseph gave the same keynote at SQL Bits, but I don’t think Dr. Petersen went over to the UK. I want to write on Joseph’s talk a little later, so I’ll just take a few moments and show some highlights of how data is managed at CERN’s Large Hadron Collider.

Dr. Petersen works with the ATLAS detector. He had a few nice stats on the hardware.

[Photo: slide with ATLAS hardware stats]

If you do some math here, you’ll see that when they run the detector and conduct an experiment they produce a lot of data. In fact, this was the next slide:

[Photo: slide on the data volumes produced]

That’s serious data. Interestingly, he said it’s an unmanageable amount: they need sensors to make decisions on the raw data, because they can’t analyze that level of information even with computers. However, they do have computers. In fact, they have:

  • 1,127 racks
  • 10,070 servers
  • 17,259 processors
  • 90,948 cores
  • 75,718 disks
  • 113,852 TiB raw disk
  • 312 TiB of memory capacity
  • 120 tape drives
  • 52,000 tape cartridges
  • 75 PiB of data on tape

In all their analysis, searching for the secrets of the universe, they’ve learned a lot and gotten better at finding anomalies and problems with data. They know all their data is flawed, so they must use algorithms to try and find the data they can rely on in the entire lake of bits that is captured and stored. They use a lot of machine learning to comb through data.

He even said their research showed there was a reason certain data varied in line with the phase of the moon: Dr. Petersen said they determined that the collider tunnel actually lengthens by 1mm because of the moon.

There were lots of other interesting SQL Server 2016 talks, including the ones you’d expect on machine learning and R, one on IoT (a bit of a wreck of a talk), and a great one showing MitM and other attacks against a SQL Server from a Linux machine.

The event was two days long. My session on encryption and security changes in SQL Server 2016 went well. I’ll do some writing on my demos, showing more Always Encrypted, RLS, Dynamic Data Masking, and more.

The theater was right next to the river, with a nice walkway. A few of us were able to run alongside in the morning. Plenty of people walked or biked along the river each day, and the weather was amazing.

[Photo: the walkway along the river next to the theater]

I think this was a really nice conference, at a good cost for those of you in Europe. If you can get to Copenhagen, it might be worth the two days (or three with a pre-con) to try SQL Nexus next year. I’m hoping they do the event again, and I’d certainly like to go if I can.

Pensacola in June–SQL Saturday #491

On June 5, I’ll be back in Pensacola, FL for SQL Saturday #491. It’s been a few years since there was a conference in the area, and I’m excited to go back. I really like the area, and am looking forward to visiting for a few days.

I’ll be presenting my dive into Always Encrypted in SQL Server 2016. I presented this in Phoenix recently and the session was well received. I like this feature, but there are some caveats and gotchas, as well as various items you should consider before making the decision to implement it. Hopefully I’ll see a few of you there and you’ll enjoy the talk.

However, if you’re nearby, there are pre-conference sessions on both Thursday and Friday that you can attend. Both are inexpensive training and a chance for you to quickly improve some data skills.

There are lots of great sessions from other speakers, and Pensacola is a great place to spend a weekend. Lovely beaches, not too large, not too expensive, and a beautiful area in June. If you’re anywhere from Houston to Florida to Tennessee, consider combining a little learning with a vacation on the gulf coast.

Another Day, Another City

After a few days in Copenhagen for SQL Nexus, I traveled to London late last night, and am now making my way to Liverpool for SQL Bits. A hectic schedule, leaving one city, flying to another, a few hours sleep and moving again today.

It’s exciting to be heading back to SQL Bits again, as it’s my favorite conference, and I am looking forward to seeing people from all over the world that I’ve become friends with across the years. Plus it will be fun to see all the costumes at Friday night’s space-themed party.

I’ve got two sessions to deliver. A lunchtime session on version control Friday and then a testing talk Saturday.

Between now and then, some final rehearsals, and hopefully an hour or two at the Beatles museum.

Explicitly using tempdb with ##tables

I had someone ask me last night if this statement would create a permanent table in tempdb with a strange name or a global temp table:

CREATE TABLE tempdb..##mytable
( id INT
);

My suspicion was that this would always be a temp table, but since I usually don’t include tempdb, I decided to test things. I quickly opened up SSMS and tried it. I got this message:

[Screenshot: the message in SSMS]

That seems to confirm what I suspected: the ## prefix overrides everything else and determines the table type. When I look in the list of tables, I see my table there as a temporary one.

[Screenshot: the table listed as a temporary table]

This is the behavior I’d expect, and it acts the same in SQL 2012, 2014, and 2016 (RC2).
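If you want to verify where the table landed, you can query tempdb’s catalog directly. A minimal sketch:

```sql
-- global temp tables show up in tempdb's catalog under their ## name
SELECT name, type_desc
  FROM tempdb.sys.tables
 WHERE name = '##mytable';
```

A permanent table created as tempdb.dbo.mytable would show up under the name mytable instead.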

I don’t think there’s an advantage to using tempdb..##mytable, or even to creating a permanent table (one that lasts only until a restart) using tempdb.dbo.mytable, but if there is, please note something in the comments.