T-SQL Tuesday #77–My Favorite Feature

This month is an interesting but tough topic. The blog party is hosted by Jens Vestergaard, and his invitation is short and simple: pick your favorite feature and write about it. This is good, because SQL Server has grown so much that I’m sure many people will choose different things. It’s hard, though, for someone who works with many different features to pick just one.

My Favorite Feature

I’ve been working with SQL Server since 1991. I’ve worked with all the Windows versions, and a few on OS/2. That means I’ve had the chance to manage and develop applications on:

  • SQL Server 4.2
  • SQL Server 6/6.5
  • SQL Server 7/2005
  • SQL Server 2008/R2/2012/2014/2016

I’ve seen the platform grow and expand quite a bit. I’ve spoken on a number of topics over the years, as my jobs have changed and my emphasis has wandered. Of all the features available, however, if I have to choose one, it would be…

SQL Agent.

I’m a programmer at heart. I grew up admiring the power of computers to execute code over and over again. I appreciate the ability of computers to remember things and remind me, or to handle them on their own.

SQL Agent allows that. Over the years, I’ve taken advantage of SQL Agent to perform maintenance on systems, to alert me to issues, and to run processes that needed to be performed. I can even schedule one-off jobs and have them delete themselves. I can’t tell you how many times someone has asked me to run something “later” on the server, often during the evening. It’s easy to schedule a job for later, have it run one time, and then let it disappear.

For example, Andy asks me to run a query tonight that gathers some data. I create a new job and step.

I set up a one-time schedule.

Certainly I can set alerts and logging, but in the notifications settings, I can have this job delete itself when it completes.

I certainly want to make sure the results are saved, but this lets me execute code with little effort, and in a way that doesn’t clutter up my system.
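The same kind of one-off, self-deleting job can also be created in T-SQL with the msdb stored procedures. This is only a sketch: the job name, step command, and schedule values are made up for illustration, and @delete_level = 3 is the setting that removes the job once it completes.

```sql
USE msdb;
GO
DECLARE @jobid UNIQUEIDENTIFIER;

-- Create the job; @delete_level = 3 deletes it after it completes
EXEC dbo.sp_add_job
    @job_name = N'One-off data gather',        -- hypothetical name
    @delete_level = 3,
    @job_id = @jobid OUTPUT;

-- A single T-SQL step running the requested query
EXEC dbo.sp_add_jobstep
    @job_id = @jobid,
    @step_name = N'Run query',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.GatherData;';        -- hypothetical procedure

-- One-time schedule (@freq_type = 1) for this evening
EXEC dbo.sp_add_jobschedule
    @job_id = @jobid,
    @name = N'Tonight only',
    @freq_type = 1,
    @active_start_date = 20160407,             -- yyyymmdd
    @active_start_time = 230000;               -- 23:00:00

-- Target the local server so the Agent will run it
EXEC dbo.sp_add_jobserver @job_id = @jobid, @server_name = N'(local)';
```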

I’ve found SQL Agent incredibly easy to work with, and quick for building jobs that run against my SQL Server. I don’t need to set up connections, as I might with the Windows scheduler.

If you haven’t experimented with SQL Agent’s capabilities, or you don’t use it extensively in production, you should.

T-SQL Tuesday #75–Power BI

This month’s host is Jorge Segarra, the @sqlchicken, who works for Microsoft. A busy schedule caused a slight delay, so we’re posting on the third Tuesday of this month, but that’s OK. This is a great topic for T-SQL Tuesday.

Power BI Data

While Power BI is a great visualization tool, you can’t do anything without data. That means you need to find data, which is both easy and hard: easy if you’re working within your own organization on a specific project, slightly more complex if you want to look at data out in the world.

I saw this technique in a talk last year and was amazed. It’s the type of thing I’ve written code for before, and that code was cumbersome and problematic. I would have thought SSIS would have made things this simple years ago.

I love sports, and wanted to play with some sports statistics a while back. Finding good data is tough, at least in a format like CSV that you can easily import. However, Power BI makes this easy. Start up the desktop, and the first thing it offers is to get data.

Right away Power BI wants to get data. Click on this and the Get Data dialog opens, with lots of choices.

However, if you pick “Other”, you’ll see one more source that I love: Web.

Click this. You get asked for a URL. Any URL.

I happen to have one handy. After the win for Denver in Super Bowl 50, I thought I’d look back at Mr. Manning’s career.

I take that URL and drop it in the dialog.

Once I click OK, this will analyze the URL for tables of data. In this case, I get quite a few.

Now, I can click each one to preview its data and find the table I want.

I now click “Edit” at the bottom to clean my data. I could just load it, but there are a few issues.

I see all the data in the designer, and I have lots of options for working with this data.

First, since I’m going to do a comparison, let me rename the table.

Next, I see the applied steps listed below the table name. I’ll add more steps, though I’ll do it through the designer GUI. First, let me remove the last row, which is a career summary.

In this case, I’m only removing one row.

Now I want to remove a couple of columns. I don’t care about a few of the data items, so I’ll take them out. I can right-click a column or choose “Remove Columns” in the ribbon. Either way, I get rid of QBR and Team.

Now I’ve got a nice year-by-year summary of Peyton Manning’s career. When I close and apply the query, my data is loaded into a dataset for use by my dashboard. I can then repeat this, and I’ll have two sets of data.

And here’s my Power BI dashboard. It’s not terribly useful, or interactive, but it’s got data from the web that I didn’t have to copy or move.


T-SQL Tuesday #74–The Changes

It’s T-SQL Tuesday time, the monthly blog party on the second Tuesday of the month. This is the first T-SQL Tuesday of 2016, and all you have to do to participate is write a blog post. It’s supposed to publish today, but write one anytime you can, and look for the invitation next month.

I’ve got a list of topics here, but watch the #tsql2sday tag on Twitter.

This month’s topic comes from @SQLSoldier, Robert Davis. The topic is Be the Change, and it’s a good one.

Quick Changes

I’m going to write about SQLServerCentral here. Years ago we were updating our email system to send a high volume of email. At the time we’d considered purchasing software from others, but found the cost significant at our volumes (5-6 figures a year). Instead, we needed to handle emails stored in our SQL Server database in two ways:

  • Thousands of bulk emails sent overnight, as quickly as possible
  • Quick, high priority emails sent in response to actions

These two conflicting requirements meant that a simple queue of emails to send wasn’t easy to design around. We also needed to deal with scaling, so we wanted multiple separate machines that could help spread the load. We were building a small .NET process that would run every minute and send a series of emails.

Our design process led us to build priority levels into our table. We only needed two at the time, but we allowed for more with a smallint. Our bulk emails were inserted with a priority of 2, and the registration emails, forum notes, etc., were stored with priority 1.

Once we had a separation of emails, we needed a way to determine what had already been sent. To do this, we used a NULL sent date. This allowed each process to determine when new information had been inserted into the table and needed to be processed.

This worked well for a single machine. The process would:

  • query for xx priority 1 emails
  • send priority 1 emails
  • update sent priority 1 emails with the sent date/time.
  • query for yy priority 2 emails
  • send priority 2 emails
  • update priority 2 emails with sent date/time.

The updates actually occurred for each email sent, so we could easily track the time and order of sends for troubleshooting purposes. We would query a few hundred emails each minute, let’s say 500, knowing that was the rate at which we could send emails. We wanted all priority 1 emails to go, so our value for yy would be 500 – xx.

As we worked to scale things out, we also needed to track which items were queried by which client. Our solution was to add a machine name to the data, blank when emails were inserted, but updated by a client with its own name as it queried rows. Since we were looking to determine which emails to send, we’d update xx rows with the name of a client process and then query back those rows. The query used the NULL sent date together with the client name to get the correct rows.
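A sketch of that claim-then-query step in T-SQL. The table and column names are my invention, not the actual SQLServerCentral schema; the idea is that each client stamps a batch of unsent rows with its own name and then works only its own rows.

```sql
-- Hypothetical queue table:
-- EmailQueue (EmailID, Priority, SentDate, ClientName, Recipient, Body, ...)
DECLARE @client SYSNAME = HOST_NAME() ,  -- this sender's machine name
        @batch  INT = 500;

-- Claim up to @batch unsent priority 1 emails for this client
UPDATE TOP ( @batch ) dbo.EmailQueue
SET    ClientName = @client
WHERE  SentDate IS NULL
  AND  ClientName = ''
  AND  Priority = 1;

-- Query back the rows this client just claimed
SELECT EmailID , Priority , Recipient , Body
FROM   dbo.EmailQueue
WHERE  SentDate IS NULL
  AND  ClientName = @client;

-- As each email goes out, stamp it so the time/order of sends is traceable
UPDATE dbo.EmailQueue
SET    SentDate = GETDATE()
WHERE  EmailID = 42;  -- id of the email just sent (placeholder)
```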

Using a combination of the date sent, the client name, and the priority, we could easily manage detecting and working with changes to this table and build a high volume queue that worked extremely well on SQL Server 2000, and all versions since.

Poor Data Modeling – T-SQL Tuesday #72

This month Mickey Stuewe hosts the T-SQL Tuesday, and she has a great topic. Data modeling is something few people seem to do, especially developers, and it often goes wrong. That’s the topic, and I have a short story.

T-SQL Tuesday is a monthly blog party, on the second Tuesday of the month. The rules are on Mickey’s blog, and you can read them there, but essentially you write a post every month.

Or later.

The Aging Model

I once worked at a company that shall remain nameless. We had a system, built before my time, which had a hierarchical set of entities. I can’t disclose the exact industry, but imagine we had a list of items like this:

  • Manufacturers of products
  • Distributors of products
  • Warehouses holding products
  • Vendors selling products

In this structure, we have a strict linkage where each item below the next is contained in the item above it. In other words, a manufacturer works with specific distributors and only those distributors. Distributors don’t work with other manufacturers.

I know this sounds contrived for a supply chain, and it is. However, it wasn’t contrived for the industry in which I worked. Imagine we’re 100 years back, when power was more concentrated in supply chains.

This resulted in tables like this:

CREATE TABLE Manufacturers
(
    manufacturerid INT IDENTITY(1, 1)
        CONSTRAINT manufacturer_PK PRIMARY KEY ( manufacturerid ) ,
    manufacturername VARCHAR(200) ,
    primarycontactid INT
    -- ...
);
GO
CREATE TABLE Distributors
(
    distributorid INT IDENTITY(1, 1)
        CONSTRAINT distributor_PK PRIMARY KEY ( distributorid ) ,
    manufacturerid INT
        CONSTRAINT Distributor_Manufacturer_FK FOREIGN KEY
        REFERENCES dbo.Manufacturers ( manufacturerid ) ,
    distributorname VARCHAR(200) ,
    PrimarySalesPersonid INT
    -- ...
);
GO
CREATE TABLE Warehouses
(
    warehouseid INT IDENTITY(1, 1)
        CONSTRAINT warehouse_PK PRIMARY KEY ( warehouseid ) ,
    distributorid INT
        CONSTRAINT Warehouse_Distributor_FK FOREIGN KEY
        REFERENCES dbo.Distributors ( distributorid ) ,
    warehousename VARCHAR(200) ,
    regionid INT
    -- ...
);
GO


Each of these links to the item above it. This means I might have a Manufacturers table like this:

manufacturerid  manufacturername
--------------  ----------------
1               Acme
2               Big Product Corp

With distributors linked to manufacturers as shown:

distributorid  manufacturerid  distributorname
-------------  --------------  ------------------------
1              1               Denver City
2              1               Denver South
3              1               Aurora
4              1               Kansas City Distributors
5              2               New Mexico Wholesale

Each distributor, in turn, has warehouses, whose data would be:

warehouseid  distributorid  warehousename
-----------  -------------  ----------------
1            1              Denver Central
2            4              Kansas City East
3            5              Santa Fe
4            2              Billings Depot

If I wanted a list of the warehouses that held a manufacturer’s products, I’d have to join through the Distributors table.

SELECT  m.manufacturerid ,
        ...
FROM    dbo.Manufacturers AS m
        INNER JOIN dbo.Distributors AS d
            ON d.manufacturerid = m.manufacturerid
        INNER JOIN dbo.Warehouses AS w
            ON w.distributorid = d.distributorid
...

Not a big deal, but we had five levels of this hierarchy appear over time, which meant queries needing a higher level had to cross all the tables in between to join data. These also weren’t narrow tables; each entity carried a decent amount of metadata. Indexing helped, but we needed better rollups of data, and performance suffered as the amount of legacy data grew.

What’s more, over time, we learned that business changes. A warehouse might start to work with multiple distributors, and our model couldn’t cope.

Eventually we embarked upon a project that set up link tables between entities, so that we had just IDs in a table showing linkages where appropriate. This was ongoing when I left, but it was a major disruption and problem for the company.

I’ve tried to avoid embedding hard relationships based on business contracts in my models. Certainly some foreign keys make sense, like order details linked to a specific order, but many other relationships aren’t as tightly bound, and it’s possible they will change. It doesn’t cost me much in modeling to treat suspect relationships as many-to-many, and if application development handles this from the start, it’s an easy architecture to incorporate.
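A minimal sketch of what treating that relationship as many-to-many looks like, using the entities from the earlier example. Foreign keys back to the entity tables could be added; the link table itself is all the model commits to.

```sql
-- A link table avoids hard-coding the hierarchy: either side can map
-- to many of the other, and each pairing appears exactly once.
CREATE TABLE DistributorWarehouses
(
    distributorid INT NOT NULL ,
    warehouseid   INT NOT NULL ,
    CONSTRAINT DistributorWarehouses_PK
        PRIMARY KEY ( distributorid , warehouseid )
);
```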

Data modeling is always a struggle: we often work with incomplete information, and an organization will morph over time. It is, however, worth taking some time to think about the ways things might change in the future, and to allow for some flexibility in your model where possible.

T-SQL Tuesday #70 – The Enterprise

It’s hard to believe this is the 70th edition of T-SQL Tuesday. I haven’t missed many, and I’ve enjoyed them all. I hope more of you are participating in this monthly blog party started by Adam Machanic (b / t). Whether you write today, or at some time in the future, T-SQL Tuesday topics are good ways to showcase your knowledge.

This month’s invitation comes from Jen McCown, of Midnight DBA. Jen asks us to write about managing an Enterprise of SQL Servers. That’s a good topic, since many of us struggle to manage multiple instances. Whether you have 5 or 500, there are some good ideas that you might implement in any environment, and I’m looking forward to reading what people have to say.

Decoupled Consistency

A long time ago I was thrust into the role of managing hundreds of SQL Server instances at a large company. I’d managed dozens of machines before, but this was a whole new level of scale. What’s more, the majority of machines were set up completely independently of each other, with no concerns other than a mandate to try and keep versions close to each other. That was a challenge in and of itself.

The only common tool in use was Patrol, which was mainly used to monitor performance counters. We didn’t have the SQL Server-specific modules, so we could really only get general performance counters, and even those were difficult to access when thousands of hosts were being monitored.

I brought an idea with me from a previous job, where it had served me well with a handful of disparate instances: set up each instance consistently, covering installation, configuration, and monitoring, but decouple each instance from the others. Our goal was for each machine to be capable of operating independently of all the others.

We had found that central servers go down, that we had no good way of tracking status from machines when that happened, and, most importantly, that there are always exceptions.

With this in mind, we:

  • built a procedure to install SQL Server, including a number of post-installation scripts that standardized settings. This allowed our server build people to easily handle SQL Server installation as part of their job. These days I’d use something like FineBuild to make this easy.
  • set up a DBA database on each instance that monitored jobs and other important status for the instance. If the instance was up and SQL Agent running, we’d know the status of that instance.
  • performed basic monitoring of key performance counters for a quick trend of the last week’s performance, similar to what SQL Monitor shows. Getting a quick snapshot locally was faster than accessing central monitoring, especially in a crisis.
  • assembled a report for each instance every day, calling out exceptions at the top and leaving expected data below. We needed documentation for our ISO certification, and our auditors loved this.
  • used a central server to assemble the reports from all instances and compile them for the DBAs to review. All exceptions were at the top, and we used a left join to compare the list of instances with current reports. If any were missing, we bubbled that to the top as an exception.
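That left-join check might look something like this. The table names are hypothetical, but the pattern is the useful part: any instance with no report today bubbles up as an exception.

```sql
-- Instances: master list of managed servers (hypothetical schema)
-- DailyReports: one row per instance per day, loaded from each server
SELECT i.InstanceName
FROM   dbo.Instances AS i
       LEFT JOIN dbo.DailyReports AS r
           ON  r.InstanceName = i.InstanceName
           AND r.ReportDate = CAST(GETDATE() AS DATE)
WHERE  r.InstanceName IS NULL;   -- missing report = exception
```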

These were all good, simple ideas that allowed a team of three DBAs to manage 400-500 instances of SQL Server. We were flexible enough to handle the needs of mission-critical finance instances as well as frequently changing development machines. Our data was simple and always available to give to clients when they had questions.

Most importantly, we built a system that let us deal with exceptions rather than review the mundane. Our time was precious, and that’s the case in any enterprise situation. You need to focus on the 10-20% of systems that really need regular attention while ensuring the other 80-90% are still being maintained.

Catch up with T-SQL Tuesday

T-SQL Tuesday is a monthly blog party where we get an invitation the first week of the month to write on a specific topic for the second Tuesday of that month. Adam Machanic came up with the idea and he chooses the hosts.

If you’re interested in hosting, contact him.

While there’s a sense of community and participation here, and certainly the chance to showcase some of your knowledge for others, it’s also got another benefit.

It’s the chance for you to learn something.

Not by reading others’ posts, but by writing your own. This is your opportunity to bolster your own skills and teach yourself something new.

Don’t Worry About Timing

I know many of you are worried about the pressure of producing something good. I know many of you will find that the first weekend of the month is really busy and you can’t write.

That’s fine.

I’ve got a list of all previous invitations and topics. Go back and pick one of them and write your own post today. Start working on it, teach yourself something, and put some thoughts down. Research what others have done by looking through the roundups. Get a friend to review the work and see if it’s readable and makes sense.

Do that ten times. It might take you ten months, but I’m sure you can write something down about SQL Server once a month.

When you get to ten, start a blog and publish them. Use the T-SQL Tuesday tag, but I’d also encourage you to use the #SQLNewBlogger tag as well. Start showing your boss that you’re improving your skills. Be ready to impress your potential next boss with a series of posts on SQL topics.

Your career will be better off, and the world will start to see better software being written.

Start improving your skills this weekend, or start documenting the learning you already do. Either way, start building a better brand for your career.

T-SQL Tuesday #69–Encryption

This is a good T-SQL Tuesday topic for me. This month Ken Wilson asks everyone to write on encryption, a topic I’ve presented on quite a few times.

You can participate, too. Take a few hours, learn something, and tell us what you learned. Let everyone know how you view this topic and grow your blog with a little new knowledge.

T-SQL Tuesday is a great chance to force you to write a post on a specific topic. Your post needs to publish on the second Tuesday of the month, Aug 11 this month, to be included in the roundup from the host. However feel free to write about this topic anytime, and even include the T-SQL Tuesday title.

CASTing Results

A short post this month, as I’m a bit buried in a few things, but this is one of those encryption notes that I didn’t see well documented when I started working with the subject. I’m hoping I can save you a few minutes of frustration.

If you encrypt your data, it will be stored as a binary type. This is because encrypted data is supposed to be random, and not easily decrypted.

Let’s imagine I have some simple setup like the code below. I’ll create a key, open it, and use it to encrypt some data that I’ll insert into a table.
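The key setup itself isn’t shown below, so here is one hedged way to create and open a symmetric key named MyKey; the password, certificate name, and subject are placeholders.

```sql
-- Database master key, certificate, and symmetric key (names/passwords are placeholders)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssword!';
CREATE CERTIFICATE MyKeyCert WITH SUBJECT = 'Key protection cert';
CREATE SYMMETRIC KEY MyKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE MyKeyCert;

-- The key must be open before ENCRYPTBYKEY/DECRYPTBYKEY will work
OPEN SYMMETRIC KEY MyKey DECRYPTION BY CERTIFICATE MyKeyCert;

-- ... encrypt/decrypt work happens here ...

-- And close it when done
CLOSE SYMMETRIC KEY MyKey;
```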

CREATE TABLE MyEncryptionTest
(
    intsource INT ,
    charsource VARCHAR(50) ,
    intencrypt VARBINARY(MAX) ,
    charencrypt VARBINARY(MAX)
);

INSERT dbo.MyEncryptionTest
        ( intsource ,
          charsource ,
          intencrypt ,
          charencrypt )
VALUES  ( 7 ,
          'Spike' ,
          ENCRYPTBYKEY(KEY_GUID('MyKey'), N'7') ,  -- the int is encrypted as a string
          ENCRYPTBYKEY(KEY_GUID('MyKey'), 'Spike') );

SELECT TOP 20
        *
FROM    dbo.MyEncryptionTest;

The result is that I get back binary data.

Now, the decryption routine in T-SQL doesn’t need to specify the key. That means instead of a *, I can use the DECRYPTBYKEY function and pass in each encrypted column.

SELECT  intdecrypt = DECRYPTBYKEY(intencrypt) ,
        chardecrypt = DECRYPTBYKEY(charencrypt) ,
        intencrypt
FROM    dbo.MyEncryptionTest;

This gives me more binary data, not my original values. Not quite what I want. What if I cast the result back to an integer? After all, DECRYPTBYKEY returns a varbinary, so a conversion is needed somewhere.

SELECT  intdecrypt = CAST(DECRYPTBYKEY(intencrypt) AS INT) ,
        chardecrypt = DECRYPTBYKEY(charencrypt) ,
        intencrypt
FROM    dbo.MyEncryptionTest;

I see a number, but not the 7 I stored.

Again, not what I wanted. However, since I know something about conversions, I realize the output is close to what I want. What I need is an intermediate CAST before the final one: since the value was encrypted as a string, I’ll cast the decrypted result to NVARCHAR first, then to INT.

SELECT  intdecrypt = CAST(CAST(DECRYPTBYKEY(intencrypt) AS NVARCHAR(30)) AS INT) ,
        chardecrypt = DECRYPTBYKEY(charencrypt) ,
        intencrypt
FROM    dbo.MyEncryptionTest;

Now the integer comes back correctly.

If I do the same for the character column:

SELECT  intdecrypt = CAST(CAST(DECRYPTBYKEY(intencrypt) AS NVARCHAR(30)) AS INT) ,
        chardecrypt = CAST(DECRYPTBYKEY(charencrypt) AS VARCHAR(50)) ,
        intencrypt
FROM    dbo.MyEncryptionTest;

I’ll get the correct results.

Note that if I take the character column and cast it to NVARCHAR instead, I’ll get something different, since it was encrypted as a VARCHAR. Try it and see.

And don’t forget to close your key.


No Defaults – T-SQL Tuesday #68


It’s the second Tuesday of the month, and time for T-SQL Tuesday. This month’s invitation is from Andy Yun, where he asks you to Just Say No to Defaults.

This is the monthly blog party started by Adam Machanic, and it’s the chance for you to write on a particular topic every month. The hosts rotate, so you have to watch the #tsql2sday hashtag for the topic. Posts for the month need to go live during the day according to UTC time.

I also keep a list of topics on the blog here, and you should feel free to write about any past topics and post something on your blog. It’s great practice, and a good way to get started as a #SQLNewBlogger.

Default Changes

There are all sorts of defaults in SQL Server. The setup program presents you with a number of choices, but in most cases a default exists for each setting because SQL Server needs some value.

I used to have a setup script that I would run for every new install. In a few jobs, back when we used to have physical servers, a hardware person or sysadmin would install Windows and SQL Server onto a new computer and then send me the name for customization. My script, which was really a series of SQLCMD calls in a batch file that in turn called various other scripts, was designed to add an administrative database, create some jobs to track the system, setup backups, and more.

The process really did part of what Policy-Based Management can do, but was simpler and tailored to ensure that all of our SQL Servers worked in a similar manner. We could override settings, but this quick script gave us a starting point that all DBAs understood. We even ran it on development instances we didn’t manage, since it let us troubleshoot other issues, only took a few minutes, and removed some management headaches from the developers’ minds.

However, there is one thing I’ve almost always changed on my instances. I try to do it during setup, but at times I need to do it later. That setting is the default location for files, because I usually want data files, log files, and backup files separated from each other.

Even if I don’t have different drives, setting up separate locations now means I can easily move the files later: one change to these defaults, and I know new files will be kept separate.

I’m not running a new install this week, but I’ll show you how to change it on an instance that’s installed. First, right click the instance in Object Explorer and click Properties.

Next, go to the Database Settings section.

At the bottom here you see locations for data, log, and backup. In this case, on my laptop, I only have two drives, so I can’t achieve great separation.

However, in any production system, I’d have the data and logs separated onto different physical drives, or at least different LUNs. Backups might go with logs, but ideally they’d be separated to yet another location.
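These same defaults can also be changed without the GUI. They are instance registry values, so one way, with example paths and to be used with care, is xp_instance_regwrite:

```sql
-- Default data, log, and backup locations (paths here are examples)
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultData', REG_SZ, N'D:\SQLData';
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultLog', REG_SZ, N'E:\SQLLogs';
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory', REG_SZ, N'F:\SQLBackups';
```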

T-SQL Tuesday #67 – Extended Events for DBCC

It’s the second Tuesday of the month and time for another T-SQL Tuesday. This time it’s #67, from Jes Borland, on Extended Events. You can read the invite, and whether you can participate today or not, write your blog post and leave a comment for Jes on her blog.

If you want to see all the topics from the past, I have them on a post here.

If you want to host, contact the founder of T-SQL Tuesday, Adam Machanic. He chooses the hosts and is always looking for people that want to participate. You just need a blog and a good idea.

Who Ran DBCC?

I ran across a question recently from someone who said DBCC messages had appeared in the error log, but no one had run DBCC. That’s not possible: DBCC doesn’t run without someone, whether a human or a service, executing the DBCC command.

In the past I might have recommended a trace to track this, but now I’d choose an Extended Events (XE) session. I’ll show how to set up a simple session, though be aware I’m not an XE guru. I’m sure there are better ways to structure this, but it worked well for me.

You can start quickly building a specific XE session by right clicking the Sessions folder under Extended Events in SSMS under Management. I tend to pick the wizard for getting started, though be sure to save the script later.

From here we get the opening screen for XE, which I won’t show. The next screen asks for a name for the session. The name doesn’t matter, but once you start using XE you’ll accumulate quite a few sessions, so it’s handy to pick something simple and easy to understand later.

I can choose to start the session here when I’m done, but don’t worry if you’re not sure. You’ll get a chance again later.

The next step is to decide whether you use a template or build a session from scratch. I’m not sure if any of these templates will work well as I haven’t dug in. In my case, I decided to just choose a generic session.

Now I need to pick the events. There are a lot, but I went down to get the sql_statement_completed as my event. This will let me see the text of a call, which is what I need.

A couple of notes here. First, you might want to enlarge this screen so you can read what’s in the lower windows. There’s a lot of data here, and the default layout is cramped.

Second, don’t forget to add the event with the arrow button. Until you have an event (or several) in the right-hand window, you can’t click "Next".

The next step is to pick the data fields you need. Again, there are a lot, and I limited this to what I needed. I grabbed the client host and app, as well as the database. Then I grabbed server_principal_name and sql_text as well. This should let me determine who’s running the command.

I’ve got lots of data here, but I only care about the execution of dbcc checkdb. This means I need to filter things, which is my choice with the next screen.


Here I need to click in the top line to get a place to add a filter. Then I select sql_text from the drop-down, choose "like" for the operator, and enter "dbcc checkdb" for the value. This should grab events whose statement includes that DBCC call.

I have to choose where to store this data. Since this is mostly a check for me, I can just use the ring buffer and keep data in memory. I lowered this to 10 events, but you can certainly pick what works for you. If you care about auditing, drop this in a file. For a small session like this, there’s not much data that’s going to be captured.
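The wizard lets you save the session as a script at the end; a hand-written equivalent might look like this. The session name is my choice, the options match the selections above, and the filter uses the textual comparator that SSMS generates for "like":

```sql
CREATE EVENT SESSION [dbcc checkdb] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION ( sqlserver.client_app_name ,
             sqlserver.client_hostname ,
             sqlserver.database_name ,
             sqlserver.server_principal_name ,
             sqlserver.sql_text )
    WHERE ( [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],
                                                    N'%dbcc checkdb%') )
)
ADD TARGET package0.ring_buffer ( SET max_events_limit = 10 );
GO

-- Start the session
ALTER EVENT SESSION [dbcc checkdb] ON SERVER STATE = START;
```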

I get a summary screen, which lets me review settings. I haven’t shown it here, but once I pass that, I get the "Success" screen. This is where I can start the session, and actually watch live data if I want.

For this, I’ll watch the live data.

I decided to test this first, to see if the session captured anything else. I ran a few queries, and the live data screen stayed blank. Then I ran checkdb, saw the results appear, and ran it a few more times.

The session showed that I had run checkdb against the EncryptionPrimer database.

This lets me do some quick auditing of what’s happening with my DBCCs. A smart DBA might even use this to ensure that DBCC is being run against some databases, perhaps on a spare machine.

T-SQL Tuesday #66 – Monitoring SQL Server

This is a T-SQL Tuesday that not only makes me smile, but makes my boss happy. In fact, I’m sure quite a few people at Redgate Software, my employer, are thrilled with this topic, since we have a product that helps you here: SQL Monitor.

T-SQL Tuesday #66 has the theme of monitoring, hosted by Catherine Wilhelmsen. Anything having to do with monitoring is fair game. Custom scripts, home grown applications, even third party software.

If you’re a #SQLNewBlogger, this is a great chance to keep going with a new post. If you can’t write now, come back and publish something later. If you post something on May 12, 2015, you’ll get included in Catherine’s roundup. However if you can’t make it, you’re still a T-SQL Tuesday writer.

Either way, follow #tsql2sday and learn a few things.

Self Monitoring

My post is going to be more of a philosophical one, with a little code.

Most of the time I’ve worked with SQL Server, I’d had some responsibility for a production instance. Even when I was writing code and building software, I’d still find myself as the backup, if not the primary, accidental DBA. I learned early on that I needed to have some way to track what was happening on an instance.

One of the ways I did this was by ensuring each instance could monitor itself. I’d found far too many times that having a process connect remotely to the instance and gather data was a way to lose data. Too often something got missed; the exact time a process needed to connect was the time things didn’t work.

As a result, I decided to keep some monitoring on the local instance. Even though I usually had a central server roll up information, if that instance dropped off the network (or my monitoring server died), I’d still know what was going on. This was especially important when I worked in the financial industry or was bound by ISO 9000 standards.

Note: These days I don’t bother to manage monitoring software of my own on an instance. I use SQL Monitor because I work for Red Gate, but no matter who my employer might be, I’d buy software because it’s worth the cost. I don’t have the time to worry about maintaining my own software for most monitoring tasks.


I had a setup routine that I used to use on every instance. It did a few things, the first of which was create a DBAAdmin database. This was the place I could put anything I needed to monitor my instance. I tried to keep all the code the same between instances, and tried to write idempotent code so that re-running the install wouldn’t cause issues.

Once I had the database, I’d add tables and procedures for various purposes. For example, one of the things that needed to be tracked each day was the backups for the databases. So I created a table called DBBackups.

CREATE TABLE DBBackups
(
    BackupID INT IDENTITY(1,1) ,
    Backupdate DATETIME ,
    DatabaseName VARCHAR(2000) ,
    BackupType CHAR(1) ,
    Filename VARCHAR(2000) ,
    sizeGB NUMERIC(15,2)
);

This table had a corresponding procedure that was designed to scan the filesystem and report back on the latest backups written to the file system. The details of each file are recorded here.

Why? If there’s a problem with monitoring, the local machine still has the information about backups. What’s more, I can usually get more details here than I’m capturing in a monitoring system that’s looking at the last backup date or just getting full backups. If msdb history is wiped, I have a second copy here.

However, I also need to remove data over time, so this solution usually has three parts:

  • Table for data storage
  • Procedure(s) for gathering data
  • Procedure and job to remove data older than xx days
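The cleanup piece from the list above is simple. Here is a sketch with the retention as a parameter; the procedure name is hypothetical, and only DBBackups comes from the table shown earlier.

```sql
CREATE PROCEDURE dbo.PurgeDBBackupHistory
    @RetainDays INT = 30   -- the "xx days" retention, whatever fits
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove rows older than the retention window
    DELETE dbo.DBBackups
    WHERE  Backupdate < DATEADD(DAY, -@RetainDays, GETDATE());
END;
```

A SQL Agent job would then call this once a day.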

That’s simple, but it’s a decent amount of work, so I only use it for certain areas. I used to really care about backups, but SQL Monitor captures that for me now. However, I might care about sp_configure settings. While SQL Monitor can alert me when there are changes, how do I know what changed? I’ve used a process like this to keep the last week’s worth of sp_configure information, captured every day, to let me look back.

As I noted above, I wouldn’t rebuild a monitoring solution for overall checking of a database, but I might capture specific information using this process, with the table and procedures tailored to the specific information I care about.