Viewing Extended Properties for Information

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I’ve been working a little with extended properties, adding and updating them for various objects. However, in addition to altering properties, viewing the properties on an object is often necessary. This post looks at a few different ways to get the properties.

The easiest way to see extended properties is to look at the properties of an object in the SSMS Object Explorer. For example, I can right-click a table in Object Explorer.


Once I click Properties, I get a dialog with a list of pages on the left. The bottom one is Extended Properties, with a simple add/edit/delete grid. Here I can see the properties I’ve added.

(screenshot: Table Properties dialog for SalesHeader_Staging)

However, this is cumbersome for me. I’d much rather find a way to query the information, which is what I’d need to do from an application of some sort. I’d think sp_help would work, but it doesn’t. If I run it, I get the following result sets:

  • a header with owner, type, and creation date
  • a column list with metadata
  • identity property information
  • RowGuid column information
  • filegroup storage location
  • messages with index, constraint, FK, and schemabinding relations

Not very helpful in this case.

I do know that extended property information is in sys.extended_properties. I can query this view, which gives me some information, but I need to join it with sys.objects to get easy-to-understand results.


This works, and it is one of the ways I query properties in various tSQLt tests.
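A minimal sketch of such a query. The WHERE clause and aliases are my own choices, not the only way to write it:

```sql
-- Join the catalog view to sys.objects for readable names.
-- major_id maps to object_id for object-level properties; minor_id = 0
-- means the property sits on the object itself, not on a column.
SELECT  o.name      AS object_name ,
        ep.name     AS property_name ,
        ep.value    AS property_value
FROM    sys.extended_properties AS ep
        INNER JOIN sys.objects AS o
            ON ep.major_id = o.object_id
WHERE   ep.class = 1;   -- class 1 = object or column properties
```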

There is one other way I’ve seen to query extended properties. When perusing the BOL page for sp_updateextendedproperty, I found sys.fn_listextendedproperty. This is a system table-valued function you can use to query for property values. Since it’s a TVF, I need to use it in the FROM clause of a query.


There are lots of parameters in this function. However you can guess what they are after working with the other extended property procedures. In fact, the first time I started this post, I was disconnected and had to experiment with the function, adding parameters until it ran without an error.

The first parameter is the name of the property. This can be NULL, in which case you’ll get all the properties that exist.


The rest of the parameters correspond to the level 0, 1, and 2 types and names that you use to filter the results. This is a good technique to use with this function, and I’ll be using it more in the future.
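A sketch of a couple of calls, using the PKException property and SalesHeader_Staging table that appear elsewhere in these posts (the names are illustrative):

```sql
-- Pass NULL for the property name to list every property on the table.
SELECT  objtype , objname , name , value
FROM    sys.fn_listextendedproperty(NULL, 'schema', 'dbo',
                                    'table', 'SalesHeader_Staging', NULL, NULL);

-- Or filter down to a single named property.
SELECT  name , value
FROM    sys.fn_listextendedproperty('PKException', 'schema', 'dbo',
                                    'table', 'SalesHeader_Staging', NULL, NULL);
```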


This post followed on from the previous ones. In this case, I started disconnected, using the knowledge I had to write the basics with SSMS and the system view. That took about 20 minutes to document, and then I spent 5 minutes experimenting with the function, whose name I had in an open browser tab. Once I worked through that, I spent another 5 minutes writing.

Thirty minutes to a post. You can do this.


A few items from BOL:

sp_help –

sys.extended_properties –

sys.fn_listextendedproperty –

Updating Extended Properties

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I wrote recently about adding extended properties. Updating them is very similar. There’s an analogous procedure called sp_updateextendedproperty that changes the value of properties.

The arguments are, again, unintuitive, but the more I work with extended properties, the more comfortable I become. In this case, I have the same name and value, and then the level 0, 1, and 2 items, each with both a type and a name.

I highly suggest, however, that you name your parameters in your calls so programmers running across the T-SQL aren’t depending on position to understand each parameter.

If I look at the table from the previous post, I can update the value of my property with this code:

EXEC sys.sp_updateextendedproperty
  @name = 'PKException'
, @value = 0
, @level0type = 'schema'
, @level0name = 'dbo'
, @level1type = 'table'
, @level1name = 'SalesHeader_Staging' -- sysname

However, the property needs to exist. If I call this procedure with the name of a property that doesn’t exist, I get an error.


This means you need to be sure the property exists before you update it. Good code would handle that error, or check for existence first.
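One way to guard the call, sketched with sys.fn_listextendedproperty; the property and table names are the illustrative ones used in this post:

```sql
-- Update the property if it exists; otherwise add it.
IF EXISTS ( SELECT 1
            FROM   sys.fn_listextendedproperty('PKException', 'schema', 'dbo',
                                               'table', 'SalesHeader_Staging',
                                               NULL, NULL) )
    EXEC sys.sp_updateextendedproperty
        @name = 'PKException' , @value = 0 ,
        @level0type = 'schema' , @level0name = 'dbo' ,
        @level1type = 'table' ,  @level1name = 'SalesHeader_Staging';
ELSE
    EXEC sys.sp_addextendedproperty
        @name = 'PKException' , @value = 0 ,
        @level0type = 'schema' , @level0name = 'dbo' ,
        @level1type = 'table' ,  @level1name = 'SalesHeader_Staging';
```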


After writing the previous post, this one took only about ten minutes to do the typing. I’d been working with extended properties, so I had the code and just needed to take the screenshot.


A few items from BOL

sp_updateextendedproperty –

Adding Extended Properties

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

One of the things I needed to do recently was add some extended properties to objects. I got the idea of using them from John McClusky at SQL Bits. He had a great presentation on tSQLt that’s worth watching.

In any case, I wanted to add, and update, extended properties.  I had used SSMS to do this, but it’s cumbersome. I decided to experiment and see how the T-SQL code works. My browsing of Books Online showed me there are a few procedures used, one each for adding, updating, and deleting properties. I decided to start with sp_addextendedproperty.

This procedure takes some interesting, rather unintuitive arguments. Name and value are easy to understand. These are the name of the property and its assigned value. One thing to note is that value is a sql_variant, which should work fine for most situations, though CASTing may be required.

However, the next arguments are level 0, 1, and 2, each with a type and a name. Those didn’t make much sense at first. In fact, as I wrote a few scripts, I had to keep looking up the meanings. Essentially there are three classifications of objects: the outer containers, the objects, and the dependent objects. I’ll explain them below.

The level0 type is essentially the class of object: an assembly, a contract, a schema, etc. For my purposes this has always been a schema, but you could certainly add properties to the other classes if you needed to.

The level1 type is the object type we usually work with: table, view, function, or procedure. For me this has pretty much been table, view, or procedure, but function is certainly something I’d use as well.

The level2 type is the dependent object: a trigger, column, parameter, or constraint. These I haven’t really used, but I certainly think adding properties for indexes, triggers, etc. is valuable.

Adding a property is easy. For example, one of the items I add is a PK exception for heap tables. To do that (for the SalesHeader_Staging table), I’d run this.

EXEC sys.sp_addextendedproperty 
  @name = 'PKException',
  @value = 1, -- sql_variant
  @level0type = 'schema', -- varchar(128)
  @level0name = 'dbo', -- sysname
  @level1type = 'table', -- varchar(128)
  @level1name = 'SalesHeader_Staging' -- sysname

I can see this easily in SSMS.

(screenshot: Table Properties dialog for SalesHeader_Staging)

Properties are great ways to add additional information to an object in SQL Server, though I certainly wish they were more visible in objects.


I knew there was a procedure to do this, and a quick search on extended properties got me to the BOL reference. I was experimenting with adding the properties while working on this, and I had to research the meanings of the parameters a bit, so this took about 20 minutes to get ready for publication.


A few items from BOL

sp_addextendedproperty –

Using Automated Tests to Raise Code Quality


Agile development practices can speed the development of applications and increase the rate at which you can deploy features for your customers. But unless you include a high level of test coverage in your code, these practices can also increase the number of bugs that are released. Databases can be particularly challenging for developers. This session examines database refactorings that introduce bugs, which are detected by automated tests. This approach allows developers to rapidly fix their code before a customer is affected.

Level: 200


These are the demos shown in this talk.

  • Adding test data inline
  • Adding test data in a procedure
  • Adding test data from a separate set of tables
  • Exclusions to SQL Cop or other tests with extended properties
  • Using FakeFunction
  • Using SpyProcedure
  • The boundary issues with multiple requirements for a function
  • 0-1-Some testing
  • Catching dependencies


Here are the downloads for the talk.

What’s a Code Smell?

We all have a variety of code patterns and practices that we follow. Most of them were probably picked up along the path of our career. A suggestion from a colleague. A piece of sample code that solved a problem. A performance tuning trick that stopped our phone from ringing. These methods of learning are the way that most of us actually grow our skills over time.

However, just because we learned something, or because a technique solved a problem, doesn’t mean it was a good piece of code. In fact, code we think works well often isn’t structured in the most efficient way. Many developers have learned this over the years as they read about new techniques that are more efficient, elegant, or just simpler.

Kent Beck and Massimo Arnoldi coined the term code smell years ago as a way of noting the development patterns and practices that lead to poorly written or difficult-to-maintain code. There have been other attempts to document practices which are not recommended, though the success is probably limited, as many developers continue to build on poorly written code rather than refactoring and cleaning their codebase over time.

Simple Talk and Phil Factor published a SQL Code Smells ebook a while back, trying to document the signs of poorly written T-SQL. The book is good, with guidance about particular patterns that can cause you problems over time. The items aren’t meant to be rules, but rather guidelines you adhere to unless you have a good, specific reason you can justify to others.

I ran into a code smell recently where a developer noted that their application depended on a specific database name in order to work. That’s not in the ebook, but it’s easily one I’d avoid. My connection should determine the database, not the application itself. I know there may be exceptions here, but in general, application code shouldn’t depend on a particular name.

I’d urge you to pick up the ebook (it’s free) and keep it handy. See if any of the items listed are habits you might have picked up over time and not realized that they are, in fact, poor practices. I would also recommend you peruse Aaron Bertrand’s Bad Habits to Kick series, as a way of improving your own code.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 2.9MB) podcast or subscribe to the feed at iTunes and LibSyn.


Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I ran across a question on Facebook, of all places, the other day. Someone had asked a friend how to return a value from a procedure and assign it to a variable. My friend answered, but in the discussion, I noticed the poster was looking to return @@IDENTITY to the calling procedure as the value of the row that was just inserted.

Don’t do that. At least not without understanding the potential issues.

It’s been years since I’ve seen @@IDENTITY in use, and for a number of years before that, this was an easy “weed out” question in interviews.

The documentation for @@IDENTITY notes that SCOPE_IDENTITY() and @@IDENTITY both return the last identity value generated in the current session, but @@IDENTITY is not limited to the current scope. This means that when your insert fires a trigger (or other code) that performs its own insert into a table with an identity, you receive the identity value from that statement instead of yours. Depending on how you use the value, that may or may not be an issue.

How does this work? Let’s create a simple table with an identity. I also create a logging table and a trigger that will add a message to my logging table when I add a row to the first table.

CREATE TABLE dbo.newtable
  ( id INT IDENTITY(1, 1)
  , mychar VARCHAR(20)
  );
CREATE TABLE dbo.logger
  ( logid INT IDENTITY(56, 1)
  , logdate DATETIME
  , msg VARCHAR(2000)
  );
GO
CREATE TRIGGER newtable_logger ON dbo.newtable FOR INSERT
AS
  INSERT INTO logger VALUES (GETDATE(), 'New value inserted into newtable.');

If I run this, what do I expect to be returned?

INSERT INTO dbo.newtable
        ( mychar )
VALUES  ( 'First row' );  -- mychar - varchar(20)

SELECT @@IDENTITY;


However, I get this: a 56 in my result set for @@IDENTITY.



The reason is that the last identity value generated was 56, from the logging table. The order of operations is:

  • insert value into newtable
  • @@IDENTITY set to 1
  • trigger fires
  • insert into logger
  • @@IDENTITY set to 56

That’s often not what we want when capturing an identity value. What’s worse, this behavior can exist, but not manifest itself until someone changes a trigger later.

If I change this to SCOPE_IDENTITY(), I get a different result.


This is because SCOPE_IDENTITY() takes the scope into account and isn’t reset by the trigger code.
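The contrast can be seen side by side, continuing with the tables created above:

```sql
INSERT INTO dbo.newtable
        ( mychar )
VALUES  ( 'Second row' );

SELECT  SCOPE_IDENTITY() AS scope_id ,  -- identity from my insert into newtable
        @@IDENTITY       AS global_id;  -- identity from the trigger's insert into logger
```

Here scope_id reflects the row added to newtable, while global_id reflects the trigger’s insert into the logging table.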


This took some time to write. Mostly because I had to setup the demo, test things, and then get the explanation straight in my head. It took me 15-20 minutes, including lookup time in BOL, but if you are new to writing, this might take a bit longer. You’d also want someone to review your explanation since this can be tricky to explain.



Remember the N

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I saw a post recently from someone that was having trouble with Chinese characters being inserted into a SQL Server table. I’m not sure if they were using an application or just SSMS, but they inserted this:

insert into mytable select '<IDC>亮块(DS3或DS4)-清理/维修显像滚筒</IDC>'

And they got this in the table:


That’s a problem that I can see, even if I can’t read Chinese.

Someone else posted a note that when you insert, you need to let SQL Server know your string is Unicode. That means prefixing the string literal with an N.

I looked in Books Online, and found this note under the nvarchar section:

“Prefix Unicode character string constants with the letter N. Without the N prefix, the string is converted to the default code page of the database. This default code page may not recognize certain characters.”

I think this means the N prefix isn’t always strictly required; it depends on the default code page of your database. Without it, though, the string literal is converted to that code page, which may not be able to represent all the characters, and that’s usually not what we want.
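A minimal sketch of the difference, assuming a table with an nvarchar column (the table name is illustrative):

```sql
CREATE TABLE dbo.mytable ( val NVARCHAR(100) );

-- Without the N prefix the literal passes through the database's default
-- code page first, so characters it can't represent become question marks.
INSERT INTO dbo.mytable SELECT '亮块(DS3或DS4)';

-- With the N prefix the literal is Unicode from the start and survives intact.
INSERT INTO dbo.mytable SELECT N'亮块(DS3或DS4)';

SELECT val FROM dbo.mytable;
```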


I ran across the post and spent 5 minutes researching things and looking in BOL. This really took about 5 more minutes to write.


NVarchar –

Creating a SQL Database – New Portal

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I was looking to do some testing recently, but needed a v12 SQL Azure database. The test system that I have in Azure is v11, and I needed an upgrade. I clicked the upgrade button, and it was really slow to change from v11 to v12.  It’s easier now because I think all new databases you create are v12.

I was impatient, and ended up creating a new one. I decided to do a quick walkthrough of the process.

Getting Started

I connected to Azure and went into the new (Preview) Portal. I couldn’t find anything in the old portal, so keep that in mind. Since “old” and “new” are very relative terms in Azure, and I hate them, I’ve included lots of screenshots. Here’s the portal I used:


My first step was to go to the “Create” item and select Data + Storage. As you can see, there are all sorts of options. For this post, let’s create a new database.


Selecting “New SQL Database” gives me this set of options. The name is the big item here.


Obviously I wasn’t very creative with the name:


Next I decided to make this one blank. I wanted to do some testing, so I selected that. It’s nice to be able to create a database from a backup, especially if you are recovering from a “Whoops” mistake.


With that selected, I had this:


Now on to the pricing tier. You can go crazy here.


I went down to a Basic level, since that’s what I really need. A basic, small database for testing. It’s $5/month, but I have credits from my MSDN subscription, so to date, I haven’t paid for any of my testing on the low tier systems.


At each stage, I can see what I’ve selected. This is a good way to get started, but if you need to do this at any scale, or repeatably, learn to do it through PoSh.


My next step is to decide where this database lives. I need a server, which is strange to me. I want a database as a platform, but I think this is legacy for connection purposes, so I’ll make a new one.


I decided to make up some generic, boring name. I wasn’t feeling very creative today. I entered a password, but like with any password, I generated this in PasswordSafe first, then entered it here.

<securityrant> Please use a password manager, use strong passwords, and use separate ones for services. Using the same one for all your Azure stuff and other accounts is a bad idea. </securityrant>


Once I complete this, it’s listed under my SQL Database as the location for my data.


Change your collation if you like, but since I’m a unilingual American, this is what I choose.


I also need a resource group. I don’t have any, so I’ll need a new one.


Again, not creative. If you are doing this for something more than fooling around, spend 2 minutes thinking of some way you’ll classify stuff and use a name that makes sense.


I think we’re ready to go. Click “Create”.


Once it’s done, you’ll get a notification in the main portal. I do like that the notifications come up at the bottom, but they’re also annoying when they stack up. I’ll sometimes close them because I’m doing something else and not pay attention. However, that’s a “me” problem.

I allowed this to create a pin for me, and I can easily see my database here. If you have lots of stuff, you won’t want pins for most of them.


If I select the database, I can see details.


I can select “Settings” at the top and get more options. One of which is what I need. Note that the version is v12.


The Properties item gives me the ability to change a few things.


The Firewall allows me to limit access to specific hosts.


Auditing has the ability to let you audit events by success or failure.


If I want to audit things, I need a place where I can store the audit data. The cloud charges by everything, so be aware.


The item I really care about is the “Latest SQL database update” item. When I pick that I can see the items that will be enabled.


If I pick Users and tags, I can specify accounts and labels, but that’s not really important for a test system. Really it’s me using it, so I’ll leave those alone.

It’s really easy to set up, and worth practicing if you want some cloud experience. You can connect from SSMS and work with the system, which is really SQL Server, but if you have other stuff in the cloud, or an app, you can connect and see what is possible.

Be aware, however, that you get charged for this stuff. Use your MSDN subscription and keep an eye on usage. It’s pretty cheap, but no one wants an unexpected bill. Be especially careful about VMs. Those can really rack up charges.




This one took a while. After I completed the process, I immediately deleted the database and started to take screenshots as I went through the steps. The steps don’t take too long, but refreshes on the Portal are sometimes slow.

This actually sat in my drafts folder as a single sentence (creating an Azure db) and a bunch of pictures. It took me about 20 minutes to load the pictures into Live Writer and then work my way through them, adding notes. I also created a new database as I was writing this to walk through the same steps again (and then deleted it).


None I used. I’ve done this before and didn’t bother to look around. The Portal does a good job of leading you along.

The Tally Table

I saw someone post a question recently about solving a T-SQL problem. One of the answers used a tally table, which the original poster didn’t understand. A few follow-up links pointed them in the right direction, but it got me wondering.

How many of you know what a tally table is and how to create one? I bet a few of you don’t, which might mean you’ve never had the need. Or it might be a hole in your skill set and you didn’t realize that a tally table can be very useful in solving a number of problems. Anything from generating dates to splitting strings. There are plenty more ways to use one, and feel free to mention more in the comments that others might not have tried.
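For anyone in that position, here’s one common way to build a tally table on the fly with cross joins and ROW_NUMBER. This is a sketch; there are many variations, including persisting the numbers in a real table:

```sql
-- Generate the numbers 1..1000 without a loop.
WITH e1 AS ( SELECT n = 1
             FROM ( VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1) ) AS t(n) ) ,
     e2 AS ( SELECT a.n FROM e1 AS a CROSS JOIN e1 AS b ) ,   -- 100 rows
     e4 AS ( SELECT a.n FROM e2 AS a CROSS JOIN e2 AS b )     -- 10,000 rows
SELECT TOP (1000)
       n = ROW_NUMBER() OVER ( ORDER BY (SELECT NULL) )
FROM   e4;
```

Join that to a start date with DATEADD and you have a date generator; join it against a string’s character positions and you have the core of a string splitter.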

I’m not sure the tally table is a core T-SQL skill, but I think it’s an important one to learn as you grow. After you’ve mastered the basics (insert/update/delete, aggregates, outer joins), adding an understanding of window functions, the APPLY operator, and CTEs will also help you solve the problems you run into with more efficient T-SQL.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

T-SQL Tuesday #69–Encryption

This is a good T-SQL Tuesday topic for me. This month Ken Wilson asks everyone to write on encryption, which is a topic I’ve presented on quite a few times.

You can participate, too. Take a few hours, learn something, and tell us what you learned. Let everyone know how you view this topic and grow your blog with a little new knowledge.

T-SQL Tuesday is a great chance to force you to write a post on a specific topic. Your post needs to publish on the second Tuesday of the month, Aug 11 this month, to be included in the roundup from the host. However feel free to write about this topic anytime, and even include the T-SQL Tuesday title.

CASTing Results

A short post this month, as I’m a bit buried in a few things, but this is one of those encryption notes that I didn’t see well documented when I started working with the subject, and I’m hoping I can save you a few minutes of frustration.

If you encrypt your data, it will be stored as a binary type. This is because encrypted data is supposed to look random, not like something easily decoded.

Let’s imagine I have some simple setup like the code below. I’ll create a key, open it, and use it to encrypt some data that I’ll insert into a table.
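The key setup isn’t shown below, but it might look something like this sketch. The key name MyKey matches the code that follows; the password is a placeholder you’d replace:

```sql
-- Create and open a symmetric key protected by a password.
CREATE SYMMETRIC KEY MyKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

OPEN SYMMETRIC KEY MyKey
    DECRYPTION BY PASSWORD = '<StrongPasswordHere>';
```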

CREATE TABLE MyEncryptionTest( intsource INT, charsource VARCHAR(50), intencrypt VARBINARY(max), charencrypt VARBINARY(max));

INSERT dbo.MyEncryptionTest
        ( intsource ,
          charsource ,
          intencrypt ,
          charencrypt
        )
VALUES  ( 7,
          'Spike' ,
          ENCRYPTBYKEY(KEY_GUID('MyKey'), N'7') ,
          ENCRYPTBYKEY(KEY_GUID('MyKey'), 'Spike')
        );

SELECT TOP 20 *
FROM dbo.MyEncryptionTest;

The results of this are that I get binary data:


Now, the decryption routine for T-SQL doesn’t need to specify the key; as long as the key is open, DECRYPTBYKEY identifies it from the ciphertext. That means instead of a *, I can use the DECRYPTBYKEY function and pass in the column.

SELECT  intdecrypt = DECRYPTBYKEY(intencrypt) ,
        chardecrypt = DECRYPTBYKEY(charencrypt) ,
        intencrypt ,
        charencrypt
FROM    dbo.MyEncryptionTest;

This gives me this:


Not quite what I want; the function returns a varbinary. What if I cast this back to an integer? After all, an integer is what went in.

SELECT  intdecrypt = CAST(DECRYPTBYKEY(intencrypt) AS INT) ,
        chardecrypt = DECRYPTBYKEY(charencrypt) ,
        intencrypt ,
        charencrypt
FROM    dbo.MyEncryptionTest;

I see:


Again, not what I wanted. However, since I know something about conversions, I realize the output is close to what I want. What I need to do is perform an intermediate CAST before the final one. Here I’ll cast the decrypted result to NVARCHAR first, then to INT.

SELECT  intdecrypt = CAST(CAST(DECRYPTBYKEY(intencrypt) AS NVARCHAR(30)) AS INT) ,
        chardecrypt = DECRYPTBYKEY(charencrypt) ,
        intencrypt ,
        charencrypt
FROM    dbo.MyEncryptionTest;

Now I see:


If I do the same for the character column:

SELECT  intdecrypt = CAST(CAST(DECRYPTBYKEY(intencrypt) AS NVARCHAR(30)) AS INT) ,
        chardecrypt = CAST(DECRYPTBYKEY(charencrypt) AS VARCHAR(50)) ,
        intencrypt ,
        charencrypt
FROM    dbo.MyEncryptionTest;

I’ll get the correct results.


Note that if I take the character column and cast to nvarchar, I’ll get something different. Try it and see.

And don’t forget to close your key.
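That close is a one-liner, matching the illustrative key name used above:

```sql
CLOSE SYMMETRIC KEY MyKey;
```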