
Awareness

How often have you been caught off guard by some issue with one of your databases? I would guess most people have received a phone call, email, or trouble ticket that alerts them to the fact that something has gone wrong. A few of you might have even had an irate customer or manager wander over to your desk and loudly inform you of an issue. Those are not great times, and most of us would prefer to find out about problems before others bring them to our attention.

This past week was the 66th T-SQL Tuesday blog party. If you haven’t heard of it, I would encourage you to look each month for the announcement on Twitter and participate. It’s a great chance to share some knowledge, and showcase your own experiences or viewpoint on a particular topic. I’d actually encourage all of you to go back and write posts about the past topics at some point. It’s a great way to challenge yourself to document your own experiences in a concrete way.

The theme this month was monitoring (thanks to Catherine Wilhelmsen b | t), and monitoring is exactly how you build awareness in your environment. It ensures that you, as the DBA, learn about issues, or potential issues, first. Monitoring doesn’t solve your problems; that’s up to you. But it can help you detect issues early and perhaps mitigate them before users notice.

The blog entries this month cover all sorts of things you might think about watching for: the 825 errors in your log, metadata about your backups, your jobs, and more. There’s a variety of posts covering home-grown systems, as well as some noting that third-party products work well. You have a huge range of choices, from building your own tools and processes to purchasing something like SQL Monitor from Red Gate. Whatever your budget, resources, time, and desire, I’d highly recommend that you ensure you are monitoring your systems in a way that lets you know how things are working.

What to do when they’re working poorly is up to you. Hopefully the information we publish at SQLServerCentral, and the links we send out every week from Database Weekly, can help you learn to be better at your job.

Quick Installs with Chocolatey–Screen Capture

I have been looking for a new screen capture tool. I used to use Snagit, but it felt too heavyweight and buggy for me. I just need to snag part of a screen for my job and blog. I used Quick Screen Capture, but in Win 8 it didn’t want to accept my license key. I’d tried a few more, but none really worked well.

I mentioned this to Andy Warren on the phone the other day and he told me to try Green something. I decided to do that today, and couldn’t remember the name, so I surfed over to chocolatey.org and searched for screen captures. Sure enough, Greenshot popped up first.

I quickly opened an administrative command prompt, typed “choco install greenshot”, and went back to my work.

Chocolatey is an amazing package manager and it started the install for me in the background.

[Screenshot: Chocolatey installing Greenshot at the command prompt]

A few minutes later I noticed the browser pop open the Greenshot website as the final part of the install. The program was set up with its install defaults and running on my system.

[Screenshot: the command prompt after the install completed]

Chocolatey is amazing, and I urge you to look at it for installing software on your system, especially if you’re a developer or consultant who regularly moves to new hardware or systems.

If you’re a software developer, get your own software packaged and listed there.

Getting Database Properties – DatabasePropertyEx()

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I had someone ask me a question about security recently, and while working through the answer, I ran across something I didn’t know well: databasepropertyex(). Here’s a few notes.

Each database in SQL Server has all sorts of options and settings. Most people (including me) get in the habit of checking the Database Properties dialog in SSMS for this information. First we right-click the database (after starting SSMS if it isn’t running).

[Screenshot: right-clicking the database in Object Explorer]

Then you get this dialog, with lots of stuff.

[Screenshot: the Database Properties dialog]

And more on the other tabs.

[Screenshot: more settings on the other tabs of the dialog]

This is time consuming and error prone. As I get older, I find it harder to decide which selection is set for which option; I find my finger tracing across the screen. It’s entirely possible I’d make a mistake when glancing at this dialog to check the ArithAbort setting.

Use T-SQL

Scripting and querying is usually better. Not the sp_configure scripting where you get a whole list of options, but looking for a particular item. That’s where DatabasePropertyEx() comes in. This lets you query for a database property.

The problem comes in when you run it. If you run the function, you get little information.

[Screenshot: running DATABASEPROPERTYEX() and getting little information back]

Certainly you can go look at BOL for more detail, but that’s annoying. If I run sp_configure, I get data back; here, nothing. Even when I do what I think would be helpful and pass a NULL parameter, I don’t get a list of properties. SQL Prompt alerted me to the fact that the first parameter is the database and the second is the property, but passing NULL doesn’t work.

[Screenshot: passing NULL for the property returns nothing useful]

Fortunately, I have SQL Prompt, so I get this when I put in quotes for the second parameter.

[Screenshot: SQL Prompt suggesting property names for the second parameter]

As you can see, the property names don’t map exactly to the options in the dialog, though you can figure out what they mean when you see them. They tend to follow the conventions most application programmers use (IsAutoClose).

That’s fine; it just means you need to keep a reference handy for the properties you can query. I wish Microsoft would return a list of all the properties when you pass a NULL parameter, or at least a link to BOL.
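For reference, here are a few hedged example calls. The database name is just a placeholder; the property names are ones documented for DATABASEPROPERTYEX().

-- Check a few settings for one database (the database name is a placeholder)
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoClose')              AS IsAutoClose,
       DATABASEPROPERTYEX('YourDatabase', 'IsArithmeticAbortEnabled') AS IsArithAbortEnabled,
       DATABASEPROPERTYEX('YourDatabase', 'Recovery')                 AS RecoveryModel,
       DATABASEPROPERTYEX('YourDatabase', 'Collation')                AS DatabaseCollation,
       DATABASEPROPERTYEX('YourDatabase', 'Status')                   AS DatabaseStatus;

-- An unknown database or property name, or NULL, simply returns NULL rather than an error
SELECT DATABASEPROPERTYEX('YourDatabase', NULL);

That NULL-for-anything-invalid behavior is why the function feels so quiet when you experiment with it.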

Writing

This one took a bit longer to write. Once I realized I didn’t know databasepropertyex() very well, I had to read about it (5 minutes) and experiment a bit. I took some screen shots, which is always cumbersome. As I wrote this, I had to change the wording and ordering a few times to try and convey a simple message. I was originally going to look at more details, but decided to keep this simple and talk about just querying properties.

This was about 30-40 minutes for me.

You can do this. Join the #SQLNewBlogger group and start documenting your career. You can see all my posts that fall into this area by looking through the SQLNewBlogger tag here.


Documenting with Tests

Documenting code isn’t a task that many people enjoy. Those of you who have been through a computer science curriculum probably added lots of verbose, and obvious, comments to your code. It’s unlikely that you’ve found much value in those sorts of comments in a paying job.

There are all sorts of ideas on how to document your software: writing good comments, and even refactoring code to remove unnecessary comments. I’m sure that if we put five developers in a room, we’d end up with eight different ways to comment code being debated, with no agreement on how to proceed.

I ran across an interesting approach from Ed Elliot on documenting your code with unit tests. I hadn’t thought about the tests providing some documentation, but it’s an interesting idea. I’d have to work with the concept a bit, but I’m skeptical I’d get enough information from unit tests to ensure I understood what a stored procedure was doing, especially if I had 10 tests for a long procedure.
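As a rough, hypothetical sketch of the idea using the tSQLt framework (the Orders table, the GetOpenOrders procedure, and the test names are all invented for illustration), a well-named test can state a requirement more directly than a comment:

-- Create a test class to group related tests (tSQLt framework assumed)
EXEC tSQLt.NewTestClass 'OrderTests';
GO
-- The test name itself documents the rule the procedure must follow
CREATE PROCEDURE OrderTests.[test GetOpenOrders excludes cancelled orders]
AS
BEGIN
    -- Isolate dbo.Orders so the test documents only this one rule
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (OrderID, Status)
    VALUES (1, 'Open'), (2, 'Cancelled');

    -- dbo.GetOpenOrders is the hypothetical procedure under test
    CREATE TABLE #Actual (OrderID INT);
    INSERT INTO #Actual (OrderID)
    EXEC dbo.GetOpenOrders;

    CREATE TABLE #Expected (OrderID INT);
    INSERT INTO #Expected (OrderID) VALUES (1);

    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;

Running EXEC tSQLt.Run 'OrderTests' would then give you a list of requirement-like test names, which is where the documentation value would come from.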

I do think unit tests are important, and perhaps in conjunction with some type of code header that gives the requirements the procedure fulfills, I’ll get enough information to understand the code.

I don’t have a universal solution that I think will work in most situations, but I do think that having a tool like IntelliSense or SQL Prompt helps you self-document code with expressive names for columns, variables, and aliases. If nothing else, those descriptive names can clue the next programmer in to what is happening better than single-letter names would.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Database Lifecycle Workshops

Trying to implement version control, Continuous Integration (CI), and automated releases of code is hard. The concepts are simple, but connecting all of the various processes, tools, and links can be difficult. I’ve seen too many organizations struggle to get started, often because they see the entire process as just too complex and overwhelming.

It’s not.

Whether you’re trying to release application code or database code, you need to treat them in a similar manner. While there are lots of books, tools, blogs, and companies that can help you work with application code, the database has been more challenging.

At Redgate, we’ve put together a few workshops to help break down the stages of engineering your database delivery and get you moving toward a smoother process for making database changes. These workshops are coming to locations around the US and UK in the next few months. Check out our schedule, which has dates in:

  • Bellevue, WA (May 15)
  • London, UK (May 20)
  • Philadelphia (June 4)
  • Belfast, Northern Ireland (June 26)
  • London (July 8)
  • San Diego, CA (July 16)
  • Manchester, UK (July 24)
  • Baton Rouge, LA (July 31)

There are more dates, and we’re certainly open to scheduling more locations and even private classes if you’d like. Contact us if you have a request.

Most of these classes are delivered by experienced Redgate partners, often consultancies that have been working with clients for years to build smoother application delivery processes and are now moving into the database space. The classes use our tools, and Grant, I, or one of our other Redgate experts will be on hand to help you learn the skills you need to start implementing DLM concepts. We can even recommend consultants to work with you on a two-day DLM Health Check or on longer-term projects that help you build better software. Contact DLMPartners@red-gate.com for more information.

We find that many of our customers are struggling with different parts of building a smooth software pipeline. They want to get better, and they find the Redgate tools help, but they need a little push or confidence to implement an engineered process.

Our three workshops are designed to teach you about different parts of a DLM environment. They map to the Database Lifecycle, helping you move to a different stage and a better engineered process at each step. The three workshops are:

  • Database Source Control – getting your database code into a Version Control System and dealing with the issues of branching, merging, team coordination and more.
  • Database Continuous Integration – learning to automate the building and testing of your code each time a change is checked into a VCS. This lets you find issues quickly.
  • Automated Database Deployment – gaining the skills to automatically generate upgrade scripts and ensure they are tested and deployed smoothly, without any of the release day stress many of you experience.

These classes won’t solve your DLM challenges in a day, but they’ll give you the skills and confidence to move forward. Ask your boss to send you and get a bit of training to help you start building better software.

Come join us and learn to ship often and ship safe.

T-SQL Tuesday #66 – Monitoring SQL Server

This is a T-SQL Tuesday that not only makes me smile, but makes my boss happy. In fact, I’m sure that quite a few people at Redgate Software, my employer, are thrilled with this topic, since we have a product that helps you here: SQL Monitor.

T-SQL Tuesday #66 has the theme of monitoring, hosted by Catherine Wilhelmsen. Anything having to do with monitoring is fair game: custom scripts, home-grown applications, even third-party software.

If you’re a #SQLNewBlogger, this is a great chance to keep going with a new post. If you can’t write now, come back and publish something later. If you post something on May 12, 2015, you’ll be included in Catherine’s roundup; even if you can’t make it, you’re still a T-SQL Tuesday writer.

Either way, follow #tsql2sday and learn a few things.

Self Monitoring

My post is going to be more of a philosophical one, with a little code.

For most of the time I’ve worked with SQL Server, I’ve had some responsibility for a production instance. Even when I was writing code and building software, I’d still find myself as the backup, if not the primary, accidental DBA. I learned early on that I needed some way to track what was happening on an instance.

One of the ways I did this was by ensuring each instance could monitor itself. I’d found far too many times that having a process connect remotely to the instance to gather data was a way to lose data. Too often something would get missed, and the moment the remote process couldn’t connect was exactly when things went wrong.

As a result, I decided to keep some monitoring on the local instance. Even though I usually had a central server roll up information, if that instance dropped off the network (or my monitoring server died), I’d still know what was going on. This was especially important when I worked in the financial industry or was bound by ISO 9000 standards.

Note: These days I don’t bother to manage monitoring software of my own on an instance. I use SQL Monitor because I work for Red Gate, but no matter who my employer might be, I’d buy software because it’s worth the cost. I don’t have the time to worry about maintaining my own software for most monitoring tasks.

DBAAdmin

I had a setup routine that I used on every instance. It did a few things, the first of which was to create a DBAAdmin database. This was the place I could put anything I needed to monitor my instance. I tried to keep all the code the same between instances, and I tried to write idempotent code so that re-running the install wouldn’t cause issues.
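As a minimal sketch of that idempotent install step (the DBAAdmin name comes from the post; the rest is illustrative):

-- Re-runnable: only create the admin database if it doesn't already exist
IF DB_ID('DBAAdmin') IS NULL
BEGIN
    CREATE DATABASE DBAAdmin;
END;
GO
USE DBAAdmin;
GO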

Once I had the database, I’d add tables and procedures for various purposes. For example, one of the things that needed to be tracked each day was the backups for the databases. So I created a table called DBBackups.

CREATE TABLE DBBackups
(
    BackupID     INT IDENTITY(1,1),
    Backupdate   DATETIME,
    DatabaseName VARCHAR(2000),
    BackupType   CHAR(1),
    Filename     VARCHAR(2000),
    sizeGB       NUMERIC(15,2)
);

This table had a corresponding procedure that was designed to scan the filesystem and report back on the latest backups written to the file system. The details of each file are recorded here.

Why? If there’s a problem with monitoring, the local machine still has the information about backups. What’s more, I can usually get more details here than I’m capturing in a monitoring system that’s looking at the last backup date or just getting full backups. If msdb history is wiped, I have a second copy here.
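The original procedure read the backup files from the file system; as a simplified, hedged sketch, the same table could instead be loaded from msdb’s backup history (GatherDBBackups is an invented name; backupset and backupmediafamily are standard msdb tables):

-- Sketch: copy the last day's backup history from msdb into the local DBBackups table
CREATE PROCEDURE dbo.GatherDBBackups
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.DBBackups (Backupdate, DatabaseName, BackupType, Filename, sizeGB)
    SELECT bs.backup_finish_date,
           bs.database_name,
           bs.type,                    -- D = full, I = differential, L = log
           bmf.physical_device_name,
           CAST(bs.backup_size / 1073741824.0 AS NUMERIC(15, 2))
    FROM msdb.dbo.backupset AS bs
        JOIN msdb.dbo.backupmediafamily AS bmf
            ON bmf.media_set_id = bs.media_set_id
    WHERE bs.backup_finish_date >= DATEADD(DAY, -1, GETDATE());
END;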

However I also need to remove data over time, so this solution usually has three parts.

  • Table for data storage
  • Procedure(s) for gathering data
  • Procedure and job to remove data older than xx days (a sketch of this cleanup step follows below)
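As a minimal, hedged sketch of that cleanup piece (the procedure name and the 90-day default are placeholders; an Agent job would simply call this nightly):

-- Placeholder retention: remove captured rows older than the given number of days
CREATE PROCEDURE dbo.PurgeDBBackups
    @DaysToKeep INT = 90
AS
BEGIN
    SET NOCOUNT ON;

    DELETE FROM dbo.DBBackups
    WHERE Backupdate < DATEADD(DAY, -@DaysToKeep, GETDATE());
END;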

That’s simple, but it’s a decent amount of work, so I only use this approach for certain areas. I used to really care about backups, but SQL Monitor captures that for me now. I might, however, care about sp_configure settings. SQL Monitor can alert me if they change, but how do I know what changed? I’ve used a process like this to keep the last week’s worth of sp_configure information, captured every day, so I can look back.
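A hedged sketch of that daily capture might look like the following (the ConfigHistory table and CaptureConfigSettings procedure are invented names; sys.configurations is the catalog view behind sp_configure):

-- Illustrative table holding a daily snapshot of instance configuration
CREATE TABLE dbo.ConfigHistory
(
    CaptureDate  DATETIME      NOT NULL DEFAULT GETDATE(),
    SettingName  NVARCHAR(70)  NOT NULL,
    ConfigValue  SQL_VARIANT,
    RunningValue SQL_VARIANT
);
GO
-- Run daily from an Agent job; keeps roughly a week of history
CREATE PROCEDURE dbo.CaptureConfigSettings
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.ConfigHistory (SettingName, ConfigValue, RunningValue)
    SELECT name, value, value_in_use
    FROM sys.configurations;

    DELETE FROM dbo.ConfigHistory
    WHERE CaptureDate < DATEADD(DAY, -7, GETDATE());
END;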

As I noted above, I wouldn’t rebuild a monitoring solution for overall checking of a database, but I might capture specific information using this process, with the table and procedures tailored to the specific information I care about.

Opening Up Data

Tim O’Reilly has been an advocate of open data access and standards for some time, especially from governments. He’s pushed for more interoperability and certainly more accessibility from all sorts of groups. He gave an interview earlier this year to LinuxVoice where he talked about a variety of things, but data was foremost on his mind.

There are some good thoughts in it, but I was pleased to see him looking for software to adapt how it works with data, rather than asking the data to match the application. An interesting thought he had was in the area of control systems: does every device or sensor need a separate application and way of interacting, or should we have some guiding design principles that let similar applications work in similar ways with different data? That almost sounds like good data modeling and normalization principles in action, backing a data-driven application.

I also liked his acknowledgment of the fact that so much of our data isn’t very portable. Between social networks and proprietary storage, it becomes hard to move data around. The pattern of downloading data, perhaps editing, perhaps not, and then uploading elsewhere works great with ETL tools, but it’s cumbersome for many users and applications to deal with. Building ways for us to interact with disparate data, allowing for queries to remote sources, sometimes transforming and copying data, all of this needs to be easier to implement and integrate inside software.

In some ways, I think the 3.0 model of our Internet interaction will take place around data. I think SSIS will continue to be one of the most valuable tools in SQL Server (along with lots of demand for work), but it still needs improvement and enhancement to catch up to other ETL tools. I really hope Microsoft believes this and continues to invest in the tool.

I also think that the data professionals who really stand out in the next decade will be those who learn to make good choices about when to use R, JSON, XML, Hadoop, or whatever non-RDBMS tool meets a need, and also when not to use those tools. The better data professionals will make good decisions about when to query data in place and when to move it to another system.

It’s an exciting time to work with data as the opportunities and rewards continue to expand and grow. I look forward to what the future will bring us.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Service Pack Fiasco

I remember getting a Service Pack years ago that caused a blue screen of death on Windows 2000 servers when it was installed. In fact, a quick search shows that quite a few operating systems (and SQL Server 2005) have had issues with SP2 over the years. Perhaps we shouldn’t be waiting for the first service pack to upgrade, but upgrading and then skipping SP2 for various products.

Microsoft has responded fairly well to problems with service packs by removing them quickly from their download sites and re-releasing them. This occurred recently for SQL Server 2014 SP1, where an issue was discovered with the SSIS Catalog. This is disconcerting to me, not because of the bugs, but because these mistakes and problems seem to lead Microsoft in the direction of abandoning Service Packs. The idea floated in the past, and still pushed by many people, is that the Cumulative Updates are good enough for patching SQL Server.

That seems crazy to me. If we have problems with a service pack, wouldn’t we still have problems with a CU? Or is Microsoft hoping that the smaller group of people impacted by CU issues makes for good beta testers and limits the impact of problems on the community? I’m assuming the testing is similar for all patches, if not the same. However, as we all know, testing can’t cover every possible scenario, and Microsoft acknowledges as much in its latest patch documentation.

I really prefer that a service pack be released every year for each product. It can contain all the CUs up to that point, with no additional patches, though hopefully a bit more testing. It should be a line in the sand that helps administrators manage their systems, keep up with patching, and reduce support requirements. I’d even suggest Microsoft require vendors that want to certify their products on SQL Server, and use any logo, to validate their software against an SP within four months of its release. In most cases there isn’t any real work for vendors, merely the effort to re-run all their tests.

We pay a large licensing cost for SQL Server, and the lifecycle support policy under which many of us purchased our software notes that we’ll get ten years of support and the term “service pack” is embedded throughout the SQL Server support pages. We should receive a service pack every year to provide continuing support for our platforms.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

A Week to SQL Saturday #393 – Redmond

Only my second SQL Saturday of the year (wow), but SQL Saturday #393 is next week in Redmond, WA. This is my first time at a SQL Saturday in Washington, though I’ve been to the area many times. I’m looking forward to the trip up with a couple days in the Seattle area and getting the chance to catch up with some friends.

I’ll be talking tSQLt and testing at the event, and the full schedule has a lot of great sessions to choose from. If you’re in the area on May 16, 2015, I’d urge you to register and come by for the day.

If you can’t make it, hopefully I’ll see you at another event soon.

How Virtualized?

I went to a talk recently where I saw this statistic: “50% of all workloads were virtualized in 2009. That number is 72% today.”

That’s a really big number, at least in my mind. That implies the vast majority of all servers, file, print, database, email, etc. are virtualized. Inside of companies that have their own data centers and machines, they must be heavily virtualized. I’m sure that all those instances in the “cloud” also count, but still, 72%? That’s big.

However I’m sure that’s skewed towards those machines that don’t require a lot of resources, like file and print servers, DNS hosts, etc. This week, I thought I’d see what the percentage is inside of your organization.

What percentage of your SQL Servers are virtualized?

Give us numbers of physical vs. virtual instances if you can. I’d combine all instances, from development to test to production, without worrying about size or workload. If you have a single guest on a host, using almost all of its resources, that’s still a virtual server.

My suspicion is that the percentage of SQL Servers is much lower than that of other workloads, but I’m curious. With the low overhead of modern hypervisors, and their free (or low) cost, it makes sense to virtualize servers, if for no other reason than to remove any weird hardware dependencies for DR purposes. I’m sure there are large workloads that require more resources than current hypervisors can expose, at least for some database instances, and those need to remain on physical machines. My guess, though, is that more often than not it’s human concerns, or a lack of confidence, that prevent virtualization.

Let us know this week how your organization is doing in the trend towards virtual servers.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.
