Off to SQL Nexus

This week begins my two-city, two-conference journey across the Atlantic. I arrive in Copenhagen today, after traveling overnight from Denver. I wrote this before leaving, knowing that I'll likely be a bit worn out as I make my way from Denver to Washington, D.C. to London to Copenhagen.

This is the most relaxing part of my trip, with a day to adjust in a new country before the SQL Nexus conference starts tomorrow. I’m looking forward to getting some coffee and exploring the city.

Tomorrow is a conference day, hanging out and learning a bit before I speak on Wednesday morning. I'll be talking about SQL Server 2016 Encryption, and I expect that if SQL Server 2016 hasn't RTM'd by this time, it will do so either Tuesday morning at SQL Nexus or Friday morning at SQL Bits. I could be wrong, but as I write this, those seem like the likely dates.

I'm looking forward to Nexus, which has lots of SQL 2016 content scheduled. I just hope all the sessions are in English.

Install Cumulative Updates

For years I haven't recommended installing Cumulative Updates (CUs) as they are released. The main reason is that Microsoft has had language associated with their CUs saying customers should not install a particular CU unless they are experiencing the specific issues it corrects. Each CU has (in the past) also noted that you should otherwise stick with the latest Service Pack. That disclaimer has been enough for me to be concerned about CUs in a general sense, despite the insistence that CUs were tested as thoroughly as Service Packs. My thought was that if the testing was the same, that disclaimer wouldn't exist.
Well, things have changed. The latest CU KB articles now include this language in the section where Microsoft recommends installing CUs as they are released:
  • SQL Server CUs are certified to the same levels as Service Packs, and should be installed at the same level of confidence.
  • Historical data shows that a significant number of support cases involve an issue that has already been addressed in a released CU.
  • CUs may contain added value over and above hotfixes. This includes supportability, manageability, and reliability updates.
That's good news, though as Kendra Little notes, you still need to test. Bugs will still exist in patches, and really in all software, so it's important that you test in your own environment. That means you need a test plan you can run easily, preferably an automated one. If nothing else, this is a good reason to use tSQLt and have tests written for your system. At least you can verify that important queries and systems are working. Kendra has a good list, so read her post.
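If you have nothing else in place, even a single tSQLt test around one of your critical queries gives you something repeatable to run before and after a patch. Here's a minimal sketch; the test class, table, and column names are just illustrative stand-ins for your own objects, and it assumes the tSQLt framework is installed:

-- Create a test class (schema) to hold patch smoke tests
EXEC tSQLt.NewTestClass 'PatchSmokeTests';
GO
-- A trivial test: a critical query should still return rows after patching
CREATE PROCEDURE PatchSmokeTests.[test active orders query still returns rows]
AS
BEGIN
    DECLARE @rows int;

    SELECT @rows = COUNT(*)
    FROM dbo.Orders                  -- stand-in for one of your important tables
    WHERE OrderStatus = 'Active';    -- stand-in predicate

    IF @rows = 0
        EXEC tSQLt.Fail 'Expected the active orders query to return rows';
END;
GO
-- Run the class before applying the CU, and again afterwards
EXEC tSQLt.Run 'PatchSmokeTests';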
While I think the quality of CUs is up and they are probably as safe as most of the Windows patches we get every month (which are often installed no matter our individual feelings), I'd still be wary. If you can't test, if you're busy, or if this is a month when you can't afford for things to fail, then don't install the CU. Installing without testing is like throwing your own developers' code into production unchecked. Make sure you know what is being changed, and look for obvious problems. No one will be too upset about an obscure issue, but if your scheduled jobs start failing, you'll dramatically reduce the confidence people have in you.
I am still wary of all patches. They're disruptions, both to your routine and potentially to availability as well. Make sure you test, but if you have the time, I'd say keeping up with patches is worth doing. Microsoft is constantly fixing issues, and you want to take advantage of their hard work, provided you can verify the patches don't degrade your system.
Steve Jones

Stress Testing

Many of the DBAs that manage production systems will at some point determine what level of hardware is needed to support a workload. Whether this is a physical server purchase or a cloud “rental”, someone has to decide what hardware is needed: how many cores, the amount of RAM, the number of disks (which hopefully correspond to some level of IOPS), and more. Even in the Azure SQL Database world, you must decide what database capacity you will pay for.

Since this is a big decision, and changes can be hard to make, many DBAs overbuy hardware. After all, no one wants to have a slow server. This is true for Azure as well, at least for many people I know. While changing from an S3 to a P2 is quick and easy in the Azure portal, it's not such an easy sell to management. If they've budgeted $150/month and you tell them you want to go to $900/month, the technical change is the easiest part of this.
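For reference, the technical side really is a one-liner. Something like this (a sketch of the Azure SQL Database ALTER DATABASE syntax; MyDatabase is a placeholder) is all it takes once the budget argument is won:

-- Scale an Azure SQL Database from Standard S3 up to Premium P2
-- (MyDatabase is a placeholder for your own database name)
ALTER DATABASE MyDatabase
MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');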

As a result, I'm surprised that we don't really have better ways to determine if hardware will support our workload. I see this question asked all the time, and although there are tools and techniques suggested, I've yet to see many people with a set, standard way of evaluating hardware against a particular workload.

On one hand, I think there should be better tools to do this, whether from Microsoft or someone else. I suspect that since this is such a rare activity, and businesses have been willing to overbuy hardware (or deal with substandard performance), there isn't any large impetus to solve this issue.

However, I wanted to ask whether any of you actually stress test hardware, either your current systems or new purchases. If you don't know what level your current hardware performs at, how do you compare that to new hardware?

Do you have a way to replay and measure a workload? Do you have the time to do so when new hardware arrives? Is there a documented method you use? Apart from discussing this today, I'd love to see some articles that detail exactly how you test hardware from a technical tool perspective, and then a follow-up that examines and evaluates the results.
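Even without a replay harness, you can at least baseline what your current hardware delivers before comparing it to anything new. Here is a rough sketch using the file-stats DMV; the counters are cumulative since the last restart, so capture the numbers at the start and end of a busy window and compare:

-- Rough I/O baseline: average read/write latency per database file
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.type_desc                                          AS file_type,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;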

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Am I a sysadmin? – #SQLNewBlogger


I was doing some security testing and wondered if I was a sysadmin. There are a few ways to check this, but I thought there should be a function to tell me.

There’s this code, of course:

SELECT
    ServerRole = rp.name,
    PrincipalName = sp.name
FROM sys.server_role_members rm
    INNER JOIN sys.server_principals rp
        ON rm.role_principal_id = rp.principal_id
    INNER JOIN sys.server_principals sp
        ON rm.member_principal_id = sp.principal_id
WHERE sp.name = SUSER_SNAME()
  AND rp.name = 'sysadmin';

That lets me know if my login is a sysadmin. However, there is a simpler option: the IS_SRVROLEMEMBER() function, which takes a server role name as a parameter. The code I'd use to check sysadmin membership is this:

SELECT IS_SRVROLEMEMBER('sysadmin');

If I run this, I get a 1 if I’m a member, or a 0 if I’m not.


Using this function in your code allows you to make decisions based on the role membership of the users involved, and perhaps alert them when certain rights are needed.
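For example, a guard clause at the top of an administrative script might look something like this (a sketch; the error message is just illustrative):

-- Bail out early if the caller isn't a sysadmin
-- (IS_SRVROLEMEMBER returns NULL for an invalid role name, so treat NULL as 0)
IF ISNULL(IS_SRVROLEMEMBER('sysadmin'), 0) = 0
BEGIN
    RAISERROR('This script requires membership in the sysadmin server role.', 16, 1);
    RETURN;
END;

PRINT 'Sysadmin membership confirmed - continuing.';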

SQLNewBlogger

This was a quick one, really about 10 minutes to organize and write. Most of the time was writing the code to join system tables. If you tackle this subject, talk about how you might use this, or where this type of check could come in handy in your code (maybe before taking some action).

Losing All Traces of Data

I was reading a thriller recently, in which a businessperson had their child threatened if they didn't get some data for the criminals. Once the person had retrieved the data, they were told to delete it from the system and from all backups. Of course, they could do this, all in a few paragraphs of a novel. I'm sure plenty of lay people read this passage and accepted it as a possibility.

I certainly understand how a user might be able to delete data from a system, especially in many third-party applications that are poorly written but have sold well. However, could someone actually delete all information from backups? I'd say that in most of the companies I've worked in, this wouldn't be possible. If the information was of any age, it would be stored in multiple locations on different media, some of which would be offline.

However, I haven't worked lately in the kind of enterprise that has moved to disk-based backups, with systems connected together and managing versions. I suspect that in some of these enterprises it is possible to actually remove all traces of data from the organization, which isn't something I'd ever want to be possible, if for no other reason than that this is an incredible attack vector for ransomware, a malicious virus, or some other destructive process (including rm -rf). There's also the issue of mistakes made by users; should they be able to remove all traces of data?

There may be valid reasons to remove all copies of data from an organization, especially when some time has passed. However, I think this should be a difficult process with hurdles to overcome, not the least of which is physical access, and it should require multiple people to approve the action. As we connect more and more systems, and rely on data being available, allowing anyone to permanently remove data without oversight will become a problem.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

Cloud First

SQL Server 2016 is the first “cloud first” release of SQL Server, as told to us by Remus Rusanu, a former member of the Microsoft SQL Server development team. The features and enhancements in SQL Server 2016 had been running in the Azure SQL Database cloud for some time before being packaged up and released as the on-premise SQL Server product that most of us expect to work with.

There are a number of items about this worth discussing, but one stands out to me. The idea of using large numbers of feature flags and deploying code to the “cloud” for use by customers and internal Microsoft people is interesting. On one hand, it's really just a faster way of having beta software examined by users other than developers, with no install/uninstall/upgrade for users. Speed is good, but being on the bleeding edge and having systems break isn't necessarily what any of us want. However, the use of these flags to turn features off quickly means that disruptions can be minimized for individual customers.

Some of the engineering process changes made to be cloud first were important in letting Microsoft maintain one main branch of code for SQL Server. Having separate branches for the cloud and on-premise versions had to be inefficient and problematic. However, that's not something we, as customers, care about. We just want the database platform to work, wherever we may run it.

I do think that having features come out in Azure first, whether in private testing, public preview, or general availability, is good. The people that can test these features give feedback quickly, and the rest of us aren't affected by the problem code. More importantly, the developers at Microsoft get the chance to learn more about how the features will be used and to tune them before a very wide set of on-premise customers gets the code. Personally, I was glad to see things like Row Level Security (RLS) appear in Azure first (Mar 2015) and come later to the on-premise version (SQL Server 2016).

I really expect that in the future almost every feature added to the SQL Server platform will run in some form in Azure before it's released for on-premise use. In fact, I'd be surprised if features are added to the on-premise product without 6-12 months of testing in the cloud. That's good, and I hope it leads to an increase in code quality for those of us that will run SQL Server on our own hardware.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

Specialist or Generalist

What would you rather be: an expert specialist in some narrow area of SQL Server, or a generalist who's competent in many areas? The same question could apply to any area of technology, like C# or Windows, or even any subject in which you work. What's your preference?

Most of us don't have to make a strict choice. Often, in pursuit of deep knowledge in one area, say indexing, the effort will lead to lots of good general knowledge in other parts of the platform. I think it's important that you gain both types of skill as you work through your career and grow your abilities. I find that there is always more to learn, especially in technology. Even in areas where I think I have deep knowledge, I continually add to, and refine, my skills.

Brent Ozar had an interesting post on this topic, looking at the maturity of a particular subject area. He noted that the more mature an area is, the more you might lean towards specialization. I think that's good advice for your career, though if you find a deep passion, don't be afraid to specialize in that area. While you might not get monetary compensation directly back for the effort put in, I think the reward of learning something you love and achieving expertise is extremely valuable and important to you as an individual.

As we move through our careers in computing, we find there are always new technologies, techniques, and architectures that we will need to understand. We need to grow our knowledge. In many cases we see older ideas re-packaged and updated for modern times, but that doesn't mean we can rely on prior knowledge alone. Instead, you will need to update your skills.

I also think that, more and more, we will be driven to give guidance to others as much as to use specific expertise to solve problems. With that in mind, whether you choose to be a specialist or a generalist, you'll need to continue learning both about implementing technology and about communicating your recommendations.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

Explicitly using tempdb with ##tables

I had someone ask me last night if this statement would create a permanent table in tempdb with a strange name or a global temp table:

CREATE TABLE tempdb..##mytable
( id int
);

My suspicion was that this would always be a temp table, but since I usually don't include tempdb, I decided to test things. I quickly opened up SSMS and tried it, and the statement ran without complaint.

That seems to confirm what I suspected: the ## prefix overrides everything and determines the table type. When I look in the list of tables, I see my table there as a temporary one.
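If you'd rather confirm this with a query than by browsing the tables list, global temp tables show up by name in tempdb's catalog views. A quick sketch, assuming the CREATE TABLE above has already been run:

-- Global temp tables are visible in tempdb's catalog under their ## name
SELECT name, type_desc, create_date
FROM tempdb.sys.tables
WHERE name = '##mytable';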


This is the behavior I’d expect, and it acts the same in SQL 2012, 2014, and 2016 (RC2).

I don't think there's an advantage to using tempdb..##mytable, or even to creating a permanent table (one that lasts until tempdb is rebuilt at restart) as tempdb.dbo.mytable, but if there is, please note it in the comments.

The Pen Test

We need something like the Air Force project seeking network vulnerabilities for our database systems. There are tools that can help from the outside, but we need SQL Server-specific tools. Why don't we have a Best Practices Analyzer for SQL Server 2014 (or 2016)? It seems as though this is one of those tools we should have available for checking instances and determining if there are obvious vulnerabilities. Being able to add custom extensions for particular needs would also be nice.

I guess if this is important to your organization, you'll pay for a tool (or a pen test), but the problem is that far, far too many organizations just don't take security seriously. It seems that despite the huge costs incurred by not properly securing computer systems, the penalties aren't nearly enough to get every company to patch their systems and write secure code. Most just hope they're never attacked, or, if they are, that no one finds out.

I'd really like to see some tools, or maybe just a best practice process, that would allow system administrators to test their own systems. I found this post from PenTestLab, and I think it's great. The article is a good start for what to look for and what issues might exist. However, even better than this might be a series of automated checks we can run that show management, developers, and DBAs when they've made a mistake that potentially opens a system to attack. Maybe best of all would be getting software vendors to actually run these types of tools against their applications and provide some proof they've built secure code.
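To make the idea concrete, here is the sort of small, scriptable check I have in mind. This is only a sketch covering a few obvious items, not anything close to a real pen test:

-- A few quick, scriptable security checks (a sketch, not a complete audit)

-- 1. Is xp_cmdshell enabled?
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'xp_cmdshell';

-- 2. Which logins hold the sysadmin role?
SELECT p.name AS sysadmin_login
FROM sys.server_role_members rm
    INNER JOIN sys.server_principals r ON r.principal_id = rm.role_principal_id
    INNER JOIN sys.server_principals p ON p.principal_id = rm.member_principal_id
WHERE r.name = 'sysadmin';

-- 3. SQL logins whose password is the same as the login name
SELECT name AS weak_password_login
FROM sys.sql_logins
WHERE PWDCOMPARE(name, password_hash) = 1;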

I don't hold out hope that security will improve anytime soon, but I do hope that more system administrators learn about what issues potentially exist and check their systems for vulnerabilities before someone else discovers them. Lastly, I hope they all share what they learn.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Changing the sa Password with SQLCMD

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I wanted to make a quick note on changing the sa password, as this is a sensitive account, and the password should be changed if you ever suspect it is compromised. I’d also recommend you change this if anyone that knows the password leaves your group.

I wrote about using SSMS, but that’s not always convenient. If you need to change this remotely, perhaps in a hurry, SQLCMD is a quick way to do this.

SQLCMD is a command line tool, so open a command prompt.


Run SQLCMD and connect to your instance as a sysadmin. If you have any doubt, you can enter the query from my previous post to check your connection.
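The connection itself is a one-liner; something like this, where MyServer is a placeholder for your instance name and -E uses Windows Authentication (swap in -U with a sysadmin SQL login if that's what you have):

sqlcmd -S MyServer -E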

Once you’ve connected, you can issue this code:

ALTER LOGIN [sa] WITH PASSWORD = N'Sup#rAmaz!ngP@$$w0rd';
GO

This is the code that will change the password for the login specified, even if I’ve logged in with a different account.

Once I’ve done this, test the sa login from a new session and verify it works.

SQLNewBlogger

Make sure you know how to do this. It’s a basic skill, so learn it, blog about it, and use it where appropriate. Maybe write about why you’d do this in your own post.

References

SQLCMD – https://msdn.microsoft.com/en-us/library/ms162773.aspx

ALTER LOGIN – https://msdn.microsoft.com/en-us/library/ms189828.aspx