
The SQL Server Container

I’ve been hearing about Docker quite a bit lately. It’s a piece of software (and a company) that builds containers to isolate an application, giving us a more lightweight way to separate our applications than virtualization. Right now the containers only run on Linux, where a Docker container running PHP behaves as though it were the only process on the host. Other Docker containers running on the same host could contain different versions of PHP without conflicts. Or you could have multiple versions of MongoDB, MySQL, Java, etc., separated in a much less resource-demanding way than virtualization allows.
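As a rough sketch of how that separation looks in practice (the image tags and port mappings here are assumptions for illustration), two versions of MySQL could run side by side on one Linux host:

# Each container gets its own filesystem and process space, so the versions never conflict
docker run -d --name mysql55 -e MYSQL_ROOT_PASSWORD=example -p 33055:3306 mysql:5.5
docker run -d --name mysql56 -e MYSQL_ROOT_PASSWORD=example -p 33056:3306 mysql:5.6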

It makes me think back to the idea of having stripped-down, lightweight database machines. We’ve gotten Windows Server Core, but I wonder if Microsoft would ever take this a step further and package an even more stripped-down Windows OS with SQL Server embedded in it. Rather than installing Windows, then SQL Server, and then potentially having administrators install other software on the host and incur the overhead of sharing resources, could we have a single installation: a bare-bones, optimized Windows/SQL Server combination that functions only as a SQL Server database machine?

I doubt Microsoft would ever move in that direction, but perhaps they might use a similar concept. The push from Microsoft seems to be toward more services and hybrid solutions built on cloud services. Perhaps smaller companies might want an appliance that contains SQL Server, or an application like Dynamics, with the ability to burst to the cloud for extra capacity. Certainly containers might make deployment of software easier in a PaaS environment.

Either way, Docker is coming to Windows, and it’s going to be another tool that developers should consider incorporating into their application development processes.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2 MB) podcast or subscribe to the feed at iTunes and LibSyn.

MY Data

Data is merely a set of numbers. However, when data is given context, it allows conclusions to be drawn, and many of us make a living managing some part of that process every day. Most of us deal with data that is viewed as somewhat public, related to our interactions with businesses and organizations. However, we are seeing some of this change as employers start to gather more data on us through social media and other sources, and begin to use that data to make hiring decisions.

It can get worse. As Shane Battier says in this article, “big data is scary.”

Most of us aren’t hired for our physical performance. While our health can make a difference in how well we write code over time, the regular, subtle decisions of drinking juice or soda don’t affect our performance much on a daily basis. We could argue about that, but from the employer’s perspective, I’m not sure those things matter.

Or do they? If the cost of employees rises over time, especially with regard to health care or sick time, should employers start to make decisions based on the data? Are we confident in the conclusions from data, which are really probabilities, not actualities? Would you want aspects of your life, perhaps outside of your health (think driving, finance, etc.), to be part of the evaluation (or negotiation) process for your employment?

Perhaps a lighter question: would you be comfortable managing and writing applications that work with employee data and try to analyze, perhaps even strongly recommend, changes in people’s lives? I’m not sure I would, and I certainly hope we don’t get to the point where the data about our personal lives directly impacts the way we are managed.

Steve Jones


The Voice of the DBA Podcast

Listen to the MP3 Audio ( 2.1MB) podcast or subscribe to the feed at iTunes and LibSyn. 

Prepping for Winter

Living on a ranch in Colorado means that we have to prepare for winter in a way that we don’t for other seasons. Taking care of livestock when the weather gets cold (and snowy) presents challenges, and since it can be a matter of survival, we have to think about potential issues in advance and work to mitigate them before they occur. My return from the PASS Summit last weekend had me outside working on a shed with my daughter to prep for this week’s winter storms.
When working with computers, we don’t usually need to make special preparations for seasons, but we do need to be ready for difficult times. I was reminded of this with a post this week about 12 Things to Check to Prepare for Winter. These aren’t necessarily things you need to check as the temperatures drop, but they are things you might want to examine periodically in your environment.
In many businesses, we are aware of the time periods when our systems get stressed, and great DBAs plan for those times. In one large organization I worked for, we knew that the end-of-quarter and end-of-year periods were very stressful and busy for many people in the company. As a result, any maintenance or preparation needed to occur at least a month early to ensure systems were at peak performance. In another job, at a power plant, the refueling process that occurred on a semi-regular basis was the busiest time of year, and we needed to be aware of when it was coming and double-check everything before it was too late to do so.
Good preparation is one of the keys to avoiding any of a DBA’s worst days. After all, if things go wrong for you at work, it’s unlikely The DBA Team will be available to save the day.
Steve Jones

Use -eq in PowerShell

I was writing a quick script to work with files, and I only wanted to process one file on each execution of the loop. I could have done this multiple ways, but I threw this together:

$fileEntries = [IO.Directory]::GetFiles("d:\placeholder");
$delete = 1;
foreach ($fileName in $fileEntries)
{
    if ($delete = 1)
    {
        # do something
        $delete = 0;
    }
}

When I ran it, it kept deleting everything in the folder. That was really annoying, and it took me a few minutes to spot the problem. I kept thinking my variable wasn’t getting set to a new value, but it was. The problem was in the if condition: in PowerShell a single = is assignment, so the test was setting $delete back to 1 on every pass through the loop and always evaluating as true.

I first changed it to this, but that produced a PowerShell error. That’s because == is C syntax, not PowerShell.

$fileEntries = [IO.Directory]::GetFiles("d:\placeholder");
$delete = 1;
foreach ($fileName in $fileEntries)
{
    if ($delete == 1)
    {
        # do something
        $delete = 0;
    }
}

Eventually I remembered that I needed to compare things with -eq, so I ended up with this, which worked perfectly.

$fileEntries = [IO.Directory]::GetFiles("d:\placeholder");
$delete = 1;
foreach ($fileName in $fileEntries)
{
    if ($delete -eq 1)
    {
        # do something
        $delete = 0;
    }
}
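As an aside, once the comparison works, the flag variable isn’t strictly necessary. Here’s a minimal alternative sketch (same placeholder folder) that processes only the first file and then exits the loop with break:

$fileEntries = [IO.Directory]::GetFiles("d:\placeholder");
foreach ($fileName in $fileEntries)
{
    # do something with the first file
    break;  # stop after one file
}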

Continuous Learning

It was a simultaneously busy and relaxing Tuesday for me recently. I attended Allan Hirt’s A to Z of Availability Groups at the PASS Summit. I’ve set up Availability Groups before, but I was never sure that I completely understood everything happening in my lab, so this was a good chance to add some depth and color to my skills in this area. I met a few people who were surprised that I was spending time learning in a pre-con. They seemed to expect that I’d know most of what Allan was talking about.

I know quite a bit about AlwaysOn and the related technologies, but I wouldn’t consider myself anywhere near an expert like Allan. I’ve fumbled through settings, but it’s a complex topic, and more importantly, it’s easy to misunderstand or confuse the subtleties of the technology. I went because a good, solid grounding in the technology, led through in an organized fashion by an expert, is a good way to expand and solidify your knowledge. I saw other “experts,” MCMs, and talented speakers in different pre-cons, each trying to continue to learn more about SQL Server.

SQL Server is a big platform, one that’s wide in the number of features and deep in complexity. No one knows everything about SQL Server, and most of the people I know who are extremely talented in areas of the platform continue to grow their knowledge on a regular basis in a variety of areas.

It’s the mission of SQLServerCentral to help you do the same thing, with our daily newsletter that brings you educational information about SQL Server. Hopefully you look forward to regularly growing your knowledge, just as I do.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.1 MB) podcast or subscribe to the feed at iTunes and LibSyn.

Bucket List Data

I made a bucket list over a decade ago after hearing an interview with Ted Leonsis, former President of AOL. He had an interesting list, and while mine was more pedestrian, I have managed to knock off quite a few items, though there are more I’d like to get to. Actually, I need to redo my list and re-think my life as it’s changed since I first wrote things down.

Over the years, I’ve also tracked other sorts of data about my life, things that just intrigued me. I had my running streak (1500+ days), 14ers climbed (4), states visited (38), books read each year (75-100), and more. I’m not quite sure why numbers have a fascination for me, but they do and perhaps that’s why I like working with technology and data.

I’m sure that many of you share a similar passion for numbers, so I wanted to ask you this week:

What interesting data points do you track, or have you tracked, about your life?

It could be something with your family, hobbies, sports, weather, or something else. If there’s something you track and would like to share, let us know. If you have some interesting way of tracking data, or even an application you’ve worked on, I’m sure there are others that would find the data interesting.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.9 MB) podcast or subscribe to the feed at iTunes and LibSyn.

It’s Not Just Poor Coding

We hear regularly about SQL injection continuing to be a problem (it is) and malware causing data loss for many companies. We’ve got misconfigured firewalls and backup tapes being lost. However, even if we could solve all those issues, we’d still have security problems.

A new study shows that many workers regularly share a tremendous number of files with coworkers, and potentially with friends, through various cloud services. I don’t worry so much about cloud service providers being hacked, or their employees selling information. It could happen, and probably does, but those are likely isolated incidents with limited impact. I’m more worried about the poor security practices of the employees themselves.

Many people tend to use the same passwords on different sites (a bad practice). They also often open attachments from “friends,” unaware of how dangerous it can be to pass along and share documents that can embed malware. Malicious coders are becoming smarter, learning to make their actions more subtle, often infecting a device only to use it to send emails or files to other users who are the real target of the malware. With all of these issues, it’s much more likely that users will open the door to attackers themselves than that a direct, targeted attack will succeed. It’s also highly possible that many users will share documents with friends, unaware that the confusing settings in some of these services might expose corporate data to the general public.

Ultimately, I’m becoming more cautious about sharing information with others. I won’t open many files sent to me by people I don’t know well, and I never look at “humorous” attachments. I won’t accept a USB key from you and plug it into my laptop. It’s sad, because we’ve built some amazing services that allow us to communicate more easily and quickly than ever before, but we haven’t built these applications with an eye toward security first and convenience second.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.6 MB) podcast or subscribe to the feed at iTunes and LibSyn.

T-SQL Tricks – Templates in SSMS

There are lots of little tricks that you can use to become more proficient, and more efficient, at writing T-SQL. Here’s a short one that might be aimed more at administrators, but I think it’s very handy for developers as well.

SQL Server tools have included templates for years. I first started using them in Query Analyzer with SQL Server 2000. The tools have changed, but we still have templates in Management Studio (SSMS). We can access them by opening Template Explorer from the View menu (as shown below).

[Screenshot: Template Explorer on the View menu]

We can also use the Ctrl+Alt+T shortcut. Either of these will give us a tab on the right side of our SSMS main window, opposite the Object Explorer, as shown below.

[Screenshot: Template Explorer docked on the right side of SSMS]

Like all the windows in SSMS, I can detach it, move it, auto-hide it, etc.

Each of these folders contains a series of default scripts that are installed with the tools. For example, I can look in the Backup folder and see three scripts there. I can left-click one of them and drag it to the query window, and the script will appear there, as I’ve shown here.

[Screenshot: a backup template script dragged into the query window]

Ctrl+Shift+M will bring up the parameters in the script, which I can replace with my own values (or accept the defaults).

[Screenshot: the template parameters dialog]

Many of these are generic, but they are useful. I’d encourage you to take a look at them and use them when you can.
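If you want to poke at the template files themselves, they’re just .sql files on disk. Here’s a quick PowerShell sketch to list them; the path is an assumption, since it varies by SSMS version (this is the per-user folder for SSMS 2012):

# List the per-user template scripts (path varies by SSMS version)
$templatePath = Join-Path $env:APPDATA "Microsoft\SQL Server Management Studio\11.0\Templates\Sql"
Get-ChildItem -Path $templatePath -Filter *.sql -Recurse | Select-Object -ExpandProperty FullName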

You can customize these, but that’s for another post.

Double Up During Trips

Would you do this? Combine two calls, like an INSERT and then a SELECT, into the same batch sent from an ADO.NET client? It’s a proposal from Visual Studio Magazine to combine code into one batch instead of making two query calls (and two round trips).
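Here’s a minimal sketch of the idea using ADO.NET from PowerShell (the table, database, and connection string are hypothetical): the INSERT and the follow-up SELECT travel to the server as a single batch, costing one round trip instead of two.

# One SqlCommand, one batch, one round trip
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=.;Database=Sandbox;Integrated Security=True")
$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
INSERT INTO dbo.Orders (CustomerName) VALUES (@name);
SELECT OrderID, CustomerName FROM dbo.Orders WHERE OrderID = SCOPE_IDENTITY();
"@
[void]$cmd.Parameters.AddWithValue("@name", "Acme")
$conn.Open()
$reader = $cmd.ExecuteReader()
while ($reader.Read())
{
    "{0}: {1}" -f $reader["OrderID"], $reader["CustomerName"]
}
$conn.Close()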

It’s an interesting idea, and those of us who are older programmers probably think about this unconsciously all the time. We still remember when bandwidth was measured in kbps and processing power in MHz, and we consider those resources valuable. We try not to waste them, precisely because they impact performance. Perhaps that’s why we recoil at some of the ORM frameworks and other programming structures that make development easier but are tremendously chatty in how they communicate with our servers.

I know that programmer salaries are high, and that labor is one of the more expensive parts of building software. However, rather than building inefficient applications that will create customer complaints, shouldn’t we be educating our developers to ensure they write efficient applications? They can still use a framework or template of some sort, but they should do so in an efficient way. Along with training, using code reviews and keeping everyone on a central VCS where all code is visible can help spread knowledge throughout your staff.

I realize that many managers are loath to invest in developers’ knowledge when those developers might leave the company for another position, but imagine that a manager elsewhere is investing in their developers. It’s entirely possible those developers then come to work for you. Ultimately, we all benefit if we work to spread better software development practices throughout the industry.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.1 MB) podcast or subscribe to the feed at iTunes and LibSyn.


T-SQL Tuesday #60 – Something New Learned

It’s been five years, and that’s amazing. Not many things last for a year, much less five, but the T-SQL Tuesday party, started by Adam Machanic (B|T), has been a lot of fun for me.

This month, Chris Yates hosts and his theme is Something New Learned. It’s a great topic, especially given the aims of T-SQL Tuesday to spread knowledge out in the world and share it with others.

AlwaysOn

This is good timing for me, as I took a one-day class last week. At the PASS Summit, I spent Tuesday in Allan Hirt’s A to Z of Availability Groups, which helps you understand AlwaysOn and the Availability Group portion of the technology. It was a great experience, and Allan did a fantastic job of walking through an overview and then the details. While I haven’t attended, I can see how Allan’s Mission Critical SQL Server classes would be a valuable, hands-on way to learn this material and become productive quickly on your own systems.

The most interesting part for me was that this wasn’t an all-day class where I listened to lectures, followed along in a workbook, and then moved on. There were actual labs, and not labs that meant downloading scripts onto my machine and working through them: actual three-node labs, dedicated to me, with instructions on how things were configured.

What did I learn? Quite a bit, but for me the big mysteries I’ve struggled with in AlwaysOn setups have been some of the permissions. Going through the labs meant getting the necessary permissions set up in the AD domain. The few places I had issues in the lab exercises were almost all related to a permission I missed or had set incorrectly.

I also went through the advanced versions of the labs, specifically to practice using PowerShell for some configuration items. This was the chance to practice some skills and learn a bit more about how I can use PoSh for real-world tasks. While the GUI might work well, I know that if I wanted to build and create a lab in short order, or on demand, I’d really need PoSh scripting to ensure it was done correctly and repeatably. The lab reinforced that.
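For a flavor of what that scripting looks like, here’s a rough sketch using the SQLPS cmdlets that ship with SQL Server 2012 and later (the server names, endpoint URLs, and database are hypothetical):

Import-Module SQLPS -DisableNameChecking

# Define the replicas as in-memory templates (Version 11 = SQL Server 2012)
$primary = New-SqlAvailabilityReplica -Name "SQL1" -EndpointUrl "TCP://SQL1.contoso.com:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$secondary = New-SqlAvailabilityReplica -Name "SQL2" -EndpointUrl "TCP://SQL2.contoso.com:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the availability group on the primary instance
New-SqlAvailabilityGroup -Name "AG1" -Path "SQLSERVER:\SQL\SQL1\DEFAULT" -AvailabilityReplica @($primary, $secondary) -Database "SalesDB"

Even in a lab, building it this way means the same configuration comes out every run, which is exactly the repeatability the GUI can’t guarantee.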

I also learned a bit about a better way to teach. I’ve been in a few classes and lots of sessions over the last few years, but this dedicated lab environment really made things much easier for me. The hands-on work was valuable in actually working through the concepts. In fact, I’ll be going through it again today, since I have access to the labs (and the workbook) for 10 days; I can set up another Availability Group and see the things I’ve done wrong in my own lab setup.
