What is the True Version of Code?

As I’ve been working on more development tasks, I regularly hear from developers that a version control system (VCS) holds the “true” version of our code. We should always look to the VCS for the actual code we need to work with. I believe that, and I think it’s the right position.

However, I heard a DBA say that the true version of the code is always in production. That’s what’s being executed, especially in the database world. There is some truth here as well, but in my mind this speaks to a broken process. It means we have a more chaotic, less “engineered” way of producing our software. Perhaps more importantly, we don’t have a good source from which development can proceed.

Certainly hot fixes need to occur, and there are times you can’t wait for a set of changes to be made in development, tested, approved, and then deployed to production. I think you can overcome most of this, but there may always be cases where it happens, so I don’t think the answer is simply arguing about how fast a deployment pipeline we can build.

Instead, I’d say that production changes should always be fed back to the VCS. Our code is an ever-changing set of files that we can only really keep track of with a proper VCS. We should be able to trace the view of production at any point in time back to a set of versions for our files in the VCS.

Certainly some of you have gotten by for years, perhaps your entire career, without a VCS. However, I’d argue that isn’t a good position for your organization. The habits and skills you have could easily be lost if you leave, and your memory isn’t a system of record. A stable system, a VCS, should note the state of our systems, both in development and production.

And if you’re worried about the cost of a VCS, there are plenty of free (as in beer) ones. I recommend Git, but there are plenty to choose from. In terms of overhead, certainly you can use tools like SQL Source Control or ReadyRoll, but even without those, using a VCS really just requires building some habits. If you aren’t willing to change your workflow slightly, there’s probably no way you’ll ever get your environment under control in the long run. However, if you do move to a VCS, I think you’ll find it’s well worth the effort.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Laptop Build Quality

I’ve been looking around at various laptops in preparation for getting a new one. I wrote about considering the Surface Book, which is still on the list, but it has dropped a bit. The hardware quality is great, but when I was in the UK last week, a few people had them and complained about some driver bugs. In particular, I was messing with one person’s touch keyboard, and they warned me not to pull it off. If I did, the machine might crash.

Ugh. At $2k and lots of hype, I wouldn’t expect any issues like that.

In any case, this post is about build quality, not software.

I was lying in bed this week, working on some editorials, when my daughter came in. She wanted me to look over a piece she was writing for school and handed me her laptop. She has a MacBook Air, and as soon as I put my Toshiba z30 down, I was impressed with the Air’s build. It’s solid, it’s light, but it feels strong. I remember loving my MacBook Air, and holding hers as I reviewed her work, I was reminded of that.

My z30 flexes, to the point that after a year of the frame twisting, my touch point is unusable. The trackpad was also far, far superior on the Air. I thought the MacBook Pro was like that, so I swung by a Best Buy to check. I walked in, went to the Apple section, and picked up a MacBook, and it felt solid. It’s just a better device than my Toshiba.

However, I was curious about others. I did walk over and look at a Surface Book. It’s a solid machine, about the size and weight of the MBP, but it has the touch screen, which is interesting. The trackpad works differently, but it’s a nice machine. Detach the screen and it’s a tablet, which is nice. I still don’t know how much I’d use the tablet form factor, but it’s tempting. However, the weight distribution is strange: the screen is heavier than the keyboard, the opposite of most laptops.

I also walked over to look at a Yoga 900, which I was curious about after reading Tim Mitchell’s review. I’m actually anxious to see how Tim’s machine looks next month in NM, but for now I contented myself with the display model. The hinge is neat, and this is a light laptop. At first glance, it also felt solid. The flex I have on my Toshiba was not there. Despite a few reviewers noting this felt plastic and cheap, I didn’t get that feeling. It’s no MacBook, but it’s better than my Toshiba.

This will be an interesting decision for me, but I’m going to wait for Apple’s announcement in March and see what they might do. I doubt they’ll go touch screen, but you never know. I have gotten used to touching my screen for some reading, and I think I might miss that with an MBP.

Training at the Top

Many of us see flaws and problems in the way we write secure code as we build software, as well as in the way our infrastructure security is configured. There has been no shortage of times in my career when I, or a coworker, wondered why our company didn’t work to implement better security in its systems.

Perhaps it wasn’t us. Perhaps it’s not a lack of desire, but a lack of knowledge. I ran across a piece in Enterprise Security that notes we should have security training starting at the top, with our C-level executives. Far too many of them don’t necessarily understand the threats, or the nature of the threats, because many of these threats didn’t exist 20, or even 10, years ago. Often we have management that has never faced these kinds of vulnerabilities.

I think there’s certainly room for most of us to learn more about security, especially database security and SQL injection, as these are fundamental issues around some of our most important assets: our data. However, when we want to implement stronger security, or limit access, we need the support of management, who themselves need to understand the issues, not just respond to whoever makes the best case or complains the loudest.

The world has changed, in that our valuable assets can be transferred to our competitors, or to common criminals, without us being aware of the disclosure. Or perhaps worse, our enemies could change some data and we might never know without the ability to perform comprehensive audits of our systems, something many of us might not be able to do. We certainly need technical capabilities, but also the time and support from management.

I think there is a good case for asking our management to make an effort to understand cybersecurity, and I’d urge you to pass this link along to your management.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.

 

DeveloperBall

I’ve been fascinated by Moneyball and the efforts made in sports to assemble good teams using data more than opinions. I used to think about baseball more, but lately I’ve been intrigued by American football. There are challenges in assembling a team, with the constraints of a limited budget, existing contracts that cannot be changed, and the fact that one player isn’t a replacement for another, even when they have similar skill sets.

That got me thinking that we could do this with our development teams. Certainly the skills that each of us possesses might be closer to one another’s than athletes’ are, but that doesn’t change the need to have a variety of skills on a project. We need someone that writes great T-SQL, someone that can manage front end code, someone that can build and provision environments, and someone to help test.

I know that many of you can do all these things, but do you want to? Maybe more important, is it a good use of your skills as a developer to manage restores or schedule index maintenance? Those are tasks that might provide a welcome break, but they aren’t necessarily the tasks that I want you to be responsible for or even spend time performing.

There is also the very, very high likelihood that the people hired in your environment have different levels of skills. In a group of T-SQL developers, or SSIS package builders, or any other group, each of you can learn from the others. And you should learn from others, since it’s entirely likely that some of you will leave and others will need to handle the load left behind. After all, the next person hired is as likely to be the weakest team member as the strongest.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

Creating a Database Snapshot

Another simple post for me that hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I’ve rarely dealt with database snapshots, outside of lab experimentation. They didn’t exist when I did most of my DBA work, and since then we haven’t seen the need for them at SQLServerCentral, though I may suggest we add them to our deployment process since we can quickly roll back if something breaks.

However, I created one recently for a test and realized that I’d forgotten the syntax. Unlike the quick "create database xx" syntax I often use, with a snapshot I need to be more specific.

The big item that I must specify is the location of the snapshot file. This is the file that gets written to by the copy-on-write process, which ensures the original state of the pages, as they were when the snapshot was created, remains available.
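As an aside, once a snapshot exists you can watch copy-on-write happen by checking how much of the sparse file has actually been allocated on disk. Something like this works as a quick sketch, using the snapshot name from the examples below:

-- See how much of the sparse snapshot file is actually allocated on disk.
-- The value grows as changed pages in the source database are copied into the snapshot.
SELECT DB_NAME(vfs.database_id) AS snapshot_db,
       vfs.file_id,
       vfs.size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(DB_ID('sandbox_snapshot_20150122_1345'), NULL) AS vfs;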

You do need to give the database a logical name as well, which can be anything, but the reference below has suggestions. I’d say that this is bad:

CREATE DATABASE sandbox_snapshot
ON (NAME = MySnap
, FILENAME = 'h:\SQL Server\MySnap.ss'
)
AS SNAPSHOT OF Sandbox

But this is better:

CREATE DATABASE sandbox_snapshot_20150122_1345
ON (NAME = SandBox_Snap_20150122
, FILENAME = 'h:\SQL Server\MySnap.ss'
)
AS SNAPSHOT OF Sandbox

Because the snapshot is based on an existing database, at a particular point in time, it’s useful to specify the time when the snapshot was created, if possible. While you can get this from metadata, if you have people that look to multiple snapshots for information, it can be handy to know when each is from.
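That metadata query is short, for what it’s worth. Snapshots are the databases with a non-NULL source_database_id in sys.databases:

-- List snapshots, their source database, and when each was created.
SELECT name,
       DB_NAME(source_database_id) AS source_database,
       create_date
FROM sys.databases
WHERE source_database_id IS NOT NULL;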

However, if you are providing this on a schedule, perhaps daily, for people to report from, you might need to have the same name every day. Think about this, as you cannot rename a snapshot once it’s created.
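As an aside, the quick rollback I mentioned earlier is a single statement. This sketch reverts the source database to the snapshot from the example above; note that any other snapshots of the database have to be dropped first:

-- Revert the source database to the point in time captured by the snapshot.
-- Any other snapshots of Sandbox must be dropped before this will succeed.
RESTORE DATABASE Sandbox
FROM DATABASE_SNAPSHOT = 'sandbox_snapshot_20150122_1345';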

SQLNewBlogger

When I realized I had to look up the syntax, I took a few notes and captured code, which meant I combined this writing (< 10 minutes) with other work I was doing.

You should do the same. When you tackle something new, take screenshots, save code, and drop it into a OneNote/Evernote/etc. notebook for your weekly blog writing.

Reference

The references I used:

Crafting Your Resume

Your resume or CV is often the first glimpse a hiring manager gets of your career. Even if you’ve been recommended by a friend or current employee, a manager often requires some summary of who you are, as a few paragraphs on a screen that they can study.

I have my own advice, but this post from a manager at StackOverflow covers quite a few of the same things I recommend. I certainly agree with the first section, writing for humans. Over and over again I hear from people that make hiring decisions that they spend 30-60 seconds on a resume to get an impression.

One minute. You should set a timer for one minute and let someone read your resume. Then take it away and ask the person for their impressions of what you know. In fact, maybe that’s a great icebreaker at a user group meeting or SQL Saturday. Find someone that hires others, or is an experienced person in your industry, and ask them to do just that.

We learn a lot from experimenting and seeing what works well and what doesn’t. Many of us solve problems in code and realize later that we could rewrite things more efficiently. Why not do that with our resumes? We can certainly control how we present ourselves: be interesting, and more importantly, don’t waste the reader’s time.

You get one minute, or less, to make a good impression, so spend some time crafting your resume. Control your brand and ensure that you let people know who you are. Do your best to communicate the skills you have, the things you do well, and the ways in which you are a good employee. Most of us will change jobs at some point, so why not be prepared to present yourself in the best way you can by working on your resume now?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Push Button Deployments

I worked on an Agile team in 2000, and we deployed changes to our web platform every Wednesday for over 18 months. That was about the length of time I was involved before the small startup failed, though not for lack of technology. In fact, I managed a few weeks’ extra pay by helping sell and deploy our IP to another company.

Deploying regularly and often, to meet business needs and get features into the hands of customers, is important. For most businesses, this is something the CTO and CEO want. It can work, and there’s a case study with Nmbrs, a Redgate customer that has used DLM, which includes this quote:

“The productivity of the team has improved dramatically. We were doing one database update a week, which took almost a full day. Now it takes ten minutes. We can spend the extra time we’ve gained developing new features.”

That’s been my experience with tooling and automation. If you get things set up, as I did with my startup, then you gain confidence in deployments, are willing to make them more often, and get these small chunks of software to customers on a regular basis. It’s not just customers, either; we found we could make some fundamental changes in databases and applications that smoothed development later without affecting customers.

I know I’m paid by Redgate to talk about DLM, but the concept, the idea, the processes really do work well. They’ll help you get features to customers quicker (and feedback to developers faster), as well as give you confidence that you can release a fix if you find an issue with your application.

Advent of Code–Day 2

I’ve continued working along, and while I found Day 2 to be straightforward in Python and PowerShell, I thought it was tricky in SQL. I decided this one was worth a post, since I had to futz around a few times to solve it, and I ended up with a slightly different approach than some others.

If you haven’t solved it, then try. Come back here later and compare solutions, but give it a try first.

 

Solution coming below, so don’t scroll if you don’t need the solution.

 

 

 

 

 

But first,

 

 

 

 

 

Missteps

I had a misstep in this one. I loaded the entire list of packages, one line per row, into a single-column table. My challenge to myself was not to use ETL work, or updates, to break this apart. I wanted a simple solution, since I didn’t want to take up extra space in the database.

As a result, I wanted a single query against a single string column that had the package size stored as one value, ‘2x3x4’ as an example.

My first attempt used the Moden splitter function, which seemed to work well. I got three rows for each package. I then used a window function to grab that data, order by the sizes, and then start performing calculations. When I didn’t get the right result, I started digging in.

One of the first things I saw was that I had multiple packages with the same sizes. I had two 22x3x1 packages, for example, and when I used a partition based on the dimensions, I had calculation problems. That’s because the window partition doesn’t know that three rows are one package and three are another.

I could have fixed this with some other value to capture the package, maybe a row_number even, but I decided not to go down this route.
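For what it’s worth, here’s a rough sketch of what that alternative might have looked like: tag each source row with a key first, then partition the window on that key rather than on the dimension string. This is untested, illustrative code, not the route I took, and it assumes the Moden splitter is installed as dbo.DelimitedSplit8K.

-- Hypothetical fix: give each package its own key before splitting,
-- so identical dimension strings don't collapse into one partition.
WITH ctePackages AS
(
  SELECT PackageID = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
         dimensions
  FROM day2_wrappingpresents
)
SELECT p.PackageID,
       p.dimensions,
       s.Item,
       SizeOrder = ROW_NUMBER() OVER (PARTITION BY p.PackageID ORDER BY CAST(s.Item AS int))
FROM ctePackages p
CROSS APPLY dbo.DelimitedSplit8K(p.dimensions, 'x') s;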

 

 

 

 

My Solution

I decided to break this down, and I used a series of CTEs to do it. I haven’t gone back to optimize things, or to combine CTEs, which is possible, but instead left the CTEs as I wrote them to solve parts of the puzzle. Multiple CTEs are easy, and they help examine the problem in pieces.

My first step was to parse the string. I don’t love this solution, as it’s limited to a three-dimension package, but it does seem to be the easiest way to break down the dimensions of the package. My query looks to find the string positions for:

  • end of the first dimension
  • start of the second dimension
  • start of the third dimension.

This gives me the simple query:

with cteSplit (d, el, sw, sh)
as
(
select
   dimensions
, endlength = charindex('x', dimensions) - 1
, startwidth = charindex('x', substring(dimensions, charindex('x', dimensions), 20)) + charindex('x', dimensions)
, startheight = len(dimensions) - charindex('x', reverse(dimensions)) + 2
from day2_wrappingpresents d
)

Once I had these values, a little math gives me the length, width, and height.

, cteDimensions
as
(select
   d
   , l = cast(substring(d, 1, el) as int)
   , w = cast(substring(d, sw, sh-sw-1) as int)
   , h = cast(substring(d, sh, len(d)) as int)
from cteSplit d
)

Now I’m in business. These two queries were fairly simple, despite all the nested functions. I’ve got integers with the dimensions of each package.

Now the tricky part. I want these ordered. They’re columns, not rows, and I can’t put an ORDER BY in the CTE, so I need to use some comparisons.

, cteOrder
as
( select
   d
, small = case
            when l <= w and l <= h then l
            when w <= l and w <= h then w
            when h <= l and h <= w then h
        end
, middle = case
            when (l >= w and l <= h) or (l <= w and l >= h) then l
            when (w >= l and w <= h) or (w <= l and w >= h) then w
            when (h >= l and h <= w) or (h <= l and h >= w) then h
        end
, large = case
            when l >= w and l >= h then l
            when w >= l and w >= h then w
            when h >= l and h >= w then h
        end
  from cteDimensions
)

Not the prettiest code, and perhaps there are better ways to determine this, but this passed all my tests, and seemed to work.

I could have put the next part in the final query, but I decided to make this a separate CTE to easily read the math. I know some people don’t like lots of CTEs, but in this case, I think they make the query very readable. I should look back at this in six months and see what I think.

, cteFinal
as
(
select
  d
  , area = (2 * small * middle) +
           (2 * small * large) +
           (2 * middle * large)
  , slack = (small * middle)
from cteOrder
)

Now I use a final outer query to sum things up.

select
sum(area + slack)
from cteFinal

The other thing I noticed here is that when I needed to solve the second part, I only had to change the math in the cteFinal to get the new values. It took longer to re-read the second part than to change the code and solve it.
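If I remember the puzzle correctly, part two asks for ribbon: the smallest perimeter of any face, plus the volume for the bow. A sketch of the changed tail of the query looks like this; the earlier CTEs stay exactly the same:

-- Part two (a sketch): ribbon = smallest perimeter, bow = volume of the package.
, cteFinal
as
(
select
  d
  , ribbon = (2 * small) + (2 * middle)
  , bow = (small * middle * large)
from cteOrder
)
select
sum(ribbon + bow)
from cteFinal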

I looked over how Wayne Sheffield and Andy Warren solved this in T-SQL, and I thought their approaches were interesting. I didn’t want to PIVOT or UNPIVOT anywhere, nor did I look at performance here. This runs so quickly that I’m not sure it matters, though I do wonder: if we were calculating across a million rows, would one approach be better?

I may look, but for now, I’ll leave that to someone else.

Disabling SQL Server Network Protocols

Another simple post for me that hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I ran across a question on network protocols recently, which is something I rarely deal with. Often the default setup for SQL Server is fine, but there are certainly times you should add or remove network connectivity according to your environment.

Here’s a short post on turning off (or on) a network protocol for SQL Server.

What’s Enabled?

The easiest way to verify what’s enabled is to use the SQL Server Configuration Manager. You’ll need administrative permissions on the host to run this, but it’s easy to find.

[Screenshot: finding SQL Server Configuration Manager from the Start menu]

Once you open it, typically you’ll have a list of the items that can be configured.

[Screenshot: the list of configurable items in SQL Server Configuration Manager]

We want the SQL Server Network Configuration, which is the server-level configuration for this host. The Client configurations are for when the host is being used as a client to connect to a SQL Server.

[Screenshot: the protocols listed under SQL Server Network Configuration]

As you can see here, I have Shared Memory and TCP/IP enabled for this instance, but Named Pipes disabled.
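If you’d rather check from a query window, sys.dm_exec_connections shows which protocol each current connection is actually using. It isn’t the same as listing what’s enabled, but it’s a quick sanity check:

-- net_transport shows Shared memory, TCP, or Named pipe for each current connection.
SELECT session_id,
       net_transport,
       auth_scheme
FROM sys.dm_exec_connections;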

Disabling a Protocol

As you might expect, this is easy. I right-click a protocol, and I can change the status. In this case, I’ll disable Shared Memory.

[Screenshot: disabling the Shared Memory protocol from the right-click menu]

Once I do that, the protocol shows as disabled. However, the change hasn’t taken effect on the instance yet. I’ll get this message.

[Screenshot: warning that the change will not take effect until the service is restarted]

I need to restart the server. Once that’s done, no one will be able to use Shared Memory on the host.

I can fix this by enabling the protocol again.

[Screenshot: enabling the Shared Memory protocol again]

Of course, I need to restart my instance again.

Checking the Log

When SQL Server starts, quite a bit of configuration information is written into the log. This is useful for troubleshooting in many cases. One of the things you’ll find is the network listeners, as shown here.

[Screenshot: Log File Viewer - JollyGreenGiant_SQL2016, showing the network listener entries]

This is usually after the database startup information, so if you look, you can see I have some local pipes and some TCP/IP settings here.
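If you don’t want to scroll through the log viewer, you can also search the error log from T-SQL. This is a sketch using the undocumented, but widely used, xp_readerrorlog: the 0 means the current log, the 1 means the SQL Server error log (rather than the Agent log), and the string filters to the listener messages.

-- Search the current SQL Server error log for the network listener entries.
EXEC master.dbo.xp_readerrorlog 0, 1, N'listening';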

SQLNewBlogger

After reading the question, this took less than 10 minutes to write, including making the screenshots. However, I’ve done this before. If this were your first time, it might take you longer to research and write, but I bet most DBAs could do this in 30-45 minutes.

Chips and Margaritas at SQL Saturday #461

It was a few years back that my wife and I planned a trip to the Austin City Limits music festival to see Stevie Wonder. This was a bucket list item for us, and we thoroughly enjoyed a long weekend in the city. I’ve been there for SQL in the City as well, and each time I’ve gone, I’ve enjoyed a lunch at one of the Mexican restaurants south of the river. My kids always joke that my wife and I will fill up on chips and margaritas at Mexican restaurants, and it’s somewhat true. Often dinner becomes lunch the next day.

It’s just two weeks to SQL Saturday #461 in Austin, and I’m looking forward to going back. In fact, I’m going to make it a point to at least go get chips and a drink at the same place. I can’t remember the name, but I know how to get there, so I’ll be fine.

However, the main event is the free, one-day SQL Saturday itself. I’ll be there delivering my Branding for a Dream Job presentation, but there are tons of other great talks. From AlwaysOn to Power BI to Azure to writing faster queries, you’ll have a great day of learning on all sorts of SQL Server topics.

If you’re anywhere near Austin, register today and come join us for a fun day that kicks off my SQL Saturday 2016 tour.