Quick Scalar Tables–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

This one actually comes from Itzik Ben-Gan, by way of Kevin Feasel. Kevin's post is about CROSS APPLY, which is something I need to play with more, since I hadn't thought about using it this way.
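
If you haven't seen the pattern, here's a minimal sketch of how CROSS APPLY and VALUES combine, using a hypothetical dbo.Sales table with one column per quarter (my example, not Kevin's code). CROSS APPLY unpivots the columns into rows:

-- Hypothetical table: dbo.Sales(Region, Q1Sales, Q2Sales, Q3Sales, Q4Sales).
-- The VALUES construct builds a four-row table from each source row's columns.
SELECT  s.Region, q.Quarter, q.Amount
FROM    dbo.Sales AS s
CROSS APPLY ( VALUES ('Q1', s.Q1Sales),
                     ('Q2', s.Q2Sales),
                     ('Q3', s.Q3Sales),
                     ('Q4', s.Q4Sales) ) AS q(Quarter, Amount);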

However, the thing that caught my eye was the use of VALUES to get a quick table. Here's an example. Suppose that, for some reason, I want a list of the last four years in a table. I can do this:

WITH    cteYears ( Yr )
          AS ( SELECT   2016
               UNION
               SELECT   2015
               UNION
               SELECT   2014
               UNION
               SELECT   2013
             )
    SELECT  Yr
    FROM    cteYears;

Or I could be more compact and do this:

WITH    cteYears ( Yr )
          AS ( SELECT  Yr = y
               FROM ( VALUES (2016), (2015), (2014), (2013) ) a(y)
             )
    SELECT  Yr
    FROM    cteYears;

Is one easier? Certainly some might prefer the UNION version, but it's a lot more typing, and I think it's easier to make a mistake when editing it. The VALUES clause can easily simulate a table, and you can quickly see groups of rows as well.

WITH    cteYears ( Yr, Champ )
          AS ( SELECT   y
                      , c
               FROM     ( VALUES ( 2016, 'Broncos')
                           , ( 2015, 'Patriots')
                           , ( 2014, 'Seahawks')
                           , ( 2013, 'Ravens') ) a ( y, c )
             )
    SELECT  Yr, Champ
    FROM    cteYears;
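
As an aside, the CTE isn't even required; the same VALUES construct works as a derived table directly in a FROM clause:

SELECT  v.Yr, v.Champ
FROM    ( VALUES ( 2016, 'Broncos')
               , ( 2015, 'Patriots')
               , ( 2014, 'Seahawks')
               , ( 2013, 'Ravens') ) v ( Yr, Champ );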

SQLNewBlogger

This is a quick item I noted, and a technique I've started to use more and more to build quick tables. It's much easier for me to visualize and create the virtual table this way, and I use it to mock up data or run quick tests.


Too Much Information for Sports

Many of us deal with lots of data and information, sometimes conflating the two when they are really separate things. Our jobs usually revolve around helping others extract useful value from the data we store in some way. This could be with reports, or with transfers to another system, or even a simple explanation of what the data in a column means from a business perspective.

Like many industries, pro sports have been using more and more data to help them better manage their business. This was popularized by Moneyball, a book (and movie) about baseball. Since then, plenty of other teams across many sports have started to think about data analytics. The NBA gathers lots of data for each team, even making some available publicly (including spatial shot data). The NFL uses lots of sensors, and more sports, especially Olympic sports, are gathering data to help athletes perform better and learn more about the impact of their decisions.

However, more data isn't necessarily what makes decisions on the field of play. There was a great quote from Doug Baldwin, an American football player for the Seattle Seahawks. Looking at all the data collected, he noted: "Yes, the data and information is useful, and give it all to me. But at the end of the day, the user has to use it the right way."

That's a great quote, especially considering the owner of his team is Paul Allen, co-founder of Microsoft. More data doesn't necessarily solve a problem, tell you what to do, or how to do it. Instead, data analytics and analysis are still a bit of an art with some science behind it. Data matters, but when we reduce a problem to a simple set of statistics, we can miss the subtleties of the actual situation.

I wouldn’t think most industries could do without data gathering and analysis, but people still need to be involved and work with the bits and bytes. Whether you’re in sports or business or some other area, it’s important to use data, but don’t discount the human factor in applying experience and understanding to extracting information from that data.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5MB) podcast or subscribe to the feed at iTunes and Mevio.


Learning to Search

I've had my own domain for about 17 years now, and about 6 or 7 years ago I decided to move my email from GoDaddy to Google. There were a variety of reasons, but the bottom line was that I got more accounts and more space for less money. I've been happy there, and I even set up accounts for my kids to have their own email.

About a year after I gave my son an email account, I went to ask him if he'd seen a particular note from our Scout troop. He opened his email, and I saw that he had quite a full inbox. He had various newsletters and kid-related emails (most unread) as well as a bunch of other messages. As he looked for the email, I asked him why he didn't delete the old messages that he was ignoring or had already read. His reply surprised me.

He pointed to the left side of the screen, below the list of folders, at a small graph. “See that,” he said, pointing at the percentage of space he’d used. “When that goes above 10%, I’ll worry about deleting things.” It was at 1% at that time, after a year of email.

Not long after that, I read a note from Mark Cuban on email, in which he said he didn't bother to delete emails or move them into folders or anything else; it's inefficient to worry about this. Instead, he buys space as needed and uses search. I started doing that as well, rarely deleting emails and counting on search to help me find things. I'm at 5% of my Google allocation after 6 years, and if something isn't on the first page of my inbox, I always use search to find it.

I’ve started to do that elsewhere. Even in databases. I was reminded a few months ago that Redgate Software (my employer) makes SQL Search. Ike Ellis made a short video that shows how he uses SQL Search to find objects in databases rather than wandering through the Object Explorer. Ike is a consultant and runs into lots of databases and can’t remember where every object is in all of them.

I like that, and I find myself starting to use SQL Search because it’s quicker than opening a database, then opening Programmability, then opening Stored Procedures, and scrolling. Search is a couple of clicks and keystrokes, and way faster.

I use search more and more: on my local disks, inside particular websites (you should all know about site: searches on search engines; for example, site:sqlservercentral.com backup encryption limits results to that site), in Books Online, and in code. Most IDEs make this easy, and trust me, once you start to do this, you'll never go scrolling around again.

At least not if you want to be efficient.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.9MB) podcast or subscribe to the feed at iTunes and Mevio.


A Buggy Release

I definitely believe in a DevOps process, though a thoughtful, incremental one. I think this is the best way to develop software, whether you release every day or every year. Yes, you can implement DevOps and release once a year. You just end up tracking, testing, communicating, and being ready for that once-a-year release. Of course, I bet you don't release once a year, since I'm sure you'll patch the system at least once.

One of the core principles of DevOps is to use automation where you can. Remove humans from the process and ensure that moving software from one machine to another is repeatable. Communicate, test, and then alter your process to work better. This requires humans to monitor the process and examine how it works, but they shouldn't be involved in deployments beyond approving them. It's too easy for an individual to make a mistake.

However, DevOps isn't a panacea for building better software. Witness the issues at Knight Capital, which went from having $364mm in assets to losing $460mm in 45 minutes, mostly because of a problem deployment in which an engineer didn't deploy code to all the servers in the farm. Certainly a clean deployment to every system might have prevented this, but the reuse of old flags in code is problematic, as is leaving old code around that can still be executed.

In addition to moving to a DevOps mindset, I’d also say that you should be sure that you follow good software development practices as well. Clean out old code (including database code) and be very, very careful about reusing any part of your software, including flags, for a new purpose. It’s far, far too easy to make mistakes here.
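
To make the flag problem concrete, here's a hypothetical sketch (the table, column, and values are all invented for illustration) of how reusing a flag goes wrong:

-- Old cleanup job, never removed: Status = 9 once meant "legacy import".
DELETE FROM dbo.Orders WHERE Status = 9;

-- Years later, a new feature reuses 9 to mean "expedited". Every expedited
-- order now matches the old cleanup logic and can be silently deleted.
UPDATE dbo.Orders
SET    Status = 9
WHERE  OrderID = 1234;  -- hypothetical order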

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5MB) podcast or subscribe to the feed at iTunes and Mevio.


Slimmer SQL Saturdays

I've had this idea for some time. It's been bugging me, but I haven't felt pressured, or inspired, to write about it. That changed a bit with the announcement of the rules change for SQL Saturdays, as well as a few discussions I had with people.

I dislike the announcement for a few reasons, which you can read if you like.

Of What does a SQL Saturday Consist?

Like my grammar? I was desperate to avoid the preposition at the end. However, I’m wandering away from what I want to write about.

What does a SQL Saturday consist of these days? Here's a high-level view of the tasks that have taken place at many of the events I've attended in the last few years.

  • Get a venue
  • Sell sponsors some advertising/exhibition package
  • Perhaps put together pre-con events (get more space and manage logistics)
  • Get speakers to come speak
  • Market the event
  • Put together a speaker dinner
  • Have some sort of speaker gift
  • Order shirts for speakers and/or volunteers
  • Contract for some sort of lunch
  • Get volunteers to help with logistics
  • Buy some prizes
  • Teach people in sessions
  • Have a closing and give stuff away
  • Get people to come to an after party, often providing some food and drinks

Now, what do we need to have a SQL Saturday?

  • Get a venue
  • Get a few speakers
  • Market the event
  • Teach people in sessions

The core of this event is teaching people. It's about reaching many people who may never go to the PASS Summit, or DevConnections, or SQL Intersection. It's about helping people get inspired in this business, learn something, and meet colleagues. It's about building a community.

Everything else is gravy.

Let's be clear. I have no issue with events that can put on a fancy dinner for speakers, or give nice gifts, or cater lunch, or give away an Xbox to an attendee. I have no issue with any of that, and if you can, do it. However, please don't feel you have to. Please don't compete and build an event based on the budget you want, or have seen elsewhere. Please don't make this event about stuff.

Remember the attendees.

Money is Tight

Money is tight with product vendors, and it’s tight with recruiters, and it’s tight with venue sponsors. I have been worried about this for a few years now, as I think vendors have gotten caught up in the SQL Saturday excitement as much as organizers, speakers, and attendees. There has been lots of easy money, and as a result, I’ve seen competition and effort put into making a SQL Saturday more like a mini-Summit than a free, grassroots, one-day event.

I am not complaining about anyone’s efforts. Many of you have done amazing jobs putting on events, and I’ve been impressed, and also quite humbled by your efforts. You really care and do a great job.

However.

The easy money isn’t sustainable, and I want many of you to be cognizant of that. I want you to ensure that you run another event next year, based on the budget you have, not the budget you want.

Most of all, I want you to feel successful with your event because attendees came and learned something. Attendees don’t care about the extras that much, at least not overall. Someone will always complain. Extras are nice, but people come to learn and interact. They come because they’re hungry to be better at their jobs.

Please don’t expect that you need to raise, or spend, $10,000, or even $5,000. Raise what you need, but build the event that suits you. I won’t think any less of you if I need to buy my own breakfast, coffee (yes, that’s a separate meal), dinner, and drinks.

I promise you, the attendees won’t think less of you either if there’s a room, a speaker, and a slice of pizza for lunch.


Where’s the Unit Testing?

I've been a proponent of unit testing, especially for databases. I've given presentations on the topic and advocate the use of techniques to verify your code works, especially over time as complexity grows and new developers change code, potentially introducing regressions. I'm not the only one: I saw a piece recently from Ben Taylor asking where unit testing has gone.

I was disappointed that few people responded to the piece, and I think this is the same response that unit testing in front-end application software received a decade or two ago. Few people saw value in testing, preferring to assume developers would code well. Over time, and with some investment, quite a few people have seen the value of unit testing, though I'm not sure it's the majority yet. In building database software, we're still woefully behind, preferring ad hoc tests that are subject to human frailty (forgetfulness, mistakes in running tests, or not examining results closely).
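
For anyone who hasn't seen one, here's a minimal sketch of a database unit test using the open-source tSQLt framework (my choice for the example; no particular tool is prescribed here), testing a hypothetical dbo.GetOrderTotal function:

EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE PROCEDURE OrderTests.[test GetOrderTotal sums the line items]
AS
BEGIN
    -- Swap the real table for an empty fake so the test is isolated from data
    EXEC tSQLt.FakeTable 'dbo.OrderDetail';
    INSERT INTO dbo.OrderDetail (OrderID, Amount)
    VALUES (1, 10.00), (1, 15.00);

    DECLARE @actual money = dbo.GetOrderTotal(1);  -- hypothetical function under test

    EXEC tSQLt.AssertEquals @Expected = 25.00, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'OrderTests';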

I do know a few people who are customers of Redgate and use unit testing extensively in their database code. They definitely spend a lot of effort building unit tests, often having more test code than feature code, but they also have very low rates of complaints and bugs from users. I hope more people will publish details of their unit testing successes and failures, and I'd welcome more pieces at SQLServerCentral on either side of the issue.

For many people writing in-house applications, especially those installed in one location, perhaps a few bugs aren't a problem. Maybe the impact is low enough that training developers to write tests and making the investment isn't valuable enough. However, for those that have disparate external clients, or maybe install software in many locations, I bet that moving to a more thorough set of repeatable, reliable, non-trivial tests will improve your software quality.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and Mevio.


Deploying Row Level Security with SQL Compare

Row Level Security (RLS) is a great new feature in SQL Server 2016. It's been in Azure SQL Database for some time, but we now have it on premises. It's easy to set up and use, and worth a look if you have multi-tenant scenarios or need more granular security in your organization.
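
As a quick illustration (my sketch, not from the Redgate piece), RLS is just an inline predicate function bound to a table by a security policy. Here's a hypothetical tenant filter on an invented dbo.Orders table:

-- Predicate function: a row is visible only when its TenantID matches
-- the value stored in SESSION_CONTEXT for the current connection.
CREATE FUNCTION dbo.fn_TenantPredicate (@TenantID int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS int);
GO

-- Bind the predicate to the table; SQL Server now filters every query.
CREATE SECURITY POLICY dbo.TenantFilter
ADD FILTER PREDICATE dbo.fn_TenantPredicate(TenantID) ON dbo.Orders
WITH (STATE = ON);

Set the tenant with sp_set_session_context when the connection opens, and every query against dbo.Orders is filtered automatically.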

Developing this feature means some extra testing, and certainly ensuring that the code you send to production is the same as what you tested and verified in your dev and QA environments. I wrote a short piece over at the Redgate blog that shows how SQL Compare helps here, moving the changes from one environment to the next.

I really like this feature. It seems simple, and it is, but it’s a powerful, central way to actually implement the security features you want.


What’s Database Drift?

The first time I heard someone mention drift at Redgate, it made sense to me. After all, in the context of the schema of your database, I can see the code "drifting" away from what you expect. Someone noticed this was a problem when implementing continuous delivery, and DLM Dashboard was born.

I grew up on the water, and I learned that if you don’t anchor these things down, they move.


In a database, our code can be the same way. Actually, all code is like this, which is why most professional software developers learn to use a Version Control System (VCS). Code changes, and you want to have some way to anchor down the code you need to work with.

We don’t want to prevent changes to a database with some heavyweight process. I’ve worked in those places, and it’s an impediment to getting business done. On the other hand, we can’t have uncontrolled changes. I’ve been in those environments, and apart from the instability for the business, this creates bad relationships between technical people.

That's one reason we built DLM Dashboard at Redgate Software. We recognized that tracking, and being aware of, what's changed is important. It allows you to respond, and respond quickly if needed, but doesn't prevent changes.

More importantly, you can track down those items that might have drifted to a new location and feed the changes back to development. Even if they appear to be all spread out.

[Photo: Vietnamese fishing boats]

In Practice

How does this work? Well, download DLM Dashboard and get started. It's free, and each installation can monitor up to 50 databases.

Once that’s installed, you can add the various databases that make up your pipeline for monitoring. For example, I’ve got a SimpleTalk pipeline with four databases in my environments, as shown here:

[Screenshot: the DLM Dashboard pipeline view showing the four databases]

These serve the following purposes:

  • Integration – Gets an integrated build of the database from CI, with all developers' changes at the head of the development branch.
  • Testing – A particular version of Integration is deployed for additional testing.
  • Acceptance – A pre-production environment, where the database upgrade is practiced.
  • Production – The live, final environment for database changes.

I could have other databases in my pipeline (DR, training, beta, etc.), or even multiple databases at each stage. However, this is a fairly simple pipeline.

Now, let’s suppose I realize we have an issue in production. I need to change a stored procedure that’s got a bug. Someone forgot a WHERE clause in this procedure, and I need to make an emergency fix.

[Screenshot: the stored procedure that needs the fix]

Let me add code and recompile the procedure.

[Screenshot: the recompiled procedure, with SQL Prompt outlining the changed code in red]
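
As a hypothetical sketch of that kind of emergency fix (the procedure and table names here are invented; the real code is in the screenshot):

-- Restore the forgotten WHERE clause so the update no longer touches every row.
ALTER PROCEDURE dbo.UpdateOrderStatus
    @OrderID int,
    @Status  tinyint
AS
UPDATE dbo.Orders
SET    Status = @Status
WHERE  OrderID = @OrderID;  -- the missing filter
GO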

This is a new version of my procedure on production (note the red outline, thanks SQL Prompt). I have drift: my schema is not in the same state as it was. Production now has a red note, with the drift image.

[Screenshot: the Production stage in DLM Dashboard flagged with the drift icon]

This changed schema isn't one that DLM Dashboard recognizes. The version of each object, and of all the code, is stored and tracked. The previous version was 50.17, which is the version on Acceptance. If I click "Review", I can see the details. At the top, I see the old schema version, as well as a summary of what changed and by whom. I can name this schema if I want, and add comments about the changes.

[Screenshot: the Review page, showing the previous schema version and a summary of the change]

Scrolling down I can see the actual code that changed. Using the style that SQL Compare and other tools use, I see the additions to the code highlighted, showing what was in the previous version as well.

[Screenshot: the code diff, with the additions highlighted alongside the previous version]

At this point, I can copy this code, put it back into development, and run it through the same cycle as all my normal development changes, including the testing that might ensure my "hotfix" is actually the change that was needed.

Depending on my development process, this might be deployed as a new change, or it might be ignored when the next deployment takes place. Either way, my process should be able to handle this appropriately. If not, I need to better manage my development.

I think DLM Dashboard has a number of uses, but certainly the capture of changes to production, ensuring you’re aware of what changes, is a valuable one.


I hope you’ll download it today, since it’s free, as in beer, and it’s worth a test in your environment to see how it can help you.



Rename a Primary Key–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

Having system-named objects is one of those things people may debate, but as you move to more automated and scripted processes, they can cause you issues. In the case of a Primary Key (PK), if you do something like this:

CREATE TABLE OrderDetail
    (
      OrderID INT IDENTITY(1, 1)
                  PRIMARY KEY ,
      OrderDate DATETIME
    );

What you get is something like this:

[Screenshot: Object Explorer showing the system-generated name, PK__OrderDet__D3B9D30C7D677BB4]

When you compare that with the same table in another database, what’s the likelihood that you’ll have the PK named PK__OrderDet__D3B9D30C7D677BB4? Probably pretty low.

This means that if you are looking to deploy changes, and perhaps compare the deployment from one database to the next, you'll think you have different indexes. Most comparison tools will then want to change the index on your target server to match, which might mean dropping and recreating it, or choosing something that performs much worse.

What we want to do is get this named the same on all databases. In this case, the easiest thing to do is rename the constraint on all systems. This is easy to do with sp_rename, which is better than dropping and rebuilding the index.

I can issue an easy query to do this:

exec sp_rename 'PK__OrderDet__D3B9D30C7D677BB4', 'OrderDetail_PK';

When I do this, I see the object is renamed.

[Screenshot: Object Explorer showing the renamed constraint, OrderDetail_PK]

This table has over a million rows, and while that's not large, it does take time to rebuild the index. With a rename, the change is to the metadata and takes a split second.
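
If you want to find every system-named primary key that could use the same treatment, a query like this sketch against sys.key_constraints will list them:

-- List every primary key whose name was generated by SQL Server.
SELECT  OBJECT_NAME(kc.parent_object_id) AS TableName ,
        kc.name AS ConstraintName
FROM    sys.key_constraints AS kc
WHERE   kc.type = 'PK'
        AND kc.is_system_named = 1;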

SQLNewBlogger

These quick, easy administrative items are great to blog about. As an exercise, if this table had many millions of rows, how much longer would the index rebuild take?

This is a great topic for you to write about (or learn about) and show how you can better administer your SQL Server databases.


ReadyRoll and Octopus Deploy Webinar

I'll be hosting a webinar next week, Thursday, July 28, with Daniel Nolan (founder of ReadyRoll and fellow Redgate employee) and Damian Brady of Octopus Deploy.

We'll be talking deployment, with demos to show you how to work with ReadyRoll and then how to build a package to be automatically deployed through Octopus Deploy. Both are great products that I enjoy working with.

Register today if you’re interested in seeing how migration based development and deployment works with ReadyRoll and Octopus Deploy.
