
Intermediate T-SQL: Window Functions

This is the third hour of a three-hour set of presentations on intermediate T-SQL techniques and features.

This presentation covers the window functions that are available in SQL Server. Some of these functions were introduced in SQL Server 2005 and some in SQL Server 2012. We start with a brief introduction to what a window is and how a partition works, and then look at ordering and framing of rows. Then we cover the following in demonstrations:

  • ROW_NUMBER()
  • RANK(), DENSE_RANK(), and NTILE()
  • The OVER clause and windowing in SQL Server 2012
  • Using PARTITION BY and ORDER BY in the OVER clause
  • Default framing with RANGE and how ROWS differs
  • The PERCENTILE_CONT() and PERCENTILE_DISC() functions

This covers a lot of information in an hour, but I try to use small data sets to explain what is happening with the window functions and how they can improve the performance, as well as simplify the code, for a number of common queries.
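To give a flavor of the demos, here is a minimal sketch combining several of these pieces; the dbo.Sales table and its columns are hypothetical:

```sql
-- Number each sale within its region and keep a running total.
-- The explicit ROWS frame avoids the default RANGE framing, which
-- treats ties in SaleDate as peers and is usually slower.
SELECT  Region,
        SaleDate,
        Amount,
        ROW_NUMBER() OVER (PARTITION BY Region ORDER BY SaleDate) AS RowNum,
        SUM(Amount)  OVER (PARTITION BY Region ORDER BY SaleDate
                           ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM    dbo.Sales;
```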

Slides: TSQL_Windowing.pptx

Code: TSQL_Windowing_code.zip

Intermediate T-SQL: PIVOT, UNPIVOT, and APPLY

This is the second of a series of talks I put together on intermediate T-SQL topics that many database developers might not be familiar with. In this talk, we cover the other join-like operators that exist in the FROM clause, beyond the INNER, OUTER, FULL, and CROSS joins. We break this hour into three main areas, covering:

  • PIVOT and crosstabs
  • UNPIVOT
  • APPLY

Most of the time is spent on the APPLY operator, which is arguably the most useful of the three. I show how APPLY behaves like an inner or outer join, how it can improve the performance of scalar function calls, how it can express queries that are difficult to write with INNER JOIN, and how we can use it to easily find query plans and code when performance tuning.
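For example, a top-N-per-group query that is clumsy with joins falls out naturally with APPLY. A minimal sketch, assuming hypothetical dbo.Customers and dbo.Orders tables:

```sql
-- Return the three most recent orders for each customer.
-- Swap CROSS APPLY for OUTER APPLY to keep customers with no
-- orders, just as an outer join would.
SELECT  c.CustomerID,
        c.CustomerName,
        o.OrderDate,
        o.TotalDue
FROM    dbo.Customers AS c
CROSS APPLY (SELECT TOP (3) OrderDate, TotalDue
             FROM   dbo.Orders AS o
             WHERE  o.CustomerID = c.CustomerID
             ORDER BY OrderDate DESC) AS o;
```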

The PIVOT operator is compared to the similar crosstab technique, and we see how to write the same query using either method. We also show the UNPIVOT operator, which rotates columns back into rows.
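A minimal sketch of the comparison, assuming a hypothetical dbo.OrderTotals(Yr, Quarter, Amt) table:

```sql
-- Crosstab: conditional aggregation with CASE expressions
SELECT  Yr,
        SUM(CASE WHEN Quarter = 1 THEN Amt ELSE 0 END) AS Q1,
        SUM(CASE WHEN Quarter = 2 THEN Amt ELSE 0 END) AS Q2
FROM    dbo.OrderTotals
GROUP BY Yr;

-- The same result written with PIVOT
SELECT  Yr, [1] AS Q1, [2] AS Q2
FROM    (SELECT Yr, Quarter, Amt FROM dbo.OrderTotals) AS src
PIVOT   (SUM(Amt) FOR Quarter IN ([1], [2])) AS p;
```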

Slides: TSQL_IntermediateQueries.pptx

Code: TSQLPIVOT_Unpivot_Apply.zip

Intermediate T-SQL: Writing Cleaner Code

This intermediate T-SQL session helps users learn to write better code by covering a few items that many database developers might not be aware of. In an hour, users will learn how to:

  • write and use CTEs to simplify complex queries
  • perform basic error handling, including the recommendation to use THROW in SQL Server 2012 and beyond (see the second sketch after this list)
  • use templates to store common code elements
  • use tricks in SSMS to speed up code development
  • understand what a tally table is and how to use it to perform a few common tasks (see the first sketch after this list)
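As a taste of the last item, here is a minimal tally-table sketch built entirely from CTEs (all object names are illustrative):

```sql
-- Build a 10,000-row tally (numbers) table on the fly, then use it
-- to generate a month of consecutive dates with no loop
WITH Ten(n) AS
     (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t(n)),
     Tally(n) AS
     (SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
      FROM Ten AS a CROSS JOIN Ten AS b CROSS JOIN Ten AS c CROSS JOIN Ten AS d)
SELECT DATEADD(DAY, n - 1, '20141001') AS CalendarDate
FROM   Tally
WHERE  n <= 31;
```

And a minimal error-handling sketch showing the SQL Server 2012+ THROW recommendation; the failing statement is just illustrative:

```sql
BEGIN TRY
    SELECT 1 / 0;   -- illustrative statement that will fail
END TRY
BEGIN CATCH
    -- log or clean up here, then re-raise with the original error
    -- number, severity, state, and line number intact
    THROW;
END CATCH;
```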

This is the beginning of a three-hour series I have on intermediate T-SQL code.

Slides: TSQL_WritingCleanerCode.pptx

Code: TSQL_CleanerCode.zip

Fun Data Analysis

The first part of this article is very true: “… correlation doesn’t always equal causality.” It’s a fun look at statistics and there’s a short video from a news station that contains this other great quote: “… numbers can be tricky.”

As data professionals, we are often tasked with helping business people understand the information hidden in data. We look to identify patterns and relationships in the data that our applications collect or our companies purchase. However it’s not as simple as it might appear.

This week, I’m wondering if you have some fun stories, or perhaps some scary stories, on conclusions that you or others have reached with data.

When have you, or someone around you, mistaken correlation for causation?

I know that most people in business don’t have extensive statistics backgrounds, and I’ve often seen people fooled by finding simple or obvious patterns in data that didn’t necessarily match reality. I’ve seen plenty of money lost because of bad decisions made by incorrect data interpretations.

I once worked in a restaurant where a new chef decided to use sales data from the previous year to predict how many meals would be served. Using a growth factor from sales over the previous year, he increased his purchasing as the summer season approached. A month into summer, we realized that we were wasting almost twice as much food as we had in the past. I have a similar story from a job where purchasing managers used a straight average of the previous three months, which left us woefully short on imported wood.

Let us know this week if you’ve seen mistakes made from naive interpretations of statistics.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

SQL in the City 2014 – Two countries in two weeks

I have had a fairly busy travel schedule the last few years. While I don’t love sitting on airplanes and being away from home, I do look forward to the events and seeing the amazing #SQLFamily and community we have for SQL Server. I’ve also learned a few tricks to make travel smoother, which reduces stress.

It’s a good thing since the SQL in the City 2014 event schedule is going to be short in duration, but long in miles. We have two days coming this fall: London and Seattle. We open in London on Fri, Oct 24, 2014, and a little over a week later we’ll be in Seattle on Mon, Nov 3, 2014. It’s a packed schedule that has me flying 6,000 miles to the UK before returning home and getting ready for a 1,300-mile trip up to the Pacific Northwest.

However I’m excited for both events. London draws a big crowd, with a variety of SQL Server professionals who are excited to learn new ideas, tips, tricks, and tools to help them in their professional work. We’ll once again be near the center of town, at St Grange, just off the St Pauls and Blackfriars Underground stations.

Seattle is a great town, and while I’m sure I’ll see some Summit attendees, I look forward to meeting lots of local SQL Server data professionals that won’t attend the Summit. I hope that if you’re in the area, you come to our event. After all, how can your boss say "no" to a free day of training? Especially if you’re not attending the Summit later that week. We’ll be at McCaw Hall, near the Space Needle.

It’s a short tour this year, but an exciting one. We’ll also continue to expand our SQL in the City seminars to other cities in the future, and hope we get the chance to meet more of you as we look to support, educate, and participate in this amazing SQL Server community.

Continuous Delivery for Windows?

I read a bit about the next version of Windows, which is coming in 2015 as Windows 10. I’m not sure how much I care about some of the changes coming in the OS, though having CTRL+V working in a command prompt is very welcome. However I did read this piece about the people testing the Technical Previews and was a little intrigued by one quote: “… Threshold testers … will have those features and fixes pushed automatically to them…”

Does that mean that Microsoft has re-engineered Windows to be integrated with a Continuous Delivery process? If so, then I think this is a good move. We’ve already seen SQL Server move to a cadence of new versions every two years and bimonthly patches to fix issues. Imagine that we could get patches even more often, as bugs are fixed.

Also imagine that problematic patches could be quickly rolled back and pulled if there are issues.

I think that’s one of the interesting things for me. There have been patches in the past which caused issues and were sometimes hard to remove. If new changes can be pushed out quickly, I’d hope they could be removed quickly. And with all the feedback that Microsoft gets from existing installations, I could even start to see custom patches built that are deployed to only certain configurations that are compatible with the patch.

Of course, that’s an ideal view. I suspect that we’ll still see overworked developers releasing patches that not only fix issues, but cause other problems, and at times, can’t be removed. At least we’ll probably get the patch to fix the patch, a little faster than in the past.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

Version Control for Databases

This is based on my Team Based Development with Version Control talk, which has evolved a bit.

Abstract

Getting your database under version control is ultimately a way of communicating changes and collaborating with team members, maintaining an audit trail and ensuring you always have a version to roll back to.

In this session, we’ll cover why this gives you and your team a crucial advantage in reducing the risk in your development processes. We’ll also discuss another door that your team opens by adopting solid version control: setting up the fundamentals of continuous integration. You will learn what build automation means as a first step towards continuous integration and the value it brings as part of your evolving deployment process. We’ll also discuss branching strategies and how to manage them for databases.

Goals

  • Understand the value of version/source control for databases
  • Learn the tools, standards, patterns and best practices needed to manage a database from source control
  • Identify the necessary flow within a team needed to develop a database with source control

This talk includes the following demos:

  • Move database DDL into a VCS manually
  • Link database DDL and lookup data with SQL Source Control
  • Make development changes and check them into the VCS
  • Set up a new developer database linked to the VCS
  • Flow changes back and forth between developers
  • Resolve conflicts between development changes made by two people
  • Branch code for a bug fix and link a database to the old branch for development work

Length: 45-75 minutes; shorter versions include fewer demos.

Slides: Get Database Under Source Control.pptx (Slideshare)

A T-SQL Code Testing Guide

Do we need a guide for code like this one? The linked piece is from a programmer at Google who lists code review items that should be tested for. I’m wondering if we might want something similar for T-SQL code.

I’ve given a few talks on how to get started with tSQLt, which is a great framework for unit testing T-SQL code. As I’ve learned more about it and experimented, I’ve been amazed by how flexible it is and how easy it makes setting up and running tests on T-SQL code. However I can only experiment with the way I code and the tests I think of, and I’m sure there are many other ways in which we can better verify that our queries work as expected.
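For anyone who hasn’t seen it, here is a minimal sketch of what a tSQLt test looks like (it assumes tSQLt is installed in the database; the function, class, and test names are all made up for illustration):

```sql
-- Hypothetical function under test
CREATE FUNCTION dbo.AddTwo (@i INT) RETURNS INT
AS BEGIN RETURN @i + 2; END;
GO
-- Create a test class (a schema that groups related tests)
EXEC tSQLt.NewTestClass 'MathTests';
GO
CREATE PROCEDURE MathTests.[test AddTwo adds two to its input]
AS
BEGIN
    -- Act
    DECLARE @actual INT = dbo.AddTwo(40);
    -- Assert
    EXEC tSQLt.AssertEquals @Expected = 42, @Actual = @actual;
END;
GO
-- Run every test in the class
EXEC tSQLt.Run 'MathTests';
```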

Testing has changed since I developed software in VB and C++. It seems that much thought has been given to building better unit tests that can not only catch bugs, but also ensure the code performs as we expect it to. There’s also the idea that we can run these tests in an automated fashion. That helps prevent some bugs when we first write code, but more often it ensures code continues to work as we refactor and enhance our applications. More and more, I think constant, continuous regression testing is at least as important as initial testing, if not more so.

I really hope that we improve our development processes and testing methodologies for SQL code. It seems to be an area where we can increase the speed at which we build better database applications and improve their quality at the same time. If any of you are formalizing the testing of your T-SQL code, we would be interested in publishing your thoughts and results for others to learn from. Please feel free to submit an article.

Steve Jones


The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and LibSyn.

Making Better Presentations–Practice

This is part of a series of tips for speakers on how to make your presentations better.

There’s one thing that makes a big difference for many speakers.

Practice

Practice makes perfect, but more importantly, practice develops familiarity. Everyone gets a little nervous on stage, and everyone struggles to memorize an entire talk. The best presenters learn to minimize the amount of thinking they need to do on stage and build a smooth flow of topics.

It’s important for you to practice your talks so that you become comfortable with the flow and ordering of your slides. I have typically done the following when I build a new presentation:

  • Practice sections as I create them – actually run through the words in my head.
  • Practice the entire talk out loud – I may do this a few times in my office.
  • Record myself giving the talk and watch for the flow – I rarely do this anymore, but I did it quite a few times when I was starting out as a speaker. Even now I’ll see recordings of myself at times and spot things in my style I want to improve or remove.
  • Run through the talk the night before, often in my head, quickly, but covering each slide and demo.

Above all, practice with your demos and code. Learn to move around the system easily and smoothly. Being able to select icons, hit shortcuts, select code, etc. makes your presentations smoother, but it’s also handy in your work. Having these tools quickly move in and out of your way makes you a more efficient worker.

The big thing I’d tell people to understand is how to switch to and from PowerPoint. Note that you don’t need to stop the presentation to switch: ALT+TAB to your other application and ALT+TAB back to PowerPoint. The running presentation appears as an app separate from PowerPoint. It will take you 30 seconds to figure out, but practice it.

Practice, practice, practice.

Remember, amateurs practice until they get it right. Professionals practice until they don’t get it wrong.

The Next Five Years

On one hand, the next five years don’t seem too far away. Five years: that’s a year more than most of us spent in high school or college. On the other hand, five years is a long time in the computer world. Five years ago, Azure first gave us the ability to create a database, as opposed to just a key-value store. SQL Server 2008 was the current version of the platform. SQL Saturday #26 was held the first weekend in October in Redmond. 8GB was a large flash drive, and the iPad hadn’t started the current tablet revolution.

The world of data has grown dramatically in five years. Our cell phones and other devices have drastically changed the amount of data that is collected and consumed. Advances in bandwidth have almost removed the need for physical media to move all but the largest data sets. If we look to the next five years, does anyone think they can accurately predict how much storage we’ll carry in our pockets or how fast we’ll expect to move data on a daily basis?

I ran across a prediction of surprising things in the next five years, and while I’m not sure they will all come true, I do think similar things will come to pass because of two factors: amazing increases in the computational power available to any individual, and tremendous amounts of data. That’s no great surprise, but I am excited. I can’t help but think that there will be so many opportunities for data professionals that most of us will be gainfully employed for decades.

I do think, however, that the mundane, easy jobs of administering individual instances, checking logs and backups, and setting security will be few and far between. Unless you know how to do those things for thousands of instances at a time.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and LibSyn.
