Better ReadyRoll Script Naming

One of the things that I like about ReadyRoll is that the product automatically builds new scripts named in sequential order. This usually results in a set of scripts that look like this:

[Screenshot: a list of migration scripts with sequential, uninformative names]

As you can see, these scripts aren’t very intuitive. In fact, if you get lots of scripts, this starts to look fairly complex and confusing. What about something more like this:

[Screenshot: the same scripts renamed with descriptive names]

That’s easier to read and understand. I’d also have a better idea of what happens in each script. How can I do this? It’s easy.

Add an Object

First, let’s add an object in ReadyRoll. I’ll alter my Exams table to add a few columns. To keep this simple, imagine I want to add a modified date and a short description. I could do this in SSMS, but I’ll open the designer in VS. Here’s the table.

[Screenshot: the Exams table in the Visual Studio designer]

I’ll make my changes.

[Screenshot: the new columns added in the designer]

Now, I click the “Update” button in the upper left. When I do this, I get a Generate Script option. I could do other things, but I like to do this and see my script before applying it to the dev database.

[Screenshot: the Update dialog with the Generate Script option]

I click Generate, and I get the script. Notice that it’s named with a random number (after the 0004) on the right.

[Screenshot: the generated script in Solution Explorer]
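
The script itself is short. Essentially it’s something like this (a sketch; the column names are my guesses, since I only described them as a modified date and a short description):

ALTER TABLE dbo.Exams ADD
    ModifiedDate DATETIME NULL,
    ShortDescription VARCHAR(200) NULL;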

If I right click the script, I can do all the normal file operations.

[Screenshot: the right-click menu with the normal file operations]

Let’s give this a more descriptive name. It’s taken me a long time to get past my 8.3 filename days, but I’ve learned to take advantage of file names and make them descriptive. A few extra bytes in a name are cheap.

[Screenshot: the script renamed with a descriptive name]

That’s it.

ReadyRoll uses the characters in front of the underscore (_) to order scripts, so I don’t want to change those. I could, but in this case, I need script 4 to come after script 2 at the very least.

After the underscore, I can do whatever I like. Now I can see the changes being made to my database just by reading down the list of scripts, in the order things will occur. I always have the detail in the code, but at a high level, I can see the changes.
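
For illustration, a migration folder named this way might read like the following (hypothetical names, not my actual scripts):

0001_CreateExamsTable.sql
0002_AddExamIndexes.sql
0003_LoadExamLookupData.sql
0004_AlterExams_AddModifiedDateAndDescription.sql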

I’m sure if you adopt this technique, you’ll find that it’s much easier to manage scripts and track what’s happening to your database.

The Desktop Rebuild–Part 2

You can read part 1, but that’s more about the (initial) hardware changes. This one looks at getting moving on the new desktop. Note that since I changed the MB/CPU/boot disk, I decided to reinstall Windows on a new disk as well, to ensure I wouldn’t have more issues.

Having a new Windows desktop is a bit disconcerting. After all, many of us techies are used to loading more and more programs and documents onto our systems. We have lots of history with our machines, and Windows hasn’t always made it easy to move things. OSX is better, but I can’t build a Mac, or at least, I don’t want to.

In this case, I still had my 2nd, 3rd, and 4th drives connected. They contained most of my documents, pictures, and application code. However, some things certainly needed to be installed.

The first thing I did with the new Windows 10 install was run Edge and go here:

[Screenshot: the Chocolatey home page in Microsoft Edge]

I used to go get Chrome or Firefox right away, but I have a better way now: Chocolatey. Once I’m on the site, I first open a Command Prompt as Administrator and run this:

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

Once that finishes, I can begin installing things. In my case, I have a list of packages I want to install in a folder on my Dropbox. However, I need to get Dropbox first.

choco install dropbox


Next comes Password Safe, which holds the credentials I’ll need for everything else:

choco install passwordsafe

Once that’s done, I log in (getting my password from my phone), and then I can run the rest of my installs, using Password Safe to get my passwords.

choco install googlechrome
choco install greenshot
choco install conemu
choco install firefox
choco install sublimetext3
choco install evernote

I have Chrome and Firefox linked to an account, so my bookmarks come right over when I log in. SQL Server has to be installed separately, as does Office from my O365 account. I prefer to install Visual Studio myself, but these command line installs get me up and running.

Fortunately, I keep a copy of the .iso files for those programs and can just run them.

I also needed to flash my BIOS, which was easy, though I had to download the newest file from ASRock and put it on a flash drive for a reboot. That went without a hitch, and I now have two-monitor support again.

[Photo: two monitors working again]

I still need to adjust my monitor and also get a new video card to support the third monitor, but that’s for another day.

Debuting Talks at the Summit

This is a series of posts on the PASS Summit and SQL Saturdays. I’m outlining some thoughts here, sometimes for the first time, but on topics that I think make better events. These are opinions and thoughts, not mandates or demands. I’ve written previously on choosing speakers.

One of the interesting items at the recent Speaker Selection Town Hall from PASS was a question about sessions submitted to the Summit. The question was whether a session could have been given at a SQL Saturday (or other event) first, or if the Summit should be the first place the talk is given.

I’m torn on this.

On one hand, the Summit needs to attract people. This is a business, and original content goes a long way toward exciting people. I submit and speak at conferences that want this, and I’ve given sessions at the Summit that were the first time I’d ever presented the content in front of an audience.

On the other hand, we want good sessions. If I have one complaint about many Microsoft sessions, it’s that many speakers struggle to give a good talk because they’ve never practiced it. I appreciate that they’re busy, but I would love it if they had to present a couple of times internally for practice.

The same thing occurs for us. The two times I’ve presented brand new content at the Summit (or other events), it was very stressful. I also struggled with timing and flow. Even giving a talk once at a local user group, to 3 people, will help make a better session.

This isn’t perfect. I’ve seen speakers give the same talk 5 times and not improve or adjust their pace. My preference is to give speakers the benefit of the doubt that they’ll work to improve over time. If they don’t, give them that feedback.

I also see value in having great sessions repeated. I’ll pick on a couple people here. Aaron Bertrand has given T-SQL: Bad Habits & Best Practices at many events. I think I’ve seen it twice. It’s great, and worth seeing. DBA Mythbusters from Paul Randal is great. I know Paul adjusts and grows content over time, and this is one I could see being on the schedule every year. I’ll also say that almost anything Itzik Ben-Gan presents is worth having at every conference.

I don’t know what percentage of repeat sessions should be allowed, but I wouldn’t rule this out. I also think it’s fine to pick sessions presented at other events, if they make for a good event. One thing to keep in mind is that if you take advantage of every slot at the Summit, you’ll see 13 or 14 talks.

14/112 (or so)

You won’t see everything. I doubt you’ll watch everything, even if you’re super motivated and buy the content on USB.

The same thing occurs at SQL Saturdays. At one of the smaller events, Sioux Falls, there were 3 tracks, but only 4 time slots, so you could see 4 of 11 talks at best.

No one sees all content.

While you might want Grant to have something new at the Summit that you haven’t seen at SQL Saturday Boston, there are lots and lots of other people that would like to see Grant present, even if it’s the same talk from SQL Saturday Boston.

If I could wave the magic wand, what I’d want PASS to do is this:

  • Have 80% new content (content not given at a previous Summit), but content that has been practiced somewhere. In fact, I’d ask that many speakers schedule a user group talk (or a SQL Saturday) prior to the Summit to work out any bugs. This might mean some crunches for user groups in the Aug-Oct range, but that’s fine.
  • That means there would be 20% content that’s being repeated. I think that’s OK, if there are great sessions worth repeating. Certainly I think some of the talks that speakers have given could be updated a bit, but many are worth giving again.
  • Of course, that’s just an idea, and one that doesn’t mean we need hard numbers. Maybe the 20% is a hard ceiling, but the percentages can vary; just discuss why. As the committee makes decisions, keep notes and comments and then drop them in a post.
  • Mostly, I’d like PASS to continue to disclose data, discuss this more, and adjust over time.

The Desktop Rebuild–Part 1

I’ve had an aging desktop for some time. I originally bought it from Glenn Berry (b | t), who I think gets a new machine every quarter or two. Seriously, I think there is a computer in every room of his house, and he has a lot of rooms.

In any case, I had an i7-930, 24GB of RAM, and 4 drives with about 3TB of storage. However, the CPU was aging, and I’d get temp alarms when doing heavy video processing. In the last 4-5 years, I’d replaced the power supply and CPU cooler, but still had issues. The Windows 10 upgrade added more, with some weird freezing and loss of the Start menu. I didn’t have those issues on my laptops (both W10, one new, one upgraded), so I decided it was time to rebuild when a simple Skype call had temp alarms going off.

Glenn is my hardware guy. If I have a question, I ask him, mostly because I don’t care about cool hardware and just want things to work. Glenn recommended some of the items below, and I drove down to the local Microcenter after my Skype call finished.

[Photo: the parts from Microcenter]

I purchased:

  • ASRock Z170 Extreme6 motherboard (recommended) – $160
  • i7-6700k CPU (recommended) – $290
  • CoolerMaster fan – $35
  • 32GB Corsair Ballistic RAM – $150

I know I could find better prices, but I wanted this done quickly. I also needed a new video card, but I delayed that for now. I wish I hadn’t.

After a few hours of work, and a day filled with family commitments, I sat down with the parts.

[Photo: the new parts, ready for assembly]

I made sure my Crashplan backup was up to date and my cloud services were synced. I’ve tried to work this way in order to become productive quickly on new machines. I also had a new SSD that I’d been traveling with, but I repurposed it as the new boot drive. The old one was 256GB and this one is 1TB, so I won’t be in danger of running out of space, something I was close to doing on the old machine.

I disconnected everything and pulled out the old (dusty) motherboard.

[Photo: the old (dusty) motherboard]

It’s worked well, but it’s aging. It’s also somewhat slow compared to newer technology, though more speed was (hopefully) a bonus of the rebuild, not a requirement.

I then installed the new motherboard, which wasn’t too bad. The hardest part was getting the CPU cooler installed. Balancing the four arms and screwing things in while the large cooler wants to slide on thermal compound was a challenge. Fortunately I managed to get it secure without too much grease on my hands.

I connected all my drives back up, using some of the new SATA 3 cables that came with the motherboard. I didn’t bother connecting the front-panel audio, since those connections are covered by the video cards. I added back both video cards and then plugged in the basics: monitor, power, USB keyboard/mouse. I left the case uncovered, since I’ve rarely had things work perfectly on the first assembly.

[Photo: the new motherboard installed, drives and basics connected]

Crossing my fingers, I booted.

Or didn’t. I got lights, which I haven’t always gotten. There’s a debug LED on the board, and it showed a 99 code. Looking that up, I found it indicates a PCIe issue. There are only 3 slots and I had 2 video cards, so I pulled one. Same issue.

Pulled the second and it booted.

I had made a bootable USB (borrowed from my son) and added the W10 install ISO. Things booted up, and I installed Windows.

[Photo: Windows 10 installing]

I only had the onboard video, which worked, and I’m glad I had another machine, since I needed to download the LAN drivers to connect to the network and the video drivers to get beyond a basic 1024 resolution.

Total time to get Windows back up: around 90 minutes, though most of that was wrestling with the CPU cooler.

The next post will look at getting productive again, with newer hardware.

Selecting Speakers for the Summit

This is a series of posts on the PASS Summit and SQL Saturdays. I’m outlining some thoughts here, sometimes for the first time, but on topics that I think make better events. These are opinions and thoughts, not mandates or demands.

I attended, well, listened to, the PASS Speaker Selection Q&A this week. In the spirit of being constructive, I want to thank the board members and committee members for attending. I thought the event was well run and informative. I appreciate the disclosure.

Tldr: I think the volunteers did a good job, with minor room for improvement. Disclose more numbers and guidelines, and involve the community more.

Hearing about the process is always good, as this affects the community that attends the event, both speakers and attendees. I’ve been involved before, and should do so again to see how things might be different. I appreciate the amount of work involved, and I’d like to thank Allen White, Lance Harra, and Mindy Curnutt for their work. I know many others are involved; thanks for your time as well.

  • 840 abstracts.
  • 255 speakers (3+/speaker)
  • 112 session slots (+10 pre-cons, 4 lightning talks)
  • 5000+ comments from reviewers.

That last number is quite interesting. That’s an average of 5.9 comments per submission, much higher than when I was involved. What’s more, Lance and Mindy reviewed all comments, ensuring none were inappropriate before being returned to speakers. While I abhor censorship, this is something that needs to be done. Some (very few) volunteers will communicate their thoughts poorly, or have a bad day. Reviewing and redacting (or asking for rewording) makes sense.

There was also a note that Lance and Mindy tracked the timing of comments to ensure volunteers spent time actually thinking about the sessions/speakers and not racing through with a quick CTRL+C, CTRL+V. That is impressive.

I asked a question on first time speakers. Not to beat up the committee, but because I think the health of the community depends on regularly getting new speakers, both first timers at the Summit and new pre-con presenters. Note that I don’t want someone to give their first talk or their first pre-con at the Summit. They need to practice elsewhere, but we need a rotation of new speakers.

Allen mentioned that he looked for 20-25% new speakers, though that guideline isn’t published or listed. I know that the number depends on the submissions, but having guidelines and then giving reasons for deviating is what I’d expect. Give numbers and then react. Adjust and explain why. That’s what many of us do with data and our jobs.

For the record, I think 25% is high. Maybe 20%, especially as we have more and more speakers. I would like to see a goal of at least 10% each year. If you do more, fine, but explain a bit. Not a lot. A bit.

Allen has a nice post with some numbers and the process explanation. Thanks, Allen.

However.

More Data

I want to see more numbers. That’s our job. If PASS doesn’t have time, recruit a few volunteers. I’d bet there are a few people in the community that would love to play with the data. Let us see the pre-cons broken out. Topics, results of surveys. There’s even this cool tool that lets you build some visualizations with data.

Or better yet, put out a feed, or if that’s too hard, some numbers in an Excel sheet. A flat file. Let people do some analysis and learn. You might learn something as well.

Honestly, while I think comments deserve some privacy protection, I’m not sure ratings do. I’d like to see those released. The committee is always going to upset people, and there will always be complaints. However, the more information you provide, the more we can comment, yes, also complain, but also you might learn something about how the community thinks about your choices.

After all, you’re supposed to consider the community.

I’m not asking you to only listen to the community. After all, the Summit is a business, and it’s fair to say that the 12th PowerBI session rated a 4.2 got bumped to get in a 3rd Extended Events talk rated 3.5.

Disclose more and analyze more. React and adapt. If you don’t want complaints, resign. That’s part of the job.

Enough on that.

Community Vote

I do think that it’s impossible to build a perfect schedule. Looking at last year’s numbers is inherently flawed. After all, how highly were the AlwaysEncrypted sessions rated last year? Or the first time speakers? It’s a guideline, and I’d like you to publish those numbers to show what you’re considering, but I also think the community deserves a vote.

I asked the question and Allen responded that not a lot of people voted and there were issues. I dislike terms like “issues” without specifics.

However, I’m not asking for sessions to bypass the program committee. I think there is good work done here. What I’m saying is that of the 112 sessions, when you’ve picked 100, put the last 10 or 12 up for a vote. Take the sessions ranked 100-124 and drop them in a survey. Let community people, those that wish to, vote. After all, of your entire membership, how many vote for the Board? Probably a similar number to what you’d get here.

You take popularity votes from last year’s numbers already. Take a few more.
If it’s a lot of work, explain that. Maybe we can help you slim things down.

Break the Bubble

Mostly what I see from organizations, and what I’ve done, is that groups start to take on responsibility, feel the weight of that responsibility, and work in a bubble. Over time, they become more disconnected from the people they make decisions over.

This is a hard process, and a lot of work. I know you try and mostly succeed. You may be overthinking this, and over-controlling it. Let go a little, take more feedback, and look to improve the process.

Continually.

That’s what we do in this business, or at least, what we should do.

Getting the Previous Row Value before SQL Server 2012

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I ran across a post where someone was trying to access the previous value in a table for some criteria. This is a common issue, and one that’s very easily solved in SQL Server 2012+ with the windowing functions.
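
For reference, in 2012+ a single LAG call handles this. A quick sketch, using the table I’ll create below:

SELECT myid
     , myvalue
     , PrevValue = LAG(myvalue, 1, 0) OVER (ORDER BY myid, myvalue)
  FROM MyID;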

However, what about in SQL Server 2008 R2-?

NOTE: I’m solving this quickly, the way many people do, but this is an inefficient solution. I’ll show that in another post. However, I’m showing how you can describe and solve a problem here. If you need to solve this, look for a temp table solution (or find a later post from me).

Setup

It’s pretty easy. Let’s get some data together. I’ll use values with big jumps between them, since that makes it easier to see the differences.

CREATE TABLE MyID
( myid INT
, myvalue INT
);
GO
INSERT MyID
VALUES (1, 10 ),
        (1, 20 ),
        (2, 400),
        (2, 500),
        (2, 600),
        (3, 8000),
        (3, 9000),
        (3, 10000),
        (3, 11000);

Now, what I want is something that returns the previous row, assuming we’re ordering by the ID and value. If there is no previous value, let’s return a zero. Essentially what we want is something like this:

SELECT MyID
     , MyValue
     , MyPrevValue = ISNULL( x, 0)
  FROM …

That’s the pseudocode. Obviously I need to fill in blanks. However, let’s build a test. Why? Well, I can then see my result data, and I can re-run the test over and over as I experiment with the query. It’s not hard, I promise.

EXEC tsqlt.NewTestClass
  @ClassName = N'WindowTests';
GO
CREATE PROCEDURE [WindowTests].[test check the previous row value for MyID]
AS
BEGIN
-- assemble
CREATE TABLE #expected (id INT, myvalue INT, PrevValue INT);

INSERT #expected
VALUES (1, 10, 0),
       (1, 20, 10),
       (2, 400, 20),
       (2, 500, 400),
       (2, 600, 500),
       (3, 8000, 600),
       (3, 9000, 8000),
       (3, 10000, 9000),
       (3, 11000, 10000);

SELECT *
  INTO #actual
  FROM #expected AS e
 WHERE 1 = 0;

-- act
INSERT #actual
EXEC dummyquery;

-- assert
EXEC tsqlt.AssertEqualsTable
  @Expected = N'#expected'
, @Actual = N'#actual'
, @FailMsg = N'Incorrect query';
END

If you examine the test, you’ll see that I create a table, insert the results I expect, and then call some procedure. I compare the results of the procedure with the table I built.

That’s it. A simple test, but I’ll let the computer compare the result sets rather than trusting my eyes.

Last thing, I’ll build my dummy procedure, which can look like this:

CREATE PROCEDURE dummyquery
-- alter procedure dummyquery
AS
BEGIN
SELECT MyID
     , MyValue
     , PrevValue = MyValue
  FROM MyID;
END

Now I have the outline of what I need. If I run the test now, I’ll get this:

[Screenshot: the failing test output]

The test output tells me the test has failed, showing the values in the #expected table (with a <) and the values from my query in the #actual table (with a >).

Now I can debug and work on this.

Solving the Problem

First, I want to order the data and get a number that counts the order. The ROW_NUMBER function does this, and it’s available in SQL Server 2005+. I won’t go into SQL 2000- solutions because, well, they’re more complex, and there should be very few SQL 2000 instances left coming up with new problems.

I can do this with this code:

[Screenshot: the ROW_NUMBER query and its results]
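
If the image is hard to read, the query is roughly this (my reconstruction; the MyKey alias is the counter I refer to below):

SELECT MyKey = ROW_NUMBER() OVER (ORDER BY myid, myvalue)
     , myid
     , myvalue
  FROM MyID;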

Note that I have a sequential counter that lets me order every row with an index. Now, I can access the previous row, since I know the MyKey value will be one less than the current row.

With this in mind, let’s turn this query into a CTE (removing the previous value column). Outside of the CTE, I’m going to join the CTE to itself. I’ll use a LEFT JOIN, since not every row will have a previous row. In fact, the first row won’t.

The join condition, which you can play with, will be on the outer table’s ID being one less than the first table’s key. You could reverse the math as well, but that’s up to you.

[Screenshot: the CTE with the self-join]
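
My reconstruction of that query looks something like this (the CTE name and aliases are my own):

WITH cteRows AS
( SELECT MyKey = ROW_NUMBER() OVER (ORDER BY myid, myvalue)
       , myid
       , myvalue
    FROM MyID
)
SELECT c.myid
     , c.myvalue
     , PrevValue = p.myvalue
  FROM cteRows AS c
  LEFT JOIN cteRows AS p
    ON p.MyKey = c.MyKey - 1;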

One last issue. Add an ISNULL to the previous value to return a 0 if there is no match. Now, let’s run the test.

[Screenshot: the passing test output]

SQLNewBlogger

This was a slightly longer post, where I tried to explain how I setup the problem and solved it. I included a test, which didn’t add much coding time. In fact, the writing took far longer than the coding itself.

This is the type of problem I’d encourage you to solve on your blog. If you want to repeat this, look for a solution with temp tables, as the CTE incurs a lot of reads. This isn’t really what you’d like to do in production code.

Not a Great Search

Tldr: A bit of a rant. I’m not spending a bunch of time crafting this.

I have mixed feelings about searching for information on Microsoft sites. It kind of works, kind of doesn’t. I know that there are resource constraints to upgrading their sites, but I’m more and more annoyed.

Perhaps this is one reason I don’t use Bing. Maybe, not sure. I need to think about this.

Anyway, I often go to the main BOL page online. It’s supposed to be more up to date than the local help, and I don’t have to use some procedure to get to it. I went today and entered a search, getting these results.

[Screenshot: MSDN search results for “t-sql wildcard characters”]

Check out the second one. That might be interesting, and it’s in the Support KB. I click on it and see this:

[Screenshot: the KB article “How to use wildcard characters in the SQL-SELECT statement in Visual FoxPro”]

That. Is. Not. What. I. Want. (at least not these days)

Even surfacing the entire title would have helped, or maybe tagging this as a VFP article in addition to a KB one.

As I delve into more technologies, I find acronyms crossing over, so I want to look for a term in SQL Server, or in C#, and right now I have to search and then try to limit things with a click on the left. Except that I don’t always get a good way to limit searches.

[Screenshot: search results with limited filtering options]

That’s not a great example, but that’s the term I thought of today that crosses over.

Can we please get some way to default a search to the areas we’re working in? I can click something if I’m working with SQL and need to search wider, but I seem to get (more and more) results from Sharepoint, C#, etc that aren’t relevant.

Ugh, I know I’m ranting a bit, but search becomes more and more important in the sea of data, especially with things like Azure rev’ing every quarter. Give me a way to search current docs, in my area, easily, by default.

Starting ReadyRoll with an Existing DB

I’ve been working with ReadyRoll quite a bit the last few months, and wanted to start putting out some information. I’m looking to do a longer series, but here’s a quick post on getting started.

When you install ReadyRoll, you get a new project type under SQL Server.

[Screenshot: the ReadyRoll project type in the New Project dialog]

This is a database project that will track all changes to your database as a series of migration scripts. Let’s get started. I’ll choose this and give it a name.

Once I do this, I get the ReadyRoll window with some information. What I really want to do here is point this to my existing database. Right now it’s looking at a LocalDB instance.

[Screenshot: the ReadyRoll window, pointing at a LocalDB instance]

I’ll click the configure link and get the Connection String dialog. This is the database that mirrors one of the SSC databases, with lots of objects in it.

[Screenshot: the Connection String dialog]

Once I’ve done this, the project will use this database. The Shadow database will also be on this instance.

I need to make one other change and set the correct version in the Project Settings.

[Screenshot: the version setting in Project Settings]

Now, back to the ReadyRoll window. I’ve configured the DB connection, so I can ignore that. What I want to do is import my database, getting a baseline script.

[Screenshot: the import option in the ReadyRoll window]

I click that and ReadyRoll creates a Shadow database (since one doesn’t exist) and begins to import objects.

[Screenshot: ReadyRoll creating the Shadow database and importing objects]

Once that’s done, I get a list.

[Screenshot: the list of imported objects]

And a migration script.

[Screenshot: the baseline migration script]

This is my baseline, starting script. This contains all the objects that exist in the database at this point. These are also added to the Schema-Model, but I can ignore those. I’m going to work on the database itself.

The ReadyRoll widget notes this was an import, and there’s no need to deploy anything since these aren’t changes, but just the state of the db.

[Screenshot: the ReadyRoll window noting the import, with nothing to deploy]

I can see this if I do a Visual Studio build. Note the message in the middle: No migrations pending deployment. The changes in script 1 (001_20160614-1249_sjones.sql) are already in the db.

[Screenshot: build output with “No migrations pending deployment”]

Now I can make changes to my database and get new scripts. Add a table in the designer?

[Screenshot: a new table in the designer]

When I click Update (and Generate Script), I get a new migration script.

[Screenshot: the new migration script]
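
The script itself is plain T-SQL. A hypothetical example of what a generated migration for a new table might contain (ReadyRoll’s actual output includes additional metadata comments):

CREATE TABLE dbo.Results
    (
      ResultID INT NOT NULL IDENTITY(1, 1)
          CONSTRAINT PK_Results PRIMARY KEY,
      TestDate DATETIME NOT NULL,
      Score INT NOT NULL
    );
GO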

Note that I just generated this script. I’ll write more about this process later, but for now, I’ll click Deploy Project to execute this against my database. When I do that, VS does a build, and one migration is executed.

[Screenshot: build output showing one migration executed]

Add a procedure?

[Screenshot: a new procedure in the editor]

Generate a new migration script.

[Screenshot: the migration script for the procedure]
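
Again hypothetically, the migration for a procedure is just the CREATE (or ALTER) statement:

CREATE PROCEDURE dbo.GetResults
    @TestDate DATETIME
AS
BEGIN
SELECT ResultID
     , TestDate
     , Score
  FROM dbo.Results
 WHERE TestDate >= @TestDate;
END
GO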

And so it goes. I can work with my database in VS and get new scripts. I can also do an import if someone else makes changes to the database from their own machine with VS, SSMS, SQLCMD, isql, etc. The import will generate another migration script that gets added.

This is a really basic look at ReadyRoll, but it will get you started. I’ll tackle more topics and different ways of working with database development in another series of posts.

LAST_VALUE–The Basics of Framing

I did some work 3-4 years ago learning about the windowing functions, and I enjoyed them so much that I built a few presentations on them. In learning about them, and trying to understand them, I found some challenges, and it took some experimentation to actually understand how the functions work on small data sets.

I noticed last week that SQLServerCentral had re-run a great piece from Kathi Kellenberger on LAST_VALUE, which is worth the read. There’s a lot in there to understand, so I thought I’d break things down a bit.

Framing

The important thing to understand with window functions is that there is a frame at any point in time when the data is being scanned or processed. I’m not sure what the best term to use is.

Let’s look at the same data set Kathi used. For simplicity, I’ll use a few images of her dataset, but I’ll examine the SalesOrderID. I think that can be easier than looking at the amounts.

Here’s the base dataset for two customers, separated by CustomerID and ordered by the OrderDate. I’ve included amount, but it’s really not important.

[Screenshot: the base dataset for two customers]

Now, if I do something like query for LAST_VALUE with a partition of CustomerID and ordered by OrderDate, I get this set. The partition divides the set up into the two customer sets. Without an ORDER BY, these sets would exist as the red set and blue set, but in no particular order. The ORDER BY functions as it does in any query, guaranteeing the same order every time.
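
The query behind this set would be something like the following sketch (I’m assuming the AdventureWorks table and columns from Kathi’s article):

SELECT CustomerID
     , OrderDate
     , SalesOrderID
     , TotalDue
     , LastOrderID = LAST_VALUE(SalesOrderID)
           OVER (PARTITION BY CustomerID ORDER BY OrderDate)
  FROM Sales.SalesOrderHeader;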

[Screenshot: the result set, partitioned into red and blue customer sets]

Now, let’s look at the framing of the partition. I have a few choices, but at any point, I have the current row. So my processing looks like this, with the arrow representing the current row.

[Screenshot: the current row arrow on the first row of the first partition]

The next row is this one:

[Screenshot: the current row arrow on the second row]

Then this one (the last one for this customer):

[Screenshot: the current row arrow on the last row for the first customer]

Then we move to the next customer.

[Screenshot: the current row arrow moved to the second customer]

When I look at any row, if I use “current row” in my framing, then I’m looking at, and including, the current row. The rest of my frame depends on what else I have. I could have UNBOUNDED PRECEDING or UNBOUNDED FOLLOWING in there.

If I used UNBOUNDED PRECEDING and CURRENT ROW, I’d have this frame, in green, for the first row. It’s slightly offset to show the difference.

[Screenshot: the UNBOUNDED PRECEDING to CURRENT ROW frame on the first row]

However, if I had CURRENT ROW and UNBOUNDED FOLLOWING, I’d have this frame (in green).

[Screenshot: the CURRENT ROW to UNBOUNDED FOLLOWING frame]

In this last case, the frame is the entire partition.
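
In T-SQL, those two frames are written out in the OVER clause like this:

LAST_VALUE(SalesOrderID) OVER (PARTITION BY CustomerID ORDER BY OrderDate
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)

LAST_VALUE(SalesOrderID) OVER (PARTITION BY CustomerID ORDER BY OrderDate
    ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)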

What’s the last value? In the first case, the last row of that frame is the current SalesOrderID (43793). That’s the only row in the frame. In the second frame, the last value is 57418, the last row in the frame and in the partition.

What if we move to the next row? Let’s look at both frames. First, UNBOUNDED PRECEDING and CURRENT ROW.

[Screenshot: the UNBOUNDED PRECEDING to CURRENT ROW frame on the second row]

Now the frame is the first two rows. In this case, the last value is again the current row (51522). Below, we switch to CURRENT ROW and UNBOUNDED FOLLOWING.

[Screenshot: the CURRENT ROW to UNBOUNDED FOLLOWING frame on the second row]

Now the frame is just the last two rows of the partition and the last value is the same (57418).

There’s a lot more to the window functions, and I certainly would recommend either Kathi’s book (Expert T-SQL Window Functions in SQL Server) or Itzik’s book (Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions). Either one will help. We’ve also got some good articles at SQLServerCentral on windowing functions.

The RGE

I first heard this little acronym from Grant Fritchey (b | t). He used it when talking about backups and restores, and I like it. However, I realize that I’ve never actually noted what it is, so here’s a short blog to do so today.

An RGE is a Resume Generating Event. This is usually when you make a mistake so egregious that you’ll be packing up your personal effects and exiting the building. If it’s really bad, such as releasing financial or other confidential information, you might be escorted out and someone else packs up your things. I’ve seen it happen, and it will shake you. Don’t do this.

We talk about forgetting backups, writing bad code, or botching some important task we often perform as causing an RGE. In my experience, that doesn’t happen too often. Companies usually have a fairly high tolerance for mistakes.

However, that tolerance is usually extended only once. Don’t make the same mistake again. I’d also note that some managers can be very short tempered, and a single, large issue might be an RGE in their eyes.

I don’t usually worry about causing an RGE, but I keep the acronym in mind. Especially when I do something that could affect the core parts of my organization’s business.