The Desktop Rebuild–Part 1

I’ve had an aging desktop for some time. I originally bought it from Glenn Berry (b | t), who I think gets a new machine every quarter or two. Seriously, I think there is a computer in every room of his house, and he has a lot of rooms.

In any case, I had an i7-930, 24GB of RAM, and 4 drives with about 3TB of storage. However, the CPU was aging and I’d get temperature alarms when doing heavy video processing. In the last 4-5 years, I’d replaced the power supply and CPU cooler, but still had issues. The Windows 10 upgrade added more, with some weird freezing and loss of the Start menu. I didn’t have any of those issues on my laptops (both Windows 10, one new, one upgraded), so I decided it was time to rebuild when a simple Skype call had temp alarms going off.

Glenn is my hardware guy. If I have a question, I ask him, mostly because I don’t care about cool hardware and just want things to work. Glenn recommended these items (some of them), and I drove down to the local Microcenter after my Skype call finished.

Photo Jun 23, 5 39 33 PM

I purchased:

  • ASRock Z170 Extreme6 motherboard (recommended) – $160
  • i7-6700k CPU (recommended) – $290
  • CoolerMaster fan – $35
  • 32GB Corsair Ballistic RAM – $150

I know I could find better prices, but I wanted this done quickly. I also needed a new video card, but I delayed that for now. I wish I hadn’t.

After a few hours of work, and a day filled with family commitments, I sat down with the parts.

Photo Jun 23, 5 42 30 PM

I made sure my CrashPlan backup was up to date and my cloud services were synced; I’ve learned to do this so I can become productive quickly on a new machine. I also had a new SSD that I’d been traveling with, but I repurposed it as the new boot drive. The old one was 256GB and this one is 1TB, so I won’t be in danger of running out of space, something I was close to doing on the old machine.

I disconnected everything and pulled out the old (dusty) motherboard.

Photo Jun 23, 5 53 11 PM

It’s worked well, but it’s aging, and it’s somewhat slow compared to newer technology, so any speed boost from the rebuild was (hopefully) a bonus, not the requirement.

I then installed the new motherboard, which wasn’t too bad. The hardest part was getting the CPU cooler installed. Balancing the four arms and screwing things in while the large cooler wants to slide on thermal compound was a challenge. Fortunately I managed to get it secure without too much grease on my hands.

I reconnected all my drives, using some of the new SATA 3 cables that came with the motherboard. I didn’t bother connecting the front-panel audio headers, since those connections are covered by the video cards. I added back both video cards and then plugged in the basics: monitor, power, USB keyboard/mouse. I left things open rather than buttoning everything up, since I’ve rarely had a build work perfectly on the first assembly.

Photo Jun 23, 7 27 13 PM

Crossing my fingers, I booted.

Or didn’t. I got lights, which I haven’t always gotten. There’s a debug LED on the board, and it showed a 99 code. Looking that up, I found it indicates a PCIe issue. There are only three slots, and I had two video cards, so I pulled one. Same issue.

Pulled the second and it booted.

I had made a bootable USB drive (borrowed from my son) with the Windows 10 install ISO added. It booted up and I installed Windows.

Photo Jun 23, 7 31 21 PM

I only had the onboard video, which worked, and I’m glad I had another machine since I needed to download the LAN drivers to connect to the network and video drivers to get beyond a basic 1024 resolution.

Total time to get Windows back up was around 90 minutes, though most of that was spent wrestling with the CPU cooler.

The next post will look at getting productive again, with newer hardware.

Have You Designed a Database from Scratch?

One of the things I try to regularly promote is the #SQLNewBlogger project from Ed Leighton-Dick (and the #SQLNewBlogger hashtag on Twitter). I keep a column open scanning for the tag and try to respond to those that post. I also write a series of posts under that tag, mainly beginner-type posts, that showcase things I think are good, basic topics that I’d want a DBA or developer to know about SQL Server. These are exactly the types of posts that you write for your next resume (or next hiring manager).

I recently saw someone begin their blogging journey with a new post about designing a database. What struck me was that this person, with a lot of experience, noted they had never designed a database from scratch. I’m sure most of you have worked with a database, and have even modified various objects inside a schema. Certainly you’ve changed the code of stored procedures or functions. I’m even sure many of you have modified tables, adding columns where you need them. After all, that’s a favorite first technique for many people that need a place to store data.

Probably the majority of you have built tables, at least temporary tables. I’m sure you spent a few minutes thinking about the columns, datatypes, etc. I’d also hope that most of you have added indexes and constraints to tables. Those are certainly skills that more people should understand better, especially the developers that work on software powered by SQL Server. Above all, please learn how to index OLTP tables.

However, have you designed a database from scratch? Not a standalone database, but a database that actually has an API or application built against it. Have you tried to use the database and found places where your modeling was lacking? Did others complain about your design?

I know that lots of school coursework requires people to design parts of a database, but what about a full, complete database, one that fulfills the requirements to actually manage a set of data well? I’ve done a few, though admittedly with input and help from other DBAs and developers. I am better for that, and I think that a good design requires some interaction and input from others.

With that in mind, whether you’ve built one or not, what about tackling this as a project for your own blog? Certainly there are some good problem sets out there, but I’d actually give you a new one if you’d like. The SQL Saturday site for PASS has a database behind it. How would you design a database, given the requirements that you need to support that site, and actually generate the XML feed for the Guidebook app? Maybe that would be a fun, multi-part series that looks at your decisions for the various entities required. You’d learn something and get lots of feedback.
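To make the exercise concrete, here’s a purely illustrative sketch of a few core entities to seed a first post. These are guesses, not the actual SQL Saturday schema:

CREATE TABLE dbo.Event
(
    EventID   INT IDENTITY(1,1) CONSTRAINT PK_Event PRIMARY KEY,
    EventName VARCHAR(200) NOT NULL,
    EventDate DATE NOT NULL,
    Venue     VARCHAR(200) NULL
);

CREATE TABLE dbo.Speaker
(
    SpeakerID   INT IDENTITY(1,1) CONSTRAINT PK_Speaker PRIMARY KEY,
    SpeakerName VARCHAR(100) NOT NULL,
    Bio         VARCHAR(MAX) NULL
);

CREATE TABLE dbo.Session
(
    SessionID INT IDENTITY(1,1) CONSTRAINT PK_Session PRIMARY KEY,
    EventID   INT NOT NULL CONSTRAINT FK_Session_Event REFERENCES dbo.Event (EventID),
    SpeakerID INT NOT NULL CONSTRAINT FK_Session_Speaker REFERENCES dbo.Speaker (SpeakerID),
    Title     VARCHAR(200) NOT NULL,
    Abstract  VARCHAR(MAX) NULL
);

How would you handle schedules, rooms, multiple speakers per session, or the fields the Guidebook XML feed needs? Those decisions are the interesting part, and the part worth blogging about.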

If you’d like to publish a series here at SQLServerCentral, let us know. We’d love to have some design series on databases that back applications, especially if you have a side project with an application.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Selecting Speakers for the Summit

This is a series of posts on the PASS Summit and SQL Saturdays. I’m outlining some thoughts here, sometimes for the first time, but on topics that I think make better events. These are opinions and thoughts, not mandates or demands.

I attended, well, listened to, the PASS Speaker Selection Q&A this week. In the spirit of being constructive, I want to thank the board members and committee members for attending. I thought the event was well run and informative. I appreciate the disclosure.

Tldr: I think the volunteers did a good job, with minor room for improvement. Disclose more numbers and guidelines, and involve the community more.

Hearing about the process is always good, as it affects the whole community around the event, both speakers and attendees. I’ve been involved before, and should do so again, to see how things might be different. I appreciate the amount of work involved, and would like to thank Allen White, Lance Harra, and Mindy Curnutt for their work. I know many others are involved, and thanks for your time as well.

  • 840 abstracts.
  • 255 speakers (3+ abstracts per speaker)
  • 112 session slots (+10 pre-cons, 4 lightning talks)
  • 5000+ comments from reviewers.

That last number is quite interesting. That’s an average of 5.9 comments per submission, much higher than when I was involved. What’s more, Lance and Mindy reviewed all comments, ensuring none were inappropriate to return to speakers. While I abhor censorship, this is something that needs to be done. Some (very few) volunteers will poorly communicate their thoughts, or have a bad day. Reviewing and redacting (or asking for rewording) makes sense.

There was also a note that Lance and Mindy tracked the timing of comments to ensure volunteers spent time actually thinking about the sessions/speakers and not racing through with a quick CTRL+C, CTRL+V. That is impressive.

I asked a question about first-time speakers. Not to beat up the committee, but because I think the health of the community depends on regularly getting new speakers, both first-timers at the Summit and new pre-con presenters. Note that I don’t want someone to give their very first talk or their first pre-con at the Summit. They need to practice elsewhere, but we need a rotation of new speakers.

Allen mentioned that he looked for 20-25% new speakers, though that guideline isn’t published or listed. I know that the number depends on the submissions, but having guidelines and then giving reasons for deviating is what I’d expect. Give numbers and then react. Adjust and explain why. That’s what many of us do with data and our jobs.

For the record, I think 25% is high. Maybe 20%, especially as we have more and more speakers. I would like to see a goal of at least 10% each year. If you do more, fine, but explain a bit. Not a lot. A bit.

Allen has a nice post with some numbers and the process explanation. Thanks, Allen.

However.

More Data

I want to see more numbers. That’s our job. If PASS doesn’t have time, recruit a few volunteers. I’d bet there are a few people in the community that would love to play with data. Let us see the pre-cons broken out: topics, results of surveys. There’s even this cool tool that lets you build some visualizations with data.

Or better yet, put out a feed, or if that’s too hard, some numbers in an Excel sheet. A flat file. Let people do some analysis and learn. You might learn something as well.

Honestly, while I think comments deserve some privacy protection, I’m not sure ratings do. I’d like to see those released. The committee is always going to upset people, and there will always be complaints. However, the more information you provide, the more we can comment, yes, also complain, but also you might learn something about how the community thinks about your choices.

After all, you’re supposed to consider the community.

I’m not asking you to only listen to the community. After all, the Summit is a business, and it’s fair if the 12th Power BI session rated 4.2 gets bumped to make room for a 3rd Extended Events talk rated 3.5.

Disclose more and analyze more. React and adapt. If you don’t want complaints, resign. That’s part of the job.

Enough on that.

Community Vote

I do think that it’s impossible to build a perfect schedule. Looking at last year’s numbers is inherently flawed. After all, how highly were the AlwaysEncrypted sessions rated last year? Or the first time speakers? It’s a guideline, and I’d like you to publish those numbers to show what you’re considering, but I also think the community deserves a vote.

I asked the question and Allen responded that not a lot of people voted and there were issues. I dislike terms like “issues” without specifics.

However, I’m not asking for sessions to bypass the program committee. I think there is good work done here. What I’m saying is that of the 112 sessions, once you’ve chosen 100, put the last 10 or 12 up for a vote. Take the sessions ranked 100-124 and drop them in a survey. Let community members, those that wish to, vote. After all, of your entire membership, how many vote for the Board? Probably a similar number to what you’d get here.

You take popularity votes from last year’s numbers already. Take a few more.
If it’s a lot of work, explain that. Maybe we can help you slim things down.

Break the Bubble

Mostly what I see from organizations, and what I’ve done, is that groups start to take on responsibility, feel the weight of that responsibility, and work in a bubble. Over time, they become more disconnected from the people they make decisions over.

This is a hard process, and a lot of work. I know you try and mostly succeed. You may be overthinking this, and over-controlling it. Let go a little, take more feedback, and look to improve the process.

Continually.

That’s what we do in this business, or at least, what we should do.

Getting the Previous Row Value before SQL Server 2012

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I ran across a post where someone was trying to access the previous value in a table for some criteria. This is a common issue, and one that’s very easily solved in SQL Server 2012+ with the windowing functions.
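For reference, on SQL Server 2012 and later this is a one-liner with LAG. Here’s a quick sketch, using the sample table built in the Setup section below:

SELECT myid
     , myvalue
     , MyPrevValue = ISNULL(LAG(myvalue) OVER (ORDER BY myid, myvalue), 0)
  FROM MyID;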

However, what about in SQL Server 2008 R2 and earlier?

NOTE: I’m solving this quickly, the way many people do, but this is an inefficient solution. I’ll show that in another post. However, I’m showing how you can describe and solve a problem here. If you need to solve this, look for a temp table solution (or find a later post from me).

Setup

It’s pretty easy. Let’s get some data together. I’ll use big, distinct values since that makes the differences easier to see.

CREATE TABLE MyID
( myid INT
, myvalue INT
);
GO
INSERT MyID
VALUES (1, 10 ),
        (1, 20 ),
        (2, 400),
        (2, 500),
        (2, 600),
        (3, 8000),
        (3, 9000),
        (3, 10000),
        (3, 11000);

Now, what I want is something that returns the previous row, assuming we’re ordering by the ID and value. If there is no previous value, let’s return a zero. Essentially what we want is something like this:

select MyID
     , MyValue
     , MyPrevValue = ISNULL( x, 0)
from …

That’s the pseudocode. Obviously I need to fill in blanks. However, let’s build a test. Why? Well, I can then see my result data, and I can re-run the test over and over as I experiment with the query. It’s not hard, I promise.

EXEC tsqlt.NewTestClass
  @ClassName = N'WindowTests';
GO

CREATE PROCEDURE [WindowTests].[test check the previous row value for MyID]
AS
BEGIN
-- assemble
CREATE TABLE #expected (id INT, myvalue INT, PrevValue INT);

INSERT #expected
VALUES (1, 10    , 0    ),
       (1, 20    , 10   ),
       (2, 400   , 20   ),
       (2, 500   , 400  ),
       (2, 600   , 500  ),
       (3, 8000  , 600  ),
       (3, 9000  , 8000 ),
       (3, 10000 , 9000 ),
       (3, 11000 , 10000);

SELECT *
  INTO #actual
  FROM #expected AS e
 WHERE 1 = 0;

-- act
INSERT #actual
EXEC dummyquery;

-- assert
EXEC tsqlt.AssertEqualsTable
  @Expected = N'#expected'
, @Actual = N'#actual'
, @FailMsg = N'Incorrect query';
END

If you examine the test, you’ll see that I create a table, insert the results I expect, and then call some procedure. I compare the results of the procedure with the table I built.

That’s it. A simple test, but I’ll let the computer compare the result sets rather than trusting my eyes.

Last thing, I’ll build my dummy procedure, which can look like this:

CREATE PROCEDURE dummyquery
-- alter procedure dummyquery
AS
BEGIN
  select MyID
       , MyValue
       , PrevValue = MyValue
    from MyID;
END

Now I have the outline of what I need. If I run the test now, I’ll get this:

2016-06-07 10_18_23-Photos

The test output tells me it has failed, the values in the #expected table (with a <), and the values from my query in the #actual table (with a >).

Now I can debug and work on this.

Solving the Problem

First, I want to order the data and get a number that counts the order. The ROW_NUMBER function does this, and it’s available in SQL Server 2005+. I won’t go into SQL 2000 and earlier solutions because, well, they’re more complex and there should be very few SQL 2000 instances left coming up with new problems.

I can do this with this code:

2016-06-07 10_21_40-Photos
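In text form, a query along these lines does the job:

SELECT MyKey = ROW_NUMBER() OVER (ORDER BY myid, myvalue)
     , myid
     , myvalue
  FROM MyID;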

Note that I have a sequential counter that lets me order every row with an index. Now, I can access the previous row, since I know the MyKey value will be one less than the current row.

With this in mind, let’s turn this into a CTE (dropping the placeholder previous value for now). Outside of the CTE, I’m going to join the CTE to itself. I’ll use a LEFT JOIN since not every row will have a previous row. In fact, the first row won’t.

The join condition, which you can play with, will be on the outer table’s ID being one less than the first table’s key. You could reverse the math as well, but that’s up to you.

2016-06-07 10_29_37-Photos
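In text form, the altered dummyquery looks roughly like this (the ISNULL comes next):

ALTER PROCEDURE dummyquery
AS
BEGIN
  WITH cteRows AS
  (
    SELECT MyKey = ROW_NUMBER() OVER (ORDER BY myid, myvalue)
         , myid
         , myvalue
      FROM MyID
  )
  SELECT c.myid
       , c.myvalue
       , PrevValue = p.myvalue
    FROM cteRows AS c
         LEFT JOIN cteRows AS p
           ON p.MyKey = c.MyKey - 1;
END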

One last issue. Add an ISNULL to the previous value to return a 0 if there is no match. Now, let’s run the test.

2016-06-07 10_31_58-Photos

SQLNewBlogger

This was a slightly longer post, where I tried to explain how I setup the problem and solved it. I included a test, which didn’t add much coding time. In fact, the writing took far longer than the coding itself.

This is the type of problem I’d encourage you to solve on your blog. If you want to repeat this, look for a solution with temp tables, as the CTE incurs a lot of reads. This isn’t really what you’d like to do in production code.

Why Use R Services?

I ran across a blog post from the very talented Joe Sack recently, who I hope makes a few more minutes for me the next time we’re in the same town together. 😉

The post is a look at some of the customers that are using R Services in SQL Server 2016. As you might expect, there are highlights from customers that show dramatic improvement in performance. Going from 200 valuations/hour to 12,000/minute and taking a process from 2 days to 1 hour. I’m sure not all operations improved that much, but I bet most had some improvement. R Services is a big improvement in the way data is analyzed with the R language.
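If you haven’t touched R Services yet, the basic mechanism is sp_execute_external_script, which runs R inside SQL Server (after enabling external scripts) and passes a result set in and out. A trivial, illustrative call, not taken from Joe’s post, looks like this:

EXEC sp_execute_external_script
     @language = N'R'
   , @script = N'OutputDataSet <- data.frame(avg_val = mean(InputDataSet$val))'
   , @input_data_1 = N'SELECT CAST(v AS FLOAT) AS val FROM (VALUES (1),(2),(3),(4)) AS t(v)'
WITH RESULT SETS ((avg_val FLOAT));

The customer scenarios in the post wrap far more interesting models around that same pattern.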

What I really like, however, is that the piece includes some of the gotchas customers experienced, with links on how one might go about fixing the issues. There are also hints on visualizations and performance tuning options. I like looks at a technology that include details on what works well and what doesn’t. This is a good overview that I hope gets more customers interested in using R Services.

But I hope that we see deeper pieces that can give technical guidance on specific scenarios. Which models scale well and which don’t? Which options in some analysis improved (or hurt) performance? There are plenty of questions that are best answered with specific examples rather than general advice. With all the customers and data Microsoft gathers, especially for companies that might use R Services in Azure, I’d expect that we could get some useful, detailed examples of how this (and other) technologies actually work in the real world. I’d like to see this guidance, not just with R Services, but with other technologies as well. Certainly providing some sample data sets and code that really performs well (or tuning options) is what most of us want to see.

Of course, I’d like to see more of these stories and details at SQLServerCentral as well, so if you are solving problems and want to publish something, drop us a note. We’d love articles from the real world, whether on SQL Server 2016, or any prior version. The more reference problems and solutions we have, the more people learn to code better.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 3.1MB) podcast or subscribe to the feed at iTunes and LibSyn.

Not a Great Search

Tldr: A bit of a rant. I’m not spending a bunch of time crafting this.

I have mixed feelings about searching for information on Microsoft sites. It kind of works, kind of doesn’t. I know that there are resource constraints to upgrading their sites, but I’m more and more annoyed.

Perhaps this is one reason I don’t use Bing. Maybe, not sure. I need to think about this.

Anyway, I go to the main BOL page online often. It’s supposed to be more updated than the local help, and I don’t have to use some procedure to get to it. I went today and entered in a search, getting these results.

2016-06-22 13_47_36-t-sql wildcard characters - MSDN Search

Check out the second one. That might be interesting, and it’s in the Support KB. I click on it and see this:

2016-06-22 13_53_40-How to use wildcard characters in the SQL-SELECT statement in Visual FoxPro

That. Is. Not. What. I. Want. (at least not these days)

Even surfacing the entire title would help, or maybe tagging this as a VFP article in addition to a KB one.

As I delve into more technologies, I find acronyms crossing over, so I want to look for a term in SQL Server, or in C#, and right now I have to search and then try to limit things with a click on the left. Except that I don’t always get a good way to limit searches.

2016-06-22 13_55_37-Settings

That’s not a great example, but that’s the term I thought of today that crosses over.

Can we please get some way to default a search to the areas we’re working in? I can click something if I’m working with SQL and need to search wider, but I seem to get (more and more) results from Sharepoint, C#, etc that aren’t relevant.

Ugh, I know I’m ranting a bit, but search becomes more and more important in the sea of data, especially with things like Azure rev’ing every quarter. Give me a way to search current docs, in my area, easily, by default.

Starting ReadyRoll with an Existing DB

I’ve been working with ReadyRoll quite a bit the last few months, and wanted to start putting out some information. I’m looking to do a longer series, but here’s a quick post on getting started.

When you install ReadyRoll, you get a new project type under SQL Server.

2016-06-14 12_27_14-Photos

This is a database project that will track all changes to your database as a series of migration scripts. Let’s get started. I’ll choose this and give it a name.

Once I do this, I get the ReadyRoll window with some information. What I really want to do here is point this to my existing database. Right now it’s looking at a LocalDB instance.

2016-06-14 12_28_07-Photos

I’ll click the configure link and get the Connection String dialog. This is the database that mirrors one of the SSC databases, with lots of objects in it.

2016-06-14 12_29_38-Photos

Once I’ve done this, the project will use this database. The Shadow database will also be on this instance.

I need to make one other change and set the correct version in the Project Settings.

2016-06-14 12_48_29-Photos

Now, back to the ReadyRoll window. I’ve configured the DB connection, so I can ignore that. What I want to do is import my database, getting a baseline script.

2016-06-14 12_49_32-Movies & TV

I click that and ReadyRoll creates a Shadow database (since one doesn’t exist) and begins to import objects.

2016-06-14 12_49_39-Photos

Once that’s done, I get a list.

2016-06-14 12_49_48-Photos

And a migration script.

2016-06-14 12_49_58-Movies & TV

This is my baseline, starting script. This contains all the objects that exist in the database at this point. These are also added to the Schema-Model, but I can ignore those. I’m going to work on the database itself.

The ReadyRoll widget notes this was an import, and there’s no need to deploy anything since these aren’t changes, but just the state of the db.

2016-06-14 12_52_09-Movies & TV

I can see this if I do a Visual Studio build. Note the message in the middle: No migrations pending deployment. The changes in script 1 (001_20160614-1249_sjones.sql) are already in the db.

2016-06-14 12_52_50-Photos

Now I can make changes to my database and get new scripts. Add a table in the designer?

2016-06-14 12_54_55-Photos

When I click Update (and Generate Script), I get a new migration script.

2016-06-14 12_55_09-Movies & TV
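The generated migration is just plain T-SQL. For a simple new table it’s roughly along these lines (an illustrative sketch with a made-up table, not ReadyRoll’s exact output):

CREATE TABLE dbo.SampleTable
(
    SampleTableID INT NOT NULL IDENTITY(1,1) CONSTRAINT PK_SampleTable PRIMARY KEY,
    SomeValue     VARCHAR(50) NULL
);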

Note that I just generated this script. I’ll write more about this process later, but for now, I’ll click Deploy Project to execute this against my database. When I do that, VS does a build, and one migration is executed.

2016-06-14 12_56_06-Photos

Add a procedure?

2016-06-14 12_57_13-Photos

Generate a new migration script.

2016-06-14 12_57_35-Movies & TV

And so it goes. I can work with my database in VS and get new scripts. I can also do an import if someone else makes changes to the database from their own machine with VS, SSMS, SQLCMD, isql, etc. The import will generate another migration script that gets added.

This is a really basic look at ReadyRoll, but it will get you started. I’ll tackle more topics and different ways of working with database development in another series of posts.

A Beautiful Google Mind

I read a quote a few years ago that really resonated with me. The quote is attributed to Jeffrey Hammerbacher, an early Facebook employee that left to found Cloudera, a data analytics company. The quote went like this:

“The best minds of my generation are thinking about how to make people click ads. That sucks.”

That actually dovetailed with something I’d heard in the mid 2000s, when someone lamented that the best technical minds in computer science weren’t working to further our industry or even just produce amazing software. Many of them had left the industry to work inside financial institutions, manipulating our stock markets and money systems to make millions.

I can’t blame anyone for choosing to engage in a business that will reward them. That’s the beauty of capitalism. I think you can go too far, but I’d hope that most people, while pursuing their success, would look beyond just money. At least at some point. I would like to believe that most of us find there to be things at least as, if not more so, important than money.

Larry Page founded Google, and is one of the richest people in the world. I ran across an article on his beautiful mind, which looks at the ways in which Mr. Page is trying to change the world. Perhaps he has an eye on profit, but my impression is that he also wants to make life better for more and more people around the globe. His focus happens to be ways technology might help us, and I’m hoping he succeeds, much as I admire how Bill and Melinda Gates are trying to improve health around the world. Those aren’t my passions, but I’m glad someone is driven by them.

Many of us aren’t financially secure for life or able to fund whatever endeavor comes to mind. However, we can make a difference in the world. We can be examples that show there is something beyond making money at work. We can also volunteer or donate some of our time to help others. We can all be a Larry Page in some way, impacting the world in a small way to make it better for everyone.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 2.6MB) podcast or subscribe to the feed at iTunes and LibSyn.

LAST_VALUE–The Basics of Framing

I did some work 3-4 years ago learning about the windowing functions, and I enjoyed them so much I built a few presentations on them. In learning about them, and trying to understand them, I found some challenges, and it took some experimentation to actually understand how the functions work on small data sets.

I noticed last week that SQLServerCentral had re-run a great piece from Kathi Kellenberger on LAST_VALUE, which is worth the read. There’s a lot in there to understand, so I thought I’d break things down a bit.

Framing

The important thing to understand with window functions is that there is a frame at any point in time when the data is being scanned or processed. I’m not sure what the best term to use is.

Let’s look at the same data set Kathi used. For simplicity, I’ll use a few images of her dataset, but I’ll examine the SalesOrderID. I think that can be easier than looking at the amounts.

Here’s the base dataset for two customers, separated by CustomerID and ordered by the OrderDate. I’ve included amount, but it’s really not important.

2016-06-06 13_38_55-Phone

Now, if I do something like query for LAST_VALUE with a partition of CustomerID and ordered by OrderDate, I get this set. The partition divides the set up into the two customer sets. Without an ORDER BY, these sets would exist as the red set and blue set, but in no particular order. The ORDER BY functions as it does in any query, guaranteeing the same order every time.

2016-06-06 13_46_36-Movies & TV
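In T-SQL, that query is along these lines (I’m assuming the AdventureWorks Sales.SalesOrderHeader table from Kathi’s article; no explicit frame yet):

SELECT CustomerID
     , OrderDate
     , SalesOrderID
     , TotalDue
     , LastOrder = LAST_VALUE(SalesOrderID) OVER
                     (PARTITION BY CustomerID ORDER BY OrderDate)
  FROM Sales.SalesOrderHeader;

Without an explicit frame, a default one is applied, which is exactly why LAST_VALUE surprises people. That’s where framing comes in.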

Now, let’s look at the framing of the partition. I have a few choices, but at any point, I have the current row. So my processing looks like this, with the arrow representing the current row.

2016-06-06 13_49_22-Movies & TV

The next row is this one:

2016-06-06 13_49_33-Movies & TV

Then this one (the last one for this customer)

2016-06-06 13_49_44-Movies & TV

Then we move to the next customer.

2016-06-06 13_49_54-Movies & TV

When I look at any row, if I use “current row” in my framing, then I’m looking at, and including, the current row. The rest of my frame depends on what else I have. I could have UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING in there.

If I used UNBOUNDED PRECEDING and CURRENT ROW, I’d have this frame, in green, for the first row. It’s slightly offset to show the difference.

2016-06-06 13_53_22-Movies & TV

However, if I had CURRENT ROW and UNBOUNDED FOLLOWING, I’d have this frame (in green).

2016-06-06 13_54_21-Movies & TV

In this last case, the frame is the entire partition.

What’s the last value? In the first case, the last part of that frame is the current SalesOrderID (43793). That’s the only row in the frame. In the second frame, the last one is 57418, the last row in the frame, and partition.

What if we move to the next row? Let’s look at both frames. First, UNBOUNDED PRECEDING and CURRENT ROW.

2016-06-06 13_56_16-Movies & TV

Now the frame is the first two rows. In this case, the last value is again the current row (51522). Below, we switch to CURRENT ROW and UNBOUNDED FOLLOWING.

2016-06-06 13_56_29-Movies & TV

Now the frame is just the last two rows of the partition and the last value is the same (57418).
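Here are both frames side by side in one query, so you can compare the results row by row (again assuming Sales.SalesOrderHeader; the CustomerID values are placeholders for the two customers in the screenshots):

SELECT CustomerID
     , OrderDate
     , SalesOrderID
     , LastValue_UnboundedPreceding = LAST_VALUE(SalesOrderID) OVER
           (PARTITION BY CustomerID ORDER BY OrderDate
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
     , LastValue_UnboundedFollowing = LAST_VALUE(SalesOrderID) OVER
           (PARTITION BY CustomerID ORDER BY OrderDate
            ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
  FROM Sales.SalesOrderHeader
 WHERE CustomerID IN (11019, 11020)  -- placeholder IDs
 ORDER BY CustomerID, OrderDate;

The first column always returns the current row’s SalesOrderID; the second always returns the last SalesOrderID in the partition.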

There’s a lot more to the window functions, and I certainly would recommend either Kathi’s book (Expert T-SQL Window Functions in SQL Server) or Itzik’s book (Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions). Either one will help. We’ve also got some good articles at SQLServerCentral on windowing functions.

Updates During Outages

Slack went down on June 10, 2016. I noticed, which is funny, since I don’t necessarily use the tool every day. I tend to still communicate with email, but more and more people like using Slack at Redgate, so I do pop over there. It’s also a good way to drop a quick note and perhaps get a quick reply. In this case I’d asked a group to do something and hadn’t heard back. Since my request didn’t generate a ticket, I didn’t want to send another email, which could result in more delays if someone isn’t processing email. However, that’s not important.

What was interesting is my Slack web page didn’t connect, and when I pinged @slackhq on Twitter, they directed me to status.slack.com. That’s where they post updates. That site was also down, which somewhat defeats the purpose of having a separate location for updates.

I’ve experienced this a few times, where someone (sometimes me) has built an update or notification mechanism that is dependent in some way on the actual service we’re updating. Often this has been because the same HTTP server is being used, but sometimes it’s because the same database instance is used to allow non-technical people to post updates. In all those cases, at some point the update mechanism has broken.

I’ve learned to actually separate my update broadcast mechanism from the production server. We’ve done this in a few ways. I’ve had includes of a simple text file in web applications, in addition to a static page that can be served from a web server. I’ve learned to use a separate physical host that can be moved to the proper IP address in the event that our firewall or load balancers don’t work. The key, I’ve learned, is separation. Have a separate resource that can manage a simple message back to users. Perhaps even a small database that can respond to queries with a “we’re down” reply.

Downtime is never good for users, and rarely are people pleased with being unable to access their system, but good communication goes a long way to soothing the hurt feelings. Most of us accept that systems go down and problems occur. What we’d like is a short note (and updates) that let us know something is being done.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.