Rounding Challenges–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

Rounding is an interesting activity, and one that I think people sometimes don’t pay enough attention to. Recently I saw a problem that intrigued me and I spent a few minutes looking at the issues.

Let’s suppose you have data like this for pricing.

0.3

0.7

1.2

1.6

How would you round this data? If you use a simple ROUND(n,0) function, you get this:

0.0

1.0

1.0

2.0

Now, that’s fine, but what if you want this:

0.3

1.0

1.2

2.0

That’s a bit more challenging. I’ll leave the full solution out, but note that if I use ROUND(n, 1), I get this:

0.3

0.7

1.2

1.6

That’s not quite right either. ROUND() is faithfully using the number of decimal places I’ve given it, but what I actually want is conditional behavior: round up to the next whole number when the fractional part is .5 or higher, and otherwise leave the value alone rather than rounding it down. In other words, I need to limit when rounding happens. This could be done in a WHERE clause, with a CASE expression, or with a function (be really careful of scalar functions).

The important part is knowing that I really want this:

ROUND(0.3, 1)

ROUND(0.7, 0)

ROUND(1.2, 1)

ROUND(1.6, 0)

How I get these is up to me, and there are a few ways, but really I want to be sure that I understand how ROUND() works and then apply it appropriately for my situation.
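If you’d rather work out a solution on your own first, stop here. For anyone who wants something to compare against, here is one rough sketch of an approach (one of several; the table variable exists only to hold the sample values): use a CASE expression so that ROUND() is applied only when the fractional part is .5 or higher, and the original value is returned otherwise.

-- One possible approach (a sketch): round only when the fractional part is .5 or more
DECLARE @prices TABLE (price DECIMAL(4,1));
INSERT INTO @prices (price) VALUES (0.3), (0.7), (1.2), (1.6);

SELECT price,
       CASE WHEN price - FLOOR(price) >= 0.5
            THEN ROUND(price, 0)   -- .5 and up rounds to the next whole number
            ELSE price             -- below .5 is left alone
       END AS adjusted_price
FROM @prices;

This returns 0.3, 1.0, 1.2, and 2.0 for the sample data, which matches the results listed above.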

SQLNewBlogger

This is a quick one, literally about 6 minutes to write, and about a minute of that was playing with formatting. Understanding functions, and knowing how they affect data, is important, and writing about them is a good way to show your knowledge.

Also, make sure that you know how to solve something like this if you write about it. I’d encourage you to write the solution as well as remember it. Someone might ask you in an interview. ;)


Designing for the Public

I’m a big believer that better data and better software can help governments around the world operate more efficiently and work better for their citizens. I appreciate what Tim O’Reilly has said where he notes that we want governments to “specify less and do more.” Whatever your political leanings, I think most of us would like our governments to work better, whatever that might mean for each of us.

I ran across a short piece on the design principles of the GDS (Government Digital Service) in the UK. The details are inside a set of tutorials you get when you subscribe to Safari Books Online, but I did find a short list of their basic principles. The first seven are:

  • Digital by default
  • Putting users first
  • Learning from the journey
  • Building a network of trust
  • Moving barriers aside
  • Creating an environment for technology leaders to flourish
  • Don’t do everything yourself (you can’t)

There are more, but I think these are good design principles overall for anyone who builds digital services. These are agile ideas, DevOps ideas, Six Sigma ideas: the ways in which we can get better by learning and evolving as we move forward. Certainly governments struggle with the pace of the digital transformation of the world, and trying to solve problems in one large effort doesn’t work. We need the ability to debug and modify our systems on a regular basis, in both the public and private sectors.

The last item might be one you don’t think about in some private industries, but the idea certainly applies: you can’t provide everything for customers. Perhaps you need to include an API or integration method to let customers build on what you do, then learn from what your customers build and fold those features into your next iteration. Designing for a diverse audience is hard, and it certainly requires flexibility, but also the ability to make hard decisions and focus on the things you can do well, rather than trying to meet every goal and every need.

Hopefully we’ll see more governments doing this in the future.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.1MB) podcast or subscribe to the feed at iTunes and Mevio.


A Slower Job

I noticed a while back that my company, Red Gate Software, was looking for a support engineer in our CA office. It’s a good job, and while the salary is lower than what experienced SQL Server professionals might make elsewhere, there are some good benefits, not the least of which are a generous vacation allowance and the chance to take a sabbatical. At first, I suggested looking at recent graduates from local colleges as candidates, as well as junior people at user groups or SQL Saturdays, but then I had another thought.

Working as a support engineer can be an interesting job; you see lots of different types of problems with software. It can also be less stressful, since you work on lots of small issues without looming deadlines. With great benefits, it might be the kind of job an older technology worker would want, especially if it were a part-time position. Experienced, semi-retired professionals might really enjoy this job. I know it’s something that might intrigue me in 15 or 20 years.

QA certainly needs more quality workers, and with the sporadic nature of the job, I could see organizations benefiting from using flexible workers that might not even want or need a full time job. Perhaps having a larger group of people available on demand, as many customer support centers use, might be a good way to take advantage of skilled workers that are looking for extra money.

We have lots of positions in technology that require lots of experience, but aren’t necessarily full time engagements in many companies. I would expect that much like Remote DBA services, we will see more and more limited engagements in the future, either part time or contract based. Hopefully many of us will find both fulfillment and excitement throughout our careers, perhaps slowing down over time, but still being engaged and excited in technology.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and Mevio.


Remember the Alamo (with a SQL Saturday nearby)

This past weekend was the first SQL Saturday in San Antonio, TX. I was lucky enough to get the chance to go, and I was happy to do so. I like to support new events in new cities, and hopefully I can help keep them going in the future.

I went a day early. Traveling Friday during the day would have eaten up most of my day, so I got a late flight down Thursday night and spent Friday learning about SSIS from Tim Mitchell, as well as getting a little work done here and there.

Photo Aug 13, 1 19 02 PM

The facility for the SQL Saturday was Texas A&M University – San Antonio. It’s a small, but beautiful school. The entrance to the campus is above. Below, the doors greeted me as I walked up, opening automatically.

Photo Aug 12, 4 12 01 PM

Friday night the speakers had a nice dinner, with live entertainment.

Photo Aug 12, 6 15 24 PM

And we each received a cowboy hat to wear. Some fit better than others.

Photo Aug 12, 6 27 17 PM

As much as I’d like SQL Saturdays to be cost-conscious and slimmer, I am not complaining if organizers decide to do more. I just don’t want anyone who puts on a SQL Saturday to think that I or other speakers won’t come. We will.

In fact, we had a nice discussion about this at our dinner. We don’t volunteer and teach for food or gifts. We do it because we enjoy it.

Anyway, the event was fun. There were over 300 registrations, and I think 200+ people came out for a day of learning. I delivered my Branding Yourself for a Dream Job talk to a full room. I heard a couple of people say they didn’t come in because there weren’t any chairs left.

Photo Aug 13, 11 56 40 AM

Please, always come in. Unless the Fire Marshal stops you, I have no issue with you standing in the room, sitting on the floor, or finding a space anywhere.

I had some nice chats with people, which is one of the reasons I like these events. I attended a fun talk from Chris Hyde on R, which was a good way to open the day. I also saw Tim Costello do a fantastic Agile Data Warehouse talk, including the live affixing of post-it notes to the projector screen to demonstrate how to analyze your requirements. Go see Tim if you can. He does a good job and has a lot to teach.

Photo Aug 13, 12 33 50 PM

I did leave a touch early, since I hadn’t had a chance to go by the Alamo and I know almost nothing of Texas history. I wandered over too late for a tour, but I had the chance to see the compound and the beautiful gardens, as well as learn a bit about the siege, the battle, and how Texas was formed.

Photo Aug 13, 2 27 49 PM

It was interesting, and if you get an hour when you’re in San Antonio, go by.

Photo Aug 13, 2 31 01 PM

I hear the tour is really good, and hopefully I’ll get back someday and get the chance to experience it.

Photo Aug 13, 2 32 07 PM

The after party was fun, though I saw relatively few attendees there. I think fewer people come when the party isn’t near the venue, but I enjoyed myself before getting back to the hotel and ready for an early flight.

A good couple days in San Antonio, but back to work today. Sporting my hat.

Photo Aug 15, 8 16 07 AM


To Infinity and Beyond

This isn’t data related, but a story caught my eye recently, and I suspect quite a few of you will enjoy reading it.

One of the tasks I had to do in university was to try to rewrite the landing software for the Mars Viking lander. This was in a software engineering class, and we treated the task like a project, with each of us working on different parts and integrating them together. We had the same requirements in terms of memory and disk space, though not CPU. We ended up building a small piece of software that worked in the professor’s simulation, but the real joy was going through the process and solving the problem. It was quite a challenge.

In the real world, quite a few people around the globe have been disappointed by the very limited space travel efforts of our species. After we sent multiple missions to the moon, not much happened outside Earth orbit for decades. A few recent efforts with rockets have gathered attention, and with the interest shown by SpaceX and other companies, it seems we might be poised for more exciting opportunities in space over the next few decades.

A company’s application to send a commercial mission to the moon has been approved by the US federal government. There are rules that require each country to authorize and supervise space activities, and it’s quite a process in the US. However, Moon Express has gotten the approval and wants to send a spacecraft to the moon. They’re not aiming merely for the end of the decade, as the US did in the 1960s; they’re looking to land on the moon before the end of 2017. That’s just over a year away.

I think the idea of space travel is really cool. Well, it’s cool for others; not for me. Until there’s artificial gravity, I’m not interested, but I know plenty of others are. Friends in technology dream of going. The DBA in Space contest from Redgate was a hit, with many people hoping to win the chance to ride a rocket. My son even wants to study aeronautical engineering in college, hoping to get the chance to work in space someday. Even as launching rockets becomes commonplace, there are many people who dream of the chance to one day leave the planet.

The explosion in aircraft innovation came when a variety of private companies began to build and fly aircraft. I suspect a similar thing will happen with space flight, especially as advances in computing, 3D printing, and more have become affordable for many private concerns. I expect that the next few years will start to see space tourism become a regular, expensive but affordable option, even as a few private groups make the first steps toward moon exploration and, someday, colonization.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.1MB) podcast or subscribe to the feed at iTunes and Mevio.

 


Opening .SQL Files in SSMS 2016 by Default #SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

This is an easy one, but you’ll want to try this yourself.

As I’ve been trying to move off the various SSMS versions I have installed to the new, free, updated SSMS 2016, I’ve found on every system (4 so far) that the default action for .SQL files is to open SSMS 2012 or SSMS 2014. That used to be OK, but now it’s annoying. I usually have SSMS 2016 (or maybe it’s just “SSMS” now?) already open, and waiting on the long load times of the 2012/2014 versions, especially in demos, is frustrating.

If I right click a .SQL file, I get the chance to choose, but what do I choose?

2016-08-04 16_34_37-2-sqlserver-replacing-profiler-extended-events-m2-exercise-files

A little experimentation shows that I want SSMS, not SQL Server Management Studio. So let’s change that.

First, open the Settings app. On Windows 10, the Win+I key combination opens it. I find it hard to know where some things are located, but I like the Search in the upper right.

2016-08-04 16_36_28-Settings

When I type “default,” I get a list of things. The fourth one down is the one I want to click.

2016-08-04 16_36_38-Settings

This brings up a list of each file type and its associated app. Boy, there are a lot of types to go through; you’ll want to scroll down about two-thirds of the way to find .SQL.

2016-08-04 16_37_09-Settings

Once I find the .SQL entry, I click the icon on the right, where it says SQL Server Management Studio. This brings up a “Choose an app” dialog, as shown below. I click the SSMS choice.

2016-08-04 16_37_14-Settings

Now when I double click a file, it just opens in SSMS (2016). Quickly.

SQLNewBlogger

A handy tip. This shows I saw a problem and fixed it. I’m sure some of you could write this up in 10 minutes. Some might even build a .reg file or some other automation to show off skills that would help fix this.


Keyboard Hardlines

A few years ago, I had a keyboard die. At the time, I needed something quickly and ended up with a Logitech wireless model that included a mouse. I’m not sure which model, but I’ve ended up getting two or three more Logitech devices over the years. I think I go through a keyboard every 18-24 months, though the mice seem to last longer. For me, having devices without cords matters more for the mouse than the keyboard, but since Logitech bundles them, I get both at the same time. I’m currently using a K350 keyboard and an M510 mouse, both of which seem to hold up well.

However, I’m at home, usually working alone, so I haven’t been concerned much about security. In corporate environments, I remember playing jokes by moving someone’s mouse to my desk, and giving them a disconnected replacement, trying to mimic their movement. Or doing the same with a keyboard. That’s a great typing challenge if you ever try it.

I thought about those times when I read this piece on the security of wireless keyboards, or maybe the lack of it. Apparently a relatively simple device can intercept and replace, or just record, keystrokes made on a variety of keyboards. The vulnerable keyboards use their own proprietary dongles, not a Bluetooth connection, and the security of the wireless link is non-existent. Perhaps I’ll take one of these devices to the Redgate office this fall and see who’s actually vulnerable.

Actually, I think this is an issue, especially in large companies that may regularly hire consultants. Who knows what temporary workers might do with information they can gather by just sitting in proximity to a domain administrator. Or a DBA. We might be disclosing passwords to all our systems. Even malicious insiders may end up with access they shouldn’t have, including the ability to run queries or commands under another individual’s account.

Since keyloggers are cheap and simple, perhaps this seems like an unwarranted concern. However, having to actually touch someone else’s machine and add a device that could be noticed is much riskier for an attacker than just sitting, or even standing with a tablet, near someone’s workspace. Even a simple conversation, asking an admin to look up some information, could result in a breach of security.

Fortunately this hack only targets a few devices and doesn’t work against Bluetooth keyboards. However, there are attacks against those as well, and against wireless access in general. I know some environments hold very sensitive information or fall under tight security regulation. I really hope we don’t find our administrators going back to hardlines for administrative access from specific machines, or only using secured wired connections, because we can’t get strong wireless security for our devices.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.3MB) podcast or subscribe to the feed at iTunes and Mevio.


What’s a Database Pipeline?

One of the things I ran into a few years ago was the idea of a software pipeline: a set journey across machines and stages that your software takes. In modern, DevOps-style development shops, this usually means:

  • A development machine(s)
  • A Version Control System (VCS)
  • A CI/Build server/system
  • An artifact repository
  • Test server(s)/system(s)
  • Production server(s)/system(s)

Usually software flows along this path as code is written, compiled, and moved from one stage to the next. You may have more or fewer stages in your environment.

For databases, this should be similar, with databases in:

  • development
  • test/QA
  • staging/pre-production
  • production

You could also have other stages, like integration, beta, or user acceptance. I’ve seen people with 7, 8, or 9 databases in their environment, all part of their overall development and operational areas.

The database development pipeline is the set of databases that your code flows through. We’ve represented this visually in DLM Dashboard as a flow, allowing up to 5 environments to be defined. You can see one of my pipelines in this image:

2016-08-01 22_16_05-SalesDemo-2015-12-01-1745-export-i-fgod4b6h - VMware Workstation

Keeping code flowing smoothly along this pipeline is the goal of any development effort. We know this at Redgate, and our mission is to build tools to help you do this efficiently.


Data Overload

Data rich and information poor. I think that describes most of the companies I’ve worked for. It’s a theme in this piece from Forbes, which notes that many companies use only a fraction of the data they have to make decisions. Certainly many of us who work as data professionals might note that most queries access only a portion of our data, often the newest data, with older data sitting on storage systems, constantly powered on but rarely included in a report.

Retrofitting archival processes into an existing application can be hard, even more so when there is rarely queried data that clients want to be sure is still accessible somehow. The Stretch Database feature in SQL Server 2016 might help, but I bet it will be a long time before most of us have all our systems on SQL Server 2016 or later, let alone find someone to pay for this feature.

There are real costs to keeping this data around, first and foremost of which is the stress on us as developers and DBAs as we try to tune queries that must run against larger and larger data sets. Actually, I’m only somewhat kidding. Management and clients might not care about this, but having to work against larger and larger data sets can be stressful for technical professionals.

There are other, more concrete and measurable costs to keeping this data around: the cost of power and of larger storage systems. With many companies keeping multiple copies of production systems around for different purposes, these can be noticeable costs. There’s also the time factor. If our systems run just 10% slower, that’s potentially 10% less business we can handle, or maybe all that extra data just means more annoyance and frustration from customers dealing with slow systems.

We are going to get more and more data in our systems. While much of this data may be useful, if we’re overloaded, we may not be able to take advantage of the information. We also might get erroneous results if we don’t recognize that data gets old, and the value we might have from a row today might not exist in a few years. We should also realize that at times we have lots of data that isn’t useful at all for our organization.

I really would start thinking about the ways in which we can actually remove older data from our systems, with archival to cold systems, or even deletion if we’ve moved copies of data to other applications, such as data warehouse systems. Or maybe just deleting data we know isn’t going to provide any information. Above all, remember that warehouses will fill up at some point, unless you buy more and more (expensive) storage. Keeping all data accessible might not be the best decision for your organization.
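As a rough illustration of that kind of archival (a sketch only; the Orders and OrdersArchive tables and the five-year window are hypothetical, not from any real system), older rows can be moved to an archive table in small batches so the production table isn’t tied up by one enormous delete:

-- Hypothetical example: move orders older than five years to an archive table in batches
DECLARE @cutoff DATE = DATEADD(YEAR, -5, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.Orders
    OUTPUT deleted.* INTO dbo.OrdersArchive   -- assumes an archive table with matching columns
    WHERE OrderDate < @cutoff;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing older left to move
END;

The batch size and the retention window are the kinds of decisions each organization needs to make for itself.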

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5MB) podcast or subscribe to the feed at iTunes and Mevio.


T-SQL Tuesday #81–Sharpening Skills

This is an interesting month for T-SQL Tuesday. The challenge is from Jason Brimhall to Sharpen Something, and learn something new. However, his challenge isn’t just a simple “learn something.” Instead, he writes:

“This month I am asking you to not only write a post but to do a little homework – first. In other words, plan to do something, carry out that plan, and then write about the experience.”

I didn’t see the original invite, but I saw the reminder, and so had less than a week to actually try something. Fortunately I was starting something new, so I decided to incorporate that into my post.

The Plan

I have been doing a little work with auditing, as part of trying to better understand security around SQL Server. With that in mind, I’ve been meaning to do more Extended Events work. I’ve done a little, inspired by Erin Stellato, and as I read the invitation, I thought this would be a good time to tackle something. My plan: finish Erin’s Extended Events course, practice with the exercise files, and then build a session of my own that pairs up events to find transactions that are opened but never closed.

I know there are potential issues with the matching of events here, but this is an experiment, and the chance to learn.

The Actions

Last week my daughter had a volleyball tournament during the week, so I was working remotely, in coffee shops and gym bleachers, trying to get work done. The day after I started this plan, my laptop began to die, with no good outlets nearby. At least, not anywhere I could still see my daughter.

So I started Erin’s course. While I watched pieces of a few volleyball sets, I also got through the first three modules of the course, and even made a few notes in Evernote of things to try. The next day, late in the afternoon, I finished the course.

Part 1 done.

Now on to actually practicing.  I downloaded the exercise files from the course and started to work through some of them. The first XE sessions are simple, but worth setting up to get a feel for the code. I could do things in the GUI, but I wanted to play with code a bit.

I ran a few items, trying to remember how they worked, and I realized that I needed to review a few things.

Testing Myself

The challenge I set for myself was to create a new session that does event matching, looking for transactions that are started but never ended. I decided to use the GUI to do this, mainly because I was pressed for time. It’s been a busy week and weekend, so I didn’t have as much time to experiment as I’d like.

I started with a new session:

 2016-08-08 16_46_23-SQLQuery8.sql - (local)_SQL2014.Sandbox (PLATO_Steve (66))_ - Microsoft SQL Serv

I gave this a simple, descriptive name, and selected the options to start the session at server startup and to track causality. My understanding is that since I want to pair up events, I’ll need this. I might be wrong, as I’m certainly not an XE person, but let’s see.

2016-08-08 16_46_55-New Session

Now we need events. Let’s use the Search, as Erin mentioned, to find those events with “transac” in them. This screen alone is worth leaving Profiler and Trace for.

2016-08-08 16_48_36-New Session

As you can see below, I’ll add these events and then click “Configure” in the upper right.

2016-08-08 16_49_44-New Session

This slides the pane over and I can see my Actions, Predicates, and Fields.

2016-08-08 16_50_41-New Session

As Erin did, I want to start with the Event Fields. Let’s see what’s available.

2016-08-08 16_51_08-New Session

Essentially nada. This won’t help. So let’s go to the Actions, which I know are something I generally want to avoid, and pick up a few fields that way. I’ll select both events and click a few items that seem useful.

2016-08-08 16_51_46-New Session

I’ll want to compare the events based on the session and database, to find those items that have a begin, but no end.

Let’s now match. In the Data Storage pane, I’ll pick the pair matching target.

2016-08-08 16_53_32-New Session

Once I pick this target, I need to pick the events I use in my matching, along with the fields. I’ll do that here. I’m going to match the begin with the end on session and database. Session might be enough, but let’s go ahead and try to separate out those sessions that might do something silly in two databases.

2016-08-08 16_54_25-New Session

That’s it for me. I’ll click OK and then start the session. Once I do that, I see the session and the target, which I can view data for.

2016-08-08 16_55_53-SQLQuery8.sql - (local)_SQL2014.Sandbox (PLATO_Steve (66))_ - Microsoft SQL Serv
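For anyone who would rather script the session than click through the GUI, this is roughly what SSMS should generate (a sketch; I’m assuming the two events picked up by the “transac” search were database_transaction_begin and database_transaction_end, and that the actions I clicked were session_id and database_id):

-- Approximation of the session built in the GUI above (event and action names are my assumption)
CREATE EVENT SESSION [Find Open Transactions] ON SERVER
ADD EVENT sqlserver.database_transaction_begin
    (ACTION (sqlserver.session_id, sqlserver.database_id)),
ADD EVENT sqlserver.database_transaction_end
    (ACTION (sqlserver.session_id, sqlserver.database_id))
ADD TARGET package0.pair_matching
    (SET begin_event            = N'sqlserver.database_transaction_begin',
         begin_matching_actions = N'sqlserver.session_id, sqlserver.database_id',
         end_event              = N'sqlserver.database_transaction_end',
         end_matching_actions   = N'sqlserver.session_id, sqlserver.database_id')
WITH (STARTUP_STATE = ON, TRACK_CAUSALITY = ON);
GO

-- Start it now rather than waiting for a restart
ALTER EVENT SESSION [Find Open Transactions] ON SERVER STATE = START;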

I see the ALTER EVENT item, which I’m guessing is the start of a transaction inside SQL Server. Note this data isn’t refreshed by default, so I’ll need to right click and do that.

2016-08-08 16_56_37-._SQL2014 - Find Open Transactions_ pair_matching - Microsoft SQL Server Managem
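If you’d rather not keep right-clicking to refresh, the unmatched events can also be pulled straight from the target with a query like this (a sketch; the session name comes from the screenshots above):

-- Read the pair_matching target for the session; target_data is XML describing unmatched events
SELECT CAST(t.target_data AS XML) AS unmatched_events
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = N'Find Open Transactions'
  AND t.target_name = N'pair_matching';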

Now let’s make a transaction. I’ll run this:

2016-08-08 17_07_21-SQLQuery8.sql - (local)_SQL2014.Sandbox (PLATO_Steve (66))_ - Microsoft SQL Serv
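The statement in the screenshot was along these lines (a sketch; the table, column, and values here are placeholders, not the exact code I ran):

-- Open a transaction and deliberately leave it uncommitted
BEGIN TRANSACTION;

UPDATE dbo.SomeTable            -- hypothetical table
   SET SomeColumn = 'changed'   -- hypothetical column and value
 WHERE SomeID = 1;

-- no COMMIT or ROLLBACK yet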

Once I do this, I have an open transaction. My session id is 60 for this transaction. If I refresh my data, I see the item.

2016-08-08 17_09_31-._SQL2014 - Find Open Transactions_ pair_matching - Microsoft SQL Server Managem

I can also see the open transaction.

2016-08-08 17_07_32-SQLQuery8.sql - (local)_SQL2014.Sandbox (PLATO_Steve (66))_ - Microsoft SQL Serv
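One way to confirm the open transaction from the server side (not necessarily the query in the screenshot) is to check the transaction DMVs:

-- Sessions with an open user transaction, and when each transaction began
SELECT st.session_id,
       at.transaction_id,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
    ON at.transaction_id = st.transaction_id
WHERE st.is_user_transaction = 1;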

Let’s try another one. I’ll start another new transaction, this time for session id = 58.

2016-08-08 17_10_45-SQLQuery10.sql - (local)_SQL2014.Sandbox (PLATO_Steve (58))_ - Microsoft SQL Ser

Sure enough, I see another open transaction. 

2016-08-08 17_11_13-._SQL2014 - Find Open Transactions_ pair_matching - Microsoft SQL Server Managem

Now let’s commit the second transaction, the UPDATE dbo.abc.

2016-08-08 17_11_48-SQLQuery10.sql - (local)_SQL2014.Sandbox (PLATO_Steve (58))_ - Microsoft SQL Ser

My transaction goes away from the pair match.

2016-08-08 17_12_00-._SQL2014 - Find Open Transactions_ pair_matching - Microsoft SQL Server Managem

The Results

The third part of the invitation was to write this. I covered what I did, and some of what I learned. I’ll add a bit more here.

I certainly was clumsy working with XE, and despite working my way through the course, I realize I have a lot of learning to do in order to become more familiar with how to use XE. While I got a basic session going, depending on when I started it and what I was experimenting with, I sometimes found myself with events that never went away, such as a commit or rollback with no corresponding opening transaction.

This was a good challenge; it forced me to work through a bit more than I might otherwise have done this past week, given a busy schedule, and I’m glad it did. I might challenge any of you writing your own T-SQL Tuesday #81 post in the future to limit yourself to a week and force yourself to learn and try something.
