I’ve got a semi-complicated desktop setup, with multiple monitors, lots of USB devices, and a few ways to record and listen to sound. I hadn’t thought of things as complex, especially with regard to audio, but between recording podcasts, having regular video conferences, and the speakers I added to my machine, I found I needed to control where my audio comes in and goes out.
The other day I was waiting for a conference call to start on Skype and was listening to an interview on my computer. I’d plugged in a USB headset for the call, but the interview (a Flash-based web page) came through the speakers. I’ve gotten used to easily redirecting audio from the speakers to headphones to Bluetooth devices, but I wasn’t sure how to do this in Windows. I messed with the audio mixer a bit, but didn’t see anything.
I did a bit of searching and ran across this post on Superuser. I decided to blog briefly in the hopes that I’ll remember this process for next time.
The long and short of it was that I needed to right-click the audio speaker icon and select Playback devices. This lets me see the setup, not just the volume: the Control Panel applet appears with all my playback devices listed, and I can see the recording devices as well.
From here, if I right-click a device, in this case the USB headset, I can select "Set as Default".
Once I did this, the audio for the interview came through on the headset. I could listen, and when my Skype call came in, the interview cut out and I could continue on with work.
This is especially handy for recording, as I’ve sometimes had the wrong device set up for recording because I expected the default to be used.
It’s the holiday season and a joyous time for many of us. I do hope you are feeling the giving spirit and looking to create smiles for others as you prepare for your particular celebration during the break from work next week. This is certainly a time to remember that many others are less fortunate than most of us who work in technology, and I’d encourage you to consider giving something to help others.
However, many of us will receive gifts this holiday season, and in the spirit of having some fun, I wanted to make a fun poll this week.
What is on the top of your wish list that would help improve your time at work?
If you are like me, the one thing you’d really like is more time. It’s unlikely I (or you) will receive that particular gift, but I know that it’s the one resource that is always limited for me. Barring that, I think there are a few things that might make work time a bit better for me.
I’m not in an office, but having a nice set of headphones to help me concentrate on a task, or even relax while traveling would be good. My laptop is starting to have some issues, so I’ll be looking to replace it at some point, but many of you might find yourself working faster with a newer, more powerful machine. No matter what your hardware, you need a place to work, so maybe 2015 is the time to try a taller, or faster, desk. Perhaps you just need a few more USB ports, a little power on the go, or even a new way to commute to the office.
The array of possibilities for changing work continues to grow as technology advances the world. We can work anywhere, though it is important to stop working and relax. Perhaps the best thing for many of us might be a way to refresh and recharge away from work so that we’ll be ready to hit the ground running when we return. The best way to do that is to take your vacation.
Let us know what might be on your fun list, and have a fantastic holiday season. I know I will as I’m leaving tomorrow for a week in Steamboat Springs.
The Voice of the DBA Podcast
Like Tim Berners-Lee, I find the “right to be forgotten” law in Europe to be dangerous. The tremendous growth of data in our world means that searches become increasingly important in order for us to find data. If data is removed from searches, then for all practical purposes, it might cease to exist.
I think that is disturbing. As a data professional, I try to ensure that the quality and integrity of data is maintained. I really try to avoid ever deleting data (preferring to archive or hide it) in case it’s ever needed later. Far, far too often I’ve had someone ask me to remove something, only to have them ask for its return a short time later.
I know that this law isn’t removing the data. The original sources will still contain the data, but search engines won’t return it. On one hand, that means that many of us won’t ever realize the data exists. In some sense, this will return us to the very limited, analog search engines of the past, where we depended on indexes compiled on microfiche to find information stored in libraries. On the other hand, perhaps this will give rise to data investigators who develop their own methods and archives, enabling them to offer services that find the “forgotten” data.
I can’t decide if I think this law is a huge step back, or a correction against some of the overzealous data creation that occurs in the spur of the moment. Certainly data about past events is valuable and important, but the way much of the world uses a search engine, clicking on a link or two and accepting the information as valid, can be misleading. Time will tell if this is a good idea or not, and I’m curious to see how well the law performs.
I’ve spoken at lots of events, but all in the US and the UK. Next year I get my first talk outside of those locations, going to the SQL Server Konferenz near Frankfurt, Germany.
The agenda is up, and I’ll be talking about testing and tSQLt there. It’s cool to see my name on the list, and I’m also glad to see the UK flag noting that I’ll be presenting in English.
Fingers crossed that things go well.
I noticed a contest this week while working on the Database Weekly newsletter. It’s the Cloud Hero contest, with the chance to win a Surface Pro 3. I could always use another device, or at least a device I could give away, so I decided to enter.
There are a few things you can do, all of which are interesting to me in terms of a direction that I, and Red Gate, want to move. I don’t know if Azure works everywhere, but we are considering moving SQLServerCentral, or perhaps parts of it, to Azure, so this was a good chance for me to try out some new Azure stuff.
I’ve messed with a few things in Azure, but mostly on the PaaS side. That interests me more, and I’ve done little with IaaS. I certainly haven’t really worked with IIS much in Azure. I decided to go through the VM setup, to create two IIS machines, load balanced on the same URL. I used this blog post with a cartoon and demo to run through the process.
It was a bit more than 10 minutes, mostly because some of the allocation stuff in Azure took time, and the responsiveness from the VM in Azure was slow. From the time I connected to the time Server Manager popped up was over two minutes for each machine. Since I was going through some of the steps sequentially, that meant it was slow to get going.
The video and the portal bring to light some of the issues with Azure. It’s a great tutorial, and I was able to get the two machines load balancing IIS in 20 minutes (or less). I was surprised how quickly it went, but I also had to stop and think: the load balancing and cloud services are different now than they were when the post was written.
I’m sure that’s the case with lots of Azure content. In some sense, this means that we will have lots of issues with people trying to learn how to use Azure as they’ll find content and information that is woefully out of date, sometimes quickly. I wonder if we need to think about having some code on blogs for Azure that marks the content as potentially out of date after it’s been out for 6 months.
It’s a challenge to keep the content up to date, and luckily the changes weren’t too different in the portal.
I am glad that I was able to get two IIS machines up and load balanced, delete them, and bring them back. That makes me think I may find some use for this Azure stuff yet. I have a few projects in mind, including rebooting my personal site. Perhaps Azure will be the place I give it a go.
The SQL Server community is a surprisingly close-knit one. I find that people are much more willing to help each other and share knowledge. We have so many events and user groups that it’s amazing to me that on almost every weekend of the year, some event is taking place somewhere in the world. I’ve been to dozens of SQL Saturdays, and most of them are run very well, but I’ve also seen lots of extravagance in putting on the events. There’s competition between the events and organizers, which is mostly healthy, but I do worry about the long-term health of our community.
Most events depend on some sort of sponsorship to get going. Venues can cost a good bit of money, and while many events now charge for lunch, the breakfast, coffee, sodas, etc. are the burden of the organizers. Add in signs, printed guides, gas, etc., and events can get expensive. Many events get shirts (or a small gift) for volunteers and speakers, as well as a Friday night dinner to thank everyone for their help. These expenses have become commonplace.
However, as we continue to add new events, I can tell you that the overall cost to vendors is significant. I can’t speak for other vendors, but I know Red Gate wants to support these events, and we plan to continue providing sponsorship, as well as sending Grant and myself to speak. However, we also have to make choices about which events to support and how many we can participate in. That means that as more events are run, fewer will get funding. Many existing events might see less funding from all vendors.
I really like the idea of bare bones events. Jen McCown proposed a format and I like it. As we look to grow more events in the future, we need to be lean, efficient, and most importantly, focused on the goal: teaching. Big events are fine, and if you can make them happen, great. However, let’s not let the lack of a big budget get in the way of helping teach people about SQL Server, growing our skills and bond as a community.
Recently I was working on transforming some dates, and wanted to generate a large number of dates for testing. I decided to use SQL Data Generator and a little RegEx to meet my needs.
The format I needed was CYYMMDD, which is the century as a 0 or 1 (1900 or 2000) and then the yymmdd format. While there are some pre-made expressions to build dates, there wasn’t an easy one to handle the century like this. I could have used a date expression in T-SQL and randomly allocated a century, but I decided to play around with RegEx.
I know that brackets allow a choice of values; the regular expression can match any one of the values in the class. For example, the first part of my date, the century, can be zero or one, so I can do this:
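The class itself is easy to reconstruct from the description: a single character that is zero or one. Here's a quick sketch in Python's RegEx engine (my own check, not SQL Data Generator's dialect, though this class behaves the same in both):

```python
import re

# Reconstruction of the century class described above: a single
# character that can be 0 (for 1900) or 1 (for 2000).
century_pattern = r"[0-1]"

# 0 and 1 match; anything else does not.
assert re.fullmatch(century_pattern, "0")
assert re.fullmatch(century_pattern, "1")
assert re.fullmatch(century_pattern, "2") is None
```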
When Data Generator runs, it will randomly build values that match this pattern, which in my case results in a random mix of 0s and 1s.
That makes it easy for me to pick numbers, and I could do something like this for the years:
That works, as any number from 00, as in 2000, up to 99, as in 1999, is valid. That gets me this:
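Stringing the century class together with two year-digit classes gives the CYY prefix. A hedged reconstruction, checked in Python:

```python
import re

# Century digit plus two year digits: the CYY portion of CYYMMDD.
cyy_pattern = r"[0-1][0-9][0-9]"

# 099 -> 1999 and 114 -> 2014 in this scheme; a leading 2 is invalid.
assert re.fullmatch(cyy_pattern, "099")
assert re.fullmatch(cyy_pattern, "114")
assert re.fullmatch(cyy_pattern, "214") is None
```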
However, that causes issues when I get to the month. I need a two-digit month, but I can’t have some combinations of two digits. If I were to use plain digit classes, like [0-1][1-9], I’d get months like 18, which aren’t valid. Instead, I need a pattern that only matches a 0 with 1 to 9, and only allows a 1 with a 0, 1, or 2.
To do that, I’ll use an OR. That’s a pipe (|) in regular expressions. I’ll say (in pseudocode), give me a (01 to 09) OR a (10-12). The easy way to build that is like this:
This says that if we match the first half (before the pipe), then we literally have a 0 there, with a second character in the range 1-9. That gives us 01 to 09. The second half, after the pipe, does the same thing, but it matches a literal “1”, and then a 0, 1, or 2. As you can see, I have random months (only showing this expression).
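The month alternation described above can be reconstructed and checked exhaustively; this Python sketch confirms it matches exactly the strings 01 through 12:

```python
import re

# Months: a 0 paired with 1-9 (01-09), OR a 1 paired with 0-2 (10-12).
month_pattern = r"(0[1-9]|1[0-2])"

# Try every two-digit string and keep only the matches.
matched = [s for s in (f"{i:02d}" for i in range(100))
           if re.fullmatch(month_pattern, s)]
assert matched == [f"{m:02d}" for m in range(1, 13)]
```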
Now the hard part: days.
Days are strange in the calendar because the possible days depend on the months. Years and months are consistently in ranges, but the days are not. Let’s start with the most common days: 31.
I have 31 days in months 1, 3, 5, 7, 8, 10, and 12. In order to match these up, I’ll need to combine the month and day items. Let’s first change our months to be just those particular months. That gives me:
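Restricting the month alternation to just the 31-day months is a matter of narrowing each character class. My reconstruction, verified against every two-digit string:

```python
import re

# 31-day months only: 01, 03, 05, 07, 08, 10, 12.
long_months = r"(0[13578]|1[02])"

matched = [s for s in (f"{i:02d}" for i in range(100))
           if re.fullmatch(long_months, s)]
assert matched == ["01", "03", "05", "07", "08", "10", "12"]
```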
With these months, I am going to allow up to 31 days. The pattern for the first 29 days of the month is the same for every month: a leading 0 with 1 through 9 gives 01 to 09, and a leading 1 or 2 with any digit gives 10 to 29. Putting that together gives me:
This handles the first 29. The next two, 30 and 31, are an OR expression like the months. I’ll use a literal 3 and a choice of zero or one. That gives me:
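Putting the day alternation after the 31-day months gives the first MMDD branch. A hedged Python reconstruction of that combined expression:

```python
import re

# Days 01-29 (0 with 1-9, or 1-2 with any digit), OR'd with 3[01]
# for days 30 and 31, appended to the 31-day month alternation.
mmdd31 = r"(0[13578]|1[02])(0[1-9]|[12][0-9]|3[01])"

assert re.fullmatch(mmdd31, "0131")           # January 31 is valid
assert re.fullmatch(mmdd31, "0132") is None   # day 32 never matches
assert re.fullmatch(mmdd31, "0400") is None   # April isn't in this branch
```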
Whew! This is a lot of work, but it matches things up well. Now I need to handle the months with 30 days. I’ll do that the same way, but I’ll now OR both expressions together. The expression is:
And the data:
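My reconstruction of that OR'd expression, with the 30-day months (April, June, September, November) stopping at day 30, checked in Python:

```python
import re

# The 31-day branch OR'd with the 30-day months (04, 06, 09, 11),
# whose days stop at 30. Layout is my reconstruction of the prose.
mmdd_3x = (r"((0[13578]|1[02])(0[1-9]|[12][0-9]|3[01])"
           r"|(0[469]|11)(0[1-9]|[12][0-9]|30))")

assert re.fullmatch(mmdd_3x, "0430")           # April 30 is fine
assert re.fullmatch(mmdd_3x, "0431") is None   # April 31 is not
assert re.fullmatch(mmdd_3x, "1231")           # December 31 still works
```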
That gets me almost all the months. The last part is February, the hardest. Now I could worry about leap years, but I’m not going to. Proper handling would mean verifying the year (and century) and doing math to ensure a leap year is valid. Instead, I’m going to just ignore the 30s and manage days 1 to 29.
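Following that logic, February is a literal 02 followed by the day pattern capped at 29. A small Python check of my reconstruction:

```python
import re

# February: a literal 02, then days 01-29 with no leap-year math,
# exactly as described above (so Feb 29 is always allowed).
feb = r"02(0[1-9]|[12][0-9])"

assert re.fullmatch(feb, "0201")
assert re.fullmatch(feb, "0229")
assert re.fullmatch(feb, "0230") is None
```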
Now I have a nice set of random dates if I put everything together.
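Assembled from the pieces above, the whole CYYMMDD expression looks like this. This is my reconstruction, and the helper that cross-checks matches against Python's calendar is mine too, not something from the original post:

```python
import re
from datetime import date

# The full CYYMMDD expression assembled from the pieces above.
cyymmdd = re.compile(
    r"[0-1][0-9][0-9]"                              # century + 2-digit year
    r"((0[13578]|1[02])(0[1-9]|[12][0-9]|3[01])"    # 31-day months
    r"|(0[469]|11)(0[1-9]|[12][0-9]|30)"            # 30-day months
    r"|02(0[1-9]|[12][0-9]))"                       # February, days 01-29
)

def is_valid(s):
    """Check a CYYMMDD string against the real calendar, treating
    Feb 29 as always allowed since the pattern ignores leap years."""
    c, yy, mm, dd = int(s[0]), int(s[1:3]), int(s[3:5]), int(s[5:7])
    try:
        date(1900 + 100 * c + yy, mm, dd)
        return True
    except ValueError:
        return (mm, dd) == (2, 29)

sample = "1141225"  # century 1, year 14, Dec 25 -> 2014-12-25
assert cyymmdd.fullmatch(sample) and is_valid(sample)
```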
I leaned on a few examples to decode a few of the expressions and also to check that I wasn’t messing up.
Those of you that manage replicated environments have learned to have one thing handy: a script that recreates your replicated publishers, distributors, and subscribers. I was reminded of my past needs for those scripts recently when I saw this post on dropping and recreating all the synonyms for a database.
It’s easy to depend on backup and restore to recover from issues, but how often do you face problems with an environment that aren’t related to data? If you lose a stored procedure, or have a problem with the configuration of jobs, or principals, can you easily drop and recreate an object? That code is usually tiny, but if the only copy you have is in a backup file, you have to restore a lot of data just to get some code.
Certainly everyone should keep all this data in version control, and I’d encourage you to be sure that not only development code (tables, views, stored procedures, etc.) is kept in there, but also configuration settings, jobs, roles, and the various other things a DBA is responsible for in a production environment.
However, I’d also go one step further and ensure that you have scripts to recreate all aspects of your environment if you need to. Many of the comparison tools will let you store a snapshot of database schema items in a folder and then easily help you recreate a script if needed. That covers the database, but for the instance-level items, you need to be sure you have an easy way to check out a copy from version control (you are using version control now, right?) and execute the scripts on a SQL Server. You can use T-SQL, PoSH, or even VBScript, but be sure you have the code handy.
I was chatting recently with one of the more experienced SQL Server professionals I know and was surprised to learn this person had actually retired, but had come back to technology because they were bored with not working. I suspect that’s how I’ll be later in life, and I’m not really looking forward to retiring anytime soon.
However, I do think about jobs and employment as I age. I know that it can be tougher to keep a job over the long term. It seems the flexibility many of us appreciate as younger workers can be a detriment later in life when you value stability. With that in mind, here’s this week’s question:
Is this the last full time, technology job you’ll have?
I have an amazing job, perhaps the best one I could have imagined. I love what I do, but I do think about my options every year. I take some time to consider how my employment has gone in the past year, what else I might do, and what I want to do in the future. I re-evaluate and try to be honest with myself about how I feel about my career. If I decide to make a change, I want it to be my decision, not something that’s forced upon me by a change in my employment status.
I wonder if this will be my last job. I certainly think it could be, and the way it’s gone the last few years, I hope it is.
I was scanning Twitter the other day and saw a note from someone that they had written a query using an obscure T-SQL command and were glad it had worked. I exchanged a note with the person and they mentioned that they had to look up the command and syntax periodically when they had to write a similar query.
I mentioned templates.
If you haven’t used these, you should, and I wrote a basic post about how to access them and one on customizing them for yourself. These templates are like snippets in SQL Prompt (which are way more useful to me), and they are a tool every DBA should use.
Here’s one way I think they’re really helpful:
Suppose I need to write a PIVOT query. I rarely do this, and it’s not too hard, but I write this query:
select *
from
( select runner
       , miles
       , mins
    from results
) as rawdata
pivot
( avg(mins)
  for [miles] in ( , ,  )
) as pivotresults;
GO
That’s easy enough, but it’s specific to my tables. However, when I glance at it, I can see that there’s an aggregate column, and I know the PIVOT requires that I list the values to be used as the columns.
What if I change the query? I can do this:
select *
from
( select runner
       , <pivotcol, varchar, miles>
       , <aggcol, varchar, mins>
    from results
) as rawdata
pivot
( avg(<aggcol, varchar, mins>)
  for [<pivotcol, varchar, miles>] in ( , ,  )
) as pivotresults;
GO
Now if I make this a template:
I can drag this into a new query window. When I see it, I can CTRL+Shift+M and get this:
Now I change a few values and I have a pivot.
Of course, I need to actually enter the values I want, but this gets my PIVOTs done quickly without the need to decode BOL or swing by SQLServerCentral. Once I do that, I have a query I can use.
I’d encourage you to use templates. They’re very, very handy for quick sections of code that you use often, or want to remember in the future.