Here’s a site that compares the platforms.
There are certainly some good arguments being made, though they’re (admittedly) biased towards the specific workloads the author uses.
However, I agree with lots of the items listed. There are quite a few good ones, such as 1.1 and 1.2. Not having 1.5 is just silly. I know I can write a CLR assembly and add this myself, but why hasn’t it simply been included? 1.6 is interesting for sure, and as someone who lives in BOL, I find it very frustrating.
Without a doubt there are some cumbersome things in SQL Server that I think could be made much, much easier. However, as pointed out numerous times, MS is concerned about money, and not necessarily making the product better.
I don’t agree with the Linux v Windows debate. Lots and lots of people run Windows, and the latest automated server surveys I’ve seen show Windows close behind in terms of numbers.
I’m also not sure some of the BI stuff matters a lot in a general comparison, but if that’s what you do, then this is interesting. I’d also point out that since MS is trying to move into these areas, some of these things are being addressed. SQL Server 2016 will support JSON and R.
Enjoy the read. Don’t get too wrapped up in the details, but certainly if you think some of these items are important, let MS know.
I wrote at SQLServerCentral about using tSQLt to check table metadata. In essence we are testing the API of our table. However, since the table could change, and may need to, what’s the value of having a test fail if the table changes?
In my mind, I don’t necessarily want to have table structure tests for all my tables. After all, developers need to have flexibility to work with and change tables in our applications. If it’s a pain for a developer to change every table, because a test fails and they have to go change the test, that’s an issue.
There’s also the problem of a developer changing a table, changing the test, and then having everything pass, without passing along the information that a schema change was made.
I would limit metadata API tests to those tables that are important, with the caveat that any time someone fails a metadata test, they need to inform the team.
But Steve, isn’t every table important?
Yes and no. Certainly all tables should be important to the application in some way, but really many of them are contained in the application. If changes are made, it’s not necessarily a problem to change other objects to catch up to the table change. However, some tables may cross teams or applications and they are an issue.
As an example, I have lots of tables in the SQLServerCentral database.
If the Blogs table or the Articles table changes, then we need to alter stored procedures and possibly ASP.NET code for our application. In fact, pretty much all of the tables in this list could be changed by a developer without a large impact, assuming they look at the other objects or code affected.
However, the highlighted table, the Emails table, along with a few others, is important. These tables not only support SQLServerCentral, the web app; they are also called by our emailer process, which is a completely separate application. In essence, these tables are the opposite of a microservice. They’re shared.
If someone wants to change the Emails table, I want to be sure that others are informed. In fact, I might choose to include a note in the test header that various groups need to be informed, or that the table affects another application. In that case, before a developer went to change the test, they would at least have a chance of noting that the change has far-reaching implications.
It’s not a perfect solution, but it does help. The other thing I could do is limit access to metadata tests for various tables/views and merely call these tests in a CI, or other automated, process. That way failures would be public, and a variety of people could be informed, preventing a developer from making changes without a discussion.
As I mentioned, I wouldn’t do this for all tables. In fact, I’d limit this to particularly sensitive tables that might require lots of rework if they were changed. We want to speed development, and ensure code works, not slow developers down.
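To make the idea concrete, here’s a minimal sketch of a tSQLt metadata test for a shared table. The EmailsTests class and the Emails column names are assumptions for illustration; the header comment carries the “inform the team” warning discussed above.

```sql
-- Create a test class to hold metadata tests (a real tSQLt procedure).
EXEC tSQLt.NewTestClass 'EmailsTests';
GO
-- NOTE: dbo.Emails is shared with the emailer process, a separate
-- application. If this test fails, inform the email team before
-- changing the test or the table.
CREATE PROCEDURE EmailsTests.[test Emails table has expected structure]
AS
BEGIN
    -- Compare the shape of the actual table against an expected result
    -- set. Column names and types here are illustrative assumptions.
    EXEC tSQLt.AssertResultSetsHaveSameMetaData
        'SELECT CAST(1 AS INT) AS EmailID,
                CAST(''a'' AS VARCHAR(200)) AS EmailAddress,
                CAST(GETDATE() AS DATETIME) AS DateSent',
        'SELECT EmailID, EmailAddress, DateSent FROM dbo.Emails';
END;
```

A test like this fails the moment anyone changes the columns it names, which is exactly the prompt for a conversation before the change proceeds.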
Do you want to continue to perform boring, repetitive, mundane tasks as a part of your job? Many of you might not be challenged at work, or you might be burdened with a series of requests that repeat themselves over and over. They’re easy tasks, many of them probably take minutes. I’m sure there’s also a level of mindlessness that you find comforting at times with just working through a familiar task.
However many of you also get busy. You have no shortage of new tasks that get assigned to you on a regular basis. You probably also get stressed from your heavy workload at times. What do you do when you’re too busy to work through the mundane tasks, but they still need to be done? It can be a challenge to manage that burst in a workload if you haven’t prepared for it.
There is a way to remove some of the mundane administrative work from your job. It’s not simple, and it’s not going to solve all your issues right away, but over time, you can certainly reduce the burden of working on dull tasks over and over again, across multiple machines.
Learn PowerShell (PoSh) or some other scripting language. VBScript works fine, as does Perl, and there are others, but if you’re a Microsoft person, especially a SQL Server person, learn PoSh. It’s becoming a standard across Microsoft products, it works in the cloud, and it works with SQL Server. It takes some getting used to, but it certainly can help with repetitive tasks. It can also run all your SQL scripts for you, just in a more automated fashion.
You can grow your career, add a new skill, reduce your workload, and become more efficient at your job.
Or continue to be boring.
The Voice of the DBA Podcast
How often have you been caught off guard by some issue with one of your databases? I would guess most people have received a phone call, email, or trouble ticket that alerts them to the fact that something has gone wrong. A few of you might have even had an irate customer or manager wander over to your desk and loudly inform you of an issue. Those are not great times, and most of us would prefer to find out about problems before others bring them to our attention.
This past week was the 66th T-SQL Tuesday blog party. If you haven’t heard of it, I would encourage you to look each month for the announcement on Twitter and participate. It’s a great chance to share some knowledge, and showcase your own experiences or viewpoint on a particular topic. I’d actually encourage all of you to go back and write posts about the past topics at some point. It’s a great way to challenge yourself to document your own experiences in a concrete way.
The theme this month was monitoring (thanks to Catherine Wilhelmsen b | t), and this is exactly the way that you build awareness in your environment. Monitoring allows you to ensure that you, as the DBA, learn about issues, or potential issues, first. Monitoring doesn’t solve your problems; that’s up to you to do, but it can help you proactively detect issues early and perhaps mitigate them before users notice them.
The blog entries this month cover all sorts of issues you might think about watching for: 825 errors in your log, metadata about your backups, jobs, and more. There’s a variety of posts covering home-grown systems, as well as noting that third-party products work well. You have a huge variety of choices, from building your own tools and processes to purchasing something like SQL Monitor from Red Gate. However, whatever your budget, resources, time, and inclination, I’d highly recommend that you ensure you are monitoring your systems in a way that lets you know how things are working.
What to do when they’re working poorly, that’s up to you. Hopefully the information we publish at SQLServerCentral, and the links we send out every week from Database Weekly, can help you learn to be better at your job.
I have been looking for a new screen capture tool. I used to use Snagit, but it felt too heavyweight and buggy for me. I just need to snag part of a screen for my job and blog. I used Quick Screen Capture, but in Win 8 it didn’t want to accept my license key. I’d tried a few more, but none really worked well.
I mentioned this to Andy Warren on the phone the other day and he told me to try Green something. I decided to do that today, and couldn’t remember the name, so I surfed over to chocolatey.org and searched for screen captures. Sure enough, Greenshot popped up first.
I quickly opened an administrative level command prompt and typed “choco install greenshot” and went along with my work.
Chocolatey is an amazing package manager and it started the install for me in the background.
A few minutes later I noticed the browser pop up the Greenshot website as the final part of the install. The program was set up with its install defaults and running on my system.
Chocolatey is amazing and I urge you to look at it for installing software on your system, especially if you’re a developer or consultant who regularly moves to new hardware or systems.
If you’re a software developer, get your stuff packaged into here.
Another post from me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.
I had someone ask me a question about security recently, and while working through the answer, I ran across something I didn’t know well: databasepropertyex(). Here’s a few notes.
Each database in SQL Server has all sorts of options and settings. Most people (including me) get in the habit of checking the properties dialog in SSMS for this information. First we right-click the database (after starting SSMS if it isn’t running).
Then you get this dialog, with lots of stuff.
And more on the other tabs.
Checking the dialog is time-consuming and error-prone. As I get older, I find that trying to decide which selection is set for which option becomes harder. I find my finger tracing across the screen. It’s entirely possible I’d make a mistake when glancing at this to check the ArithAbort setting.
Scripting and querying is usually better. Not the sp_configure scripting where you get a whole list of options, but looking for a particular item. That’s where DatabasePropertyEx() comes in. This lets you query for a database property.
The problem comes in when you run it. If you run the function, you get little information.
Certainly you can go look at BOL to get more detail, but that’s annoying. If I run sp_configure, I get a list of data; here, nothing. Even if I do what I think would be helpful and pass a NULL parameter, I don’t get a list of properties. SQL Prompt alerted me to the fact that the first parameter is the database and the second is the property, but passing NULL for the property doesn’t work.
Fortunately, I have SQL Prompt, so I get this when I put in quotes for the second parameter.
As you can see, the parameters don’t map to the properties, though you can figure out what they mean if you see them. They tend to follow the conventions that most application programmers use (IsAutoClose).
That’s fine, and it just means you need to have a reference handy for properties to query. I wish MS would give all properties with a NULL parameter, or a link to BOL.
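For reference, here’s how the queries look once you know the property names. DATABASEPROPERTYEX(), sys.databases, and the property names are real; the SQLServerCentral database name is just illustrative.

```sql
-- Query a single property for one database (database name is illustrative).
SELECT DATABASEPROPERTYEX('SQLServerCentral', 'IsAutoClose') AS IsAutoClose;

-- Sweep a property across all databases on the instance.
SELECT name,
       DATABASEPROPERTYEX(name, 'Recovery') AS RecoveryModel,
       DATABASEPROPERTYEX(name, 'Status')   AS DatabaseStatus
FROM   sys.databases;
```

The second form is where the function shines: one query replaces opening the properties dialog for every database in turn.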
This one took a bit longer to write. Once I realized I didn’t know databasepropertyex() very well, I had to read about it (5 minutes) and experiment a bit. I took some screen shots, which is always cumbersome. As I wrote this, I had to change the wording and ordering a few times to try and convey a simple message. I was originally going to look at more details, but decided to keep this simple and talk about just querying properties.
This was about 30-40 minutes for me.
Documenting code isn’t a task that many people enjoy. Those of you that have been through a computer science curriculum probably added lots of verbose, and obvious, comments to code. It’s probably unlikely that you ever found much value in the comments in a paying job.
There are all sorts of ideas on how to document your software, writing good comments, and even refactoring code to remove unnecessary comments. I’m sure that if we put five developers in a room, we’d end up with eight different ways to comment code that would be debated, with no agreement on how to proceed.
I ran across an interesting approach from Ed Elliot on documenting your code with unit tests. I hadn’t thought about the tests providing some documentation, but it’s an interesting idea. I’d have to work with the concept a bit, but I’m skeptical I’d get enough information from unit tests to ensure I understood what a stored procedure was doing, especially if I had 10 tests for a long procedure.
I do think unit tests are important, and perhaps in conjunction with some type of code header that gives the requirements the procedure fulfills, I’ll get enough information to understand the code.
I don’t have a universal solution that I think will work in most situations, but I do think that having a tool like IntelliSense or SQL Prompt helps you self-document code with expressive names for columns, variables, and aliases. If nothing else, those phrases for variables can clue the next programmer in to what is happening better than single-letter names.
Trying to implement version control, and Continuous Integration (CI), and automated releases of code is hard. The concept is simple, but all of the various processes, tools, and links that need to be connected can be hard. I’ve seen too many organizations that struggle to get started, often because they see the entire process as just too complex and overwhelming.
Whether you’re trying to release application code or database code, you need to treat them in a similar manner. While there are lots of books, tools, blogs, and companies that can help you work with application code, the database has been more challenging.
At Redgate, we’ve put together a few workshops to help break down the stages of engineering your database delivery and get you moving to help smooth the process of making database changes. These workshops are coming to locations all around the US and UK in the next few months. Check out our schedule, which has dates in:
- Bellevue, WA (May 15)
- London, UK (May 20)
- Philadelphia (June 4)
- Belfast, Northern Ireland (June 26)
- London (July 8)
- San Diego, CA (July 16)
- Manchester, UK (July 24)
- Baton Rouge, LA (July 31)
There are more dates, and we’re certainly open to scheduling more locations and even private classes if you’d like. Contact us if you have a request.
Most of these classes are delivered by experienced partners of Redgate, often consultancy organizations that have been working with clients for years to build smoother application delivery processes, and are moving into the database space. The classes use our tools, and Grant, myself, or one of our other Redgate experts will be on hand to help you learn the skills you need to start implementing DLM concepts. We can even recommend consultants to work with you for a two day DLM Health Check or on longer term projects that help you build better software. Contact DLMPartners@red-gate.com for more information.
We find that many of our customers are struggling with different parts of building a smooth software pipeline. They want to get better, and they find the Redgate tools help, but they need a little push or confidence to implement an engineered process.
Our three workshops are designed to teach you about different parts of a DLM environment. They map to the Database Lifecycle, helping you move to a different stage and a better engineered process at each step. The three workshops are:
- Database Source Control – getting your database code into a Version Control System and dealing with the issues of branching, merging, team coordination and more.
- Database Continuous Integration – learning to automate the building and testing of your code each time a change is checked into a VCS. This lets you find issues quickly.
- Automated Database Deployment – Gain the skills to automatically generate upgrade scripts and ensure they are tested and deployed smoothly, without any of the release day stress many of you experience.
These classes won’t solve your DLM challenges in a day, but they’ll give you the skills and confidence to move forward. Ask your boss to send you and get a bit of training to help you start building better software.
Come join us and learn to ship often and ship safe.
This is a T-SQL Tuesday that not only makes me smile, but makes my boss happy. In fact, I’m sure that quite a few people at Redgate Software, my employer, are thrilled with this topic since we have a product that helps you here: SQL Monitor.
T-SQL Tuesday #66 has the theme of monitoring, hosted by Catherine Wilhelmsen. Anything having to do with monitoring is fair game. Custom scripts, home grown applications, even third party software.
If you’re a #SQLNewBlogger, this is a great chance to keep going with a new post. If you can’t write now, come back and publish something later. If you post something on May 12, 2015, you’ll get included in Catherine’s roundup. However if you can’t make it, you’re still a T-SQL Tuesday writer.
Either way, follow #tsql2sday and learn a few things.
My post is going to be more of a philosophical one, with a little code.
Most of the time I’ve worked with SQL Server, I’ve had some responsibility for a production instance. Even when I was writing code and building software, I’d still find myself as the backup, if not the primary, accidental DBA. I learned early on that I needed to have some way to track what was happening on an instance.
One of the ways in which I did this was by ensuring each instance could monitor itself. I’d found far too many times that having a process connect remotely to the instance and gather data was a way to lose data. Too often something would get missed. The exact time that a process connected was the time things didn’t work.
As a result, I decided to keep some monitoring on the local instance. Even though I usually had a central server roll up information, if that instance dropped off the network (or my monitoring server died), I’d still know what was going on. This was especially important when I worked in the financial industry or was bound by ISO 9000 standards.
Note: These days I don’t bother to manage monitoring software of my own on an instance. I use SQL Monitor because I work for Red Gate, but no matter who my employer might be, I’d buy software because it’s worth the cost. I don’t have the time to worry about maintaining my own software for most monitoring tasks.
I had a setup routine that I used to use on every instance. It did a few things, the first of which was create a DBAAdmin database. This was the place I could put anything I needed to monitor my instance. I tried to keep all the code the same between instances, and tried to write idempotent code so that re-running the install wouldn’t cause issues.
Once I had the database, I’d add tables and procedures for various purposes. For example, one of the things that needed to be tracked each day was the backups for the databases. So I created a table called DBBackups.
CREATE TABLE DBBackups
    (
      BackupID INT IDENTITY(1, 1)
    , Backupdate DATETIME
    , DatabaseName VARCHAR(2000)
    , BackupType CHAR(1)
    , Filename VARCHAR(2000)
    , sizeGB NUMERIC(15, 2)
    );
This table had a corresponding procedure that was designed to scan the filesystem and report back on the latest backups written to the file system. The details of each file are recorded here.
Why? If there’s a problem with monitoring, the local machine still has the information about backups. What’s more, I can usually get more details here than I’m capturing in a monitoring system that’s looking at the last backup date or just getting full backups. If msdb history is wiped, I have a second copy here.
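My actual procedure scanned the file system, but as a hedged, illustrative alternative, the same DBBackups table could be loaded from msdb history each day, giving that second copy even if msdb is later purged. The msdb tables and columns below are real; the load logic is a sketch.

```sql
-- Copy the last day's backup details from msdb into the local
-- DBBackups table, so a second copy survives an msdb history purge.
INSERT dbo.DBBackups (Backupdate, DatabaseName, BackupType, Filename, sizeGB)
SELECT bs.backup_finish_date,
       bs.database_name,
       bs.type,                           -- D = full, I = differential, L = log
       bmf.physical_device_name,
       CAST(bs.backup_size / 1073741824.0 AS NUMERIC(15, 2))  -- bytes to GB
FROM   msdb.dbo.backupset bs
JOIN   msdb.dbo.backupmediafamily bmf
       ON bs.media_set_id = bmf.media_set_id
WHERE  bs.backup_finish_date > DATEADD(DAY, -1, GETDATE());
```

Run from a daily job, this keeps the local copy current with whatever msdb has recorded.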
However I also need to remove data over time, so this solution usually has three parts.
- Table for data storage
- Procedure(s) for gathering data
- Procedure and job to remove data older than xx days
That’s simple, but it’s a decent amount of work. However I only use this for certain areas. I used to really care about backups, but SQL Monitor captures that for me. However I might care about sp_configure settings. While SQL Monitor can alert me if there are changes, how do I know what changed? I’ve used a process like this to keep the last week’s worth of sp_configure information, captured every day to let me look back.
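Here’s what those three parts might look like for the sp_configure case, as a minimal sketch. The ConfigHistory table and CaptureConfigSettings procedure names are illustrative assumptions; sys.configurations and its columns are real.

```sql
-- Part 1: table for data storage.
CREATE TABLE dbo.ConfigHistory
    (
      CaptureDate  DATETIME     NOT NULL DEFAULT GETDATE()
    , ConfigName   NVARCHAR(70) NOT NULL
    , ConfigValue  SQL_VARIANT  NULL
    , RunningValue SQL_VARIANT  NULL
    );
GO
-- Parts 2 and 3: a procedure (run from a daily job) that gathers the
-- current settings and removes anything older than a week.
CREATE PROCEDURE dbo.CaptureConfigSettings
AS
BEGIN
    -- Gather today's settings.
    INSERT dbo.ConfigHistory (ConfigName, ConfigValue, RunningValue)
    SELECT name, value, value_in_use
    FROM   sys.configurations;

    -- Purge history older than seven days.
    DELETE dbo.ConfigHistory
    WHERE  CaptureDate < DATEADD(DAY, -7, GETDATE());
END;
```

With a week of snapshots on the instance, answering “what changed?” after an alert is a simple comparison of two days’ rows.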
As I noted above, I wouldn’t rebuild a monitoring solution for overall checking of a database, but I might capture specific information using this process, with the table and procedures tailored to the specific information I care about.
Tim O’Reilly has been an advocate of open data access and standards for some time, especially from governments. He’s pushed for more interoperability and certainly more accessibility from all sorts of groups. He gave an interview earlier this year to LinuxVoice where he talked about a variety of things, but data was foremost on his mind.
There are some good thoughts, but I was pleased to see him looking for more software to adapt how it works with data rather than asking data to match the application. An interesting thought he had was in the area of control systems. Does every device or sensor need a separate application and way of interacting or should we have some guiding design principles that let similar applications work in similar ways with different data? That almost sounds like good data modeling and normalization principles in action, backing a data driven application.
I also liked his acknowledgment of the fact that so much of our data isn’t very portable. Between social networks and proprietary storage, it becomes hard to move data around. The pattern of downloading data, perhaps editing, perhaps not, and then uploading elsewhere works great with ETL tools, but it’s cumbersome for many users and applications to deal with. Building ways for us to interact with disparate data, allowing for queries to remote sources, sometimes transforming and copying data, all of this needs to be easier to implement and integrate inside software.
In some ways, I think the Web 3.0 model of our Internet interaction will take place around data. I think SSIS will continue to be one of the most valuable tools in SQL Server (along with lots of demand for SSIS work), but it still needs improvement and enhancement to catch up to other ETL tools. I really hope Microsoft believes this and continues to invest in the tool.
I also think that the data professionals who really stand out in the next decade will be those who learn to make good choices about when to use R, JSON, XML, Hadoop, or whatever non-RDBMS tool meets a need, and also when not to use these tools. The better data professionals will make good decisions about when to query data, and when to move it to another system.
It’s an exciting time to work with data as the opportunities and rewards continue to expand and grow. I look forward to what the future will bring us.