
Windows Phone – What Should Microsoft Do?

I’ve been intrigued by Windows Phone as a platform. I’ve never owned one, and while I considered getting one, the dearth of apps led me to Android instead. I tried to pare my needs down to a list of core apps, which ran to 12 or 15 at the time. However, 3 or 4 of those weren’t in the Windows App Store, and I decided against Windows Phone.

My daughter really wanted a Windows Phone and got one a few years ago. She loved it at first and almost made it a year before she got annoyed by missing apps and moved to Android (which she loves).

I have been hoping that WP would continue to improve, not only in its internal quality, but also in apps and users. I read a piece about the issues with Windows Phone (The End of Windows Phone?), and while I’m not sure I think it is dying, the author brings up some good points that I worry about.

Apps

Apps are critical. I know plenty of people think that a couple of calculator apps, a few password apps, and so on are good enough, and that people can simply choose from those. However, that misses the point.

People like diversity. They like choice. They like looking for apps. There are a few news sources I follow that publish new and cool apps, or apps that are free for a limited time, every week.

Every week I get a list of 5 or 10 apps, some new, some old. Periodically I’ll download one. Sometimes I’ll buy a paid version. The point is, I’m interacting with the app store and my phone, changing it to my liking on a regular basis.

My phone is something I’m always tweaking a bit, like my early PCs. However, Microsoft seems to want to treat the phone almost like a network router: something you set up, forget about, and just use. That’s not what’s going to grow the store.

I checked this week and there isn’t a native Starbucks app. That particular app has become something of a must-have for me, so no Windows Phone for me. There are other gaps like that, and I think that’s an issue.

As I’ve noted to many people before, I think the big failure from MS here is not building more apps itself. Certainly it takes third parties to provide some diversity, but spending $10mm on developer salaries to build apps, rather than on incentives, would be the better investment. Paying interns to work with Starbucks, MLB, and many other companies to build native apps would be a wise move.

Windows Everywhere

I had a Windows Mobile 6 phone. I was excited about the idea of Windows everywhere. Over time I realized that model doesn’t work. The way I work on a desktop or laptop, with its semi-designed interface (it’s more evolved than designed), isn’t how I want to work on a tablet or phone. It just isn’t. The idea of Windows everywhere, with apps running on all platforms without being rebuilt, doesn’t make sense. The buttons, flows, controls, actions, etc. of a desktop aren’t practical elsewhere.

The reverse is also true. The way I use a tablet doesn’t work on a desktop. I’d have hoped Windows 8 taught you that.

I really want my data everywhere, not my apps. Stop fighting this. Make native Office apps for iOS, Android, and WP8, separate from those on Windows. Don’t try to force one highly engineered solution to work everywhere. We saw that fail with CORBA architectures, with OS/2, and elsewhere. You’re better off worrying about data, not apps.

The Future

Windows 10 doesn’t really excite me on the desktop, except for one thing: copy/paste in the command prompt. That’s it. What else do I need Windows to do? A better Start menu? Please, I spend seconds a day in the Start menu. Even on a laptop, it’s very few interactions. Perhaps more on a tablet, but to get me to move to a Windows phone or tablet, I need something compelling over the iPad. What will Windows Phone 10 do?

Cortana is interesting, but I still hesitate to speak to my devices. I’m in too many public places, and it’s not an interaction I’m looking for. I’ve seen some people use speech-to-text heavily, and successfully, even in loud environments. However, it still feels silly to me, an interruption to life. Perhaps if Cortana does more reminders, prediction, and analysis of my interactions on the device, I’ll try it.

The movement toward reducing switching between apps is good, especially for messaging. The quick reply on iOS hasn’t caught on for me, for some reason, and switching apps is annoying. The link to my desktop makes some sense, but if that’s going to happen, then one thing had better be ready:

Undo.

I make mistakes all the time. I will continue to do so, and any actions I take better have an easy, quick, and intuitive way to "undo" things.

I don’t have great hopes for Windows Phone unless the app situation is resolved. If it is, then perhaps the bar is low enough that people will just switch and try a new OS on their phones. However I’m not sure any other features you could build into a phone would overcome the app deficiency.

Monitoring After Deployment

Deployments of database and application changes have tended to be stressful in my career. I’ve seen the actual deployment of changes fail, installation programs crash, the startup of an application error out, or the famous blue screen of death from a .DLL change. I’ve been most wary, and worried, when the changes were out of my control, and I had no chance to review things.

For most database upgrades, however, I’ve been able to examine scripts. Even when an installation program is making changes, I could run a test on a separate instance and use Trace to capture the SQL that would be executed. However even when the actual installation succeeds, what I’m often most worried about is a logic or performance problem that won’t reveal itself for hours, often not until the next business day when the system is under full load.
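
These days, you could do that kind of capture with Extended Events rather than Trace. Here’s a minimal sketch of the idea; the session name, database name, and file path are placeholders of my own, not anything prescribed:

CREATE EVENT SESSION CaptureInstallSQL ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text)
     WHERE sqlserver.database_name = N'InstallTest')
ADD TARGET package0.event_file
    (SET filename = N'C:\Temp\InstallCapture.xel');
GO
-- Start the session, run the installation against the test database,
-- then stop the session and read the .xel file to review every batch.
ALTER EVENT SESSION CaptureInstallSQL ON SERVER STATE = START;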

This week I wanted to know what formal (or informal) checks people might have in place to detect issues. I’d like to know whether you are actually monitoring systems in a scientific way that allows you to determine if a database deployment has gone awry.

What monitoring process do you use to determine whether a deployment is working?

One of Red Gate Software’s customers noted recently that they have a series of metrics being captured on their systems with SQL Monitor. They can compare the performance of two time periods, say the 4 hours before a deployment with the 4 hours after (or the same 4 hours the next day), to determine whether any issues have surfaced. That’s similar to what I’ve done in the past for performance using other tools, or my own metrics. I’ve often kept baselines of common queries run during the day, and I can compare those to post-deployment metrics to uncover potential issues.
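
If you want to roll your own version of that baseline, here’s a minimal sketch of the idea. The table is my own invention, and this is not how SQL Monitor works internally:

-- Aggregate query stats by query_hash; run once before the deployment
-- and again a few hours after, then compare reads and CPU per execution.
CREATE TABLE dbo.QueryStatsBaseline
(
    CaptureTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
    QueryHash binary(8) NOT NULL,
    ExecutionCount bigint NOT NULL,
    TotalWorkerTime bigint NOT NULL,
    TotalLogicalReads bigint NOT NULL
);
GO
INSERT INTO dbo.QueryStatsBaseline
    (QueryHash, ExecutionCount, TotalWorkerTime, TotalLogicalReads)
SELECT qs.query_hash,
       SUM(qs.execution_count),
       SUM(qs.total_worker_time),
       SUM(qs.total_logical_reads)
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash;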

Logical errors are more problematic, and I’ve often wanted to be sure I had a couple of backups stored, and at least one backup restored on a warm standby, in case we found information being incorrectly changed or calculated. Those are the worst errors, and since there is no way to predict how they might surface, I usually made sure that I and other DBA staff were on standby to perform some manual ETL to move data between databases in a hurry.
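
Taking that safety-net backup right before a deployment is cheap insurance. A minimal example, with a hypothetical database name and path:

-- COPY_ONLY leaves the regular backup chain undisturbed.
BACKUP DATABASE Sales
    TO DISK = N'D:\Backups\Sales_PreDeploy.bak'
    WITH COPY_ONLY, CHECKSUM, INIT;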

Monitoring is an important part of being a DBA. It’s not as critical as backups or security, but it’s not far behind in letting you know when something is not right. Let us know this week what tricks you might have to ease the worries you have after database changes.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

April Blogger Challenge 3 – Filtered Indexes

The April Blogger Challenge is from Ed Leighton-Dick and aimed at new bloggers, but anyone is welcome. I’m trying to motivate and cheer people on.

Why Filtered Indexes

I haven’t used filtered indexes much in my career, as they didn’t exist while I was a full-time SQL developer and DBA. They appeared in SQL Server 2008 and are a very useful feature in certain circumstances.

Imagine I have a simple table:

CREATE TABLE Users
(
    MyID int IDENTITY(1, 1),
    firstname varchar(250),
    lastname varchar(250),
    gender char(1),
    postalcode varchar(12),
    contactphone varchar(12),
    CONSTRAINT pkUsers PRIMARY KEY CLUSTERED (firstname, lastname)
);
GO

I want to add indexes for specific fields, for those times when I’m limiting searches on the table. For example, suppose that gender is an optional entry for users. I find that I often have NULL in this column, so I create a filtered index.

I’ll do that as follows:

CREATE INDEX Users_Gender_Filtered 
   ON Users (gender) 
   WHERE gender IS NOT NULL;

This is the same type of index statement we normally use, but with a WHERE clause added. This means that only those rows with a non-NULL value in the gender column will be included in the index.

Why Use Filtered Indexes?

There are some reasons given in the BOL article below, but the main reason to do this is performance. I could easily add an unfiltered index on gender, but if I had 1.5 million rows in my table and 1.4 million of them had NULL here, would I want to scan those 1.4 million rows when a query asks for a specific gender? I wouldn’t.
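
As a quick illustration, a query like this one against the sample table above is exactly what the filtered index serves; only the small set of non-NULL rows ever needs to be touched:

-- Only rows with a non-NULL gender are in the index, so this never
-- touches the 1.4 million NULL rows.
SELECT firstname, lastname
FROM dbo.Users
WHERE gender = 'F';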

Indexes take space, which means more space on disk, more backup space, and more time for backups. They also impact performance whenever we read the index or update a column used in it; each of those operations requires either a read or a write of the index. If I can reduce the reads and writes against indexes, I can improve performance dramatically.

I would urge you to look at filtered indexes whenever you have data where a large number of rows are not going to match a query. In these cases, you can use a filtered index that is highly selective.

References

A few places I used to research this post.

Quick and Easy Blogging

This post occurred to me while I was writing some code, and it was the first post I thought of for the challenge. However, I realized I could start with the other posts, so I wrote those first. This one took about 10 minutes: reading through the limits on filtered indexes, trying 4-5 experiments to get the code working, and writing the post.

Look for the other posts in the April challenge.

QA for Data

Here’s a typical DBA scenario. You’re at your desk, working on something, when you get a trouble ticket, phone call, or tap on your shoulder. There’s a problem in production and an order, or some other important piece of data, is incorrect. Can you fix it?

Many of us are happy to oblige. Often we know what needs to be changed and can write the INSERT/UPDATE/DELETE statement to correct the issue. Even if we have to dig in further, we can usually find the data in a table and determine what should be changed.

However, what do we do then?

If you are asked to make data changes to production, do you QA the script? Do you save the scripts in a VCS or a log? Does anyone else examine the data or code? Do you scan the script yourself, have SQL Server parse it, or bother to look for triggers that might fire? Do you bother to put the script in a transaction with error handling?

I have to say that in most of my career, I didn’t bother with any of that. I’d examine a script, perhaps have another DBA or developer look at it, but most of the time I’d run it in production if I thought it was ready. I did often wrap the code in a transaction, with a query to check the results, and an immediate rollback to prevent blocking. However if I thought the results looked good, I’d rerun the query without the transaction. Most of the time that worked well. A few triggers and fat fingers caused issues that I had to fix again, sometimes after quite a bit of work to correct my mistakes.
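
That pattern looks something like the sketch below. The table and values are hypothetical; the shape is the point:

BEGIN TRANSACTION;

-- The fix itself (hypothetical table and values).
UPDATE dbo.Orders
SET OrderStatus = 'Shipped'
WHERE OrderID = 12345;

-- Eyeball the result before deciding.
SELECT OrderID, OrderStatus
FROM dbo.Orders
WHERE OrderID = 12345;

-- Roll back immediately to avoid holding locks; if the SELECT looked
-- right, rerun the UPDATE on its own (or change this to a COMMIT).
ROLLBACK TRANSACTION;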

It’s hard to prevent data change errors, especially if it’s something we do often, or we work in a small environment with limited staff. Ultimately, we might not even care because if we update the data incorrectly, we can run another update to fix the new issue. That’s not necessarily what most managers would want to hear, but it’s reality.

The one thing I have insisted upon in my environments is logging every query run: a simple cut and paste with a timestamp. That way, if things were done incorrectly, at least I know what was done.
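
Even a bare-bones table works for that log. A minimal sketch, with names of my own invention:

CREATE TABLE dbo.ProductionChangeLog
(
    LogID int IDENTITY(1, 1) CONSTRAINT pkProductionChangeLog PRIMARY KEY,
    RunDate datetime2 NOT NULL DEFAULT SYSDATETIME(),
    RunBy sysname NOT NULL DEFAULT SUSER_SNAME(),
    ScriptText nvarchar(max) NOT NULL
);
GO
-- Paste in whatever you actually ran, right after you run it.
INSERT INTO dbo.ProductionChangeLog (ScriptText)
VALUES (N'UPDATE dbo.Orders SET OrderStatus = ''Shipped'' WHERE OrderID = 12345;');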

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.6MB) podcast or subscribe to the feed at iTunes and LibSyn.

SQL Prompt – Killing the Detail Box

I love SQL Prompt and use it constantly. When I don’t have it, my code writing process slows to spurts and stutters, and it’s painful.

However one of the defaults for SQL Prompt is to pop up details about objects when I’m typing, as shown below:

[Image prompt2: the SQL Prompt detail pop-up showing an object definition]

I usually want the object name to appear, but I hate seeing the definitions. That’s rarely useful to me.

I can turn that off with a quick setting change. If I choose SQL Prompt from the menu and select Options, I get a dialog with a number of sections on the left. The one that opens first should be "Behavior", and it has a checkbox for "Display Object definitions" (see the image below).

[Image prompt3: the SQL Prompt Options dialog, Behavior section]

If I uncheck this, then I still get the list of objects, but no definition. The view in this image is a much easier way for me to write T-SQL code.

[Image prompt4: the suggestion list without object definitions]

You can see this noted in the SQL Prompt docs.

Have you tried DLM Dashboard?

DLM Dashboard is a new product from Redgate Software that’s free. Free as in beer, which should be attractive to many of you. The product is in Beta, but it’s fairly stable and has worked well for me. It’s still maturing, and we’re certainly looking for feedback on what works and what doesn’t.

What Does DLM Dashboard Do?

There are a couple of things DLM Dashboard does that I think many of you are interested in. Both of these promote DevOps collaboration, and I’d like to see these concepts take hold in most organizations.

  1. Track changes to Production
  2. See changes coming through Development

There’s more to the dashboard, but essentially it works by tracking the version of the schema in all environments. If anything changes, then you get an alert on the dashboard (and email notification if you need it).

That’s it. The versions are whatever you call them. You can set up versions as numbers (v4.2, 4.21, 4.22, 5.3, etc.) or by deployment dates (Mar 3, 2015; Apr 6, 2015; etc.). Whatever works for you.

The Dashboard lets you know if it detects a change and the version is incorrect, or if it detects a known version that’s been deployed to a new environment.

I’ll describe it more in other posts, but for now, check it out. It’s free, and it’s worth a few minutes of your time to try.

Download DLM Dashboard

T-SQL Tuesday #65 – Learning Computed Columns for XML

It’s T-SQL Tuesday time again, and this month the invite comes from Mike Donnelly (@sqlmd). The topic is Teach Something New and it’s a good one.

I try to learn something new all the time, and this post is about something I ran into recently with a user submission that I needed to check. I had to dig in a bit to see how XML works in computed columns, and I decided to write about it for this month’s entry.

T-SQL Tuesday is a monthly event on the second Tuesday of each month. There’s a new invite each month, and you need to watch Twitter and the #tsql2sday tag to find the next one. I keep a running list of past topics, and feel free to write on any of them and post something.

Learn Something New

I haven’t done a lot of work with computed columns or XML in my career, at least not in SQL Server. I do remember using VB to traverse XML documents and pull out nodes, but that was years ago.

Recently I had to verify how XML computed columns work and that required me to dig in a bit to the rules for both. This post looks at the way in which I learned how to return part of an XML column from a table.

Imagine that we have a table storing some XML data inside it. For my demo, I have a table storing profile information for users. I’ve actually seen a table like this, grabbing information for a website and then shredding it later if there are changes needed in the database. Here’s the setup of the table with a row of data inserted.

CREATE TABLE UserProfile
(
    UserID INT IDENTITY(1, 1),
    profile XML
);
GO

INSERT INTO dbo.UserProfile ( profile )
VALUES ( N'<user>
  <username>way0utwest</username>
  <history>
    <pageurl>/default</pageurl>
    <pageurl>/scripts/</pageurl>
    <pageurl>/scripts/id/2433</pageurl>
  </history>
  <points>
    <forums>32432</forums>
    <qotd>1123</qotd>
  </points>
</user>' );

If I want the username from this table, I can use this query:

SELECT profile.value('(/user/username)[1]', 'varchar(max)')
FROM dbo.UserProfile AS up;

That will return this data:

[Image xmlshred_a: query results showing the username]

However, suppose I don’t want to use that code in all my queries in the application. Certainly it’s not bad for stored procedures, but I don’t want to repeat that over and over either.

Instead I want to use a computed column. I’d think that we could do something like this:

ALTER TABLE dbo.UserProfile ADD Username AS (profile.value('(/user/username)[1]', 'varchar(max)'));

However that doesn’t work. I get this error:

Msg 435, Level 16, State 16, Line 1

Xml data type methods are not supported in computed column definitions. Create a scalar user-defined function to wrap the method invocation. The error occurred at column "Username2", table "UserProfile", in the ALTER TABLE statement.

I haven’t seen this restriction documented in Books Online generally, but there is a specific document on using XML in computed columns, which gives a workaround: use a user-defined function. I need to build a function around my query:

CREATE FUNCTION UserProfile_XML_UserName ( @xml xml )
RETURNS varchar(200)
AS
BEGIN
    RETURN @xml.value('(/user/username)[1]', 'varchar(max)')
END;
GO

Now I can use my function in the computed column:

ALTER TABLE dbo.UserProfile ADD Username AS (dbo.UserProfile_XML_UserName(profile));

Now when I query the table, I have a new column with the username:

SELECT TOP 10 * FROM dbo.UserProfile AS up;

Here are my results:

[Image xmlshred_b: query results with the computed Username column]

That’s what I learned in the last month in working with T-SQL. This came from a user submission, and it required me to dig into BOL and find out how I could make this work.

References

Here are the references I used to learn about this:

The Opportunistic Hacker

You’re sitting in Starbucks, working. You need to use the restroom, so you ask the hip, cool-looking person coding next to you, with whom you’ve chatted for the last hour about C# and SQL, to watch your laptop. You leave and come back to an empty table.

This was a hacker, coding away at their side job to pay the bills in between causing mayhem. They walked away with your laptop, not knowing whether you work for a bank or a soy milk manufacturer. The hacker won’t know if you have a copy of the production database on your laptop or simply a few .ASPX files that are useless without a VPN connection to your office. Hopefully they can’t access other machines from yours.

This is a crime of opportunity, and it happens regularly. Criminals steal laptops from coffee shops, airports, cars, and more. Often just because they can. Some may just wipe the device and sell it, but others may spend some time poking around to see if there is any value in the data on the machine. Or on a connected machine.

Many of us have far too much data on our machines, with many, many saved passwords and connections. We have copies of code, databases, or backups, and losing our machine to a random criminal might be just as damaging to our organization as losing it to a hacker that targets us.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.9MB) podcast or subscribe to the feed at iTunes and LibSyn.

Pre-built or Ad Hoc

One of the advantages of NoSQL databases is that the schema and organization of data are very flexible. The various types of databases usually allow the schema or organization of data to vary across entries. I hesitate to call them rows, but essentially each time you add data to a store, you can alter the format of the data being inserted.

For relational database professionals, this seems like a recipe for disaster, with entirely too much data being captured in an unorganized fashion. At some point a user will want this data returned in a report format, which almost always means rows and columns of related data, even behind the scenes of the incredible visualizations that appear in modern dashboards.

I had someone note recently that their users don’t want to write ad hoc queries or try to discern the meaning of varying structures of information. They want pre-built structures they can count on and use reliably to answer questions. I suspect many users don’t want to decode the meaning of structures that change, despite the fact that so many users want to reformat and reshape data in Excel. Those of you who have to re-import some of these spreadsheets know just how unstructured a set of rows and columns can become.

I really think that it is important that structures of data be decided upon and ordered in a known way so that users can easily understand the meaning behind the data. However we are gathering more and more data in new ways, from new sources, and we don’t have consistent ways of recording that information. That will continue in the future, and I do think that learning how to access new sources, like Hadoop, and present that data back to users in a familiar format will become a way to show you are a valuable resource for your organization.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.

Hardware Issues

My backup drive failed last night. I was struggling with a few VM issues, and when I finally resolved some of them, I went to back up the VM in case I hit more. I plugged the external 2.5” SSD into my laptop and got nothing. No response, no new drive in Explorer.

That’s not what I want to see, though I’m glad it was a backup drive, and not one I needed for presenting. Not much I could do at night, on the road, but in the morning I resolved to get a new one.

I did have a California geek moment, heading to Fry’s for the first time to replace my drive. I’ve read and heard about the store for years, and I’ve passed its locations in Northern California, but never gone in.

[Photo: Apr 10, 8:33 AM]

It was a bit of a mistake, since I was late getting back to our Red Gate event. I got a little entranced with the various displays and options in there. It’s like a Best Buy++, combined with a Radio Shack and more. Microcenter in Denver is similar, but Fry’s has more.

I decided to go mSATA rather than 2.5”, mostly for space and weight. It’s amazing to me how small things have gotten. I got the Samsung 840 EVO 500GB drive and a small case.

[Photo: Apr 10, 11:35 AM]

I carry a screwdriver, but the case came with a tiny one that I used to mount the mSATA drive. How small is this? Small.

[Photo: Apr 10, 2:08 PM]

The picture above shows my hand, a 16.9 oz water bottle, my Arc mouse, and the 500GB mSATA drive in an enclosure.

500GB!

I could probably carry 4TB worth of these in one hand. At $249 for the drive and $20 for the case, it’s amazing. I’ve got a nice backup drive for multiple copies of my VMs, which is very handy and cost effective.

If you haven’t tried mSATA, take a look. I’m not sure I’d look at any other format for portable storage.
