Writing Better Queries Presentation Materials

I delivered a webcast for the PASS DBA Virtual Chapter yesterday. It was called Tips & Tricks for Writing Better Queries. I really appreciated everyone who made the time to attend. I hope you found it to be time well spent.

As promised, here is a copy of the slide deck and demonstration code I used during the session. Please let me know if you have any questions.

The presentation was recorded and should be available on the PASS web site within a few days.

Related Posts

Removing Time From A Datetime Value

How much time would a DateTime have if a DateTime had no time?

Ok, forgive the bad pun. But the point remains a valid one: how can we efficiently strip off the time portion of a DateTime value in SQL Server?

I’ve seen several methods proffered on the Internet. Some are quite clever; others are not so clever. The latter includes converting the DateTime to a VarChar and using string manipulation techniques to lop off the time. That’s not going to perform well.

So what is the best way? Let’s find out.

Three Methods

I’m going to narrow down the field to only three methods for manageability. There may be other methods out there. If you know of one that’s not covered here and that you think may be better, please let me know.

To test each case, I’ll employ the conversion 10,000,000 times to see which finishes the quickest. I’ll do this with the following code.

SET NOCOUNT ON;

DECLARE @cnt INT = 1;
DECLARE @start DATETIME = GETDATE();
DECLARE @var DATETIME;

-- Perform the conversion repeatedly, timing the whole loop
WHILE @cnt < 10000000
BEGIN
   SET @var = [time removal option]  -- substitute one of the techniques below
   SET @cnt += 1;
END

SELECT @var;

-- Elapsed time in milliseconds
PRINT CAST(DATEDIFF(millisecond, @start, GETDATE()) AS VARCHAR(20));

Cast To A Float And Back

First, let’s look at a technique that casts the DateTime to a Float, gets the floor of the resulting value, and then casts it back to a DateTime.

CAST(FLOOR(CAST(GETDATE() AS FLOAT)) AS DATETIME);

This option finished in 16,256 milliseconds.

Adding A Date

The next option uses the DATEDIFF function to calculate the number of days since day zero (January 1, 1900) and then uses the DATEADD function to convert that day count back into a DateTime value.

DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()));

This completed in 15,593 milliseconds, slightly better than the first option.

Converting To A Char And Back

Finally, let’s consider a technique that converts the DateTime to a Char of a specified format and then casts it back to a DateTime.

CAST(CONVERT(CHAR(11), GETDATE(), 113) AS DATETIME);

As you may have expected, the conversion to a Char significantly slowed the process down. In fact, coming in at 32,698 milliseconds, it took over twice as long as the second method.

Say, Do You Have The Time?

Of the three methods I’ve considered, the second method, which uses DATEDIFF to count the number of days since day zero, slightly edged out the first. Both significantly beat converting the DateTime to a Char string.

So, do you have a favorite method? If so, plug it into the test script posted above and see how it performs. Let me know how it does.

When Were The Statistics In SQL Server Last Updated?

When SQL Server receives a new query, the Query Optimizer undertakes the task of identifying a good plan to resolve the query. It considers a number of different factors as it analyzes the query and maps out a way in which to retrieve the information requested. Dr. DeWitt did an amazing job of explaining this process in understandable terms at the 2010 PASS Community Summit.

Whether or not the query optimizer deems an index to be useful in resolving a query largely depends on the information contained in the statistics for that index.

If the statistics are outdated and do not accurately represent the distribution of values in the table, the query optimizer may not produce an optimal plan for resolving the query. Misleading statistics may result in the optimizer not using an index when it should, or using an index when it would be more efficient to scan the table. Statistics also influence the join order and the physical join operators selected.

That’s why it is crucial that the statistics be updated regularly. How often? Well, that, of course, is going to depend on the data being stored, the frequency of updates, inserts, and deletes to the table, etc. It could be nightly; it could be monthly. It just depends.
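When you do decide to refresh statistics, the basic commands are simple. Here’s a quick sketch; the table name dbo.SalesOrder is just a placeholder for one of your own tables.

-- Refresh all statistics on a single table, sampling every row
-- (dbo.SalesOrder is a placeholder name)
UPDATE STATISTICS dbo.SalesOrder WITH FULLSCAN;

-- Or refresh statistics for every user table in the database,
-- using the default sampling rate
EXEC sp_updatestats;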

So, how can you tell when the statistics were last updated for an index?

The following query demonstrates this in SQL Server 2005/2008. It makes use of the sys.indexes and sys.tables catalog views, along with the STATS_DATE() function, to retrieve the date the statistics were last updated for each index on every user table in the current database.

SELECT
   t.name AS Table_Name
   ,i.name AS Index_Name
   ,i.type_desc AS Index_Type
   ,STATS_DATE(i.object_id,i.index_id) AS Date_Updated
FROM
   sys.indexes i JOIN
   sys.tables t ON t.object_id = i.object_id
WHERE
   i.type > 0
ORDER BY
   t.name ASC
   ,i.type_desc ASC
   ,i.name ASC

In SQL Server 2000, a similar query can be run.

SELECT
   o.name AS Table_Name
   ,i.name AS Index_Name
   ,STATS_DATE(o.id,i.indid) AS Date_Updated
FROM
   sysobjects o JOIN
   sysindexes i ON i.id = o.id
WHERE
   o.xtype = 'U' AND
   i.name IS NOT NULL
ORDER BY
   o.name ASC
   ,i.name ASC

These queries can be useful while troubleshooting and diagnosing performance-related issues. Sometimes, it’s as simple as outdated statistics.
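Before wrapping up, here’s one variation on the 2005/2008 query above that I find handy while troubleshooting: it narrows the results to statistics that haven’t been refreshed in the last 30 days. The 30-day threshold is arbitrary; adjust it to whatever makes sense for your environment.

SELECT
   t.name AS Table_Name
   ,i.name AS Index_Name
   ,STATS_DATE(i.object_id,i.index_id) AS Date_Updated
FROM
   sys.indexes i JOIN
   sys.tables t ON t.object_id = i.object_id
WHERE
   i.type > 0 AND
   STATS_DATE(i.object_id,i.index_id) < DATEADD(dd, -30, GETDATE())
ORDER BY
   Date_Updated ASC
   ,t.name ASC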

Lightning Talk On Covering Indexes Ka-chow! Ka-chow!

“I’m a precision instrument of speed and aerodynamics.” That’s what Lightning McQueen tells Mater when they first meet in Cars. The ever-astute Mater replies, "You hurt your what?" I love that movie!

I often get a similar response when I ask people about covering indexes. It’s not that they aren’t bright, knowledgeable professionals. It just seems as though this topic isn’t very well known. And that’s a shame, because queries that can be resolved using only a nonclustered index are some of the fastest queries you can run in SQL Server. I explain and demonstrate why in a Simple-Talk article.
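For the uninitiated, here’s a minimal sketch of the idea. The table, column, and index names below are made up purely for illustration; the key point is that the INCLUDE clause lets the nonclustered index carry every column the query needs.

-- Hypothetical table and index, for illustration only
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

-- Because the index contains every column this query references,
-- SQL Server can answer it from the index alone; it never has to
-- touch the clustered index or the heap.
SELECT CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;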

I’ll also take up this topic in a Lightning Talk session at the 2010 PASS Summit. We’ll go from zero to covering indexes in 5 minutes.

If you’re attending the Summit, join me for this Lightning Talk. I’ll be the one who’s downing seven espressos right before the session so I can be sure to “cover” all the material in 5 minutes.

SQL Server Locking & Blocking At DevLink 2010

Wow! Thanks to everyone who came to my SQL Server Locking & Blocking session at devLINK this morning! I’m completely humbled by the turnout!

Thanks for hanging around until the end, too! At the beginning of the session, I said we’d have 45 minutes to cover 90 minutes’ worth of material. Even so, I had it in my mind that the session would last until noon. So when I finished at 11:55am, I thought I had finished 5 minutes early. I didn’t realize until later that I was actually 10 minutes late! I hope it was worth your while.

As promised, here is a link to the presentation slide deck and the demo scripts (Query 1, Query 2). Feel free to ask any follow up questions in the comments area below. And I’d appreciate your feedback on the session in SpeakerRate.

Lots More SQL To Come

If you’re interested in more topics and sessions on SQL Server, the Nashville SQL Server User Group is hosting SQLSaturday #51 in just a couple of weeks (August 21, 2010). There are over 30 great sessions presented by SQL Server MVPs, authors, and industry experts from around the country. And it’s completely free. Space is limited, so register soon.

Thanks again and I hope to see you at SQLSaturday #51!

Writing Better Queries in Steel City

T-SQL is a very forgiving language. As long as you have the basic syntax of the statement correct, SQL Server assumes you know what you’re doing and it will gladly do your bidding for you.

But anyone who has written queries for very long will tell you that not all queries are created equal. Some return with results in under a second; we like those almost as much as we like bacon. Almost. Some take a few minutes to mull over the data and get back to you; that’s like fake bacon. And alas, some queries will run for eternity minus one.

So what’s the difference? And how can we go for the bacon and avoid the fake bacon and eternity minus one queries?

That’s the topic of a presentation I’m delivering at the Steel City SQL Server User Group meeting next week. If you’re in the Central Alabama area, I hope you’ll come out and join me.

Update:
The meeting starts at 6:00pm on Tuesday, July 20th, 2010 at
601 Beacon Pkwy West, Suite 106
Birmingham, AL 35209.

When you enter the office park, there’s an office building immediately on the right. Follow the short curve past it to the next building; that’s the New Horizons building.

Partition Alignment and Late Night Infomercials

Late night infomercials make some amazing claims. “If you order now you too can cut through a cinderblock wall and then slice the most delicate of tomatoes. Add a side of bacon and you’ve got a tasty summertime treat that’s sure to please guests of all ages.”

Ahhh, yeah. Right. Except for the part about adding a side of bacon, I don’t believe it.

There’s an old axiom: “If something sounds too good to be true, it probably is.” So it is with those infomercials. It’s Buyer Beware.

But axioms were made to be broken; they can be wrong.

SQLCat White Paper on Partition Alignment

Such is the case with a white paper from the SQLCat team. In it, the authors describe how properly aligning disk partitions can help servers realize up to a 30% boost in disk I/O performance. The relatively short paper discusses the theory behind the findings and gives practical steps to ensure your partitions are aligned.

A 30% boost in performance at no cost to you? Sound too good to be true? It’s not.

So what are you waiting for? Download your white paper today for the low, low introductory price of just the time it takes you to read it. Click now!

SQL Saturday #29 Presentations & Twittering

presentations_2010-03-26.jpg

Tomorrow, I’m presenting a couple of sessions at SQLSaturday #29 in Birmingham, Alabama. Here are the presentation materials I’ll use.

Reaching Out to the Twitterverse

While I was in Seattle last fall for the PASS Community Summit, I saw a couple of presenters effectively reach out to the Twitter community while giving their sessions. They had someone tweet important points for them as they presented their session. Then, as the Twitterverse posted questions in response, the designated liaison would voice them and tweet the presenter’s answers. It seemed to work out really well.

So, I’m considering trying that for tomorrow’s sessions – if I can find an attendee who doesn’t mind acting as the liaison.

Question: Would that be of interest to you? Would you like to follow some of the sessions for tomorrow’s SQLSaturday via Twitter?

I’ll let you know what the Twitter tag will be if it works out.

I hope to see you there, or at least hear from you via Twitter during the session.

SQL Server Locking & Blocking Scripts


A couple of weeks ago, I gave a presentation to the PASS DBA Virtual Chapter on SQL Server Locking & Blocking Made Simple.  The original post had a link to the presentation materials, but I neglected to attach the demo scripts. So, here they are:

To review them, click on the link and open the Microsoft Word document. (For some reason, my blog site won’t allow me to post text files.) Copy and paste the contents of each file into a new query window in SQL Server Management Studio and experiment all you wish. You’ll need the AdventureWorks database; it’s available for download on CodePlex.

If you have any questions or comments, feel free to post them below in the comments section.

Viewing Missing Indexes in Management Studio 2005

phonebook_2010_02_22.jpg

If you’ve used SQL Server 2008 Management Studio (SSMS 2008), you may have noticed that Microsoft added a pretty neat little feature for viewing Execution Plans: the Missing Index message. This subtle message may appear when you’re looking at either the Actual or Estimated Query Execution Plan in graphical mode. It can be seen written in green just below the query text.

showplan_xml_2010_02_22_b.jpg

The message tells you that the optimizer would have been able to resolve the query faster if only it had another index. It even tells you what that index should be. I’ve heeded its advice on more than one occasion with very good results.

But I’m Running SQL Server 2005

Unfortunately, not everyone has access to SSMS 2008. And Management Studio in SQL Server 2005 (SSMS 2005) doesn’t provide this kind of information, at least not in an automatic, graphical way. The image below shows the same query as run in SSMS 2005.

showplan_xml_2010_02_22_c.jpg

You can, however, retrieve the missing index information for a query another way – viewing the Execution Plan as XML.

Viewing the Execution Plan as XML

To view the Execution Plan in XML, open a query window in SSMS 2005 and use the SHOWPLAN_XML option as shown below.

SET SHOWPLAN_XML ON;

Execute the statement to set the option. Setting the SHOWPLAN_XML option causes SQL Server to return the estimated execution plan in XML for subsequent queries on this connection instead of actually executing the queries. For more information about the SET SHOWPLAN_XML statement, visit Books Online.
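Putting it all together, a session might look something like the sketch below. The SELECT is just a stand-in (here against the AdventureWorks Person.Contact table); substitute whatever query you want to investigate. Note that SET SHOWPLAN_XML must be the only statement in its batch, hence the GO separators.

SET SHOWPLAN_XML ON;
GO

-- Stand-in query; replace with the query you want to analyze
SELECT FirstName, LastName, Phone
FROM Person.Contact
WHERE Phone = '398-555-0132';
GO

-- Turn the option back off so later queries actually execute
SET SHOWPLAN_XML OFF;
GO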

Once the SHOWPLAN_XML option has been set, “execute” the query whose execution plan you wish to analyze. You’ll see something similar to the following image. Note that SHOWPLAN_XML works best when the output is set to “Results to Grid.”

showplan_xml_2010_02_22_d.jpg

Double-click the XML hyperlink to open the execution plan. Scroll down until you see (or search for) the MissingIndexes node. In the example below, there’s a single missing index identified, an index on the phone number. In this case, the optimizer is suggesting that a simple nonclustered index would improve performance. The Impact attribute estimates the improvement that adding the index would have. The optimizer may also recommend nonclustered indexes with included columns and composite indexes.

showplan_xml_2010_02_22_e.jpg
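If you decide to act on the suggestion, turning it into a real index is a one-liner. The names below are illustrative only; use the table and columns reported in the MissingIndexes node.

-- Illustrative only: a simple nonclustered index on the phone number column
CREATE NONCLUSTERED INDEX IX_Contact_Phone
ON Person.Contact (Phone);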

Use Your Head

The Missing Index feature in Management Studio can be really nice. It’s another tool available to us as database professionals to help us do our job. However, as helpful as it sometimes can be, it’s no substitute for using your own skills and knowledge when considering which indexes to create, or not create.

Consider what the message recommends in a broader context. What effect will the new index have on inserts and updates? On maintenance plans? Then decide if the index is worthwhile.

Additional Information

To learn more about the Missing Index feature, visit some of these sites.

Have you used this feature? Have you found it useful? I’d like to hear your experiences.
