Tag Archives: 70-433

Exam Eve: Pre-Test Thoughts on the 70-433

It’s the eve of my 70-433 exam and I’m not feeling terribly confident about my chances of passing the test tomorrow.

I’m sure a portion of my self-doubt is just normal pre-test jitters. It’s been nearly four years since I took a Microsoft exam and I recall being convinced I had failed the last one (70-290 – Windows Server 2003 Administration) until the nice little screen popped up and said that not only had I passed, but I’d only missed two questions!

The other portion of my anxiety is that I really haven’t focused on studying as much as I had hoped – despite setting a clear goal and mapping out time to accomplish it. I’m really good at procrastinating, and in the midst of these few weeks of studying I also made a quilt, painted my home study and started playing a few new video games – I’m sure these were in some ways just to avoid studying.

Here are my pre-test thoughts on the test objectives:

Implementing Tables and Views (14 percent)
I feel pretty confident about this objective. I’m strong in creating and altering tables, views and indexes. I’m not quite as confident about spatial data types or partitioning, and I’m worried there will be nitpicky questions where they put some command in a statement that isn’t compatible because the table doesn’t have a PK, and I’ll miss something little like that.

Implementing Programming Objects (16 percent)
Overall, I’m not as confident on this one. I’m good with stored procedures, transactions and error handling. Not quite as confident about CLR and UDFs.

Working with Query Fundamentals (21 percent)
I’m probably the most confident about this section – this is what I do all the time, and the new features I learned about (MERGE, OUTPUT, GROUPING SETS, PATINDEX, etc.) are things I could find practical applications for, so I really got into trying them out. Unless there are some nitpicky trick questions, I think this section will go well.

Applying Additional Query Techniques (15 percent)
I have medium confidence here. I’m not too worried about subqueries, controlling execution plans or the collation stuff, but even though I’ve done a lot of practicing, CTEs (especially recursive ones) and the ranking functions might stump me.

Working with Additional SQL Server Components (11 percent)
I could set up Database Mail and get it to work, but I didn’t memorize all the ins and outs. Full-text search seems pretty straightforward, but I didn’t spend a lot of time on it and didn’t actually try it out at all. Not feeling great about Service Broker or change tracking either. This section probably won’t go well.

Working with XML Data (12 percent)
Ugh. I dread this section. I actually had a real use for some of the XML features in SQL Server, but I couldn’t get them to do what I wanted, and I found the syntax – especially OPENXML and XQuery – so painful that I think it would be easier to write C# code to convert XML data to relational data. I am comfortable with XML in general, the FOR XML options, basic XPath and schemas. I doubt that will be enough knowledge, though.

Gathering Performance Information (11 percent)
This section is also something I’m pretty familiar with. Although I don’t use them a lot in my current role, I spent several years needing to regularly use Profiler or read execution plans to diagnose issues. I’m not as experienced with the Database Engine Tuning Advisor (but it’s pretty much just a wizard from what I can tell), and I can figure out which DMV or catalog view to use when I need it, but I certainly don’t have them memorized. Overall, I’m fairly confident with this section though.

The exam is at 10am tomorrow. I’m a fast test taker – either I know it or I don’t – but I’m going to try to go slow and really use all of the time I can to go over things in detail. I should have a verdict by 1pm.

MERGE and OUTPUT in SQL Server

I spent a decent chunk of this afternoon studying for the upcoming 70-433 exam (rescheduled for December 20th). Today I focused primarily on MERGE and OUTPUT clauses.

MERGE is pretty interesting – it compares the contents of two tables and updates one based on the content of the other. MERGE is new to SQL Server in 2008. OUTPUT is also interesting, and I wish I’d known about it previously as it would’ve been very handy. OUTPUT lets you see the changes you made and (optionally) store them! OUTPUT has been around in SQL Server since 2005.

I went through a tutorial on MERGE and then read up on both MERGE and OUTPUT on Books Online. Then, I closed my browser and tested myself to see what I’d learned.

Here’s what I came up with.

/* I'm opening a store that sells only things I like */

CREATE SCHEMA store
GO
CREATE TABLE store.products
( product_id INT
, product_name VARCHAR(150)
, product_desc VARCHAR(400)
, product_price MONEY)
GO
INSERT INTO store.products VALUES
(1, 'Bacon', 'Tasty Bacon', 5.49)
, (2, 'Cheese', 'Sharp Cheddar', 4.79)
, (3, 'Beer', 'Sierra Nevada Pale Ale', 8.49)
GO
SELECT * FROM store.products
GO

/* We're making some changes to our products and have another table with updates to our products list. */

CREATE TABLE store.product_updates
( product_id INT
, product_name VARCHAR(150)
, product_desc VARCHAR(400)
, product_price MONEY)
GO
INSERT INTO store.product_updates VALUES
(1, 'Bacon (Original)', 'Tasty Bacon, original recipe', 5.49)
, (3, 'Beer - IPA', 'Sierra Nevada Pale Ale', 8.49)
, (4, 'Bacon - Peppered', 'Tasty Pepper Bacon (new!)', 5.79)
, (5, 'Beer - Cheap', 'Yuengling Lager', 4.99)
GO
SELECT * FROM store.product_updates;
SELECT * FROM store.products;
GO

/* Use a Merge Statement to update our products based on the information in the product_updates table
NOTE: Not all products are listed in our source table, only products that have been changed */

MERGE store.products AS TARGET
USING store.product_updates AS SOURCE
ON (TARGET.product_id = SOURCE.product_id)
WHEN MATCHED
/* Update matched products, but only those where something actually changed */
AND ( TARGET.product_name <> SOURCE.product_name
OR TARGET.product_desc <> SOURCE.product_desc
OR TARGET.product_price <> SOURCE.product_price ) THEN
UPDATE SET TARGET.product_name = SOURCE.product_name
, TARGET.product_desc = SOURCE.product_desc
, TARGET.product_price = SOURCE.product_price
/* WHEN NOT MATCHED BY SOURCE THEN
The product has no update, don't do anything
If we wanted to, we could delete rows that weren't found in the updates table */
WHEN NOT MATCHED BY TARGET THEN
/* This must be a new product, insert it! */
INSERT (product_id, product_name, product_desc, product_price)
VALUES (SOURCE.product_id, SOURCE.product_name, SOURCE.product_desc, SOURCE.product_price);
GO
SELECT * FROM store.products;
GO

Pretty sweet! Our products table is now updated with the appropriate changes – products 1 and 3 picked up their updated names, product 2 was left alone, and products 4 and 5 were inserted as new rows!

Keeping a history of your changes could be very useful. In this case, we could use them to track all sorts of things – like how sales compared when prices changed, etc. In my daily life, it would be nice to have the changes recorded as a backup in case I needed to undo some particular change and didn’t realize it for a few hours or days. OUTPUT comes in handy here. It’s not limited to being used with MERGE either – it can be used with INSERT, UPDATE or DELETE. It can just display the changes or it can write them out to a table.
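For instance, here’s a minimal sketch (my own, not from Books Online) of OUTPUT on a plain UPDATE against the same store.products table, just displaying the changes instead of storing them:

/* OUTPUT on a plain UPDATE - show the old and new price side by side */
UPDATE store.products
SET product_price = product_price + 0.50
OUTPUT DELETED.product_id
, DELETED.product_price AS old_price --value before the update
, INSERTED.product_price AS new_price --value after the update
WHERE product_id = 1;
GO

(The reset below wipes this price change out along with the MERGE changes.)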

/* What if we wanted to keep a record of changes? Use an OUTPUT clause with the MERGE statement! */

-- undo changes from previous merge
DELETE FROM store.products;
GO
INSERT INTO store.products VALUES
(1, 'Bacon', 'Tasty Bacon', 5.49)
, (2, 'Cheese', 'Sharp Cheddar', 4.79)
, (3, 'Beer', 'Sierra Nevada Pale Ale', 8.49)
GO

/* Create a table to store the historical changes */
CREATE TABLE store.product_history
(change_date DATETIME2 --when change occurred
, change_type VARCHAR(100) --insert, update or delete
, old_product_id INT --beginning product_id
, old_product_name VARCHAR(150) --beginning name
, old_product_desc VARCHAR(400) --beginning description
, old_product_price MONEY --beginning price
, new_product_id INT --new product_id
, new_product_name VARCHAR(150) --new name
, new_product_desc VARCHAR(400) --new description
, new_product_price MONEY) --new price
GO

/* Same MERGE statement with an OUTPUT clause */

MERGE store.products AS TARGET
USING store.product_updates AS SOURCE
ON (TARGET.product_id = SOURCE.product_id)
WHEN MATCHED
/* Update matched products, but only those where something actually changed */
AND ( TARGET.product_name <> SOURCE.product_name
OR TARGET.product_desc <> SOURCE.product_desc
OR TARGET.product_price <> SOURCE.product_price ) THEN
UPDATE SET TARGET.product_name = SOURCE.product_name
, TARGET.product_desc = SOURCE.product_desc
, TARGET.product_price = SOURCE.product_price
/* WHEN NOT MATCHED BY SOURCE THEN
The product has no update, don't do anything
If we wanted to, we could delete rows that weren't found in the updates table */
WHEN NOT MATCHED BY TARGET THEN
/* This must be a new product, insert it! */
INSERT (product_id, product_name, product_desc, product_price)
VALUES (SOURCE.product_id, SOURCE.product_name, SOURCE.product_desc, SOURCE.product_price)
OUTPUT
GETDATE() --date of change
, $action --type of change
, DELETED.* --old stuff
, INSERTED.* --new stuff
INTO store.product_history;
GO
SELECT * FROM store.product_history;

On a side note, I found that Oracle has a MERGE statement too. Since MERGE is a combination of an UPDATE and an INSERT, it’s often nicknamed UPSERT! That sounds way more fun than MERGE.

Trusting The Material

My primary resource in studying for the 70-433 exam is Microsoft’s Self-Paced Training Kit for MCTS Exam 70-433. While I understand that mistakes will happen, I was quite disappointed to find that their description of GROUP BY in relation to ROLLUP and CUBE was actually wrong!

The topic of ROLLUP and CUBE in addition to GROUP BY can get a bit confusing, I’ll admit, but it was the GROUP BY portion that they got wrong in the book!

The book gives the following query (for AdventureWorks2008) as an example:

SELECT Production.Product.ProductSubcategoryID
, AVG(Listprice) AS 'Average'
, MIN(Listprice) AS 'Minimum'
, MAX(Listprice) AS 'Maximum'
FROM Production.Product
WHERE ListPrice <> 0
GROUP BY Product.ProductSubcategoryID

The book then cites the explanation of the query results as:

“The top row, where the ProductSubcategoryID is listed as NULL, is the summary row that provides the average, minimum, and maximum list prices of products across all subcategories.”

NO. No it isn’t. It’s the average, minimum and maximum of all entries in the Production.Product table that have a ProductSubcategoryID of NULL. And here’s how to prove it.

If you query the table, you’ll see there are 209 entries with a ProductSubcategoryID of NULL. A quick count confirms it:
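SELECT COUNT(*) AS 'NullSubcategoryCount' --returns 209
FROM Production.Product
WHERE ProductSubcategoryID IS NULL

If we perform these aggregate functions on just those NULL ProductSubcategoryIDs, we get the exact same results: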

SELECT Production.Product.ProductSubcategoryID
, AVG(Listprice) AS 'Average'
, MIN(Listprice) AS 'Minimum'
, MAX(Listprice) AS 'Maximum'
FROM Production.Product
WHERE ListPrice <> 0
AND ProductSubcategoryID IS NULL
GROUP BY Product.ProductSubcategoryID

Total rows that appear with a NULL label are a product of using WITH ROLLUP or WITH CUBE with a GROUP BY; any NULLs you see in a plain old GROUP BY are a product of the actual table data.
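If you ever need to tell the two kinds of NULL apart, the GROUPING() function flags the summary rows. Here’s a quick sketch of my own against the same table:

SELECT Product.ProductSubcategoryID
, GROUPING(Product.ProductSubcategoryID) AS 'IsSummaryRow' --1 = summary row added by ROLLUP, 0 = real group (even if the key itself is NULL)
, AVG(Listprice) AS 'Average'
FROM Production.Product
WHERE ListPrice <> 0
GROUP BY Product.ProductSubcategoryID WITH ROLLUP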

While I’m happy that I understand this topic and I’m not worried about it on the exam (unless the test creator is also the book author/editor, uh-oh!), I am a bit concerned about the quality of the material I’m learning from. What happens if the information they provide about querying XML or full-text search is equally “off”? I could be going into this exam with completely wrong information, and that would be a massive waste.

30 Days to 70-433

Yesterday was Day 1 of my new short-term goal: take and pass the Microsoft 70-433: SQL Server 2008 Database Developer exam by December 1st.

Now, before you worry too much, please know that this is not an attempt at an exam cram or a brain dump style of test taking. My goal is to actually know the material well enough to pass the exam.

I’ve been studying for the exam off and on for nearly 10 months. Many of the concepts are not new to me; in fact, I’ve worked with a lot of them for over 9 years, since I first touched SQL Server. But other concepts are new, and I’ve struggled with them (and/or not given them as much attention).

I have several tools in my arsenal in preparation for this project, chief among them Microsoft’s Self-Paced Training Kit and the practice exams that come with it.

Today, I took a practice exam (which came with the book) and failed miserably (42%). I plan to focus on one chapter at a time – the practice exams can be customized to cover the same material – and master a chapter every 4-5 days.

I’ll update the blog with my progress as I go – things that I learn and things that I struggle with alike.

Wish me luck! When this is over, I’ll only have to write a white paper for work before the end of the year!