So you need more information in your trigger?

October 31, 2011 10:11 pm

Lately I’ve been redesigning a lot of older systems. Generally speaking, solutions that were good in SQL 2000 are still good in SQL 2008, but 2008 has a lot of features that can really help make more elegant solutions. One such feature (introduced in 2005) is Context_Info.

In a nutshell, Context_Info is a 128-byte binary value associated with the current session (SPID). On the surface you might be thinking that 128 bytes isn’t a whole lot, and you’d be right. But generally speaking, it’s plenty to hold some basic session information that will be used frequently or briefly, when going to a real table would be either too much work to do or too much work to maintain. In my case, I have a very slim but very large table that gets automatic auditing from an INSTEAD OF INSERT, UPDATE, DELETE trigger. Consider the following table.

CREATE TABLE dbo.Temperature (
	LocationId int NOT NULL,
	TempTime datetime NOT NULL,
	Temperature float NOT NULL,
	PRIMARY KEY (LocationId, TempTime)
)

Let’s say the table holds temperature information for 500,000 locations, and at each location the temperature is taken 4 times a day within a 1-hour window. Lastly, the table holds temperature data for the last 10 years (that’s roughly 500,000 × 4 × 365 × 10, or about 7.3 billion rows). To keep an audit trail on this table you could do a couple of things.

1) You could add an extra column just to keep track of who made the last edit, but if that’s all you did you’d have an incomplete audit trail and would miss any deletes. You’d also add a huge amount of data to this table for a less than optimal return.

2) You could create a view that has a dummy “EditedBy” column and do all your edits against the view with a custom trigger. This can be rather cumbersome to implement and just feels like an abuse of the technology. Not to mention, if someone comes along and modifies the underlying table directly, your audit trail will be compromised.

3) Lastly, you could make a trigger on the table itself that maintains a separate audit table. Something like the following.

CREATE TABLE dbo.TemperatureAudit (
	LocationId int NOT NULL,
	TempTime datetime NOT NULL,
	[Timestamp] datetime NOT NULL,
	ModifiedBy varchar(128),
	ModificationType bit, --1 Insert, NULL Update, 0 Delete
	Temperature float NOT NULL,
	PRIMARY KEY (LocationId, TempTime, [Timestamp])
)

All the information in the audit table can be easily gathered in the trigger with nothing too fancy, except ModifiedBy. For that you’ll need Context_Info. Basically, in the trigger you make sure Context_Info is initialized, and if it is, you convert what’s in it to varchar(128) and dump it into your audit table along with everything else. If it isn’t initialized, you throw an error and do nothing. With this approach your audit trail is only compromised if the trigger is dropped or disabled, and if a malicious user can drop or disable a trigger, you’ve got bigger things to worry about than a compromised audit trail. Here’s some code to show you how easy it is to do some IO with Context_Info.

--Do this part to set up for the trigger
DECLARE @ModifyingUser varchar(128)
	,	@BinVar varbinary(128)
SET @ModifyingUser = 'John.Johnson@example.com'
	
SET @BinVar = CAST(@ModifyingUser AS varbinary(128))
SET CONTEXT_INFO @BinVar

GO

--This gets the value back out and puts it in another variable. You'd do this in the trigger and just dump the variable into the audit table.
DECLARE @ModifyingUser varchar(128)
SELECT @ModifyingUser=CAST(CONTEXT_INFO() as varchar(128))
SELECT @ModifyingUser

GO

--it's good to clean up after yourself, so you may want to clear out context_info
SET CONTEXT_INFO 0x --clears CONTEXT_INFO
SELECT CAST(CONTEXT_INFO() as varchar(128))

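To put it all together, here’s a minimal sketch of what such an audit trigger might look like, using the two tables above. My real version is an INSTEAD OF trigger, as mentioned earlier; this sketch uses an AFTER trigger so it doesn’t have to re-issue the base modification, but the Context_Info handling is identical either way. The trigger name and error message are just illustrative.

CREATE TRIGGER dbo.tr_Temperature_Audit
ON dbo.Temperature
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
	SET NOCOUNT ON

	DECLARE @ModifiedBy varchar(128)
	SELECT @ModifiedBy = CAST(CONTEXT_INFO() AS varchar(128))

	--refuse the modification if the session never set Context_Info
	IF @ModifiedBy IS NULL
	BEGIN
		RAISERROR('Set CONTEXT_INFO before modifying dbo.Temperature.', 16, 1)
		ROLLBACK TRANSACTION
		RETURN
	END

	INSERT dbo.TemperatureAudit
		(LocationId, TempTime, [Timestamp], ModifiedBy, ModificationType, Temperature)
	--rows in inserted: log the new value; 1 = insert, NULL = update
	SELECT i.LocationId, i.TempTime, GETDATE(), @ModifiedBy,
		CASE WHEN d.LocationId IS NULL THEN 1 END, i.Temperature
		FROM inserted i
		LEFT JOIN deleted d ON d.LocationId = i.LocationId AND d.TempTime = i.TempTime
	UNION ALL
	--rows only in deleted: log the value that was removed; 0 = delete
	SELECT d.LocationId, d.TempTime, GETDATE(), @ModifiedBy, 0, d.Temperature
		FROM deleted d
		LEFT JOIN inserted i ON i.LocationId = d.LocationId AND i.TempTime = d.TempTime
			WHERE i.LocationId IS NULL
END
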
There are a few things I should point out here though. If it’s common to get numerous changes to a single LocationId-TempTime combination, then you’ll have to make sure your entries into TemperatureAudit.[Timestamp] don’t collide. Also, since this is all in the trigger, you probably won’t want to take any extra time to check that what’s in Context_Info is valid; you’ll probably just want to take it and run. So a malicious user could fill it with garbage and then perform DELETE FROM Temperature, and the trigger would let that happen even if the ModifiedBy was “InYourFaceFromSpace” or something else equally useless. That’s why I opted to put Temperature in the audit table. This has the potential to basically double the space requirement, but it makes the audit trail very robust. For my project, I’m filling Temperature with the new temperature during updates and inserts, and with the value that was removed during deletes. That way, if a malicious or accidental delete happens, the audit table serves as a record of what was there, which makes recovery simple.

Basically, when an insert occurs, the audit table tells you who added what, and when. When an update happens, it tells you who changed what to what, and when. And when a delete happens, it records who deleted what, and when. Not every scenario demands a solution this robust, but for my particular project the requirements demanded it.

One final note. You might be thinking, “But Zach, your audit table has LocationId and TempTime, which is the primary key of Temperature. Why not add an arbitrary id to the Temperature table and use that instead to save some space?” Doing so would mean you’d have to jump through some interesting hoops to hide the data in the initial table when deleting, and those hoops would mean a lot more logic every time you select. Plus, to have an arbitrary id that can handle the same amount of variance as the 2-part primary key I originally laid out, you’d have to use a bigint (8 bytes), which saves only 4 bytes over an int (4 bytes) plus a datetime (8 bytes). So any space savings you’d gain would really only restrict the growth potential of the table and convolute the whole scheme. The 2-part primary key embeds meaningful data, so any situation that references it becomes more intuitive. Instead of 456187 being the reference for Las Vegas on 2011-10-10 06:00.000, the reference is 732 (Las Vegas’s Id) and 2011-10-10 06:00.000. Why would you represent it any other way?

zlangner

DBCC FREEPROCCACHE

May 26, 2011 6:41 am

The other day I encountered a fun issue. A stored procedure would take about 30-50 seconds to run. However, if I ran DBCC FREEPROCCACHE before running the stored procedure with the exact same parameters, it would run in about 4 seconds. This seemed odd, because everything I knew about FREEPROCCACHE said performance would temporarily tank afterward since all the plans have to be recreated. After some digging, I found that this stored procedure called 3 others, all of which modify temporary tables that already exist, and two of which take only a few parameters. You see, those temp tables are how the sprocs get most of their information about what’s supposed to happen; so much so that one of the sprocs takes only a single bit for its parameter. However, the kind of information passed in through those temp tables can change drastically, so much so that using a bad execution plan caused it to run 6 to 10 times slower. Worse yet, even after running it numerous times, it never opted to make a new execution plan. Both of the inner stored procedures now have WITH RECOMPILE in their definitions to prevent similar issues elsewhere. Interestingly enough, executing the outer sproc WITH RECOMPILE didn’t seem to change anything. I believe this is because the inner sprocs kept their bad execution plans even though the outer one was told to recompile. In other words, recompile isn’t a recursive action, which seems correct; if it were, it would be too easy for it to fall into a loop, and in most cases it would overstep what you intended.

NOTE: WITH RECOMPILE causes the sproc to never save an execution plan; instead, a new one is created at every execution. Generally there is no need for this, but if the best execution plan can vary even when the sproc’s parameters don’t, it’s probably worth looking into.
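
For reference, here’s roughly what the two options look like. The procedure, table, and column names below are made up for illustration:

--A hypothetical sproc whose behavior is driven mostly by a temp table the
--caller creates, so a cached plan for one call can be terrible for the next.
CREATE PROCEDURE dbo.ProcessQueuedWork
	@OnlyUrgent bit
WITH RECOMPILE --never cache a plan; build a fresh one on every execution
AS
	SELECT *
		FROM #WorkQueue --created and filled by the caller before EXEC
			WHERE @OnlyUrgent = 0 OR Priority = 1
GO

--Alternatively, request a one-off recompile at the call site. As noted
--above, this does not recompile any inner sprocs that this one calls.
EXEC dbo.ProcessQueuedWork @OnlyUrgent = 1 WITH RECOMPILE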

zlangner

Filling boxes

February 25, 2011 6:50 am

I don’t have much time right now, so I’ll clean up this post later. The short version: given a set of named items with sizes and a box capacity (@Threshold), the snippet assigns each item a groupNum such that every group fits in one box, preferring combinations that fill a box exactly. For now, just enjoy the snippet.

DECLARE @Threshold int
DECLARE @input TABLE (
	ID int,
	name varchar(10),
	size int,
	groupNum int
)

SET @Threshold = 500

INSERT INTO @input (name, size)
VALUES	('rrkedt', 67), ('zugmto', 114), ('jpddyv', 121), ('lzrphx', 239),
	('jvltsm', 212), ('yvowre', 107), ('sxpchv', 244), ('ighgvn', 249),
	('kgomhv', 173), ('eshsya', 104), ('xhqfpj', 175), ('epptoi', 146),
	('ainkze', 218), ('tbpukv', 148), ('visdid', 30), ('cpjhkn', 27),
	('rapqxl', 221), ('fakkki', 176), ('mgkuos', 69), ('hlnvjc', 190),
	('yrlmfh', 244), ('pjrnhx', 95), ('fnggir', 11), ('pnkxhd', 88),
	('jtmahw', 192), ('kwapjv', 5), ('chpsde', 30), ('fwptfk', 96),
	('znjsiu', 228), ('etamhm', 112), ('tjzuhv', 219), ('npzgdg', 107),
	('ripcml', 99), ('rggrtk', 8), ('momslh', 204), ('fotckt', 142),
	('rqurdw', 155), ('veonhb', 64), ('rkhrjc', 146), ('jfvxri', 241),
	('qhsdyq', 208), ('uwndum', 15), ('aokicj', 60), ('iytjto', 26),
	('ovyzic', 222), ('lpixws', 92), ('bnhuod', 223), ('emszvg', 248),
	('udznjm', 68), ('kdxjql', 45), ('biggy', 501), ('cust1', 278),
	('a', 250), ('b', 249), ('c', 249), ('d', 2),
	('e', 125), ('f', 125), ('g', 1), ('h', 75),
	('i', 1), ('j', 75)

--select * from @input 



DECLARE @Concatchar char = ','
	,	@CurrentHistory varchar(max)
	,	@GroupNum int = 0
	,	@LongestPath int
DECLARE @Paths TABLE (
	length int,
	history varchar(max),
	size int
)
DECLARE @R TABLE (
	length int,
	id int,
	size int,
	history varchar(max)
)

RAISERROR('start', 0,0) WITH NOWAIT
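--First attempt, left commented out: enumerate every candidate combination
--by name with a recursive CTE, then greedily carve out groups. It worked,
--but it produced about 3.7 million candidate paths for this input.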
/*
;WITH r as (
	SELECT length=1, name, size, history = cast(@Concatchar+name+@Concatchar as varchar(max))
		FROM @input
	UNION ALL
	SELECT length=r.length+1, name= i.name, size=r.size+i.size, history=r.history + i.name+@Concatchar
		FROM r
		JOIN @input i ON r.name < i.name 
			WHERE i.size + r.size <= @Threshold 
)

INSERT INTO @Paths (length, history, size)
	SELECT length, history, size
		FROM r
--3740068 rows

WHILE EXISTS(SELECT * FROM @Paths)
BEGIN
	SELECT TOP(1) @CurrentHistory = history
		FROM @Paths
	ORDER BY size desc, length, history
	
	UPDATE @input 
		SET groupNum = @GroupNum 
			WHERE @CurrentHistory LIKE '%' + @Concatchar + name + @Concatchar + '%'
			
	DELETE p
		FROM @Paths p
		JOIN @input i ON i.groupNum = @GroupNum AND p.history LIKE '%' + @Concatchar + i.name + @Concatchar + '%'
		
	SET @GroupNum = @GroupNum + 1
END
*/

--fill in ID column with unique ints

UPDATE i
	SET i.id = r.id 
	FROM (
		SELECT id=ROW_NUMBER() OVER (ORDER BY size desc), name, size
			FROM @input 
	) r
	JOIN @input i on i.name = r.name and i.size = r.size 
	
--if anything is over the threshold, put each in their own group now
UPDATE i
	SET i.groupNum = r.groupNum 
	FROM (
		SELECT groupNum=ROW_NUMBER() OVER (ORDER BY id), id
			FROM @input 
				WHERE size >= @Threshold 
	) r
	JOIN @input i on i.id = r.id

SELECT @GroupNum=ISNULL(MAX(groupNum),0)+1 FROM @input 

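--seed the search: every ungrouped item becomes a length-1 path; history is
--a delimited list of the ids the path contains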
INSERT @R (length, id, size, history)
	SELECT 1, id, size, cast(@Concatchar + CAST(id as varchar) + @Concatchar as varchar(max))
		FROM @input 
			WHERE groupNum IS NULL
SET @LongestPath = 1
			
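--keep extending paths one item at a time until everything still ungrouped
--could fit together in a single box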
WHILE (SELECT SUM(size) FROM @input WHERE groupNum IS NULL) > @Threshold
BEGIN

	/*SELECT * FROM @R 
		WHERE length = @LongestPath 	
	*/
	INSERT @R (length, id, size, history)
		SELECT r.length + 1, i.id, r.size + i.size, r.history + CAST(i.id as varchar) + @Concatchar
			FROM @R r
			JOIN @input i ON r.id < i.id 
				WHERE i.groupNum IS NULL 
					AND i.size + r.size <= @Threshold
					
	--if we didn't find any new paths just quit
	IF @LongestPath = (SELECT TOP(1) length FROM @R ORDER BY length DESC)
		BREAK;
			
	SELECT TOP(1) @LongestPath = length 
		FROM @R 
	ORDER BY length DESC
	
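	--any path that fills a box exactly becomes a group right away; then drop
	--every other path that shares an item with it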
	WHILE EXISTS(SELECT * FROM @R WHERE size = @Threshold )
	BEGIN
		SELECT TOP(1) @CurrentHistory = history
			FROM @R
				WHERE size = @Threshold
		ORDER BY length, history
		
		UPDATE @input 
			SET groupNum = @GroupNum 
				WHERE @CurrentHistory LIKE '%' + @Concatchar + CAST(id as varchar) + @Concatchar + '%'
				
		DELETE r
			FROM @R r
			JOIN @input i ON i.groupNum = @GroupNum AND r.history LIKE '%' + @Concatchar + cast(i.id as varchar) + @Concatchar + '%'
			
		RAISERROR('group found', 1, @GroupNum) WITH NOWAIT
		SET @GroupNum = @GroupNum + 1
	END
END

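--no exact fits remain, so hand out the fullest remaining paths as groups
--until every item has a groupNum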
WHILE EXISTS(SELECT * FROM @R)
BEGIN
	SELECT TOP(1) @CurrentHistory = history
		FROM @R
	ORDER BY size desc, length, history

	RAISERROR(@CurrentHistory, 1, 0) WITH NOWAIT
		
	UPDATE @input 
		SET groupNum = @GroupNum 
			WHERE @CurrentHistory LIKE '%' + @Concatchar + cast(id as varchar) + @Concatchar + '%'
			
	DELETE r
		FROM @R r
		JOIN @input i ON i.groupNum = @GroupNum AND r.history LIKE '%' + @Concatchar + cast(i.id as varchar) + @Concatchar + '%'
			
	RAISERROR('group found', 1, @GroupNum) WITH NOWAIT
	SET @GroupNum = @GroupNum + 1
END

RAISERROR('end', 0,0) WITH NOWAIT

select * from @input 
order by groupNum 

select groupNum, TotalSize = SUM(size)
	from @input 
GROUP BY groupNum 
zlangner

3D technology done better

October 21, 2010 6:59 am

Today I was exploring around and found an article about a new 3D teleconferencing technology that allows a near-lifelike representation of a person to appear feasibly anywhere in the world. The technology requires no special glasses and in some cases is being used for live performances. Imagine a standard TV being used as the backdrop, with one of these projections being used to show the characters and other foreground props. Perhaps Ray Bradbury‘s image of a TV Wall isn’t so far off.

zlangner

A thought about proofs

October 14, 2010 9:24 am

This morning I was reading Slashdot and came across a story about some math enthusiasts who proved that 1 = 0.999…. The gist of how they did it is as follows.
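
Let x = 0.999…; then 10x = 9.999…. Subtracting the first equation from the second leaves 9x = 9, so x = 1. (At least, that’s the usual version of the argument.)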

zlangner

An opportunity to learn

October 1, 2010 8:17 pm

As some of you may know, this site used to be a redirect to my old DSU student space. I’m planning to make this more of a proper web presence. First and foremost, I’ve opted to use WordPress. Secondly, I intend to learn a little PHP and create a custom WordPress template. Don’t be alarmed if the page appears to be in shambles; I’m probably just working on it.

zlangner