
PowerShell script to zip/rar files in subfolders and delete old ones

This is a slight change from the usual “pure programming” stuff, but I’ve been looking for a complete solution for this one and was unable to find it, so why not. A change of pace is good sometimes.

The problem is this: I have a script that backs up my database server. It creates one folder for each database and puts a new file into it every day. I want to rar each file (with a password), delete the original and delete all files in the folder except the two newest.

Here’s the script – you may recognize parts of it from other scripts, but unfortunately I can’t remember the URLs where I picked these pieces up, sorry… Google around for solutions to this problem and you’ll probably find them.

#Powershell Script to recurse input path looking for .bak files, move them into a rar archive
# and delete all archives in each folder except the newest two. 

$InputPath = $args[0] 

if($InputPath.Length -lt 2)
{
    Write-Host "Please supply a path name as your first argument" -foregroundcolor Red
    return
}
if(-not (Test-Path $InputPath))
{
    Write-Host "Path does not appear to be valid" -foregroundcolor Red
    return
} 

$BakFiles = Get-ChildItem $InputPath -Include *.bak -recurse
Foreach ($Bak in $BakFiles)
{
  $ZipFile = $Bak.FullName -replace '\.bak$', '.rar' # -replace is regex-based, so escape the dot and anchor at the end
  if (Test-Path $ZipFile)
  {
      Write-Host "$ZipFile exists already, aborted." -foregroundcolor Red
  }
  else
  {
    & "C:\Program Files\WinRAR\winrar.exe" m -m1 -pyourpasswordhere "$ZipFile" "$Bak" | Out-Null 

    if(Test-Path $ZipFile)
    {
      # Keep the two newest .rar archives in the directory based on creation time
      $path = Split-Path $ZipFile -Parent
      $path = $path + "\*.rar"
      # Count only the .rar archives; counting every file in the folder would skew the number
      $total = (Get-ChildItem $path).Count - 2 # Change 2 to whatever number of files you want to keep
      if ($total -gt 0)
      {
        Get-ChildItem $path | Sort-Object -Property CreationTime | Select-Object -First $total | Remove-Item -Force
      }
    }
  }
}

An update to “Remote File Sync using WCF and MSF”

This is a follow-up to Bryant Likes’ post where he gave a prototype solution for file synchronization over WCF. I converted the code to Microsoft Sync Framework 2.0, so it now compiles and seems to run well enough. But keep in mind that it isn’t a complete example (it wasn’t in the original code either): it only does upload sync, it has no conflict resolution logic etc. In my opinion, this isn’t really worth pursuing any further, because one would need to develop two complete custom providers - and all that (just for copying files?) when the framework already contains a FileSyncProvider which knows how to cooperate with other providers, so it should be able to communicate over WCF… On the other hand, if you do pull off the heroic act of completing this code, please let me know, because I’m (obviously) very interested in it. I tried to keep the modified code as close to the original as possible so that a simple diff (e.g. WinMerge) can show what I’ve done, because I’m not sure I got it all right (as I’m afraid Bryant wasn’t either).

Here’s the complete solution: http://www.8bit.rs/download/samples/RemoteSync converted to MSF 2.0.zip
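For comparison, this is roughly what file synchronization looks like with the built-in provider when both replicas are local – a minimal sketch (folder paths made up), just to show the component that would have to learn to talk over WCF for this approach to work:

using Microsoft.Synchronization;
using Microsoft.Synchronization.Files;

// two local replicas; in the WCF scenario the remote one would live behind the service
FileSyncProvider sourceProvider = new FileSyncProvider(@"C:\ReplicaA");
FileSyncProvider destinationProvider = new FileSyncProvider(@"C:\ReplicaB");

SyncOrchestrator orchestrator = new SyncOrchestrator();
orchestrator.LocalProvider = sourceProvider;
orchestrator.RemoteProvider = destinationProvider;
// the converted sample only does upload sync, so mirror that here
orchestrator.Direction = SyncDirectionOrder.Upload;

orchestrator.Synchronize();

sourceProvider.Dispose();
destinationProvider.Dispose();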

Rewriting LINQ expressions

In this post I’m going to show a real-life example of analyzing LINQ expressions and converting them into other LINQ expressions on the fly. The example will be an expression that retrieves the individual elements of a path expression: given an expression like “a.PropertyB.PropertyC.PropertyD”, we will create another expression that retrieves all objects on the path, that is, a, a.PropertyB, a.PropertyB.PropertyC, etc. To be more specific, the concrete application for this would be handling a PropertyChanged event on a path: let’s say that, given the “a” value from the above example, we need to be notified when the “PropertyD” property is changed on the object pointed to by the “a.PropertyB.PropertyC” expression. But at any given moment, any of the objects along the path can be null or can be replaced: in that case, we need to wait until one of the properties gets changed (that is, subscribe to the PropertyChanged event on its parent in the path) and re-attach our PropertyChanged event handler(s). We could solve this problem by describing the path with a string and using reflection, but LINQ gives us not only compile-time safety (if we get something wrong in the path expression, we get a compile error instead of a runtime exception) but also the ability to compile the lambda expression and make it run faster…

Ok, the first step is a brain-twister: we need to construct a LINQ expression. What would it look like? We need to convert

a.PropertyB.PropertyC.PropertyD

into something returning non-null elements in the path so we can attach to their PropertyChanged handlers. Now, one thing is sure here: we don’t have to build the whole expression in LINQ. It is perfectly sensible to write our own utility methods – this way, at least that part gets compiled by the C# compiler, which should make it run faster and reduce the amount of work done by the runtime LINQ compiler.

The troublesome detail here is that we cannot evaluate a.PropertyB.PropertyC if a.PropertyB is null. That one should be skipped – but we cannot (yet?) write procedural code inside LINQ expressions, so the best we can do is use the IIF construct – i.e. the conditional “?:” operator in C#. If we could generate an array of values like this -

a, (a == null) ? null : a.PropertyB, (a == null) ? null : ((a.PropertyB == null) ? null : a.PropertyB.PropertyC), …

- then we could feed it to some hardcoded method that extracts the non-null values and does the rest of the processing. It is easy to call a method from a LINQ expression: we just use a Call expression. Here’s a snippet of code that would produce something of this sort:

static void Test1()
{
	// example of the input expression
	Expression<Func<ClassA, object>> fn = (a => a.PropB.PropC);

	// example of the output expression
	Expression<Func<ClassA, IList<object>>> res =
	(
		a => Process(a, ((a == null) ? null : a.PropB), 
		((a == null) ? null : ((a.PropB == null) ? null : a.PropB.PropC)))
	);


	// arguments to our method call
	List<Expression> callArgs = new List<Expression>();

	// the first argument will be the parameter: but we also need to use
	// this parameter on the main expression itself
	ParameterExpression paramExpr1 = Expression.Parameter(typeof(ClassA), 
		"a");
	callArgs.Add(paramExpr1);

	// this is the conditional expression
	Expression conditionalExpr1 =
		Expression.Condition
		(
			Expression.Equal(paramExpr1, 
				Expression.Constant(null, typeof(ClassA))),
			Expression.Constant(null, typeof(ClassB)),
			Expression.Property(paramExpr1, "PropA")
		);
		
	callArgs.Add(conditionalExpr1);

	// method call expression: the method signature is below in the source
	Expression callExpr = Expression.Call(null, 
		typeof(Program).GetMethod("Process"), 
		Expression.NewArrayInit(typeof(object), callArgs));

	// and our final expression
	Expression<Func<ClassA, IList<object>>> final =
		Expression.Lambda<Func<ClassA, IList<object>>>
		(callExpr, paramExpr1);

	// expression can be compiled to run faster
	Func<ClassA, IList<object>> compiledFunc = final.Compile();

	// a couple of examples of usage
	ClassA a1 = new ClassA();
	ClassA a2 = new ClassA() { PropB = new ClassB() };

	object ret1 = compiledFunc(a1);
	object ret2 = compiledFunc(a2);

	// note that the same expression can also be built non-generically,
	// but that may mislead you into using DynamicInvoke, which is slower
	LambdaExpression lambda = Expression.Lambda(callExpr, paramExpr1);

	// The slow version can also be compiled, produces a delegate - 
	// which is the same as compiledFunc but has to be cast into Func<> 
	Delegate del = lambda.Compile();

	// this is slower than directly calling Func, although it's calling 
	// the same compiled code
	object ret3 = del.DynamicInvoke(a2);

	// it will be faster to cast it to Func<> and then call it directly
	Func<ClassA, IList<object>> castDelegate = (Func<ClassA,
		IList<object>>)del;

	// this is as fast as compiledFunc
	object ret4 = castDelegate(a2);
}

public static IList<object> Process(params object[] objs)
{
	return new List<object>(objs);
}

This source is very sketchy - deliberately so, since its purpose is just to illustrate the principle (I think it is more useful that way if you need to do something similar but slightly different). Moreover, if you don't really need this exact solution, the best way to continue is to write an example expression, compile it and then decompile it with .NET Reflector or ILSpy: the code the compiler produces for your expression is exactly the code you need to write to build it dynamically. For the reverse operation, analyzing an existing expression, a very useful tool is the Expression Tree Visualizer - it's somewhere in the Visual Studio/Samples folder and needs to be compiled. Once you copy it to the My Documents\Visual Studio whatever\Visualizers folder, you can view expression trees inside Quick Watch.

One word about performance: I ran a couple of ad-hoc tests using a simple property accessor expression – not a really serious benchmark, but it does show the orders of magnitude we’re dealing with. The speed of the compiled Func is comparable to the speed of compiled C# code. The same operation done through reflection is around ten thousand times slower (note that this includes calling GetType().GetProperty() each time; caching the PropertyInfo speeds it up by about 20%). DynamicInvoke has similar performance – but that’s because there’s only one operation in the expression itself: it is safe to expect that the overhead of DynamicInvoke doesn’t grow with expression complexity, while the overhead of reflection would.

The biggest resource hog here is expression compilation: it is a million times slower than compiled execution, which means a hundred times slower than reflection. Not that any of the tests were noticeably slow – it is a simple expression, and even so it performs a thousand compiles in less than a second, which is decidedly not bad.
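If you want to reproduce this kind of measurement, here’s a minimal sketch along the lines of my ad-hoc tests (ClassA and ClassB are the sample classes from the code above; the exact numbers will of course vary):

using System;
using System.Diagnostics;
using System.Linq.Expressions;
using System.Reflection;

static void Benchmark()
{
	Expression<Func<ClassA, object>> expr = (a => a.PropB);

	// prepare all three callable forms up front, outside the timed loops
	Func<ClassA, object> compiledFunc = expr.Compile();
	Delegate del = ((LambdaExpression)expr).Compile();
	PropertyInfo prop = typeof(ClassA).GetProperty("PropB");

	ClassA instance = new ClassA() { PropB = new ClassB() };
	const int iterations = 1000000;

	Stopwatch sw = Stopwatch.StartNew();
	for (int i = 0; i < iterations; i++)
		compiledFunc(instance);
	Console.WriteLine("compiled Func: {0} ms", sw.ElapsedMilliseconds);

	sw = Stopwatch.StartNew();
	for (int i = 0; i < iterations; i++)
		prop.GetValue(instance, null);
	Console.WriteLine("reflection: {0} ms", sw.ElapsedMilliseconds);

	sw = Stopwatch.StartNew();
	for (int i = 0; i < iterations; i++)
		del.DynamicInvoke(instance);
	Console.WriteLine("DynamicInvoke: {0} ms", sw.ElapsedMilliseconds);
}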

 

So we now have an idea how to build the output expression: the next step is to analyze the input expression. This is not so simple because the LINQ expression tree elements don’t have anything resembling a tidy class hierarchy (even the “mostly decent” DOM API is a space shuttle compared to it): because of this it seems that any expression type that can possibly appear in the expression tree should be special-cased in our logic. Luckily, we limit our ambition to property-referencing expressions only. In LINQ expression speak, this means we have a series of chained MemberExpressions pointing backwards to a ParameterExpression. So, an expression like “a => a.PropB.PropC” would have an expression tree like this:

MemberExpression
(
	Member = {PropertyInfo pointing to the PropC property}
	Expression = MemberExpression
	(
		Member = {PropertyInfo pointing to the PropB property}
		Expression = {ParameterExpression for parameter a}
	)
)

This should be fairly simple: all we need to do is get the MemberExpression contained in the Body of the root expression, then recursively run through all chained MemberExpressions and stop when we reach the ParameterExpression – the parameter we can copy into our own rewritten expression.

There is at least one small catch here – this is the only one I discovered, there may be more: the compiler may insert a conversion expression at the root of the expression tree if we use nullable types (for what reason, I can’t say, possibly value boxing?) It is represented as a UnaryExpression. In this example, we’ll simply skip over it (just use its Operand property which is a MemberExpression), but I’m quite sure this example is oversimplified and there could be more special cases that need to be handled. (Like, for example, casting, which could be quite legal – even necessary – in expressions like these).

Ok, on to the example... This is an excerpt from working code where PathExpression is the LINQ expression we want to process.

 

Stack<MemberExpression> expressionStack = new Stack<MemberExpression>();

Expression exp = PathExpression.Body; 

while(true)
{
	if (exp is MemberExpression)
	{
		expressionStack.Push((MemberExpression)exp);
		exp = ((MemberExpression)exp).Expression;
	}
	else if (exp is UnaryExpression 
		&& ((UnaryExpression)exp).NodeType == ExpressionType.Convert)
	{
		// skip Convert nodes (there could be one at the root of the
		// expression, e.g. when nullable properties are involved)
		exp = ((UnaryExpression)exp).Operand;
	}
	else if (exp == null || exp is ParameterExpression)
	{
		break;
	}
	else // exp is not null, but it’s neither a member nor a parameter expression
	{
		throw new InvalidOperationException("Unsupported expression type: " 
		+ exp.NodeType + ". Only member access expressions are supported.");
	}
}

ParameterExpression inputParamExpression = null;
Expression previousExpression = null;

// arguments to the method call
List<Expression> callArgs = new List<Expression>();

// the first one should point to the parameter

MemberExpression firstMe = expressionStack.Peek();
if (!(firstMe.Expression is ParameterExpression))
{
	throw new InvalidOperationException("The first expression element 
	doesn't reference an input parameter. The expression should be like 
	'x.PropA.PropB.PropC' where x is an input parameter.");
}

inputParamExpression = (ParameterExpression)firstMe.Expression;
callArgs.Add(inputParamExpression);
previousExpression = inputParamExpression;

List<string> propertyNames = new List<string>();

// now unwrap the expression: we want to build an expression like
//Expression<Func<ClassA, IList<object>>> res =
//(
//    x => Process(x, ((x == null) ? null : x.PropA), ((x == null) ? null 
//	: ((x.PropA == null) ? null : x.PropA.PropB)))
//);
while(expressionStack.Count > 0)
{
	MemberExpression me = expressionStack.Pop(); 

	// skip the last property in the expression: 
	// we don't need its value because
	// we won't attach to its PropertyChanged event
	if (expressionStack.Count >= 1)
	{
		Expression conditionalExpression =
			Expression.Condition
			(
			// in each step we reference the previous expression
				Expression.Equal(previousExpression, 
					Expression.Constant(null, 
						previousExpression.Type)),
				Expression.Constant(null, 
					((PropertyInfo)me.Member).PropertyType),
				Expression.Property(previousExpression, 
					(PropertyInfo)me.Member)
			);

		callArgs.Add(conditionalExpression);
		previousExpression = conditionalExpression;
	}

	propertyNames.Add(((PropertyInfo)me.Member).Name);
}

ParameterExpression thisExpression = 
  Expression.Parameter(typeof(PropertyChangedOnPathWrapper<T>), "this");

Expression callExpr = Expression.Call(thisExpression,
  typeof(PropertyChangedOnPathWrapper<T>).GetMethod
  ("Process", BindingFlags.Instance | BindingFlags.NonPublic),
	Expression.NewArrayInit(typeof(object), callArgs));

Expression<Action<T, PropertyChangedOnPathWrapper<T>>> finalExpr = 
  Expression.Lambda<Action<T, PropertyChangedOnPathWrapper<T>>>
  (callExpr, inputParamExpression, thisExpression);

ExtractionExpression = finalExpr.Compile();
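To round this off, here’s a sketch of how the compiled delegate would be used – hypothetical usage, but the names follow the excerpt above:

// somewhere inside PropertyChangedOnPathWrapper<T>, after compilation;
// ExtractionExpression is the compiled Action<T, PropertyChangedOnPathWrapper<T>>
ExtractionExpression(root, this);

// for a path expression x => x.PropA.PropB the call above effectively runs:
// this.Process(root,
//     (root == null) ? null : root.PropA);
// i.e. Process receives every object whose PropertyChanged event we need
// (the last property, PropB, is skipped on purpose), filters out the nulls
// and subscribes to the rest.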

Howto: SQL / LINQ JOIN on TOP 1 row

Problem: you want to do a LEFT or INNER JOIN between two tables but include only one record from the other table: that is, you don’t want the join to create duplicate records. Interestingly enough, I found the solution to this through LINQ. In LINQ, you can do this without really thinking about it:

from cmp in ctx.Companies
join pers in ctx.Persons on cmp.Persons.First().ID equals pers.ID
select new { cmp, pers }

Surprisingly for me, this query gets translated into working SQL, which looks something like this (note that I cleaned it up quite a bit for readability):

FROM Company
INNER JOIN Person ON
(
    SELECT TOP (1) top1Person.ID
    FROM Person AS top1Person
    WHERE top1Person.CompanyID = Company.ID
) = Person.ID

Once you think about it, the solution is quite simple. All you need to remember is that a JOIN can contain subselects (even subselects with their own JOINs).
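For the LEFT JOIN flavor, the usual group-join pattern should do the trick – a sketch I haven’t actually run through LINQ to SQL, so treat it as an assumption rather than tested code:

from cmp in ctx.Companies
join pers in ctx.Persons
	on cmp.Persons.First().ID equals pers.ID into matches
from m in matches.DefaultIfEmpty() // null when the company has no persons
select new { Company = cmp, Person = m }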

Collection owner not associated with session? Not quite.

I hate when this happens. I upgraded to NHibernate 2.0 and then quickly afterward to 2.1.0 (you guessed it: because of LINQ). I had to change a couple of things to support it in my company’s application framework and it all seemed to work well – until I discovered that deleting any entity that has a one-to-many relation with cascade=”all-delete-orphan” stopped functioning. It died with a cryptic error message: “collection owner not associated with session”… If I changed to cascade=”all” it worked, but that’s beside the point: it wasn’t broken earlier. Of course, I tried looking all over the web and, apart from a page in Spanish (which wouldn’t be helpful even if it were in English), came up blank. I also tried moving to NHibernate 2.1.2 - which is not that simple, since we’re using a slightly modified version of NHibernate (one more reason to suspect that the solution to this problem would be hard to find). So here’s a short post for anyone stumbling upon a similar problem.

In the end, I traced it to this behaviour: the collection owner is not found in the session because NHibernate tries to find it using ID = 0, while its original ID was 48. The logic is somewhat strange here, because the method receives the original collection owner (which is in the session), retrieves its ID (which was for some reason reset to 0) and then tries to find it using this wrong ID. Moreover, there’s commented-out code that says “// TODO NH Different behavior” that would seem to do things properly (I checked, it’s still standing in the NHibernate trunk as is). But the real reason why this happened is that blasted zero in the ID: further debugging (thankfully, the full source for NHibernate is available) revealed that it was reset because “use_identifier_rollback” was turned on in the configuration. Well… I probably set this to experiment with it and forgot. Turning it off solved the problem for me… Luckily, I didn’t really need this rollback functionality - which is not exactly what it seems to be anyway: it doesn’t roll back identifiers when the transaction is rolled back, it rolls them back when entities are deleted! Why the second feature made more sense to implement than the first one is a mystery to me...
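So, if you hit the same error, the first thing worth checking is this property in hibernate.cfg.xml – it should be off by default, but here it is explicitly disabled:

<property name="use_identifier_rollback">false</property>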

Replicating self-referencing tables and circular foreign keys with Microsoft Sync Framework

Self-referencing tables – or, at least circular foreign key references between tables – are probably a common thing in all but the simplest database designs. Yet Microsoft Sync Framework doesn’t have a clear strategy on how to replicate such data. I found various suggestions on the net: order rows so that the parent records come before children – this is usable for self-referencing tables (although not endorsed by Microsoft because the framework doesn’t guarantee it will respect this order), but not nearly good enough for circular references – if you have two rows in two tables pointing at each other, ordering them cannot solve the problem. On an MSDN forum there was a suggestion to temporarily disable foreign key constraints: this I cannot take seriously because it opens my database to corruption, all it takes is one faulty write while the constraint is down and I have invalid data in the database (unless I lock the tables before synchronization, and I’m not sure how to do this from within the Sync Framework).

So, when all else fails, you have to sit and think: what would be the general principle for solving this, Sync Framework notwithstanding? Exactly - do it in two passes. The problem is present only when inserting rows, if the row contains a reference to another row that wasn’t yet created, we get a foreign key violation… Our strategy could be to insert all rows without setting foreign key field values, then do another pass to just connect the foreign keys. If we do this after all tables have finished their first pass (inserts, updates, deletes and all), we also support the circular references because required rows are present in all tables. Ok, that was fairly easy to figure out (not much harder to implement either, but more on that later).

We have another issue here that is not so obvious, deleting the rows… There may be other rows referencing the one we are deleting that haven’t yet been replicated. Since the Sync Framework applies the deletes first, we can be fairly certain that the referencing rows are yet to be replicated - they will either be deleted or updated to reference something else. So we can put a null value in all fields that reference our row. (Note that this will probably mark the other rows as modified and cause them to be replicated back – this is an issue I won’t go into in this post, but I’m quite certain there needs to be a global mechanism for disabling change tracking while we’re writing replicated data. I currently use a temporary “secret handshake” solution: I send a special value - the birth date of Humphrey Bogart - in the row’s creation/last update date fields that disables the change tracking trigger).

Ok, on to the code. I won’t give you a working example here, just sample lines with comments. You’ve probably figured out by now that it will be necessary to write the SQL commands for the sync adapter by hand. I don’t know about you, but I’m no longer surprised by this: many of the tools and components we get in the .Net framework packages solve just the simplest problems and provide nice demos – if you need anything clever, you code it by hand. My solution was to create my own designer/code generator, and now I’m free to support any feature I need (also, I am able to do it much faster than Microsoft, for whatever reason: it took me a couple of days to add this feature… It may be that I’m standing on the shoulders of giants, but the giants could really have spared a couple of days to do this themselves). For simplicity, I’ll show how to replicate a circular reference: there’s an Item table that has an ID, a Name, and a ParentID referencing the same table. For replication, I split the table into two SyncAdapters: Item, which inserts only ID and Name and has a special delete command that eliminates foreign references beforehand, and Item2ndPass, which has only an insert command – and all that insert command does is wire up the ParentIDs, it does not insert anything. I’ve deleted all the usual command creation and parameter addition code; the point is only to show the SQL statements, since they hold the key to the solution.

[Serializable]
public partial class ItemSyncAdapter : Microsoft.Synchronization.Data.Server.SyncAdapter
{
	partial void OnInitialized();

	public ItemSyncAdapter()
	{
		this.InitializeCommands();
		this.InitializeAdapterProperties();
		this.OnInitialized();
	}

	private void InitializeCommands()
	{
		// InsertCommand
		// 1899-12-25 00:00:00.000 is a 'Humphrey Bogart' special value telling
		// the change tracking trigger to skip this row
		this.InsertCommand.CommandText =  @"SET IDENTITY_INSERT Item ON
INSERT INTO Item ([ID], [Name], [CreatedDate], [LastUpdatedDate]) VALUES (@ID, @Name,
@sync_last_received_anchor, '1899-12-25 00:00:00.000') SET @sync_row_count = @@rowcount
SET IDENTITY_INSERT Item OFF";

		// UpdateCommand
		this.UpdateCommand.CommandText = @"UPDATE Item SET [Name] = @Name,
CreatedDate='1899-12-25 00:00:00.000', LastUpdatedDate=@sync_last_received_anchor WHERE
([ID] = @ID) AND (@sync_force_write = 1 OR ([LastUpdatedDate] IS NULL OR [LastUpdatedDate]
<= @sync_last_received_anchor)) SET @sync_row_count = @@rowcount";		

		// DeleteCommand
		this.DeleteCommand.CommandText = @"UPDATE Item SET [ParentID] = NULL
WHERE [ParentID] = @ID DELETE FROM Item WHERE ([ID] = @ID) AND (@sync_force_write = 1 OR
([LastUpdatedDate] <= @sync_last_received_anchor OR [LastUpdatedDate] IS NULL))
SET @sync_row_count = @@rowcount";

		// SelectConflictUpdatedRowsCommand, SelectConflictDeletedRowsCommand
		// skipped because they are not relevant

		// SelectIncrementalInsertsCommand
		this.SelectIncrementalInsertsCommand.CommandText = @"SELECT  [ID],
[Name], [ParentID], [CreatedDate], [LastUpdatedDate] FROM Item WHERE ([CreatedDate] >
@sync_last_received_anchor AND [CreatedDate] <= @sync_new_received_anchor)";

		// SelectIncrementalUpdatesCommand
		this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT  [ID],
[Name], [ParentID], [CreatedDate], [LastUpdatedDate] FROM Item WHERE ([LastUpdatedDate] >
@sync_last_received_anchor AND [LastUpdatedDate] <= @sync_new_received_anchor AND
[CreatedDate] <= @sync_last_received_anchor)";

		// SelectIncrementalDeletesCommand
		this.SelectIncrementalDeletesCommand.CommandText = @"SELECT FirstID
AS ID FROM sys_ReplicationTombstone WHERE NameOfTable = 'Item' AND DeletionDate >
@sync_last_received_anchor AND DeletionDate <= @sync_new_received_anchor";
	}

	private void InitializeAdapterProperties()
	{
		this.TableName = "Item";
	}

} // end ItemSyncAdapter 
[Serializable]
public partial class Item2ndPassSyncAdapter : Microsoft.Synchronization.Data.Server.SyncAdapter
{
	partial void OnInitialized();

	public Item2ndPassSyncAdapter()
	{
		this.InitializeCommands();
		this.InitializeAdapterProperties();
		this.OnInitialized();
	}

	private void InitializeCommands()
	{
		// InsertCommand
		this.InsertCommand.CommandText =  @"UPDATE Item SET [ParentID] = @ParentID,
CreatedDate='1899-12-25 00:00:00.000', LastUpdatedDate=@sync_last_received_anchor WHERE ([ID] =
@ID) AND (@sync_force_write = 1 OR ([LastUpdatedDate] IS NULL OR [LastUpdatedDate] <=
@sync_last_received_anchor)) SET @sync_row_count = @@rowcount";

		// SelectIncrementalInsertsCommand
		this.SelectIncrementalInsertsCommand.CommandText = @"SELECT  [ID],
[ParentID] FROM Item WHERE ([CreatedDate] > @sync_last_received_anchor AND [CreatedDate] <=
@sync_new_received_anchor)";

	}

	private void InitializeAdapterProperties()
	{
		this.TableName = "Item2ndPass";
	}

} // end Item2ndPassSyncAdapter

In this case, it would be enough to set up the second-pass sync adapter to be executed after the first one; for circular references, I put all second-pass adapters at the end, after all first-pass adapters (see the sketch below). Notice that the commands for selecting incremental inserts and updates read all columns - this is probably suboptimal because some fields will not be used, but it's much more convenient to have all field values handy than to rework the whole code generator template for each minor adjustment.
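For illustration, the ordering could be expressed like this when filling the server-side provider’s adapter collection – a sketch assuming a standard DbServerSyncProvider and the two generated adapters above:

using Microsoft.Synchronization.Data.Server;

DbServerSyncProvider serverProvider = new DbServerSyncProvider();

// first pass: regular inserts/updates/deletes, one adapter per table
serverProvider.SyncAdapters.Add(new ItemSyncAdapter());
// ... first-pass adapters for all other tables go here ...

// second pass, after ALL first-pass adapters have run: it only wires up
// the foreign keys, so every referenced row already exists by this point
serverProvider.SyncAdapters.Add(new Item2ndPassSyncAdapter());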

UPDATE (24.11.2010): I’ve added a source file illustrating this solution. I haven’t tested it (although it does compile and may well work) but it could be useful in showing the overall picture. It was created by extracting one generated sync adapter from my application, hacking away most of our specific code and then making it compile: http://www.8bit.rs/download/samples/ItemSyncAgentSample.cs

Note that the file contains a couple of things that stray from the standard implementation, like using one table for all tombstone records, using the sample SQL Express client sync provider etc. Just ignore these. One thing, though, may be of interest (I may even do a blog post about it one day): the insert command knows how to restore the autoincrement seed after replication, so that different autoincrement ranges can exist in different replicated databases (no GUIDs are used for primary keys) and identity insert is possible. This is necessary because SQL Server automatically sets the current autoincrement seed to the largest value inserted into the autoincrement column. Keep in mind that this (as well as the whole class, for that matter) may not be the best way to do things – but I’ve been using it for some time now and haven’t had any problems.

Pre-fetching data with LINQ to SQL?

(Yes, I know I’m behind the times - “LINQ to SQL? Who needs it when there’s the newest preview/alpha/beta of the Entity Framework?” Well, I did start this application in EF v1 and ran away when I saw “unsupported” stickers plastered all over it. So, no thanks, I’m waiting for the proverbial “Microsoft v3.11” (or 3.51, whatever they call it)).

Looking superficially, one would say that all ORMs are alike. Moreover, as one of the newest to come into the world, LINQ to SQL would be expected to have its philosophy and design done according to previously accumulated knowledge. Erm, yes, that’s a polite way of saying that I expected it to be a rip-off of NHibernate…

This similarity may exist in general, but there are some areas in which the two are completely separate worlds. The example I encountered is performance optimization. Coming from an NHibernate background, I was surprised to discover how few optimization topics the two have in common. In some aspects, NHibernate has already solved (at least for me) issues that LINQ to SQL has not yet stumbled upon, but in others, LINQ to SQL focuses on performance issues that don’t even exist as topics in NHibernate.
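A concrete example of the LINQ to SQL side of that coin is eager loading (pre-fetching) through DataLoadOptions. A minimal sketch – the Company/Persons model and MyDataContext are made up for illustration, but LoadWith itself is the real API:

using System.Data.Linq;
using System.Linq;

MyDataContext ctx = new MyDataContext(); // hypothetical DataContext
DataLoadOptions options = new DataLoadOptions();

// pre-fetch each Company's Persons collection together with the Company rows,
// instead of lazy-loading it with one extra query per company
options.LoadWith<Company>(c => c.Persons);

ctx.LoadOptions = options; // must be assigned before the first query runs

var companies = ctx.Companies.ToList(); // Persons arrive pre-loaded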

NHibernate 2.1 updates schema metadata without being asked to

In NHibernate 2.1, the session factory is set up to access the database immediately when you build it. This is done by a Hbm2ddl component to update something called SchemaMetaData: I’m not sure what this is all about, but I am certain that such behaviour is not nice. The previous version of NHibernate didn’t do it, so I expect the new one to behave likewise unless I explicitly order the change.

The solution for this is to add a line to your hibernate.cfg.xml file that says:

<property name="hbm2ddl.keywords">none</property>

Note that completely omitting this setting will actually enable the feature… Did I already mention I don’t like it? I dislike it enough that I decided not to change the config files but to hardcode it disabled. I use one global method to load the NHibernate configuration, so this is easy. The code looks something like this:

_configuration = new global::NHibernate.Cfg.Configuration();
_configuration.Configure();
_configuration.SetProperty("hbm2ddl.keywords", "none");