Software Mechanics
Why do we even have that lever?

Entlib 5.0 Hands-on labs released

May 25, 2010 06:53 by chris

At long last, our final core deliverable for Enterprise Library 5.0 is released - the Hands-on labs. This set of labs (in either VB or C#, your choice) walks you through the various features of each of the blocks. It's a great way to learn what Entlib can do for you, and gives examples of just about everything you can do.

It's available for download here.

And that's it, I'm going back to bed. :-)


Categories: .NET | Entlib | p&p | Unity

Unity 2.0, Entlib Docs released

May 5, 2010 12:35 by Chris

After a bit of a struggle, we have released the final Entlib 5.0 documentation set, the Unity documentation, and the final Unity 2.0 standalone binary release.

As usual, Grigori has the official announcement.

<phew> Hopefully people will be able to stop asking me how to write their config files now. :-)



Catching up with recent stuff

May 1, 2010 15:39 by chris

Wow, fall a little behind and time just flies. Sorry for the delays, folks – I’ve had some personal issues that kept me from updating this blog when I really should have.

So, in no particular order:

  • Enterprise Library 5.0 has been released! Grigori has the full release announcement, along with download links. The doc set is going through the final, final, FINAL edit passes right now, and should be out early next week. Once the docs are out, the final standalone Unity 2.0 installer will be released as well. In the meantime, if you want the Unity 2.0 bits, they’re included in the Entlib 5.0 bin directory. And yes, we know the MSDN landing page still points to Entlib 4.1 – MSDN is doing some kind of database migration (I neither know nor want to know the details) which has rendered our landing pages read-only until it’s done. The update is ready to go, but until their system is updated we’re stuck with the old one.
  • As part of the Entlib 5.0 release, I did an interview with the Pluralcast about Entlib. I don’t think I made too much of a fool of myself. :-)
  • I did two presentations at the recent Seattle Code Camp. One was, to be honest, a flop, but the other one, titled “Abusing Lambdas for Fun and Profit” was quite well received. I’ve had several requests for the source code to that presentation, so I’ve put the slides and code up on my download page. Enjoy!

I guess that’s it for the moment. Have fun, and be sure to let the team know what you think of Entlib 5!


Categories: .NET | Entlib | p&p | Unity | Personal

End of an era

April 3, 2010 05:36 by chris

I’m not quite sure why I’m writing this; it’s not like I have anything insightful to offer. But it needs to come out so I guess I’ll do my best to put my thoughts here into pixels.

I consider my career to have truly started in January of 2000. I’d been working successfully at a variety of positions until I got my first “dream job” offer. I moved my family from Connecticut to Portland, Oregon, so that I could start working with Chris Sells on a secret project. It was nerve-wracking for me, and apparently just as nerve-wracking for him. That project (later named Gen<X>) was successful in every way but financially – unfortunately, the only way that actually counts in the end.

I learned a TON from Chris, Shawn, and Shawn, my coworkers there (yes, we had two Chrises and two Shawns – it cut down on the confusion that way).

One of the things that had a major impact was the first scheduling exercise Chris and I did over the phone before I moved. He pointed me at a web article describing the scheduling approach he wanted to use – Painless Software Schedules by Joel Spolsky.

Joel’s site was a breath of fresh air after formative post-college years spent in the military-industrial complex. I devoured his writing, and got quite active on his forum. It was one of the best places at the time for insightful discussion around software development.

So it is with bittersweet feelings that I saw this today:

Last one out turn off the lights

I won’t deny that it was time for this. The tone of the forum changed dramatically over time; by the end it was mainly a giant echo chamber blindly agreeing with things Joel had said two years earlier. I personally started dropping out when I got lambasted by Joel himself for daring to suggest I actually preferred to sit in the same room with my coworkers (and no, I never did get that pony).

I found it a sign of my own professional development that I was disagreeing with Joel’s writings more and more, and was able to articulate why. That didn’t go over so well on the forums, though, so I stopped posting there about a year ago.

Still, it was a great community while it lasted. Joel on Software was my “third place” for quite a while, and I will miss it. Joel, thanks for the thought-provoking topics and for giving me a place to hang out. Best of luck with StackOverflow and everything else you’re working on.



Entlib 5.0 and Unity 2.0 – Beta 1 release

February 8, 2010 16:05 by chris

Hey all. Grigori has the full details, but I wanted to post a quick message that we’ve released Beta 1 of Entlib 5.0 and Unity 2.0. So, what makes it a beta? Well, the biggest thing is that this is a signed binary release, with a full MSI. There are still a few things to tweak (the Quickstarts haven’t been updated and so aren’t in the MSI, and the config tool is still getting shaken out), but I’m very proud of what my team has produced for Enterprise Library 5.0.

So, what we desperately need from you is – go download it and try it! This beta will install side-by-side with previous Entlib versions, and it uninstalls cleanly (if it doesn’t, that’s a bug – please let us know!). We aren’t shipping the VS-integrated config tool (yet), so it won’t screw up your copy of Visual Studio either. We really, really need feedback from people on how it works! We’ve done a lot of testing on our end, but nothing compares to real-world exercising of the bits.

So, please go download it and let us know!

Enterprise Library 5.0 Beta 1

Unity 2.0 Beta 1



Unity 2.0 Automatic Factories

January 7, 2010 18:25 by chris

As we’re sloping down the glide path into the release of Unity 2.0 and Entlib 5, I wanted to start talking about some of the things that got added or changed in the container. For this post I want to discuss the new “automatic factories” feature.

This one was a recent addition that I threw in over the Christmas holiday, “inspired” by a feature recently added to AutoFac. At its core it answers a simple question: what if I need to create objects through the container, but not at injection time – only later?

This comes up in lots of situations. The object may be heavy, so you don’t want to create it until you absolutely need it. Or you need many of them, but can’t predict when. With Unity 1.2, you basically had two choices in this scenario.

First, you could inject the container as a dependency. This works, but has several issues. You’ve now coupled yourself to the container. Also, you have no way of knowing as a consumer of that class what needs to be in the container – the dependencies have become completely opaque.

A second choice is to inject a factory object. Then call the factory when you need to create an object; the factory implementation can call back into the container. This improves testability and solves the lack of metadata problem above. However, it has the downside of lots of boilerplate code. A separate factory class for every class in your system. Ugh.

Automatic factories solve these problems pretty easily. It’s so simple I should have done it a long time ago. Simply have a dependency of type Func<Whatever>. Then you’ll get that delegate injected. When you invoke the delegate, it calls back into the container to Resolve<Whatever>().

If you resolve the dependency with a name, that’s the name the delegate will use when you invoke it to do the resolve.

Finally, if your dependency is Func<IEnumerable<Whatever>>, then you’ll get a ResolveAll call executed.
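
Here’s a minimal sketch of what this looks like end to end. The ILogger and FileLogger types are made up for illustration; the container calls themselves are the standard Unity 2.0 API:

using System;
using Microsoft.Practices.Unity;

public interface ILogger { void Write(string message); }

public class FileLogger : ILogger
{
    public void Write(string message) { Console.WriteLine(message); }
}

public class Worker
{
    private readonly Func<ILogger> createLogger;

    // The container injects a delegate instead of a live ILogger.
    public Worker(Func<ILogger> createLogger)
    {
        this.createLogger = createLogger;
    }

    public void DoWork()
    {
        // Invoking the delegate calls back into the container: Resolve<ILogger>().
        ILogger logger = createLogger();
        logger.Write("working...");
    }
}

class Program
{
    static void Main()
    {
        var container = new UnityContainer();
        container.RegisterType<ILogger, FileLogger>();

        // No registration is needed for Func<ILogger> itself; the automatic
        // factories feature supplies the delegate.
        var worker = container.Resolve<Worker>();
        worker.DoWork();
    }
}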

Simple, straightforward, and it solves all the problems of coupling, lack of metadata, and boilerplate code.

This is in the drop I just posted yesterday on Codeplex source control – go take a look and let me know what you think!


Categories: .NET | p&p | Unity

Thinking about the future of the Caching block

December 15, 2009 08:36 by Chris

We’re hard at work on Enterprise Library 5.0. One of the things that we’re going to be digging into pretty heavily is .NET 4.0 compatibility. As part of that, I’m beginning to wonder (again) about which blocks still make sense in Entlib.

The Caching block is one of our more popular blocks. It’s been around since before Entlib 1.0 as a stand-alone piece. It was originally intended to give occasionally connected client apps an offline, persistent cache of information from the server. In practice, it’s been used a lot as a simple in-memory cache. We also get constant requests to build a distributed or shared cache.

Requests like this are what got me thinking. In .NET 2.0, the ASP.NET team did a ton of work to make the System.Web.Caching cache work outside of web apps (sadly, they never updated their docs to say so). Since they’re working at a lower level than we can, the web cache makes a much, much better in-memory cache than we could build. It monitors overall memory usage, for example, while the best we can do is track how many items are in our cache. With the upcoming release of the cache-formerly-known-as-Velocity, Microsoft has a quality distributed cache technology available. We’d be nuts to try and replicate all that work. We did an analysis of the APIs between our CacheManager and the Velocity cache and came to the conclusion that wrapping Velocity in our interface would cost you a huge amount of capability, so we decided the best guidance was “use Velocity directly” rather than go through the Caching block.

Finally, in .NET 4.0, there’s a new namespace called System.Runtime.Caching, which provides essentially the ASP.NET cache mechanism, but with a provider model behind it. So you can use the existing cache, but you could also plug in new implementations of caching as well.
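
As a rough sketch (the key and value here are made up), using the new API looks like this:

using System;
using System.Runtime.Caching;

class CacheExample
{
    static void Main()
    {
        // MemoryCache.Default is the in-process cache; other providers can be
        // plugged in behind the same ObjectCache abstraction.
        ObjectCache cache = MemoryCache.Default;

        // Cache a value for ten minutes; the policy also supports sliding
        // expiration and change monitors.
        var policy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
        };
        cache.Set("customer:42", "Jane Doe", policy);

        var value = (string)cache.Get("customer:42");
        Console.WriteLine(value ?? "(cache miss)");
    }
}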

So, here’s my thinking: given the presence of System.Runtime.Caching, does it make sense to have the caching block at all in Entlib 6? The only scenario that isn’t met by it is the persistent offline cache, and that can be done by building a new cache provider. We’ve always said that p&p’s job was to fill gaps in the platform until the platform catches up. In this case, the platform has definitely caught up, and even passed us in some ways.

Do you use the Caching block? Does it do anything you couldn’t do as well or better through the framework-provided mechanisms? Is it just backwards compatibility keeping you on it, or is there something I’m missing?

Let us know!


Categories: Entlib | p&p | .NET

A new article in MSDN (well, kind of…)

November 5, 2009 14:25 by chris

MSDN magazine has just published my latest article. It’s part of the Inside Microsoft patterns & practices online column, and it’s about building libraries that take advantage of dependency injection. I used the work we did rearchitecting Enterprise Library 5 as a sort of case study of the goals, forces, and resolutions. I’m actually quite proud of what we did here, and thought that others might be interested.

Unfortunately, for those of you who get the printed magazine, you may see my name on the cover, but the description of the article there is actually of Brian Randell’s article from the October issue. I will be having … words … with the magazine staff over this one tomorrow. Sorry Brian – TFS work items are something I’m very happy to let you be the expert in.

Anyway, if you’re interested in how Entlib 5 works under the hood, or want to write a library that uses a DI container without forcing your users to use a specific container, please check it out. And let me know what you think!


Categories: p&p | Unity | Entlib | .NET

Wrapping Async callbacks

June 12, 2009 16:22 by chris

We’ve started adding asynchronous support to the Data Access block. This has been a highly requested feature, so we put it fairly high on our backlog. At first glance, this looked pretty easy. Just add a couple of methods to the SqlDatabase class:

public IAsyncResult BeginExecuteReader(DbCommand command, string commandText, AsyncCallback callback, object state);
public IDataReader EndExecuteReader(IAsyncResult ar);

 

(There are another bazillion overloads, but for this discussion let’s stick to these, ok?)

Starting out

Enterprise Library does a lot of things under the hood. In particular, we have a bunch of optional instrumentation. As such, we want to track things like which command was executed, what the start time was, and a bunch of other stuff. So we decided we needed our own implementation of IAsyncResult. We implemented DaabAsyncResult to wrap around the one returned by ADO.NET, stuffed our extra stuff in there, and read it back out in EndExecuteReader. The two methods started out looking like this:

public IAsyncResult BeginExecuteReader(DbCommand command, string commandText,
    AsyncCallback callback, object state)
{
    IAsyncResult innerResult = ((SqlCommand)command)
        .BeginExecuteReader(callback, state, CommandBehavior.Default);
    return new DaabAsyncResult(innerResult, ... other stuff ...);
}

public IDataReader EndExecuteReader(IAsyncResult ar)
{
    var result = (DaabAsyncResult)ar;
    ... do other stuff ...

    // DaabAsyncResult is assumed to carry the original command as well as the
    // inner IAsyncResult, so we can call the real EndExecuteReader here.
    return ((SqlCommand)result.Command).EndExecuteReader(result.InnerResult);
}

Pretty straightforward, really. But there are a ton of gotchas here. Let’s take a look at what’s wrong with this code.

Replacing the IAsyncResult object

The first one to pop out is in this line:

var result = (DaabAsyncResult)ar; 

This will end up throwing an InvalidCastException. Why? Because we’re not invoking the callback, ADO.NET is. And it’s not passing our async result object, it’s passing its own, unwrapped one. Obviously we need a way to get our result object in there.

There’s no way that I could find to override how this process works, so we need to cheat a little. The solution I ended up with was to pass, not the original callback, but a small wrapper (using a lambda function) so we can do the switch between the two. This lambda looks something like this:

AsyncCallback wrapperCallback = ar => {
    callback(GetDaabAsyncResult(ar));
};

The obvious next step is, of course, to implement GetDaabAsyncResult. This is where things get tricky. Where do we get hold of the async result? It’s the return value from BeginExecuteReader. When do we have it? After BeginExecuteReader returns, of course. When do we set up the callback? Before calling BeginExecuteReader. Uh oh…

Luckily, C#’s lexical closure feature comes to our rescue. What we can do is store the async result in a local variable; reference it from the lambda, and the variable itself is captured – so the assignment we make later is visible inside the lambda. The body of BeginExecuteReader now looks like this:

    DaabAsyncResult result = null; // need null to satisfy definite assignment rules

    AsyncCallback wrapperCallback = ar => {
        callback(result);
    };

    var innerResult = command.BeginExecuteReader(wrapperCallback, state, ...);
    result = new DaabAsyncResult(innerResult, ...);
    return result;

And we’re done! Our users’ callbacks get the right IAsyncResult object, happily wrapped and ready to go. Sadly, we’re not done yet…

Off to the races

Anyone who’s done threading is familiar with race conditions. For those who aren’t, a race condition is a section of code that will produce different results depending on which thread runs in which order. And we’ve got a whopper of a race condition here. Do you see it? I’ll give you a hint – what happens if the wrapper callback runs after BeginExecuteReader returns but before result gets assigned?

Yes, it’s a small window, but it’s a statistical guarantee that you’ll hit it every time you’re doing a demo for a huge client or a VC or an Admiral or something (I speak from bitter experience here, trust me). So we need to prevent the wrapper callback from accessing the value of result until we know, beyond a shadow of a doubt, that it’s been set.

Basically, we need one thread to wait for another one to do something. My initial thought was to use an Event object. But thinking about it, that didn’t seem right – Events are kernel objects, they need to be disposed, and we may have a lot of these things (one per call to BeginExecuteReader, to be precise). Luckily, there’s a lighter-weight solution: .NET locks.

We need to introduce a lock so that the wrapper won’t execute until we know that result has been set. Luckily, that’s pretty easy. The code looks like this now:

    DaabAsyncResult result = null;
    var padlock = new object();

    AsyncCallback wrapperCallback = ar => {
        lock(padlock) { }
        callback(result);
    };

    lock(padlock)
    {
        var innerResult = command.BeginExecuteReader(wrapperCallback, state, ...);
        result = new DaabAsyncResult(innerResult, ...);
    }
    return result;

Thanks to padlock, the wrapperCallback might start before result is set, but it won’t continue until that lock is released, and the lock isn’t released until result is safely set. I know the “lock(padlock) { }” line looks a little weird, but it’s legitimate. Calling the callback doesn’t need to be within the lock, and I’ve had many years of “make your lock regions as short as possible” drilled into my brain.

So, there we go, race condition solved. Ship it! Right?

One thing at a time

Unfortunately, there’s another wrinkle in this whole thing. It is allowed for implementers of the async pattern to actually complete the entire operation, synchronously, on the same thread that called BeginExecuteReader, and call the callback before returning the async result. While I have no idea if ADO.NET can actually do that, nothing in the docs says it can’t, and even if it doesn’t now, that doesn’t mean that it won’t in future versions. So we have to handle this case.

This torpedoes our design in a couple of ways. First, on synchronous completion, the locks don’t block. Instead it’s just a recursive acquisition of the lock by the same thread, a completely normal and legal thing to do. In fact, if this weren’t allowed we’d actually deadlock here, which would be even worse. But since we’re on the same thread, there’s no possible way for the callback to wait until after result has been set.

Luckily, there’s an easy way to tell if this happened – the IAsyncResult.CompletedSynchronously property will be true. We must not invoke the callback until we know result has been set, so let’s do exactly that. In our wrapper, if the operation completed synchronously we know result hasn’t been set yet, so we don’t invoke the callback there. Instead, we invoke it manually from BeginExecuteReader after result has been assigned. And we end up with this:

DaabAsyncResult result = null; 
var padlock = new object();
AsyncCallback wrapperCallback = ar => {
    lock(padlock) { }
    if(!ar.CompletedSynchronously)
    {
        callback(result); 
    }
};

lock(padlock) 
{
    var innerResult = command.BeginExecuteReader(wrapperCallback, state, ...);
    result = new DaabAsyncResult(innerResult, ...);
}

if(result.CompletedSynchronously)
{
    callback(result);
}

return result; 

Wrapping Up

So there you have it. This recipe should work for wrapping any async operation that implements the standard async pattern. It’s a little involved to set up, granted. But the only other approach I could think of was to put my callback on a threadpool thread, thus eating two threads for one async operation. Why do that when one will do?
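
To make that concrete, here’s the recipe boiled down into a generic sketch. WrappedAsyncResult stands in for DaabAsyncResult, and the Begin operation is passed in as a delegate – this isn’t the actual Entlib code, just the shape of it:

using System;
using System.Threading;

public class WrappedAsyncResult : IAsyncResult
{
    public IAsyncResult Inner { get; private set; }

    public WrappedAsyncResult(IAsyncResult inner)
    {
        Inner = inner;
    }

    // Forward the standard IAsyncResult members to the wrapped result.
    public object AsyncState { get { return Inner.AsyncState; } }
    public WaitHandle AsyncWaitHandle { get { return Inner.AsyncWaitHandle; } }
    public bool CompletedSynchronously { get { return Inner.CompletedSynchronously; } }
    public bool IsCompleted { get { return Inner.IsCompleted; } }
}

public static class AsyncWrapper
{
    public static IAsyncResult Begin(
        Func<AsyncCallback, object, IAsyncResult> beginOperation,
        AsyncCallback callback,
        object state)
    {
        WrappedAsyncResult result = null;
        var padlock = new object();

        // Hand the underlying operation a wrapper callback that swaps in our
        // own IAsyncResult before calling the user's callback.
        AsyncCallback wrapperCallback = ar =>
        {
            lock (padlock) { }   // wait until result has been assigned
            if (!ar.CompletedSynchronously && callback != null)
            {
                callback(result);
            }
        };

        lock (padlock)
        {
            IAsyncResult inner = beginOperation(wrapperCallback, state);
            result = new WrappedAsyncResult(inner);
        }

        // If the operation completed synchronously, the wrapper callback ran
        // before result was set, so invoke the user's callback here instead.
        if (result.CompletedSynchronously && callback != null)
        {
            callback(result);
        }

        return result;
    }
}

For the reader case, you’d call it as something like AsyncWrapper.Begin((cb, s) => sqlCommand.BeginExecuteReader(cb, s), userCallback, userState).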



Long overdue – MiniWiki sample updated

June 2, 2009 17:40 by chris

Oy, it’s easy to fall off the blog bandwagon, isn’t it?

I got a request earlier today for my MiniWiki sample, updated to the MVC 1.0 release bits. I went “Sure, I already did that…” and then discovered I hadn’t. Luckily, the RC2 version works right out of the box. :-)

While I was in there I also fixed a minor CSS issue that was mangling my page titles. As usual, it can be downloaded from the download page.

