Software Mechanics
Why do we even have that lever?

Using Unity and the ASP.NET MVC Preview 2

March 13, 2008 07:40 by Chris

The recent release of the ASP.NET MVC Framework made a change to the IControllerFactory interface at the request of users of dependency injection containers. Instead of passing the controller factory the type of controller desired, it now gets the string from the route, and the factory can now resolve that type however it wishes.

This fits with most other DI containers, since they have to have everything pre-configured anyway. However, Unity is a little different, in that you don't have to register concrete types ahead of time. This change requires Unity users to register types ahead of time like everyone else, or do reflection to find controller types at runtime. Kind of annoying.

However, there's another answer. Recent (ahem) discussions on the net have shown an obsession with interfaces, to the point that even when there's a useful base class that will solve the problem, many developers will go straight to the interface and duplicate a lot of work.

In this case, there's a simple way to hook Unity up to ASP.NET MVC preview 2. Rather than implement IControllerFactory, inherit from DefaultControllerFactory instead. There's a method in there, GetControllerInstance, which is called after the name has already been resolved to a type. In other words, the DefaultControllerFactory already does the reflection for you.

Here's the code:

    public class UnityControllerFactory : DefaultControllerFactory
    {
        IUnityContainer container;

        public UnityControllerFactory(IUnityContainer container)
        {
            this.container = container;
        }

        protected override IController GetControllerInstance(Type controllerType)
        {
            if (controllerType == null)
            {
                throw new ArgumentNullException("controllerType");
            }
            if (!typeof(IController).IsAssignableFrom(controllerType))
            {
                throw new ArgumentException("Type requested is not a controller", "controllerType");
            }

            return container.Resolve(controllerType) as IController;
        }
    }

To hook this up, in global.asax.cs, do something like this:

            IUnityContainer container = new UnityContainer();

            // Configure container here

            IControllerFactory controllerFactory = new UnityControllerFactory(container);
            ControllerBuilder.Current.SetControllerFactory(controllerFactory);

Simple and easy! Hope this helps!

 


Categories: .NET | Unity

Reconstructing ObjectBuilder - Changes to the IBuilderStrategy interface

March 9, 2008 17:50 by Chris

Reconstructing ObjectBuilder

Ok, so I'm not so imaginative with titles.

I'm going to be posting snippets of Unity internals here, and in particular getting into the sometimes ugly details of ObjectBuilder, how we use it, and how we've changed it.

To start off, I'd like to share probably the biggest change to the original OB API. For those who read my previous series, you'll remember the definition of IBuilderStrategy looked like this:

    public interface IBuilderStrategy
    {
        object BuildUp(IBuilderContext context, object buildKey, object existing);

        object TearDown(IBuilderContext context, object item);
    }

To implement a strategy, you implemented this interface. When you were done, you returned the value you were building up. To invoke the rest of the strategy chain, you wrote:

base.BuildUp(context, buildKey, existing);

This call would invoke the rest of the strategy chain and return the result back to you. Many strategies simply returned this value after having done their work.

This approach has the advantage of simplicity, but in practice is a real pain. The major issue is debugging and profiling. If you've ever seen a stack trace from when ObjectBuilder goes wrong, you'll know what I'm talking about. You can easily get 35 or 40 stack frames of nothing but BuilderStrategy.BuildUp, and it becomes next to impossible to figure out where the chain went wrong.

I got motivated to do something about this when we were doing some profiling on the Web Client Software Factory library, which uses OB1. Our perf guy told us "your problem is in ObjectBuilder." I asked him how he knew that, and he showed me his profile, which had nothing but OB calls in it, in gigantic stacks 70 or 80 layers deep. It actually turned out that OB was not the determining factor, but we couldn't see that because of all the noise.

So, as a result, the IBuilderStrategy interface now looks like this:

    public interface IBuilderStrategy
    {
        void PreBuildUp(IBuilderContext context);

        void PostBuildUp(IBuilderContext context);

        void PreTearDown(IBuilderContext context);

        void PostTearDown(IBuilderContext context);
    }

I've explicitly split the processing of a strategy into two parts, Pre and Post. This is similar to the way WCF behaviors are designed. All the Pre- methods of the strategies are called going forward down the strategy chain, then the Post- methods are called in the reverse order.

Notice that the existing and buildKey parameters are now gone. These values are now included in the IBuilderContext, and strategies can change them as the strategy chain executes.

Now, the pre and post methods do not need to explicitly invoke the rest of the chain. Instead, the StrategyChain class now has ExecuteBuildUp and ExecuteTearDown methods that call the strategies in the appropriate order. Also, if you write a strategy that needs to short-circuit the chain and return early, there's a BuildComplete flag in the context that, if set true by a strategy, will stop the chain from continuing.
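To make the new calling convention concrete, here's a toy model. These are not the real ObjectBuilder types (ToyContext, ToyStrategy, and the simplified driver are mine), but the call ordering and the BuildComplete short-circuit behave as described above:

```csharp
using System;
using System.Collections.Generic;

// Toy model of the new strategy calling convention. NOT the real
// ObjectBuilder types; names and driver are simplified for illustration.
class ToyContext
{
    public bool BuildComplete;
    public List<string> Log = new List<string>();
}

abstract class ToyStrategy
{
    public abstract void PreBuildUp(ToyContext context);
    public abstract void PostBuildUp(ToyContext context);
}

class NamedStrategy : ToyStrategy
{
    readonly string name;
    readonly bool shortCircuits;

    public NamedStrategy(string name, bool shortCircuits)
    {
        this.name = name;
        this.shortCircuits = shortCircuits;
    }

    public override void PreBuildUp(ToyContext context)
    {
        context.Log.Add("Pre " + name);
        if (shortCircuits)
        {
            context.BuildComplete = true;
        }
    }

    public override void PostBuildUp(ToyContext context)
    {
        context.Log.Add("Post " + name);
    }
}

static class ToyChain
{
    // Run the Pre- methods forward until the end of the chain (or until
    // a strategy sets BuildComplete), then run the Post- methods in
    // reverse over the strategies that actually ran.
    public static void ExecuteBuildUp(IList<ToyStrategy> chain, ToyContext context)
    {
        int ran = 0;
        while (ran < chain.Count)
        {
            chain[ran].PreBuildUp(context);
            ran++;
            if (context.BuildComplete)
            {
                break;
            }
        }
        for (int i = ran - 1; i >= 0; i--)
        {
            chain[i].PostBuildUp(context);
        }
    }
}
```

Running a chain of A, B (short-circuiting), and C logs Pre A, Pre B, Post B, Post A; C never executes, and the whole round trip stays only as deep as the driver loop rather than one stack frame per strategy.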

The net result of this change is that stacks are a LOT shallower; you will only get as deep as the number of recursive dependencies. We also got a single spot to wrap and handle exceptions, which was a nice benefit. It turns out to be slightly faster as well, but perf wasn't a real priority behind this change.

There are a couple of lost capabilities, unfortunately. You can't completely change the context, recursively invoke the rest of the chain, and then switch back to the original context and execute the rest of the chain again. This was a sufficiently rare scenario (none of the built-in OB strategies used this technique) that I don't feel bad about losing it.

If you've used classic OB and are wondering what's going on with the strategies, I hope this helps.


Categories: Unity

Working offline with TFS

February 28, 2008 18:09 by Chris

At p&p, we use Microsoft Team Foundation Server for work item tracking and source code control. This comes as a surprise to exactly zero people, I'm sure. :-)

TFS did take some getting used to, but after working with it for a while it's gotten pretty comfortable. At least while I'm in the office. There's one nagging problem with it; when I go home, or out of the office, I've got to have corpnet access to work on the project. TFS tracks on the server which files you're editing, which means that even working on a local copy without doing a checkin requires access to the TFS server.

While I do have remote access, it's a pain, it's slow (I live about three miles beyond the borders of civilization, apparently, and my home can't get real broadband service), and at times unreliable. I've resorted to turning all the read-only bits off locally, editing away, then using the Team Foundation Power Tools (tfpt) uu and online commands to get synced back up.

This worked, but I hate working without a net. I still want source control, even just for local projects. And the tfpt commands get really confused if you delete files, so I needed a way to track that somehow, other than a pad of paper.

I just came up with an idea that I'm trying out, and thought I'd share.

I've been playing for a few months with a new open source source control system called Bazaar. It's intended as a distributed version control system where people ship patches to each other, and it helps manage the merges and whatnot (kind of like Git). It's highly portable and plays nicely on Windows (unlike Git). You do need to be comfortable at the command line, but that's not a problem for me. One of the nicest things for my purposes is that there's no server to set up. Each working tree is its own working repository. You just type "bzr init ." and you've got source control on that directory. No need to set up a Subversion server, for example. And sharing my branches is as easy as emailing a file. Not that I've done that yet; this was just for me.

Anyway, back to the point. Here's what I'm doing. First, I made my TFS workspace into a bazaar repository as well. This means that as I make changes in TFS, bazaar will treat them as changes to the working copy, and I can check them into bazaar too. Second, I created a separate bazaar branch for my offline work. This gives me local source control; as long as I'm working in my local branch, I don't need to be connected to TFS at all.

Here's the money shot. Bazaar supports easy merging between branches. When I'm ready to commit to TFS, I push my changes from my local branch back to the TFS bazaar branch. Then I check that into TFS. Similarly, I can do a "get latest" in TFS, check into the master bazaar branch, then merge those changes into my local copy.

It's not quite seamless (I still need to play with tfpt to get the checkouts set up right) but so far it's working out a LOT better than working completely disconnected from source code control. Having local source code control makes this work a lot more agile; I can freely delete and modify stuff locally, check in or even branch some more, and I don't have to push back up to the server until I'm ready.

So far this system has been in use for about four hours, but so far so good. I'll update later once I've got more experience with it.



End of the Deconstruction

February 28, 2008 18:01 by Chris

I'd like to thank everyone for the great reception for my Deconstructing ObjectBuilder series. It's good to know I was filling a gap.

However, I'm currently thinking that I'm not going to post the last segment on using OB to wire up to the event broker. I was editing the text for the post last night, and I realized that it needed serious work to update to OB2. Serious enough to take at least a couple of days. And, quite honestly, I think I've got more important things to write about.

So I think I'm going to call an end to the Deconstructing ObjectBuilder series, and instead start a new one, on Unity, extensibility, and how Unity uses OB2. The EventBroker example I've been using is actually included as one of the Unity quickstarts, so I don't even need to update that code for the container. ;-)

I have uploaded all the code from the Deconstructing ObjectBuilder series so far. This code even works, unlike some of the blog posts with last-minute typos in them. Feel free to play with it. You'll need to download and compile the OB2 sources first. But to be honest, with Unity out there (which has made some significant changes to OB2), I'm not sure how much effort that's worth right now. You can tell me.

Thanks for reading, now on to something (sorta) new!

DeconstructingObjectBuilder.zip (37.93 kb)



Deconstructing ObjectBuilder - Wiring, part 1

February 11, 2008 17:26 by Chris

I've gotten some feedback that these OB articles are a little long. I'm going to try chopping them up into smaller pieces and see how it goes. Please let me know!

Wiring Objects Together

We've looked at using ObjectBuilder to create objects. But creating an object is only part of the job. Good OO designs use many collaborating objects. Hooking these objects together can take a lot of code, and this wiring code is often hard to maintain. ObjectBuilder can be used to automate this wiring as part of the construction process, or even on objects that already exist.

This kind of wiring is at the core of the Dependency Injection pattern, which we'll get to in just a few more installments.

Wiring Events

As an example of the kinds of object wiring you'd want to automate, I built a simple event broker object. Events are a fundamental part of the .NET object model, and are used throughout the BCL. Events are raised for everything from buttons being clicked to assembly resolution failing. They're a great tool.

But (isn't there always a but?) building event driven systems can be very complex. In order to become an event receiver, your object has to have a reference to the event source. This isn't an issue on a simple dialog box, but in a larger app this can result in spaghetti very quickly. For example, consider implementing the basic cut, copy, and paste operations on a form with a bunch of text boxes and a menu. Each text box would need to hook up to the menu items. But wait - these events also need to hook up to the clipboard keyboard shortcuts. But wait - the keyboard shortcuts are sent to the text boxes themselves, which means that the text boxes actually produce as well as respond to these events. This means (at first glance) that every text box needs to hook up to events on every other text box. Ick.

There's an old adage in computer science: "Every problem can be solved with another layer of indirection." That's what the event broker provides. Instead of registering with every possible producer of an event, you instead register with only one - the broker. The broker takes care of the details of routing events, regardless of how they're generated, to the objects that care about them.

As a side note: This event broker is intended as a demonstration, not as a production tool. For a more industrial strength implementation of this concept, check out the Composite UI Application Block and other parts of the patterns & practices client guidance.

Here's a test that demonstrates the API of the event broker:

1       [TestMethod]
2       public void ShouldCallSubscriberWhenPublisherFiresEvent()
3       {
4           EventBroker broker = new EventBroker();
5           EventSource1 publisher = new EventSource1();
6           string publishedEventName = "MyEvent";
7           bool subscriberFired = false;
8           EventHandler subscriber = delegate { subscriberFired = true;  };
9
10          broker.RegisterPublisher(publishedEventName, publisher, "Event1");
11          broker.RegisterSubscriber(publishedEventName, subscriber);
12
13          publisher.FireEvent1();
14
15          Assert.IsTrue(subscriberFired);
16      }
17
18  class EventSource1
19  {
20      public event EventHandler Event1;
21
22      public void FireEvent1()
23      {
24          OnEvent1(this, EventArgs.Empty);
25      }
26      protected virtual void OnEvent1(object sender, EventArgs e)
27      {
28          if (Event1 != null)
29          {
30              Event1(sender, e);
31          }
32      }
33
34      public int NumberOfEvent1Delegates
35      {
36          get
37          {
38              if( Event1 == null )
39              {
40                  return 0;
41              }
42              return Event1.GetInvocationList().Length;
43          }
44      }
45  }
46

At line 4, we create the broker. We create an object that exposes a .NET event on line 5. The definition of this type starts on line 18. Notice it has a public event of type EventHandler named Event1 (defined on line 20).

Line 10 is where we register the publisher. The parameters are the name that the broker will use to reference this event (MyEvent), the publishing object, and the name of the event field that actually raises this event (Event1). Note that the name the broker uses and the name the publishing type uses can, and often will, be different.

On line 8, we create an EventHandler delegate instance; this is the subscriber. I'm using the C# anonymous delegate syntax here; in a bigger case this would usually be a reference to an event handling method in a subscribing object, but for the simple case here we don't need another object.

Finally, in line 13 we raise publisher.Event1. This causes the event to fire into the broker, and the broker calls the subscribing delegate. Pretty simple to use.
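Under the hood, RegisterPublisher can be implemented with a few lines of reflection. Here's a stripped-down sketch of the idea, not the code from the download: TinyEventBroker, DemoSource, and the event name "Fired" are invented for illustration, and this version only handles events of the plain EventHandler type:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Minimal broker sketch: publishers are wired up by reflecting over a
// named .NET event; subscribers are combined into a single multicast
// delegate per broker event name.
class TinyEventBroker
{
    readonly Dictionary<string, EventHandler> subscribers =
        new Dictionary<string, EventHandler>();

    public void RegisterPublisher(string brokerEventName, object publisher, string eventName)
    {
        // Find the named .NET event on the publisher via reflection...
        EventInfo eventInfo = publisher.GetType().GetEvent(eventName);

        // ...and attach a forwarder that re-raises it to whatever
        // subscribers are registered under the broker's name.
        EventHandler forwarder = delegate(object sender, EventArgs e)
        {
            EventHandler handlers;
            if (subscribers.TryGetValue(brokerEventName, out handlers) && handlers != null)
            {
                handlers(sender, e);
            }
        };
        eventInfo.AddEventHandler(publisher, forwarder);
    }

    public void RegisterSubscriber(string brokerEventName, EventHandler handler)
    {
        EventHandler existing;
        subscribers.TryGetValue(brokerEventName, out existing);
        subscribers[brokerEventName] = (EventHandler)Delegate.Combine(existing, handler);
    }
}

// Minimal publisher used to exercise the sketch.
class DemoSource
{
    public event EventHandler Fired;

    public void Fire()
    {
        if (Fired != null)
        {
            Fired(this, EventArgs.Empty);
        }
    }
}
```

The key design point is that neither the publisher nor the subscriber ever sees the other; both know only the broker's event name.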

At least, it's simple in this simple scenario. But going back to our clipboard example, we have three events per publisher, and we'll need three subscriptions to match. All those calls to RegisterPublisher and RegisterSubscriber are tedious to write and easy to get wrong. Wouldn't it be great if we could somehow grab an object, figure out what needs to be registered, and call it automatically?

Next time, we'll configure ObjectBuilder to do exactly that.

Event broker code download:

EventBrokerSample.zip (24.18 kb)



Deconstructing ObjectBuilder - Combining Strategies

February 2, 2008 19:09 by Chris

Combining Strategies

We’ve seen how ObjectBuilder can be used to construct objects. But for the simplistic case presented, the extra complexity doesn’t buy us anything. ObjectBuilder doesn’t come into its own until more complex object creation is needed. Multiple strategies can be combined into a single chain to implement arbitrarily sophisticated logic.

For example, it’s a fairly common requirement to cache objects. The first time you request something, you actually create it. Creating these objects can be expensive: reading from databases, calling web services, or whatnot. You don’t want to pay that creation cost every time, so you sock away a copy of the object, and the second and later requests use the already created copy. Let’s look at what it takes to build a simple, general purpose caching factory.

Identifying Objects to Build

Let’s start by writing a test that indicates what we want the API to look like for our new factory. We want to be able to create an object that we don’t already have, but we want to retrieve that exact same object instance the second time. Since we may have multiple Customer objects we wish to retrieve, we should be able to assign and get by an id. Our test looks like this:

    [TestMethod]
    public void ShouldGetCachedObjectSecondTime()
    {
        CachingFactory factory = new CachingFactory();
        factory.SetCached<Customer>(true, "Jane");

        Customer c1 = factory.Get<Customer>("Jane", "Jane", "Doe");
        Customer c2 = factory.Get<Customer>("Jane");

        Assert.AreSame(c1, c2);
    }

Our factory will have a SetCached method to specify which types should be cached and which shouldn’t be, and the Get method will either create the object we want, or fetch the previously created one.

Note that both methods take a string parameter – this is the ID of the object we want to cache. Notice that the second call to Get doesn’t specify the constructor parameters, just the ID. As long as the IDs are unique, it doesn’t really matter what the actual value is (it could even be null).

So that’s what we want the API to look like. What do we need to implement it? If you’re thinking strategies and policies, you’re right, but there are two new pieces that need to be introduced first.

Build Keys

So far, we've just passed an object's type as the buildKey parameter. Until now, that's all the information we needed, but not anymore. Luckily, build keys can be a lot more than just Type instances.

There are two static methods on the BuilderStrategy class, GetTypeFromBuildKey and TryGetTypeFromBuildKey. Let's take a look at the latter:

    public static bool TryGetTypeFromBuildKey(object buildKey, out Type type)
    {
        type = buildKey as Type;

        if (type == null)
        {
            ITypeBasedBuildKey typeBasedBuildKey = buildKey as ITypeBasedBuildKey;
            if (typeBasedBuildKey != null)
            {
                type = typeBasedBuildKey.Type;
            }
        }

        return type != null;
    }

(GetTypeFromBuildKey just calls TryGetTypeFromBuildKey and throws if it fails).

This method starts out with the basic case: if the build key object is a type, it simply returns it. However, if the build key is not a type, it still needs to get a type from it somehow. So it falls back onto the ITypeBasedBuildKey interface, defined as:

    public interface ITypeBasedBuildKey
    {
        Type Type { get; }
    }

This interface is trivial to implement, but provides a common hook for build keys so that OB can always get a type out.

We need to implement a build key that holds both the type, and the id we're requesting. This is a pretty simple little struct that implements ITypeBasedBuildKey:

    struct TypeAndIDBuildKey : ITypeBasedBuildKey
    {
        public Type TypeToBuild;
        public string Id;

        public TypeAndIDBuildKey(Type type, string id)
        {
            TypeToBuild = type;
            Id = id;
        }

        public Type Type
        {
            get { return TypeToBuild; }
        }
    }

It's actually important here that the build key is defined as a struct, not a class. As you'll see later, build keys generally have to have value semantics for comparison, which is automatic when using a struct.
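You can see the difference with a tiny key type (DemoBuildKey is a hypothetical stand-in for TypeAndIDBuildKey, invented here just to demonstrate the point):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical key type for illustration, shaped like TypeAndIDBuildKey
// above. Because it's a struct, the default Equals and GetHashCode
// compare the field values, so two keys built from the same type and id
// are interchangeable even though they're distinct instances.
struct DemoBuildKey
{
    public Type TypeToBuild;
    public string Id;

    public DemoBuildKey(Type type, string id)
    {
        TypeToBuild = type;
        Id = id;
    }
}
```

Had DemoBuildKey been a class with the same fields, the default Equals would compare references, and a freshly constructed key would never match the one originally used to store a policy or a cached object.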

Where’s the cache?

Our factory creates objects, but in order to return already existing objects, we’re going to need to hold onto the object references somewhere so we can look them up again later. Thankfully, ObjectBuilder provides a facility to do exactly that.

Let’s look back at the definition of (one of the overloads of) the IBuilder.BuildUp method:

    object BuildUp(
        IReadWriteLocator locator,
        ILifetimeContainer lifetime,
        IPolicyList policies,
        IStrategyChain strategies,
        object buildKey,
        object existing);

That first parameter, locator, is what does the trick. Until now, we’ve passed null for this parameter. Now let’s look at what an IReadWriteLocator can do for us. At its simplest, a locator is a dictionary: you put objects into it with a given key, and you can later get them back out again using that same key. The IReadWriteLocator interface (and its base interface, IReadableLocator) supports a variety of methods to query the locator for its contents.

ObjectBuilder contains an implementation of the IReadWriteLocator interface named, oddly enough, Locator, which implements a weak-referenced dictionary. A weak reference is a reference that does not prevent an object from being collected by the garbage collector. This is actually ideal for our cache; if we’re actively using a cached object, it’ll be available, but if memory pressure gets tight, the GC can clean up those objects that are only referenced by the Locator (i.e. not being used currently).
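The weak-referenced dictionary idea is easy to sketch. This WeakCache class is my illustration, not the real Locator (which has a much richer interface):

```csharp
using System;
using System.Collections.Generic;

// Illustrative weak-valued dictionary in the spirit of OB's Locator.
// Values are held through WeakReference, so the GC remains free to
// reclaim an object that nothing else is keeping alive; Get then
// returns null for that key.
class WeakCache
{
    readonly Dictionary<object, WeakReference> store =
        new Dictionary<object, WeakReference>();

    public void Add(object key, object value)
    {
        store[key] = new WeakReference(value);
    }

    public object Get(object key)
    {
        WeakReference reference;
        if (store.TryGetValue(key, out reference))
        {
            // Target is null once the GC has collected the value.
            return reference.Target;
        }
        return null;
    }
}
```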

So, knowing that we need a locator, let’s take a first stab at implementing the CachingFactory:

    public class CachingFactory
    {
        private IReadWriteLocator cache;
        private StagedStrategyChain<BuilderStage> strategies =
            new StagedStrategyChain<BuilderStage>();
        private PolicyList policies = new PolicyList();

        public CachingFactory()
        {
            cache = new Locator();
        }

        public T Get<T>(string id, params object[] constructorParams)
        {
            return (T)(new Builder().BuildUp(
                cache,
                null,
                CreateConstructorParameterPolicy(typeof(T), id, constructorParams),
                strategies.MakeStrategyChain(),
                CreateKey<T>(id),
                null));
        }

        public void SetCached<T>(bool shouldCache)
        {
            SetCached<T>(shouldCache, null);
        }

        private PolicyList CreateConstructorParameterPolicy(
            Type typeToCreate, string id, object[] parameters)
        {
            PolicyList policies = new PolicyList(this.policies);
            policies.Set<ICreationParameterPolicy>(
                new CreationParameterPolicy(parameters),
                new TypeAndIDBuildKey(typeToCreate, id));
            return policies;
        }

        private object CreateKey<T>(string id)
        {
            return new TypeAndIDBuildKey(typeof(T), id);
        }
    }

We store the locator as a member variable, and pass it in on every call to BuildUp. We use the CreateKey helper method to create our build key, which combines the type and id into a single value that can be passed down to the strategies.

We’ve got the skeleton now, but our test still doesn’t run. In fact, it doesn’t even compile yet. We need to implement the SetCached method. Let’s do that next.

When we needed to pass constructor parameters to the strategies, we used a policy object. Those policies were transient, as we needed to pass different parameters every time. The caching settings, on the other hand, stick around across calls. So we need to build a persistent policy. The difference is trivial; we simply add the new policy to the member variable policy list instead of to the one we create every time.

Defining the policy is pretty simple. We saw in Chapter 1 that the PolicyList itself will map a policy object to a build key. This means our policy itself just needs to indicate if caching is on or off. Our caching policy interface looks like this:

    public interface ICachingPolicy : IBuilderPolicy
    {
        bool ShouldCache { get; }
    }

I wrote two implementations of this interface:

    class ShouldCachePolicy : ICachingPolicy
    {
        public bool ShouldCache
        {
            get { return true; }
        }
    }

    class ShouldNotCachePolicy : ICachingPolicy
    {
        public bool ShouldCache
        {
            get { return false; }
        }
    }

With these classes in place, we can implement SetCached as follows:

1    public class CachingFactory
2    {
3        private IReadWriteLocator cache;
4        private StagedStrategyChain<BuilderStage> strategies = new StagedStrategyChain<BuilderStage>();
5        private PolicyList policies = new PolicyList();
6
7        private ICachingPolicy shouldCachePolicy = new ShouldCachePolicy();
8        private ICachingPolicy shouldNotCachePolicy = new ShouldNotCachePolicy();
9
10       ...
11
12       public void SetCached<T>(bool shouldCache, string id) {
13          ICachingPolicy cachingPolicy = shouldNotCachePolicy;
14          if(shouldCache) {
15              cachingPolicy = shouldCachePolicy;
16          }
17          policies.Set<ICachingPolicy>(cachingPolicy, CreateKey<T>(id));
18      }
19      
20      ...
21  }

The important line here is the call to policies.Set on line 17. This sets the policy into the builder’s persistent policy list, which is automatically passed to the strategy chain on the call to BuildUp. We use the build key (which includes the type and id) to set the policy, so it can be looked up later.

Strategies, and the combination thereof

So now the builder can tell us if an object should be cached or not. Next, we need to add the strategies that actually implement the cache. The construction logic goes something like this:

  • If object should be cached:
    • If object is present in the locator, return it
    • Else:
      • Create the object
      • Store it in the locator
    • Return created object

We already implemented the “Create the object” step in Chapter 1, and I’d like to reuse that work. Let’s look at what’s required to do the lookup and storage steps. We’ll implement these two steps as separate strategies. This makes sense, as they need to happen at different times in the pipeline. Looking the cached object up happens first, so let’s start there.

Our CacheRetrievalStrategy looks like this:

1   class CacheRetrievalStrategy : BuilderStrategy {
2       public override object BuildUp(
3       	IBuilderContext context, 
4       	object buildKey, 
5       	object existing)
6       {
7           ICachingPolicy cachePolicy = 
8               context.Policies.Get<ICachingPolicy>(buildKey);
9               
10          if(cachePolicy != null ) {
11              if(cachePolicy.ShouldCache) {
12                  object cached = context.Locator.Get(buildKey);
13                  if(cached != null) {
14                     return cached;
15                  }
16              }
17          }
18          return base.BuildUp(context, buildKey, existing);
19      }
20  }

Let’s walk through the implementation.

Lines 7-8 retrieve the cache policy for the currently requested build key. Not having a caching policy (if context.Policies.Get returns null) is the same as saying “don’t cache”. If we do have a caching policy, and the policy says to cache (lines 10-11) we need to look up the object in the locator.

Line 12 uses the build key to look up the object in the current locator (as provided in the build context). By the way, this is why build keys should have value semantics: they're used as lookup keys both for policies and in the locator. If they compare by reference, later lookups will probably fail, as individual build key objects get recreated regularly.

If we find an object in the locator, we return it immediately (line 14). This short-circuits the rest of the strategy chain, which makes sense as the object is already created.

If the object is not found in the locator, then it needs to be created. Rather than do the work here, this strategy simply lets the strategy chain continue via a call to base.BuildUp (line 18).

Now that we can look stuff up in the locator, let’s look at the flip side, which is storing the created object in the locator. The implementation is equally straightforward:

1   class CacheStorageStrategy : BuilderStrategy {
2       public override object BuildUp(
3           IBuilderContext context,
4           object buildKey, 
5           object existing)
6       {
7           ICachingPolicy cachePolicy = 
8               context.Policies.Get<ICachingPolicy>(buildKey);
9           if(cachePolicy != null) {
10              if(cachePolicy.ShouldCache) {
11                  context.Locator.Add(buildKey, existing);
12              }
13          }
14          return base.BuildUp(context, buildKey, existing);
15      }
16  }

The overall skeleton of the code is identical to the CacheRetrievalStrategy – the caching policy is retrieved in the exact same way (lines 7-8). The big difference is on line 11. Here, instead of getting a value from the locator, we’re adding it. The object that’s being constructed (and therefore needs to be cached) is being passed in via the “existing” parameter. So we go ahead and put it in the locator if current policy settings say we should.

Finally, we call base.BuildUp again, so that if there are any strategies after this one they get a fair shot at the object.

We now have our strategies, so we need to add them to the builder. We can take advantage of the builder stages to make sure that the strategies are in the correct order. We’ll put the cache retrieval in the pre-creation stage, the creation strategy in the creation stage as before, and we’ll put the cache storage in the post-initialization stage. Our builder’s constructor looks like this:

    public CachingFactory()
    {
        strategies.AddNew<CacheRetrievalStrategy>(BuilderStage.PreCreation);
        strategies.AddNew<BasicCreationStrategy>(BuilderStage.Creation);
        strategies.AddNew<CacheStorageStrategy>(BuilderStage.PostInitialization);
        cache = new Locator();
    }

And with this, finally, that original test passes.

Where are we?

We’ve seen how to combine multiple strategies to implement more complex creation logic. We’ve also seen several of the options that ObjectBuilder provides for communication across strategies. These include:

  • Persistent policy objects so that the builder can configure how the strategies work.
  • A locator object to store objects across calls to BuildUp. Objects in the locator are typically indexed via build key (but can use any arbitrary object as long as it has compare-by-value semantics).
  • Passing the constructed object down the chain via the “existing” parameter so that later stages can work with or on the constructed object. We also made our first use of the build key to identify the objects we were creating and look them up later.
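To make the cached-instance behavior concrete, here's a hypothetical usage sketch. The Get method and the MyService class are illustrative names I'm assuming here, not part of the original post (the factory's public surface was defined in an earlier installment):

```csharp
// Hypothetical usage sketch - "Get" and "MyService" are assumed names.
CachingFactory factory = new CachingFactory();

// Assuming the policy for MyService has ShouldCache turned on...
MyService first = factory.Get<MyService>();
MyService second = factory.Get<MyService>();

// ...the storage strategy put the first instance in the locator,
// and the retrieval strategy handed it back on the second call.
Debug.Assert(ReferenceEquals(first, second));
```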

What version of ObjectBuilder are we talking about here?

January 31, 2008 17:11 by Chris

I got a comment on the first installment of Deconstructing ObjectBuilder that asked an excellent question: what version of OB is it talking about?

The current production version (which I'll call OB1) is used in CAB, Entlib 2 and 3, and WCSF. The document is written against ObjectBuilder 2. This version was written by Scott Densmore and Brad Wilson, and uploaded to the ObjectBuilder CodePlex project as a sample. We (p&p) have taken this to be the underlying engine for the Unity container, primarily because it fixes some significant underlying issues with OB1.

So, if you're wondering why my code doesn't compile, that's why. You'll need to grab OB2 off of CodePlex. It's checked into their source tree. We will be releasing a new (slightly tweaked) version of this codebase with the Unity container, which hopefully should have a CTP soon.



We need you! Upcoming Unity workshop

January 29, 2008 09:30 by Chris

We are putting together a workshop on the Unity container and extensibility. We'll be talking primarily about container extensions: what kind of extensions people will build, the API, how the API could be improved, and similar topics. If you're interested in influencing the direction of Entlib 4, please take a look at Grigori's post for details and try to come!

Lamenting the passage of a misspent youth

January 28, 2008 16:56 by Chris

I officially became old today. 

It's not that Phil Haack just had a birthday, and it turns out he's five years younger than I am. It's not that, after a long history of being the youngest guy on the project, I'm now the senior "guru". It's not even the aches and pains that are getting more and more common.

I finally got a chance to start playing Half-Life 2, after waiting for it to hit the bargain bin for several years. I was playing today, and after less than thirty minutes I got so motion sick I actually had to go lie down for an hour waiting for my stomach to stop churning.

When do I get to start telling kids to get off my lawn?

Seattle CodeCamp - wrapup and presentation

January 27, 2008 15:49 by Chris

We finished up the latest Seattle CodeCamp today. It was, as usual, a great deal of fun. I'm not much of a conference guy, but I always enjoy speaking at a low-pressure event and hooking up with friends that I only see at these kinds of things.

I did two presentations. The first was a co-presentation with Brad Wilson on Dependency Injection and ObjectBuilder 2. Brad (who was one of the original OB authors) gave a great discussion of what OB is and where it came from. I added a small demo of the Unity container we're building for Entlib 4 and talked about where we're headed. 

My second talk was on the extensibility points built into the ASP.NET MVC framework. I'd like to thank my audience: they were a great crowd, asked good questions, and didn't seem to mind when I finished extremely early. It was a great discussion, and I'm really happy to see the excitement in the dev community about this new framework.

As promised, I've uploaded my presentation and two samples here. Please enjoy.

Extensibility and the ASP.NET MVC Framework.pptx (451.57 kb) 

BlogSample.zip (534.06 kb)

WikiSample.zip (44.40 kb)

