Posts Tagged ‘Programming’

Ampersandjs and ASP.NET MVC

Thursday, November 13th, 2014

Ampersand is a newish client-side JavaScript framework that helps build native web applications. It is built and maintained by &yet. Ampersand is different from other frameworks because it is not a monolithic library that solves all your problems. It is a collection of small modules, each solving a single problem, that together make up Ampersand. This allows you to easily swap out modules that don’t work for you. To read more about Ampersand and its benefits check out the following links.

I recently wanted to know what it would take to run the Ampersand-cli demo app using ASP.NET MVC instead of node.js. So I did just that. The result can be found on my github. Head over there and check it out. What follows are some of the key points that differ from the node.js version.

Getting Started

Node.js is used when setting up my demo code. It is not strictly required, but things are easier if you use it. Node allows us to use npm and tools like gulp and browserify, which we will cover in a bit.

First we need to install gulp and browserify from npm. Navigate to the WebApp root folder from the command line and run:

npm install -g gulp browserify

Next, install all the dependencies found in the package.json by running:

npm install

Next run gulp.

gulp

Finally, build and run the Visual Studio project.

Browserify and gulp 

Ampersand strongly suggests using a CommonJS module loader like browserify, and in my example I followed this suggestion. Browserify is a CommonJS loader that allows you to require() modules in the browser, just like in node. Gulp is a streaming build system that allows you to run jobs such as pre-compiling CSS, copying files, and many other things.
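For example, any file in the app can pull in its dependencies node-style and browserify will resolve them at bundle time. A small illustrative file (not taken from the demo):

// With browserify, browser code can use node-style requires.
// ampersand-model is pulled from node_modules at bundle time.
var AmpersandModel = require('ampersand-model');

var Person = AmpersandModel.extend({
    props: {
        name: 'string'
    }
});

module.exports = Person;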

When browserify is pointed at our main app.js file (Scripts\App\app.js) and run, it will bundle all of the app’s JavaScript files into one file (Scripts\app.bundle.js).

Gulp and the corresponding gulpfile.js is used to watch the JavaScript files and automatically run browserify when things change. This means that as you edit files, gulp will rebuild the app.bundle.js file. All you have to do is reload the browser to get the final results.
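For a rough idea of what that looks like, here is a minimal gulpfile sketch for this kind of setup. The task layout and the vinyl-source-stream helper are my assumptions; see the repo’s gulpfile.js for the real thing.

var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

// Bundle Scripts/App/app.js and everything it requires
// into Scripts/app.bundle.js.
gulp.task('bundle', function () {
    return browserify('./Scripts/App/app.js')
        .bundle()
        .pipe(source('app.bundle.js'))
        .pipe(gulp.dest('./Scripts'));
});

// Re-run the bundle task whenever any app script changes.
gulp.task('watch', ['bundle'], function () {
    gulp.watch('./Scripts/App/**/*.js', ['bundle']);
});

gulp.task('default', ['watch']);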

Routes

With any native web application framework, the odds are good you are doing routing on the client side. This means the server must serve the same html, css, and js for any page that is requested. To do this, a catch-all route is created.

routes.MapRoute(
    name: "Default",
    url: "{*.}",
    defaults: new { controller = "Home", action = "Index"}
);

Base Layout and Action

Notice the _Layout.cshtml file is basically empty. This is because everything is loaded via Ampersand, including anything in <head>. This also means that our default controller and action serve almost nothing. The only things our default action needs to serve are the CSS and JavaScript.
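Boiled down, the layout can be as small as something like this (a sketch; the exact markup and file paths in the demo may differ):

<!DOCTYPE html>
<html>
<head>
    <!-- Ampersand sets the title and everything else in <head> at runtime -->
    <link href="~/Content/site.css" rel="stylesheet" />
</head>
<body>
    @RenderBody()
    <!-- the browserify bundle boots the entire client-side app -->
    <script src="~/Scripts/app.bundle.js"></script>
</body>
</html>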

Templates

Ampersand’s demo site uses Jade templates and compiles them at runtime down to a single template.js file, meaning that all HTML is served to the user the first time the application loads. In my example, I sort of replicated this by creating an Action that builds a single JS file with all the HTML templates. This could be better and more automated, but for the purposes of this demo I stopped here to show a direction that could be taken.

See the Template Controller and the views in the Template view folder.
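For illustration, here is a condensed sketch of what such an action can look like. The template names and the render-to-string helper are illustrative, not copied from the repo; see the actual Template Controller for the real code.

using System.IO;
using System.Text;
using System.Web;
using System.Web.Mvc;

public class TemplateController : Controller
{
    // Renders each template view to HTML and emits one JS file
    // that exposes all the markup to the client-side app.
    public ActionResult Index()
    {
        var sb = new StringBuilder("window.templates = {};");

        foreach (var name in new[] { "person", "personList" }) // illustrative names
        {
            sb.AppendFormat("window.templates['{0}'] = {1};",
                name,
                HttpUtility.JavaScriptStringEncode(RenderViewToString(name), true));
        }

        return JavaScript(sb.ToString());
    }

    // Standard MVC render-a-view-to-a-string helper.
    private string RenderViewToString(string viewName)
    {
        using (var sw = new StringWriter())
        {
            var viewResult = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
            var viewContext = new ViewContext(ControllerContext, viewResult.View, ViewData, TempData, sw);
            viewResult.View.Render(viewContext, sw);
            viewResult.ViewEngine.ReleaseView(ControllerContext, viewResult.View);
            return sw.ToString();
        }
    }
}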

Templates do not need to be served this way. You could make HTTP GET requests for them from Ampersand instead. I did it this way to keep in the spirit of the original demo.

Sample API

The simple API was created using Web API. Nothing really different going on here, except that because of C# our models are strongly typed.

CSS

The original Ampersand demo used stylizer and a CSS preprocessor. I took the output from that and put it into site.css. You could do whatever suits your needs here.

Final things to Note

In this code, the app.bundle.js and template JavaScript are not far-future cached. They could be; that is something you will need to figure out. There are many different ways to do this in MVC, so I did not want to suggest one. The same goes for the CSS.

Much like Nuget’s packages folder, the node_modules folder is not checked into source control. Running the npm install command will repopulate this folder, much like Nuget’s auto restore.

Other than what is noted above, the rest of the application is vanilla Ampersand; no other changes were made.

Source Code: https://github.com/Oobert/Ampersand-and-ASP.NET-MVC/tree/master

C# HttpClient integrated authentication

Monday, July 14th, 2014

HttpClient has become the standard way to make HTTP requests in C#. This is mainly used for API calls but has other uses as well. Recently, I have had to make HTTP requests to servers that require authentication, and the documentation on how to do this is scattered. The funny part is that it is really easy to do.

var uri = new Uri("&lt;url&gt;");

var credentialCache = new CredentialCache();
credentialCache.Add(
    new Uri(uri.GetLeftPart(UriPartial.Authority)),
            "&lt;auth method&gt;",
            new NetworkCredential("&lt;user name&gt;", "&lt;password&gt;", "&lt;domain&gt;")
            );
            

HttpClientHandler handler = new HttpClientHandler();
handler .Credentials = credentialCache;
var httpClient = new HttpClient(handler );

var response = httpClient.GetAsync(uri).Result;

<auth method> can be basic, digest, ntlm, or negotiate. Then just update the NetworkCredential to that of the user you want to make the call as, and you are good to go.

It appears that kerberos on its own does not work. This may be because of my server configuration. However, if you use negotiate, HttpClient will use kerberos if the server is configured for it, otherwise it will fall back to NTLM.

So I wrote a state machine. Why? Because it sounded fun!

Friday, February 7th, 2014

I had a problem. I was tasked with making a wizard-type interface for a few workflows in a web app. The workflows had 3+ steps, with the current max number of steps at 10.

Options

Option 1: I could control state on the server. After every step in the workflow, the data of that step would be posted to the server, where the server would keep track of it in session and take the user to the next step. This is very doable, and is pretty much how web apps functioned before ajax. The downside is that controlling partial state on the server is hard because session management is hard. You have to account for weird scenarios: what happens if the user starts the same workflow in a different browser window? You now have to somehow identify which window goes with which session. Or how do you know when to clear a workflow in progress because the user navigated to a different page in the web app? What happens if they come back? All of these questions can be solved in some form but normally involve a lot of if statements.

Option 2: Wouldn’t it be easier to keep all workflow data on the client until the workflow is completed? Yes, yes it is. However, this means that the client can no longer do full page reloads between steps. No problem, there are frameworks that cover this, such as Angular.js. In my solution, I am loading and unloading html templates into the DOM manually and using Knockout.js for data binding. Why did I roll my own this way? Because IE8, but that is a different blog post. By keeping all the workflow state in the browser, we have fewer issues to deal with, but a few new ones come up. For example, do you care that the user has to start over if they hit refresh? Does the browser’s back button need to work? These were easy for my use cases; they didn’t matter at the moment because of how this will be used in production. I started down this road and things were going well. But then I noticed that my JavaScript was getting kind of cluttered with if statements such as…

if (here) do this
else if (over there) do that
else if (holy crap) I have no idea
else if (another one?) and I am lost

Option 2b: State machines! About 2 steps into the first workflow, I noticed a pattern. Every step in a workflow loaded something, waited for the user to do work, then moved to the next step. The lightbulb went off and I started looking at state machines in JavaScript. I found many, like machina.js, and npm had plenty as well. machina.js being the first in my search results, I went with it. It looks good and probably would have solved my problem, but it has (had?) a dependency on underscore.js. Due to the nature of this project, introducing one external library is time consuming; introducing two is a huge pain. But, you guessed it, that is another post someday. In the end, I decided to build my own. Why? Because it sounded fun, and also I didn’t need a full-featured library, yet.

Code!

So I wrote a state machine. It had a few requirements that were identified upfront:

  • Know what was the current state
  • Be able to change to a new state
  • Call an unload method on the old state
  • Call a load method on the new state
  • Pass data to the new state
  • Be able to generically call methods on the current state

Over time, I am sure the requirements will grow, and we will make the choice of growing this code base or moving to a more feature-complete option. And here it is.

    var fsm = function (states) {
        this.current = null;
        this.states = states;
    };

    fsm.prototype.changeStateTo = function (newState, obj) {
        // Give the current state a chance to clean up before switching.
        if (this.current &&
            this.current.unload) {
            this.current.unload();
        }

        if (this.states[newState]) {
            this.current = this.states[newState];

            // Pass any data along to the new state's load method.
            if (this.current.load) {
                this.current.load(obj);
            }
        }
    };

    fsm.prototype.callAction = function (action, obj) {
        // Generically invoke a named action on the current state, if it exists.
        if (this.current[action]) {
            this.current[action](obj);
        }
    };

As you can see, the state machine takes in an object that defines the different states it can be in. A usage example is below.

The changeStateTo function will call unload on the current state, and then call load on the new state. It has some light error checking to make sure states and methods exist before continuing.

The callAction method is a generic way to call a specific action (function) on the current state. For example, if there is a button that appears on every screen, you could use this method to call the current state’s action when it is pressed.

And a small example of usage.


    var myFsm = new fsm({
        state1:{
            StateRelatedObject: { 
              text: "hello"  
            },
            load: function ()
            {
                //do work like load template or show/hide page elements
            },
            StateRelatedFunction: function()
            {
                //do specific work related to this state.
                //can access objects or methods on current state like...
                this.StateRelatedObject.text = "hello world";
            },
            unload: function()
            {
                //clean up after yourself here.
            }
        },
        state2:{
            load: function () { },
            StateRelatedFunctionOrObjects: function() { },
            unload: function(){ }
        }
    })
    
    myFsm.changeStateTo("state1");
    
    myFsm.callAction("StateRelatedFunction", { /* data in here */ });


The object that is passed into the state machine can get rather large. This is ok because it is segmented into its different states and is well organized.

Testing is pretty easy too!


    //setup test params here.

    myFsm.states.state1.StateRelatedFunction();

    //do asserts on data here.
    //example: myFsm.states.state1.StateRelatedObject.text === "hello world";

Enjoy!

Edit 03/06/2014: I fixed a misspelling in code. I also posted a complete code example to github.
https://github.com/Oobert/LittleStateMachine

One man’s solution for full page web apps

Thursday, January 30th, 2014

Recently, I had a need to create a web application that functioned kind of like a native app. The design requirements were simple. There would be a header and menu stuck to the top of the page and a footer and navigation buttons stuck to the bottom of the page. Content would fill the middle.

There are many ways to do this. For example, bootstrap has an example of how to do this type of layout. This is probably the most common example of how to keep elements stuck to the bottom of the page. It has one problem: the height of the stuck element must be known. Sadly, the height for me is not always the same, which makes this example not ideal.

Flexbox is another option. It would work well except for its browser support. For this project I had the fun requirement of needing to support IE8. Don’t ask, long story, some day I will tell it.

So how did I solve my problem? CSS Table Display! CSS tables allow you to write semantically correct layout with divs (or newer tags) but have those divs act as if they were table elements. One can read all about what they are and how to use them all over the Googles. The remainder of this post is what I did with them.

Below is some example html and css I used. Header, menu, content, footer, and bottom stuff are all normal display: block div tags. They work like you would expect. It even works well to drop in Bootstrap’s grid system to place things in different places horizontally.

The key to this is the div with class .trow-fill. It will push all elements below it to the bottom of the page because its display is set to table-row and its height to 100%. Make sure to notice that the body tag is set to display as a table. What is cool about this is the content or the navigation area can change heights and everything will move correctly. The stuff that should be at the bottom of the page will stay there.

Screenshot: layout

Example HTML and CSS

<html>
<head>

<style>

	/* html needs a height too, so body's 100% height resolves */
	html{
		height: 100%;
	}

	body{
		height: 100%;
		width: 100%;
		margin: 0;
		display: table;
	}
	
	.trow-fill{
		display: table-row;
		height: 100%;
		background-color: yellow;
	}
	
	
	/*fill colors for effect*/
	.header{
		background-color: green;
	}

	.menu{
		background-color: orange;
	}
	
	.content{
		background-color: red;
		height: 150px;
	}
	
	
	.bottom-stuff{
		background-color: lightblue;
	}
	
	.footer{
		background-color: grey;
	}
</style>

</head>
<body>

	<div class="header">Header</div>
	<div class="menu">Menu</div>

	<div class="content">content</div>
	
	<div class="trow-fill" >Normally this would be empty. This will push anything elements below it to the bottom of the page.</div>
	<div class="bottom-stuff">Navigation and other stuff. This can be dynamic height!!</div>
	<div class="footer">footer</div>

</body>
</html>

Now a few things may be running through your head, mainly something about tables and it feeling dirty. Well, I was with you up until I noticed something about Bootstrap. In v2.x of Bootstrap, CSS tables are exactly how the fluid grid system is achieved. If it is good enough for them, it is good enough for me.

Software Craftsmanship

Friday, January 3rd, 2014

On my way home from work the other day, I was listening to .NET Rocks Episode 934 with Uncle Bob on building software. Carl, Richard, and Uncle Bob had a discussion on HealthCare.gov and the issues with the site from a development standpoint. At 28 minutes 45 seconds in, Richard states “I am responsible to my industry first and my customers second”, and this struck me as very profound.

I had never considered the idea before that my actions, code, and software are a representation of my industry; that my actions as a developer could cause a person to view all programmers in a different way.

If we look at lawyers for example, we can see the stigma of the job. Society seems to have a negative view of lawyers, that somehow you are a terrible person if you go into law. Why is this? There are terrific people who work in law. I have met them and worked with them. The negative view was probably built by the few who have acted unprofessionally: the ambulance chasers and those who file frivolous lawsuits just to make a buck. My point is that it won’t take much for our profession, software development, to get a similar stigma if projects keep failing.

I fear that the stigma of software developers not being professional, not caring, is already taking hold. The software the world uses every day is pretty terrible. Why is my password limited to 12 characters for my online credit card account? Why does the family doctor tell me all the issues he has with the EMR instead of telling me how awesome it is? Why does my cable box freeze so much? Why does Google Chrome’s spell check not fix the fucking word when I click on it? People should be excited about how software is making their lives easier, not about how much it sucks. Our job as developers is to provide software that helps people, not infuriates them.

Uncle Bob and many others created the Software Craftsmanship manifesto in 2009. The goal of this movement is to push craftsmanship and professionalism in the industry. The general idea is to promote doing what is right over getting it done. Good enough is no longer acceptable. 

Not only working software, but also well-crafted software
Not only responding to change, but also steadily adding value
Not only individuals and interactions, but also a community of professionals
Not only customer collaboration, but also productive partnerships

I have signed the manifesto as a reminder to push my industry forward. To not sit idly by. To make awesome!

Node.js And Azure: Is it a Port or a Pipe?

Monday, December 9th, 2013

When deploying a node.js application to an azure website, the node.js environment variable process.env.port is set. One would think that port is the listening port and write something like this:

var port = process.env.port || 8080;

The problem? In an azure website, port is not an int, it is a string. More problematic, it is not a port at all. It is a pipe. It looks something like this:

\\.\pipe\2f95e604-fc02-4365-acfc-010a26242d02

The good news: node.js can handle this. Be careful though, many modules for node.js are not expecting this to be the case and may have to be set up and initialized differently than the standard IP/port examples. Hapi and Express, for example, can and have run on azure.
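For example, a minimal express app that works both locally and on an azure website only needs to pass the value straight through to listen. A sketch; express hands the value to node’s http server, which accepts either a port number or a pipe name:

var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('hello from azure');
});

// Locally this is a number (or undefined, falling back to 8080);
// on an azure website it is the named pipe string shown above.
// listen() accepts either, so no parsing is needed.
var port = process.env.port || 8080;
app.listen(port, function () {
    console.log('listening on ' + port);
});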

Geek Motivation

Wednesday, October 16th, 2013

Psychology is something I have always been interested in. If I couldn’t work in software development, I would probably be in psychology. To be more specific, I am interested in geek psychology. What makes us act the way we act? What motivates us to be engaged? How are we the most productive?

Recently there seems to be a shift in how people are being managed at work, more specifically how people in creative positions are being managed. Traditional styles of management seem to be less productive than newer styles. The reason seems to be that the newer styles of management make a geek’s work life better by giving them the freedom to complete their tasks in a way that works for them.

Michael Lopp, who blogs under the name Rands, talks a lot about soft skills. He has posts entitled “The Nerd Handbook” and “Managing Nerds”. These posts outline many of the characteristics that generally define geeks. One of the main themes throughout each post is something dubbed the high. The high is the euphoria that is felt when one understands or completes a task. Much like drugs, this euphoria is what geeks are chasing. Today’s new management trends are trying to create environments where geeks can reach this high quicker, because this is when geeks create awesome.

This high is important. Without the high, geeks get frustrated, bored, and quit. Ever wonder why some geeks seem to switch jobs every few years? It is because they have an understanding of all the interesting problems and have dominated those problems. There is nothing else for them to do to reach the next high. So they move on.

Having solved all the interesting problems is not the only reason geeks quit. Sometimes it is because of the environment they work in. Did you know a business can have a mindset? Humans, groups, and teams all have a distinct mindset that drives the actions and culture of that collective. There are two main types of mindsets: fixed and agile. Geeks do not like being in a group that has a fixed mindset.

A person with an agile mindset craves knowledge and is ok with failure as long as they are learning. These are the people who try 10 different algorithms to sort a list to find out which one is best. These are the people who suggest cutting-edge technologies because they want to learn them. They know it will be painful to implement but they don’t care. These are the people who want to be the least skilled person in a room, because they know the other people in the room have knowledge they can learn.

A fixed mindset person is one who believes they are naturally smart. People with this mindset have typically been told they are really smart. They tend to rely on their natural ability rather than trying to get better and learn. They are easily frustrated by failure. If possible, they would prefer to be the smartest person in the room, as it somehow validates what they believe is true.

Linda Rising gave a talk on this subject a few years back, exploring the topic in more detail. During her talk, she suggests that a business can also have a mindset. I believe this to be true. Businesses show the same characteristics as people do, but with the side effect that this affects their employees. A fixed mindset business will tend to not tolerate failure; they will assume the talent of all the employees will carry the business forward. An agile mindset business will allow employees to fail as long as they move forward.

Geeks prefer to not work for fixed mindset businesses. Geeks love to try new things and push themselves to be better and learn. In a fixed mindset business where failure is not an option, trying new things is also not an option, which leads to geeks getting frustrated and leaving.

On the flipside, geeks prefer working for a business with an agile mindset. Companies like Github and Netflix are embracing this mindset and attracting highly skilled geeks. Github believes in giving their geeks almost unlimited freedom for when and how they work. For them this has worked very well. Their employees are highly motivated and engaged.

If you are a geek and want to make awesome stuff and to have an impact on the world in some small way, I would highly suggest seeking out a company that understands how geeks work and function the best. Your utopia exists but it is up to you to find it.

If you are a business that is looking for motivated geeks, I suggest that you make sure to take care of your geeks. Give them the space and opportunity to fail and learn. Given the right environment, your geeks will create awesome.

Node.js running on Azure.

Tuesday, September 10th, 2013

Recently I created a little node.js project for fun. Its sole reason for existing is to display Bastard Operator From Hell quotes. I ended up deploying this little project to a free azure website because it uses edge.js and requires C#. Azure websites run on windows and have .NET installed. Yes, even node.js websites run on windows.

Time to jump right into it. I spent entirely too much time getting my little project deployed. Why? Well, the documentation for azure is incomplete. Little did I know, I was missing a key piece of info: to run node.js in azure, one needs a web.config file. If you use one of the templates, it will be created for you. If you don’t use a template, you have to create it yourself. This is/was not documented.

Why does node.js require a web.config to run in azure? Well, node.js in azure runs on IIS. Yeah, that IIS. If I am not mistaken, it is using iisnode. Honestly, this is ok given that azure runs windows and IIS is pretty decent at managing resources. But for the love of sanity, could this be written somewhere please?

How did I figure this out? By luck. I created another website using the template, FTPed into the website, and noticed a little file with the name web.config. Upon looking at it, it is clear why it is required.

Here is an example of a web.config. The important piece is the iisnode handler entry; its path should be the name of the app’s startup js file.

<!--
     This configuration file is required if iisnode is used to run node processes behind
     IIS or IIS Express.  For more information, visit:

     https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config
-->

<configuration>
    <system.webServer>

        <handlers>
            <!-- indicates that the app.js file is a node.js application to be handled by the iisnode module -->
            <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
        </handlers>

        <rewrite>
            <rules>
                <!-- Don't interfere with requests for logs -->
                <rule name="LogFile" patternSyntax="ECMAScript" stopProcessing="true">
                    <match url="^[a-zA-Z0-9_\-]+\.js\.logs\/\d+\.txt$" />
                </rule>

                <!-- Don't interfere with requests for node-inspector debugging -->
                <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
                    <match url="^server.js\/debug[\/]?" />
                </rule>

                <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
                <rule name="StaticContent">
                    <action type="Rewrite" url="public{REQUEST_URI}" />
                </rule>

                <!-- All other URLs are mapped to the Node.js application entry point -->
                <rule name="DynamicContent">
                    <conditions>
                        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True" />
                    </conditions>
                    <action type="Rewrite" url="server.js" />
                </rule>
            </rules>
        </rewrite>

        <!-- You can control how Node is hosted within IIS using the following options -->
        <!--<iisnode
          node_env="%node_env%"
          nodeProcessCommandLine="&quot;%programfiles%\nodejs\node.exe&quot;"
          nodeProcessCountPerApplication="1"
          maxConcurrentRequestsPerProcess="1024"
          maxNamedPipeConnectionRetry="3"
          namedPipeConnectionRetryDelay="2000"
          maxNamedPipeConnectionPoolSize="512"
          maxNamedPipePooledConnectionAge="30000"
          asyncCompletionThreadCount="0"
          initialRequestBufferSize="4096"
          maxRequestBufferSize="65536"
          watchedFiles="*.js"
          uncFileChangesPollingInterval="5000"
          gracefulShutdownTimeout="60000"
          loggingEnabled="true"
          logDirectoryNameSuffix="logs"
          debuggingEnabled="true"
          debuggerPortRange="5058-6058"
          debuggerPathSegment="debug"
          maxLogFileSizeInKB="128"
          appendToExistingLog="false"
          logFileFlushInterval="5000"
          devErrorsEnabled="true"
          flushResponse="false"
          enableXFF="false"
          promoteServerVars=""
         />-->

    </system.webServer>
</configuration>

My little project: https://github.com/Oobert/BOFHAA
Here it is running in azure: http://bofhaa.azurewebsites.net/#index=359

Update 9/11/2013: I need to clear some things up. As pointed out in the comments, there are many ways for azure to auto-create a web.config or iisnode.yml file. In my case, I did not deploy from the command line and ran into a bug with github deployments that would not allow me to publish to azure from github. This bug may be fixed now. In the end, I uploaded my files to azure via FTP, which did not create the file for me. My fun ensued at this point when I couldn’t figure out why nothing was happening.

This was not meant as a bash against Azure. I was just putting this out there in case someone else runs into the same issue. Happy Coding!

 

ASP.NET Dropin DLL Plugin – Part Two

Tuesday, July 9th, 2013

ASP.NET Dropin DLL Plugin – Part One

The first post was a quick intro to the project. In this post, we will cover how this works in more detail. So let’s just jump right in.

Typically in ASP.NET MVC, the framework knows where to look for files based on the conventions of the framework; views are in the Views folder, for example. With a plugin, the framework needs to be told when a file exists in a plugin DLL and be given a stream to the file. This is where System.Web.Hosting.VirtualPathProvider and System.Web.Hosting.VirtualFile come into play.

In the example there are two classes that inherit from the above two classes: AssembleVirtualPathProvider and AssembleVirtualFile. These are terrible implementations because they only look for the one plugin dll. This is a major area for improvement, as these classes should look within any DLL that is a plugin. There are many options for this, but that is another topic.

The AssembleVirtualPathProvider’s main job is to see if a view exists within a dll. To do this, a number of methods need to be overridden. By default, these methods look for views following MVC’s default naming and file location conventions. In order to check inside the assemblies, these methods need to be overridden and code added to look for the views within the DLLs. Make sure to call the base methods! If you don’t, MVC will not be able to find the files local to the main MVC project.

The AssembleVirtualFile is a representation of a file being loaded from a plugin. It has one job: to open a stream to the file that is in a dll and return it. That is it. This class is used by the AssembleVirtualPathProvider to return a file when the GetFile method is called.
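As a condensed sketch of the pair (the plugin assembly name and the embedded-resource naming are assumptions of mine; the real classes in the repo differ and do more):

using System;
using System.IO;
using System.Reflection;
using System.Web.Caching;
using System.Web.Hosting;

public class AssembleVirtualPathProvider : VirtualPathProvider
{
    // Hypothetical plugin assembly name.
    private static Assembly Plugin
    {
        get { return Assembly.Load("MyPlugin"); }
    }

    // Translate "~/Views/Foo/Index.cshtml" into an embedded resource name.
    private static string ToResourceName(string virtualPath)
    {
        return "MyPlugin" + virtualPath.TrimStart('~').Replace('/', '.');
    }

    public override bool FileExists(string virtualPath)
    {
        // Always ask the previous provider first so local files keep working.
        return base.FileExists(virtualPath)
            || Plugin.GetManifestResourceInfo(ToResourceName(virtualPath)) != null;
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        if (base.FileExists(virtualPath))
        {
            return base.GetFile(virtualPath);
        }
        return new AssembleVirtualFile(virtualPath, Plugin, ToResourceName(virtualPath));
    }

    public override CacheDependency GetCacheDependency(
        string virtualPath, System.Collections.IEnumerable virtualPathDependencies, DateTime utcStart)
    {
        // Embedded files cannot change at runtime, so no dependency is needed.
        if (!base.FileExists(virtualPath))
        {
            return null;
        }
        return base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
    }
}

public class AssembleVirtualFile : VirtualFile
{
    private readonly Assembly assembly;
    private readonly string resourceName;

    public AssembleVirtualFile(string virtualPath, Assembly assembly, string resourceName)
        : base(virtualPath)
    {
        this.assembly = assembly;
        this.resourceName = resourceName;
    }

    // The one job: stream the embedded file out of the plugin dll.
    public override Stream Open()
    {
        return assembly.GetManifestResourceStream(resourceName);
    }
}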

Once these are created, the main MVC project must be told to use these providers. In the main MVC project’s global.asax, the new path provider needs to be registered.

System.Web.Hosting.HostingEnvironment.RegisterVirtualPathProvider(new Lib.AssembleVirtualPathProvider());

Now there is a catch to this: plugins must be in the main MVC project’s bin directory. I have tried multiple ways to get around this but have not had any luck. The reason is that when DLLs are in the bin directory, the website’s process does a deep inspection of all the files in the bin directory. It keeps tabs on what exists, especially models and controllers. If the DLLs are not in the bin directory, MVC will not be able to resolve the model and controller classes.

Moving on: serving static files from the DLLs. One of the goals was that the code in the plugins be written as close as possible to a normal MVC application. I did not want to have some special syntax for plugins vs non-plugins. In order for this to work, images and files need to be handled by an HTTP handler, in this case a static handler. Bring in System.Web.StaticFileHandler. It will serve files from DLLs or the file system. It is pretty handy. In the web.config of the main MVC project, an entry needs to be added for each static file type you would like to serve from the plugins.

<add name="AspNetStaticFileHandler-GIF" path="*.gif" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
<add name="AspNetStaticFileHandler-JPG" path="*.jpg" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
<add name="AspNetStaticFileHandler-PNG" path="*.png" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
<add name="AspNetStaticFileHandler-JS" path="*.js" verb="GET,HEAD" type="System.Web.StaticFileHandler" />

On top of this, routes need to be ignored for static file extensions that are going to be handled by the StaticFileHandler.

routes.IgnoreRoute("{*staticfile}", new { staticfile = @".*\.(css|js|gif|jpg)(/.*)?" });

But wait, another catch! System.Web.StaticFileHandler does not set the HTTP response headers correctly for caching when serving files from plugins. It works perfectly when serving files from the file system. In order to fix this, an http module needs to be created that looks to see if the file was served by the StaticFileHandler and sets the cache headers, or use a different StaticFileHandler. The super secret 3rd option (which is sort of not good) is to serve all static files from the main MVC project.
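A rough sketch of what such a module could look like (the one-day cache lifetime is an arbitrary choice of mine):

using System;
using System.Web;

public class StaticFileCacheModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.PostRequestHandlerExecute += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;

            // Only touch responses produced by the static file handler.
            // (Compared by name to avoid referencing the type directly.)
            if (context.Handler != null &&
                context.Handler.GetType().FullName == "System.Web.StaticFileHandler")
            {
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(1));
            }
        };
    }

    public void Dispose() { }
}

The module would still need to be registered in the main project’s web.config, alongside the handler entries above.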

Generally speaking, that is it. No hidden projects, mirrors, or DLL references. A bonus is that the plugins will run independently from the main MVC project during development if needed.

Some areas that can be improved:
-Better assembly handling in the file and path providers.
-Loading routes, filters, etc. from plugins using MEF (or similar).
-Using/writing a better static file handler.

Github repo with example: https://github.com/Oobert/ASP.NET-MVC-Plugins

ASP.NET Dropin DLL Plugin – Part One

Wednesday, June 12th, 2013

My intro to web programming was with PHP and scripts such as phpBB and other internet forum software. I always liked that installing a plugin normally just worked. You would drop a folder into some directory, the script would see it, and you could install it. I have yet to really see this recreated in ASP.NET MVC.

For the past year or so I have been trying to find a way to recreate this. There are many blog posts and half-tutorials on the subject. There are even a few libraries out there that do this.

An old blog post from 2008 was my starting-off point. It talked about virtual paths. It worked, but the syntax was terrible.

I knew if I was going to do this, I wanted everything to just work as easily as possible. I also wanted the plugins to be runnable on their own if the developer wanted (for dev and debug reasons).

Griffin.MvcContrib has a way to do this. It seems overly complicated to me. So I kept at it.

After some more searching I found an example of some code that implemented a virtual path provider. I don’t know from where or who but I feel bad because I would like to give them credit. I finally had enough to start tinkering on my own.

I finally have something to show for all of this. I have posted on github a working example of an ASP.NET MVC plugin system. This system allows DLLs that contain all necessary files, compiled or embedded, to be dropped into the bin directory of a main site and then have those files served from the DLL.

Over the course of a few more posts, I will cover how the important bits work and what the gotchas are. If you need this now, check out the github project. Currently, it has one plugin with JavaScript and image examples.

Questions are always welcome.

ASP.NET Dropin DLL Plugin – Part Two