So I wrote a state machine. Why? Because it sounded fun!

February 7th, 2014

I had a problem. I was tasked with building a wizard-style interface for a few workflows in a web app. The workflows had at least 3 steps, with the current maximum at 10.

Options

Option 1: I could control state on the server. After every step in the workflow, the data for that step would be posted to the server, which would keep track of it in session and take the user to the next step. This is very doable, and it is pretty much how web apps functioned before ajax. The downside is that controlling partial state on the server is hard because session management is hard. You have to account for weird scenarios, like what happens if the user starts the same workflow in a different browser window; now you have to somehow identify which window goes with which session. Or how do you know when to clear a workflow in progress because the user navigated to a different page in the web app? What happens if they come back? All of these questions can be answered in some form, but the answers normally involve a lot of if statements.

Option 2: Wouldn’t it be easier to keep all workflow data on the client until the workflow is completed? Yes, yes it is. However, this means the client can no longer do full page reloads between steps. No problem, there are frameworks that cover this, such as Angular.js. In my solution, I am loading and unloading html templates into the DOM manually and using Knockout.js for data binding. Why did I roll my own this way? Because IE8, but that is a different blog post. By keeping all the workflow state in the browser, we have fewer issues to deal with, but a few new ones come up. For example, do you care that the user has to start over if they hit refresh? Do you need the browser’s back button to work? These were easy calls for my use cases; they didn’t matter because of how this will be used in production. I started down this road and things were going well, but then I noticed that my JavaScript was getting cluttered with if statements such as…

    if (here) do this
    else if (over there) do that
    else if (holy crap) I have no idea
    else if (another one?) and I am lost

Option 2b: State machines! About 2 steps into the first workflow, I noticed a pattern. Every step in a workflow loaded something, waited for the user to do work, then moved to the next step. The lightbulb went off and I started looking at state machines in JavaScript. I found many, like machina.js, and npm had plenty as well. machina.js being the first in my search results, I went with it. It looks good and probably would have solved my problem, but it has (had?) a dependency on underscore.js. Due to the nature of this project, introducing one external library is time consuming; introducing two is a huge pain. But, you guessed it, that is another post someday. In the end, I decided to build my own. Why? Because it sounded fun, and also I didn’t need a full featured library, yet.

Code!

So I wrote a state machine. It had a few requirements that were identified upfront.

  • Know what the current state is
  • Be able to change to a new state
  • Call an unload method on the old state
  • Call a load method on the new state
  • Pass data to the new state
  • Be able to generically call methods on the current state

Over time, I am sure the requirements will grow, and we will choose between growing this code base or moving to a more feature complete option. And here it is.

    var fsm = function (states) {
        this.current = null;   // the active state object
        this.states = states;  // map of state name -> state object
    };

    fsm.prototype.changeStateTo = function (newState, obj) {
        // let the old state clean up after itself
        if (this.current && this.current.unload) {
            this.current.unload();
        }

        if (this.states[newState]) {
            this.current = this.states[newState];

            // pass any data along to the new state
            if (this.current.load) {
                this.current.load(obj);
            }
        }
    };

    fsm.prototype.callAction = function (action, obj) {
        // only call the action if there is a current state that defines it
        if (this.current && this.current[action]) {
            this.current[action](obj);
        }
    };

As you can see, the state machine takes in an object describing the different states it can be in.

The changeStateTo function will call unload on the current state, and then call load on the new state. It has some light error checking to make sure states and methods exist before continuing.

The callAction method is a generic way to call a specific action (function) on the current state. For example, if there is a button that appears on every screen, you could use this method to invoke that action on whichever state is current when it is pressed; a sketch of this follows the usage example below.

And a small example of usage.


    var myFsm = new fsm({
        state1:{
            StateRelatedObject: { 
              text: "hello"  
            },
            load: function ()
            {
                //do work like load template or show/hide page elements
            },
            StateRelatedFunction: function()
            {
                //do specific work related to this state.
                //can access objects or methods on current state like...
                this.StateRelatedObject.text = "hello world";
            },
            unload: function()
            {
                //clean up after yourself here.
            }
        },
        state2:{
            load: function () { },
            StateRelatedFunctionOrObjects: function() { },
            unload: function(){ }
        }
    });
    
    myFsm.changeStateTo("state1");
    
    myFsm.callAction("StateRelatedFunction", { /* data in here */ });
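
Tying callAction back to the shared-button idea, the wiring only needs to happen once. Below is a sketch; the button id and the onNext action name are hypothetical and not part of the states above.

    //Hypothetical: a "Next" button that exists on every screen. Each state
    //opts in by defining an onNext action; states that don't define it are
    //simply skipped by the guard inside callAction.
    document.getElementById("nextButton").onclick = function () {
        myFsm.callAction("onNext", { when: new Date() });
    };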


The object that is passed into the state machine can get rather large. This is ok because it is segmented into its different states and stays well organized.

Testing is pretty easy too!


    //set up test params here.

    myFsm.states.state1.StateRelatedFunction();

    //do asserts on data here.
    //example: myFsm.states.state1.StateRelatedObject.text === "hello world";

Enjoy!

Edit 03/06/2014: I fixed a misspelling in code. I also posted a complete code example to github.
https://github.com/Oobert/LittleStateMachine

One man’s solution for full page web apps

January 30th, 2014

Recently, I had a need to create a web application that functioned kind of like a native app. The design requirements were simple. There would be a header and menu stuck to the top of the page and a footer and navigation buttons stuck to the bottom of the page. Content would fill the middle.

There are many ways to do this. For example, Bootstrap has a sticky footer example of this type of layout, and it is probably the most common way to keep elements stuck to the bottom of the page. It has one problem: the height of the stuck element must be known. Sadly, the height in my case is not always the same, which makes that approach not ideal.
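
For reference, the fixed-height approach boils down to something like this (a sketch of the technique, not Bootstrap's exact code):

	html, body {
		height: 100%;
	}

	.wrap {
		min-height: 100%;
		margin-bottom: -60px; /* must equal the footer height */
	}

	.push, .footer {
		height: 60px; /* the height has to be known up front */
	}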

Flexbox is another option. It would work well except for its browser support. For this project I had the fun requirement of needing to support IE8. Don’t ask, long story, some day I will tell it.

So how did I solve my problem? CSS table display! CSS tables let you write semantically correct layout with divs (or newer tags) while having those divs act as if they were table elements. One can read all about what they are and how to use them all over the Googles. The remainder of this post is what I did with them.

Below is some example html and css I used. Header, menu, content, footer, and bottom stuff are all normal display block div tags. They work like you would expect. It even works well to drop in Bootstrap’s grid system to place things horizontally.

The key to this is the div with class .trow-fill. It will push all elements below it to the bottom of the page because its display is set to table-row and its height to 100%. Make sure to notice that the body tag is set to display as a table. What is cool about this is that the content or the navigation area can change heights and everything will move correctly. The stuff that should be at the bottom of the page will stay there.

Screen shot: (image of the layout, each region color-coded per the css below)

Example HTML and CSS

<html>
<head>

<style>

	/* html needs a height for body's 100% height to resolve against */
	html{
		height: 100%;
	}

	body{
		height: 100%;
		width: 100%;
		margin: 0;
		display: table;
	}
	
	.trow-fill{
		display: table-row;
		height: 100%;
		background-color: yellow;
	}
	
	
	/*fill colors for effect*/
	.header{
		background-color: green;
	}

	.menu{
		background-color: orange;
	}
	
	.content{
		background-color: red;
		height: 150px;
	}
	
	
	.bottom-stuff{
		background-color: lightblue;
	}
	
	.footer{
		background-color: grey;
	}
</style>

</head>
<body>

	<div class="header">Header</div>
	<div class="menu">Menu</div>

	<div class="content">content</div>
	
	<div class="trow-fill" >Normally this would be empty. This will push anything elements below it to the bottom of the page.</div>
	<div class="bottom-stuff">Navigation and other stuff. This can be dynamic height!!</div>
	<div class="footer">footer</div>

</body>
</html>

Now a few things may be running through your head, mainly something about tables and it feeling dirty. Well, I was with you up until I noticed something about Bootstrap. In v2.X of Bootstrap, CSS tables are exactly how it achieves the fluid grid system. Just look right here. If it is good enough for them, it is good enough for me.

Book Review: Rework and Remote

January 6th, 2014

I am reviewing two books: Rework and Remote by Jason Fried and David Heinemeier Hansson of 37Signals fame. Both books are laid out in the same format: multiple sections, each with multiple chapters. Chapters are short, normally 1-3 pages. These short chapters are great because they are packed full of info without being overly long and boring. The minor downside is that some chapters left me wanting more on the subject. Each chapter has an illustration that goes with it; some of these are hilarious. I had no problem finishing these books and staying engaged while reading them.

Neither book is made-up theory that merely sounds nice. Both are rooted in what makes 37Signals work; the ideas and concepts come straight from day-to-day life at the company.

Rework

Rework’s tagline is “Change the way you work forever”. The general idea is to challenge the status quo of what work should be and look like. It pushes against the standard norms of running a business. For example, the chapter “Why Grow?” discusses the idea of the right sized business, suggesting readers find the right size for them and stay there. This is different than the status quo of: if you are not growing, you’re dying.

Remote

Remote’s tagline is “Office Not Required”. It could be considered a playbook for setting up and having remote employees. I would suggest it for both employees and employers who want to work remotely or already are. The great part about this book is that it makes clear what the trade-offs are between working remotely and being in the office, and in many cases it explains why those trade-offs are invalid or how to deal with them. The chapter “The Lone Outpost” suggests that giving just one employee the ability to work remotely is setting remote up to fail. It states that remote will only work if multiple people are remote, creating the shared need for the changes that make remote work.

Conclusion

I have seen my fair share of “old way” thinking while working traditional and nontraditional jobs, and it pains me to see this “old way” still strong in management today. Both of these books push the idea that there is a better way. It is a new and different way, and there are pitfalls, but in the end you will be happier, your employees will be happier, and your product will be better. I highly recommend these books to pretty much anyone, especially if you work in a creative job such as development or design.

Software Craftsmanship

January 3rd, 2014

On my way home from work the other day, I was listening to .NET Rocks Episode 934 with Uncle Bob on building software. Carl, Richard, and Uncle Bob discussed HealthCare.gov and the issues with the site from a development standpoint. At 28 minutes 45 seconds in, Richard states “I am responsible to my industry first and my customers second”, and this struck me as very profound.

I had never considered the idea before that my actions, code, and software are a representation of my industry; that my actions as a developer could cause a person to view all programmers in a different way.

If we look at lawyers, for example, we can see the stigma of the job. Society seems to have a negative view of lawyers, as if somehow you are a terrible person if you go into law. Why is this? There are terrific people who work in law; I have met them and worked with them. The negative view was probably built by the few who have acted unprofessionally: the ambulance chasers and those who file frivolous lawsuits just to make a buck. My point is that it won’t take much for our profession, software development, to get a similar stigma if projects keep failing.

I fear that the stigma of software developers being unprofessional and uncaring is already taking hold. The software the world uses every day is pretty terrible. Why is my password limited to 12 characters for my online credit card account? Why does the family doctor tell me all the issues he has with the EMR instead of telling me how awesome it is? Why does my cable box freeze so much? Why does Google Chrome’s spell check not fix the fucking word when I click on it? People should be excited about how software is making their lives easier, not about how much it sucks. Our job as developers is to provide software that helps people, not software that infuriates them.

Uncle Bob and many others created the Software Craftsmanship manifesto in 2009. The goal of the movement is to push craftsmanship and professionalism in the industry. The general idea is to promote doing what is right over just getting it done. Good enough is no longer acceptable.

Not only working software, but also well-crafted software
Not only responding to change, but also steadily adding value
Not only individuals and interactions, but also a community of professionals
Not only customer collaboration, but also productive partnerships

I have signed the manifesto as a reminder to push my industry forward. To not sit idly by. To make awesome!

Transparency in the Work Place

December 12th, 2013

Imagine a CEO walks in one morning, states loudly for everyone to hear, “The release date has changed from 5 weeks to 2 weeks. Everything must be done.”, and walks away.

The first questions from everyone are: What just happened? Why did the date move? How are we going to finish this 3 weeks early? Productivity will remain very low until answers arrive or the shock wears off. And the rumors will start. Maybe we have a client? Maybe we are being sold? Maybe we ran out of money? Maybe the CEO is a bitch?

Many of us have lived through this example, or ones like it, far too many times. Thinking back to the times this has happened to me, the majority of the problem wasn’t the information I was receiving; it was the number of questions it created. The most crippling one was why. I, like many, will spend far too much time trying to understand why a change was made or why something works.

As developers, a large part of our day is spent understanding the whys of our software. Why does it work in this case but not that one? Why does this user click a button 5 times? Why did Bob eat that? To many of us, not knowing why is like having an itch we can’t scratch. It will plague our minds until we have a suitable answer. This is also what makes us good programmers, but that’s another post.

Transparency can solve this and so much more. Forbes agrees. There are many benefits to being transparent, but the one I am most interested in is the one that bugs me the most: answering the question of why.

Looking back at the example, if the CEO had been completely transparent, good or bad, it would have allowed the staff to cut through the crap and get to the point. The deadline was moved because there is a huge opportunity for the company if we can hit it. Or the deadline was moved because if we are not done in 2 weeks, we are going to run out of money and everyone will be laid off. In either case, why was answered and the staff can move on to dealing with other questions, like how.

I have been more loyal and understanding toward a boss who was transparent, even when the information was bad. I knew they were telling me all they knew, I understood their choices more completely, and I was willing to follow their direction more often.

With a boss who was less than transparent, I have been more questioning of their motives and of whether they really had the team’s best interest in mind.

I am not alone in this way of thinking. Many of my co-workers over the years have exhibited the same tendencies.

Statements like “Something bad is happening. Why would we do that now? It doesn’t make sense.” are commonplace when transparency is limited. My suggestion to the management of the world is to treat us like adults. We can handle bad news. If an employee can’t, you probably didn’t want them as an employee anyway.

Node.js And Azure: Is it a Port or a Pipe?

December 9th, 2013

When deploying a node.js application to an Azure website, the node.js environment variable process.env.port is set. One would think that port is the listening port and write something like this:

var port = process.env.port || 8080;

The problem? In Azure websites, port is not an int, it is a string. More problematic, it is not a port at all. It is a pipe. It looks something like this:

\\.\pipe\2f95e604-fc02-4365-acfc-010a26242d02

The good news: node.js can handle this. Be careful though, many modules for node.js are not expecting this to be the case and may have to be set up and initialized differently than the standard IP/port examples. Hapi and Express, for example, can and have run on Azure.
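
For example, with Express the value can be passed straight through, since node’s underlying net module accepts either a numeric port or a named pipe path. A minimal sketch (the route and log message are made up):

var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('hello');
});

// On Azure this will be the named pipe string; locally it falls back to 8080.
// Note || (logical or) rather than | (bitwise or), so the string survives.
var port = process.env.port || 8080;

app.listen(port, function () {
    console.log('Listening on ' + port);
});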

Geek Motivation

October 16th, 2013

Psychology is something I have always been interested in. If I couldn’t work in software development, I would probably be in psychology. To be more specific, I am interested in geek psychology. What makes us act the way we act? What motivates us to be engaged? How are we the most productive?

Recently there seems to be a shift in how people are being managed at work, more specifically how people in creative positions are being managed. Traditional styles of management seem to be less productive than newer styles. The reason seems to be that the newer styles make a geek’s work life better by giving them the freedom to complete their tasks in a way that works for them.

Michael Lopp, who blogs under the name Rands, talks a lot about soft skills. He has posts entitled “The Nerd Handbook” and “Managing Nerds” that outline many of the characteristics that generally define geeks. One of the main themes throughout each post is something dubbed the high: the euphoria that is felt when one understands or completes a task. Much like drugs, this euphoria is what geeks are chasing. Today’s new management trends are trying to create environments where geeks can reach this high quicker, because this is when geeks create awesome.

This high is important. Without it, geeks get frustrated, bored, and quit. Ever wonder why some geeks seem to switch jobs every few years? It is because they have come to understand all the interesting problems and have dominated them. There is nothing left for them to do to reach the next high, so they move on.

Having solved all the interesting problems is not the only reason geeks quit. Sometimes it is the environment they work in. Did you know a business can have a mindset? Humans, groups, and teams all have a distinct mindset that drives the actions and culture of that collective. There are two main types of mindsets: fixed and agile. Geeks do not like being in a group with a fixed mindset.

A person with an agile mindset craves knowledge and is ok with failure as long as they are learning. These are the people who try 10 different algorithms to sort a list to find out which one is best. These are the people who suggest cutting edge technologies because they want to learn them; they know it will be painful to implement but they don’t care. These are the people who want to be the least skilled person in a room because they know the other people in the room have knowledge they can learn.

A fixed mindset person is one who believes they are naturally smart. People with this mindset have typically been told they are really smart. They tend to rely on their natural ability rather than trying to get better and learn. They are easily frustrated by failure. If possible, they would prefer to be the smartest person in the room, as it somehow validates what they believe is true.

Linda Rising gave a talk on this subject a few years back, exploring the topic in more detail. During her talk, she suggests that businesses can also have a mindset. I believe this to be true. Businesses show the same characteristics as people do, but with the side effect that their mindset affects their employees. A fixed mindset business will tend not to tolerate failure, assuming the talent of its employees will carry the business forward. An agile mindset business will allow employees to fail as long as they move forward.

Geeks prefer not to work for fixed mindset businesses. Geeks love to try new things and push themselves to be better and learn. In a fixed mindset business where failure is not an option, trying new things is also not an option, which leads to geeks getting frustrated and leaving.

On the flip side, geeks prefer working for a business with an agile mindset. Companies like GitHub and Netflix are embracing this mindset and attracting highly skilled geeks. GitHub believes in giving their geeks almost unlimited freedom over when and how they work. For them this has worked very well; their employees are highly motivated and engaged.

If you are a geek and want to make awesome stuff and have an impact on the world in some small way, I highly suggest seeking out a company that understands how geeks work and function best. Your utopia exists, but it is up to you to find it.

If you are a business that is looking for motivated geeks, I suggest that you make sure to take care of your geeks. Give them the space and opportunity to fail and learn. Given the right environment, your geeks will create awesome.

Node.js running on Azure.

September 10th, 2013

Recently I created a little node.js project for fun. Its sole reason for existing is to display Bastard Operator From Hell quotes. I ended up deploying it to a free Azure website because it uses edge.js and requires C#. Azure Websites runs on Windows and has .NET installed. Yes, even node.js websites run on Windows.

Time to jump right into it. I spent entirely too much time getting my little project deployed. Why? Well, the documentation for Azure is incomplete. Little did I know, I was missing a key piece of info: to run node.js in Azure, one needs a web.config file. If you use one of the templates, it will be created for you. If you don’t use a template, you have to create it yourself. This is/was not documented.

Why does node.js require a web.config to run in Azure? Well, node.js in Azure runs on IIS. Yeah, that IIS. If I am not mistaken, it is using iisnode. Honestly, this is ok given that Azure runs Windows and IIS is pretty decent at managing resources. But for the love of sanity, could this be written somewhere please?

How did I figure this out? By luck. I created another website using the template, FTPed into the website, and noticed a little file with the name web.config. Upon looking at it, it is clear why it is required.

Here is an example of a web.config. The important piece is the iisnode handler entry; its path attribute should be the name of your app’s startup js file.

<!--
     This configuration file is required if iisnode is used to run node processes behind
     IIS or IIS Express.  For more information, visit:

https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config

-->

<configuration>
    <system.webServer>

        <handlers>
            <!-- indicates that the app.js file is a node.js application to be handled by the iisnode module -->
            <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
        </handlers>

        <rewrite>
            <rules>
                <!-- Don't interfere with requests for logs -->
                <rule name="LogFile" patternSyntax="ECMAScript" stopProcessing="true">
                    <match url="^[a-zA-Z0-9_\-]+\.js\.logs\/\d+\.txt$" />
                </rule>

                <!-- Don't interfere with requests for node-inspector debugging -->
                <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
                    <match url="^server.js\/debug[\/]?" />
                </rule>

                <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
                <rule name="StaticContent">
                    <action type="Rewrite" url="public{REQUEST_URI}" />
                </rule>

                <!-- All other URLs are mapped to the Node.js application entry point -->
                <rule name="DynamicContent">
                    <conditions>
                        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True" />
                    </conditions>
                    <action type="Rewrite" url="server.js" />
                </rule>
            </rules>
        </rewrite>

        <!-- You can control how Node is hosted within IIS using the following options -->
        <!--<iisnode
          node_env="%node_env%"
          nodeProcessCommandLine="&quot;%programfiles%\nodejs\node.exe&quot;"
          nodeProcessCountPerApplication="1"
          maxConcurrentRequestsPerProcess="1024"
          maxNamedPipeConnectionRetry="3"
          namedPipeConnectionRetryDelay="2000"
          maxNamedPipeConnectionPoolSize="512"
          maxNamedPipePooledConnectionAge="30000"
          asyncCompletionThreadCount="0"
          initialRequestBufferSize="4096"
          maxRequestBufferSize="65536"
          watchedFiles="*.js"
          uncFileChangesPollingInterval="5000"
          gracefulShutdownTimeout="60000"
          loggingEnabled="true"
          logDirectoryNameSuffix="logs"
          debuggingEnabled="true"
          debuggerPortRange="5058-6058"
          debuggerPathSegment="debug"
          maxLogFileSizeInKB="128"
          appendToExistingLog="false"
          logFileFlushInterval="5000"
          devErrorsEnabled="true"
          flushResponse="false"
          enableXFF="false"
          promoteServerVars=""
         />-->

    </system.webServer>
</configuration>
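
For reference, a minimal server.js that this web.config would hand requests to might look like the following (a sketch, not my project’s code):

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from iisnode');
}).listen(process.env.PORT || 3000); // under iisnode this is a named pipe, not a TCP port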

My little project: https://github.com/Oobert/BOFHAA
Here it is running in Azure: http://bofhaa.azurewebsites.net/#index=359

Update 9/11/2013: I need to clear some things up. As pointed out in the comments, there are many ways for Azure to auto create a web.config or iisnode.yml file. In my case, I did not use the command line for deployment, and I ran into a bug with GitHub deployments which would not allow me to publish to Azure from GitHub. This bug may be fixed now. In the end, I uploaded my files to Azure via FTP, which did not create the file for me. My fun ensued at this point when I couldn’t figure out why nothing was happening.

This was not meant as a bash against Azure. I am just putting this out there in case someone else runs into the same issue. Happy Coding!


ASP.NET Dropin DLL Plugin – Part Two

July 9th, 2013

ASP.NET Dropin DLL Plugin – Part One

The first post was a quick intro to the project. In this post, we will cover how this works in more detail. So let’s just jump right in.

Typically in ASP.NET MVC, the framework knows where to look for files based on its conventions; views are in the Views folder, for example. With a plugin, the framework needs to be told when a file exists in a plugin DLL and be given a stream to that file. This is where System.Web.Hosting.VirtualPathProvider and System.Web.Hosting.VirtualFile come into play.

In the example there are two classes that inherit from the above two classes: AssembleVirtualPathProvider and AssembleVirtualFile. These are terrible implementations because they only look for the one plugin dll. This is a major area for improvement, as these classes should look within any DLL that is a plugin. There are many options for this, but that is another topic.

AssembleVirtualPathProvider’s main job is to see if a view exists within a DLL. To do this, a number of methods need to be overridden. By default these methods look for views following MVC’s naming and file location conventions; to check inside the assemblies, they need to be overridden with code that looks for the views within the DLLs. Make sure to call the base methods! If you don’t, MVC will not be able to find the files local to the main MVC project.

AssembleVirtualFile is a representation of a file being loaded from a plugin. It has one job: open a stream to the file inside a dll and return it. That is it. This class is used by AssembleVirtualPathProvider to return a file when its GetFile method is called.
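
To make the shape of these classes concrete, here is a rough sketch (hypothetical details: the plugin assembly name and the resource-name mapping are made up, and the real implementations are in the repo linked below):

using System.IO;
using System.Linq;
using System.Reflection;
using System.Web.Hosting;

public class AssembleVirtualPathProvider : VirtualPathProvider
{
    // Hypothetical: a single hard-coded plugin assembly, the very
    // limitation called out above.
    private readonly Assembly plugin = Assembly.Load("MyPlugin");

    public override bool FileExists(string virtualPath)
    {
        // Check the plugin first, then fall back to the base provider so
        // files local to the main MVC project still resolve.
        return GetResourceName(virtualPath) != null || base.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        var name = GetResourceName(virtualPath);
        return name != null
            ? new AssembleVirtualFile(virtualPath, plugin, name)
            : base.GetFile(virtualPath);
    }

    // Map "~/Views/Home/Index.cshtml" to "MyPlugin.Views.Home.Index.cshtml"
    private string GetResourceName(string virtualPath)
    {
        var candidate = "MyPlugin" + virtualPath.TrimStart('~').Replace('/', '.');
        return plugin.GetManifestResourceNames().FirstOrDefault(r => r == candidate);
    }
}

public class AssembleVirtualFile : VirtualFile
{
    private readonly Assembly plugin;
    private readonly string resourceName;

    public AssembleVirtualFile(string virtualPath, Assembly plugin, string resourceName)
        : base(virtualPath)
    {
        this.plugin = plugin;
        this.resourceName = resourceName;
    }

    // One job: open a stream to the embedded file and return it.
    public override Stream Open()
    {
        return plugin.GetManifestResourceStream(resourceName);
    }
}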

Once these are created, the main MVC project must be told to use these providers. In the main MVC project’s global.asax, the new path provider needs to be registered.

System.Web.Hosting.HostingEnvironment.RegisterVirtualPathProvider(new Lib.AssembleVirtualPathProvider());

Now there is a catch to this: plugins must be in the main MVC project’s bin directory. I have tried multiple ways to get around this but have not had any luck. The reason is that when DLLs are in the bin directory, the website’s process does a deep inspection of all the files there. It keeps tabs on what exists, including models and controllers. If the DLLs are not in the bin directory, MVC will not be able to resolve the model and controller classes.

Moving on to serving static files from the DLLs. One of the goals was that code in the plugins be written as close as possible to a normal MVC application; I did not want some special syntax for plugins vs non plugins. For this to work, images and files need to be handled by an HTTP handler, in this case a static one. Bring in System.Web.StaticFileHandler. It will serve files from DLLs or the file system, which is pretty handy. In the web.config of the main MVC project, an entry needs to be added for each static file type you would like to serve from the plugins.

<add name="AspNetStaticFileHandler-GIF" path="*.gif" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
<add name="AspNetStaticFileHandler-JPG" path="*.jpg" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
<add name="AspNetStaticFileHandler-PNG" path="*.png" verb="GET,HEAD" type="System.Web.StaticFileHandler" />
<add name="AspNetStaticFileHandler-JS" path="*.js" verb="GET,HEAD" type="System.Web.StaticFileHandler" />

On top of this, routes need to be ignored for the static file extensions that the StaticFileHandler will handle.

routes.IgnoreRoute("{*staticfile}", new { staticfile = @".*\.(css|js|gif|jpg)(/.*)?" });

But wait, another catch! System.Web.StaticFileHandler does not set the HTTP response headers correctly for caching when serving files from plugins. It works perfectly when serving files from the file system. To fix this, an HTTP module needs to be created that checks whether the file was served by the StaticFileHandler and sets the cache headers, or a different static file handler can be used. The super secret 3rd option (which is sort of not good) is to serve all static files from the main MVC project.

Generally speaking, that is it. No hidden projects, mirrors, or DLL references. A bonus is that the plugins can run independently from the main MVC project during development if needed.

Some areas that can be improved:
  • Better assembly handling in the file and path providers
  • Loading routes, filters, etc. from plugins using MEF (or similar)
  • Using/writing a better static file handler

Github repo with example: https://github.com/Oobert/ASP.NET-MVC-Plugins

Enabling Twitter cards

June 24th, 2013

Twitter cards are summary widgets that show up for links to websites that have them enabled. There are a few types of cards: summary, summary with image, photo, product, and a few others. To enable Twitter cards, a few things need to happen. First, the website must include meta info in the page header to tell Twitter what to put in the card. Second, the site owner must request to have cards activated for the site.
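
For a summary card, the meta tags look something like this (the values are placeholders):

<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@yourhandle" />
<meta name="twitter:title" content="Post title" />
<meta name="twitter:description" content="A short summary of the post." />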

This evening I set up Twitter cards for this blog. The blog runs on WordPress, and luckily there are a handful of plugins that take care of inserting the meta data into the page header. I installed the JM Twitter Cards plugin; it had the most downloads and 5 stars. Install was painless and setup was easy: just fill out the forms for summary cards in the plugin’s settings.

Next I logged into Twitter’s dev site, where there is a Twitter Card Validator, and validated that the plugin was working. Sure enough, everything looked good. So I clicked the submit button for approval and filled out the form with a few easy questions about my website. Upon submission I was told to expect a response within weeks. It took about 5 minutes to get a response.

That is it.

