For some time I have been interested in the new HTML5 features, primarily Canvas and WebGL. For this reason I looked at implementing a very simple RPG in HTML5. Five years ago the idea of doing so would have been ludicrous: an HTML implementation would have been clunky, and the obvious choice would have been Flash. Fortunately most people are now using up-to-date browsers with decent Canvas support.

My initial goal was to implement a hex grid. I looked around to see if anyone had implemented a nice hex grid library, but those that existed did not fit my requirements.

Primarily I wanted flat-topped hexagons, and the ability to select a specific cell via mouse click or touch. Hence Hexagon.js, a library that does exactly that.

You can download Hexagon.js from the GitHub repository, or you can try the demo on this page.

Hexagon.js has no dependencies, though it does require a modern browser. The code is licensed under an MIT license. At the moment it only supports flat-topped hexagons, but I plan to eventually generalize it to support flat-sided hexagons as well.

This code owes a great debt to Ruslan, whose well-diagrammed explanations of the mathematics of hexagons proved invaluable.

To get started you simply need to embed the following in your page:

The constructor takes the ID of the canvas element along with the radius of the hexagons that will be drawn.

You then call hexagonGrid.drawHexGrid(), passing it the number of rows and columns and the amount of offset. The offset is where on the canvas the grid will be drawn. This allows you to have a single canvas for, say, a game where the grid itself will be drawn in the middle, while UI components will be drawn around it. Finally there is a debug flag. When enabled this draws each cell's coordinates at the bottom of the hex.
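Putting the pieces together, a minimal page looks something like this (the class name and exact parameter order are assumptions on my part; check the demo source for the real signature):

```html
<canvas id="HexCanvas" width="800" height="600"></canvas>
<script src="hexagon.js"></script>
<script>
  // Construct the grid against the canvas element, with a hex radius of 50px.
  var hexagonGrid = new HexagonGrid("HexCanvas", 50);
  // Draw 10 rows by 10 columns, offset 50px from the top-left corner,
  // with the debug flag enabled so cell coordinates are drawn.
  hexagonGrid.drawHexGrid(10, 10, 50, 50, true);
</script>
```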

Other functions that may be of interest are:

hexagonGrid.drawHexAtColRow(): Draws a hexagon at a particular row and column with a given colour.

hexagonGrid.drawHex(): Draws a hex at an arbitrary x and y coordinate with an arbitrary colour.
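For the curious, the vertex mathematics behind a flat-topped hexagon (the kind Hexagon.js draws) is simple enough to sketch. This is illustrative, not code from the library:

```javascript
// Return the six corner points of a flat-topped hexagon
// centred at (cx, cy) with circumradius r.
function hexCorners(cx, cy, r) {
  var corners = [];
  for (var i = 0; i < 6; i++) {
    // Flat-topped hexes have corners at 0, 60, 120, ..., 300 degrees.
    var angle = (Math.PI / 180) * (60 * i);
    corners.push({ x: cx + r * Math.cos(angle), y: cy + r * Math.sin(angle) });
  }
  return corners;
}
```

A flat-topped hex is therefore 2r wide and √3·r tall, which is what drives the column and row spacing of the grid.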

If this was interesting you may also be interested in my Mandelbrot implementation in JavaScript.

Cleaning up Safari Books fonts

For the last several years I have had a subscription to the excellent Safari Books Online site. Safari Books is run by O'Reilly and is the Netflix of technical books. Most of their catalogue, and a huge swathe of other publishers' books, is available for a monthly fee.

While Safari is an extremely useful resource it does have two flaws. Firstly some books use a poor choice of fonts which render badly. Secondly if you select text in the book, a small in page popup appears that allows you to highlight or add notes. While that may sound useful, I find myself constantly selecting text (for no purpose) and the box is quite a distraction while reading.

To remedy both these problems I wrote a small piece of JavaScript that can be kept in a bookmarklet and run to clean up the Safari pages. You can look at the GitHub repository here.

To create the bookmarklet, simply create a new bookmark, and set the URL to:

This pulls in the script directly from GitHub and executes it on the current page.
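The URL follows the usual script-injecting bookmarklet pattern, which can be sketched as a small builder function (the script address below is a placeholder, not the repository's real raw.github.com path):

```javascript
// Build a "javascript:" bookmarklet URL that injects a remote script
// into the current page when clicked.
function makeBookmarklet(scriptUrl) {
  var loader =
    "(function(){" +
    "var s=document.createElement('script');" +
    "s.src='" + scriptUrl + "';" +
    "document.body.appendChild(s);" +
    "})();";
  return "javascript:" + loader;
}
```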

The script itself is made of two parts. Firstly there is the scaffolding. This, among other things, pulls down a copy of jQuery (though to be fair this is probably overkill).

Secondly, there is a function, initStyleReplace(), which does the actual work.
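As a rough sketch of what initStyleReplace() does (the selectors and font choices here are illustrative, not the script's actual values, and the real script uses jQuery), it walks the book's text elements and overrides their styles:

```javascript
// Illustrative sketch only: replace the fonts on the page's text elements.
// "doc" is passed in so the function can be exercised without a browser.
function initStyleReplace(doc) {
  var els = doc.querySelectorAll("p, div, span");
  for (var i = 0; i < els.length; i++) {
    els[i].style.fontFamily = "Georgia, serif";
    els[i].style.fontSize = "16px";
  }
}
```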

You can fork your own copy on GitHub and update the script to use your own choice of font, size etc. BvD.Layer is the function that spawns the tagging and highlight window, so remove that line if you enjoy tagging and highlighting.


For several years a key tool in .Net developers' toolchests has been Reflector. Originally a free tool developed by Lutz Roeder, it was bought by Redgate Software in 2008. Redgate had announced that they would try to keep the tool free, but next month free support will end.

Those looking for a replacement may be interested in ILSpy. ILSpy has been in development for a few months and already has the major features required of a decompiler. It's under active development and is open source. It doesn't yet support languages other than C# and IL, and it would be great if it integrated into Visual Studio, but I think this is a tool with a bright future.

For those of you not familiar with this sort of tool, ILSpy and Reflector give you a class browser - displaying the namespaces and classes within an assembly, and listing the properties, fields, methods, etc. within. This alone is a useful way of quickly exploring the .Net Framework or third-party DLLs. What makes it really great though is that you can select a method and decompile it into C# code, revealing exactly how the code works. This is extremely useful when documentation is poor or you need a better understanding of some library. You can easily navigate the decompiled code: clicking on classes or methods acts like clicking a hyperlink, taking you to the decompiled code of whatever you click. You can also see what code uses a particular method. A very powerful tool you should investigate if you have not already.

Finally my review of the Olympus E-P1

I've had the E-P1 in my hands for a few weeks now, long enough to make some conclusions.

Let's start with the bad aspects of this camera.


Several commentators have lamented the E-P1's lack of flash. I however haven't found this to be an issue. I rarely used the flash on my Canon, and the E-P1 is vastly more capable at taking low light shots. A fill flash would be nice, but its absence is more than made up for by the other features on this terrific camera.

A more serious challenge is focus. The E-P1 uses contrast detection rather than the phase detection found in most SLRs. It is noticeably slower at focusing than my old Canon, but not terribly so, and it is much faster at focusing than any of the compact cameras I have used. If you need a fast-focusing camera, e.g. for shooting sports, then this is not the camera for you. I don't have any issues with the focus, and as I will discuss later I find the manual focus brilliant.

Final complaints center around the lack of a viewfinder. Frankly this doesn't concern me at all. A viewfinder is nice, but live view has its advantages as well. I haven't had any problem using the live view in bright sunlight.


The E-P1's greatest strength is its size. It is tiny compared to my old Canon; the Canon feels enormous now. The body is around halfway between a DSLR and a compact camera. It is not just the body that is small however. The kit lens is incredibly compact, featuring a collapsing mechanism, and the standard Four Thirds lenses are also a lot smaller than the usual fare from Canon/Nikon. Along with the camera I bought a small Domke bag, the F-5XA. This bag easily fits the E-P1 with attached lens, along with another lens, and it would easily take a third pancake lens as well. This is despite being around the same size as my bag for the Canon, which takes only the Canon body and a single (non-telephoto) lens.

Size was my main motivation for purchasing this camera. I wanted something smaller and lighter for traveling and the E-P1 is perfect. Its only competition is cameras like the Canon G11 (which are smaller but less versatile).

Another feature that I love about the E-P1 is the manual focus assist. I have never been able to manually focus reliably with APS-C cameras; through a combination of smaller viewfinders and the lack of a matte focusing screen, it's been a real problem. When you begin manual focusing on the E-P1 however, the live view magnifies 7 or 10x, making it trivial to acquire focus, even at night. I've tried to focus at night with my Canon, and it's just been impossible; the E-P1 delivers.

The Four Thirds and Micro Four Thirds systems use fly-by-wire focusing on their lenses. That is, when the lens is in manual mode you do not directly control the focus; rather you rotate a dial which then controls the focusing of the lens. When I heard about this I thought it sounded awful. After using it though, I can say that it is really well done. I find focusing smoother than even some of the old manual primes that I've used. The speed at which you rotate is matched by the speed at which the lens focuses, so you can focus very small amounts very easily. Coupled with the live view magnification I'm totally sold on this system.

While most SLR manufacturers have implemented in-lens image stabilization, Olympus has gone with an in-body stabilization system. I cannot express in words how happy I am about this. The E-P1 features four stops' worth of image stabilization, and it works with every lens, from super-wides to primes. This really changes the perspective of lens selection; with my Canon, deciding whether to sacrifice image stabilization was a big part of the decision-making process. Four stops of IS totally ups your ability to shoot with long lenses and in low light as well. I have successfully hand-held at 1/3 of a second (with a 100mm-equivalent lens); on the Canon I'd be looking at around 1/50th as the slowest shutter speed with an equivalent lens.

Other Thoughts

There are several other features that should be noted about this camera. Firstly, automatic mode is highly usable. Other cameras I've used almost exclusively in manual; on the E-P1 I find myself using auto for more than half my shots. You still need to hit up manual for certain light conditions of course. The ergonomics are quite good for such a compact camera - not as good as an SLR, but given the limitations of size more than acceptable. Still, if you put a heavy Four Thirds lens on it, you will need to use two hands to shoot. The flash hot-shoe came with a nice plastic protector; unfortunately it comes off too easily, and after finding it at the bottom of my bag for the third time I removed it permanently.

The E-P1 is a great camera, especially for anyone who wants a compact camera with the flexibility of interchangeable lenses. If you need high-speed focusing, avoid it; otherwise it is worth serious consideration.

A look at the Parallel class, .Net 4.0's new functionality to simplify multi-threading.

.Net 4.0 introduces new libraries for handling, and greatly simplifying, multi-threaded programming. This is a welcome addition to .Net for two reasons. Firstly, writing multi-threaded code tends to be complex. There are all sorts of difficulties when working with threads that simply don't exist for single-threaded applications. Add to this an environment where multiple programmers are working on different parts of the code base, or where a maintenance programmer with a weak understanding of the code base needs to make adjustments years later, and you have a recipe for difficult-to-debug code, with possible errors making their way into a production environment. At the same time, computers are becoming vastly more parallel. Consumer machines are now multi-core, 4 cores are common, and over the next couple of years we will see 6 and possibly 8 core machines in the hands of everyday users. Writing your code in a multi-threaded manner has never been more important.

The new additions make it far easier to take existing code and make portions of it multi-threaded or build new code that is optimized for parallel processors. Let’s have a look at the Parallel library first.


Firstly let’s set up some skeleton code:
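The original listing is along these lines (a reconstruction; the names are my own):

```csharp
using System;
using System.Collections.Generic;

public class TestObject
{
    public string Text { get; set; }
}

public static partial class Program
{
    private static readonly Random _random = new Random();
    private static List<TestObject> _objects = new List<TestObject>();

    // Generate a random lowercase string of the given length.
    private static string RandomString(int length)
    {
        var chars = new char[length];
        for (int i = 0; i < length; i++)
            chars[i] = (char)('a' + _random.Next(26));
        return new string(chars);
    }

    // Print the Text of each object in a list.
    private static void PrintObjects(IEnumerable<TestObject> objects)
    {
        foreach (var o in objects)
            Console.WriteLine(o.Text);
    }
}
```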

Here we have a test object we will be manipulating, a method to generate random strings (used in our test object) and a method to print lists of our test objects.

Parallel.For is the multithreaded version of a standard for loop. Rather than being a keyword it is a function, meaning our code will unfortunately look a little clunkier. It also has the restriction that the iterator is limited to incrementing by 1 (i++), unlike the standard for where you can specify an iteration of any value (i+=2, i=i+7, i=myFunc(i), etc.). To invoke it you specify an initial value, a final value (rather than a test condition), and an Action that executes your desired code. Let's look at an example:
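Something along these lines (a sketch against the skeleton above; helper names are mine, and _objects.Count() requires System.Linq):

```csharp
// Three equivalent invocations: a lambda, an anonymous delegate,
// and a named method (ReverseObject below).
Parallel.For(0, _objects.Count(), i =>
{
    _objects[i].Text = Reverse(_objects[i].Text) + i;
});

Parallel.For(0, _objects.Count(), delegate(int i)
{
    _objects[i].Text = Reverse(_objects[i].Text) + i;
});

Parallel.For(0, _objects.Count(), ReverseObject);

private static void ReverseObject(int i)
{
    _objects[i].Text = Reverse(_objects[i].Text) + i;
}

private static string Reverse(string s)
{
    var chars = s.ToCharArray();
    Array.Reverse(chars);
    return new string(chars);
}
```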

In this trivial example we reverse the string in our test object and append a number to it. I've used three different ways of invoking the same code to provide some insight as to what the code will look like. We always have a single integer parameter (here named i) which increments between the initial value (here 0) and the final value (here _objects.Count()).

As you can see the code is very simple. We can control the number of threads that are spawned by using an overload that takes a ParallelOptions object. ParallelOptions takes a CancellationToken, which allows external code to cancel the loop early; an int MaxDegreeOfParallelism, which defines how many threads should be spawned; and a TaskScheduler that overrides the default TaskScheduler, allowing you to control the scheduling. Without using this overload .Net takes care of these details based on the number of cores in the computer, current usage, etc. There are also overloads that use a long counter rather than an int.

While this is definitely a step up from the standard threading libraries, care must still be taken to use Parallel.For in such a way that side effects from different threads don't interfere with each other. Order can no longer be guaranteed, so if, say, you are adding a value to a list inside each execution then the order in which items are added will be somewhat random. Still, for many tasks this is a great addition. Tests by others seem to show significant speed improvements when used on multi-core machines for non-trivial examples. As always, optimization needs to be tested both before and after to ensure you get the best speed possible.


Parallel.ForEach works in much the same way as Parallel.For. Let’s look at some code that does the same thing as our for example:
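A sketch of the equivalent ForEach form (again assuming the Reverse helper from earlier):

```csharp
// Iterate the collection directly; the lambda receives the element
// itself rather than an index.
Parallel.ForEach(_objects, obj =>
{
    obj.Text = Reverse(obj.Text);
});
```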

In our example we pass a list to the Parallel.ForEach method, along with an Action to be executed. The parameter to the Action in this case is the current element from the IEnumerable. We have overloads taking ParallelOptions just like Parallel.For, and the same caveats about threads interfering with each other apply.


Finally let us look at Parallel.Invoke. Parallel.Invoke is used for firing off a bunch of discrete pieces of code that have no requirement on order of execution or completion. Parallel.Invoke takes an array of Actions and starts them off in parallel. Just like the previous methods it also has an overload that takes a ParallelOptions object. Let's see a sample:
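Roughly like so (a reconstruction; requires System.Threading for Thread.Sleep):

```csharp
// Three Actions passed three ways: a lambda, an anonymous delegate,
// and a named method. Completion order is not guaranteed.
Parallel.Invoke(
    () => { Thread.Sleep(300); Console.WriteLine("First"); },
    delegate { Thread.Sleep(200); Console.WriteLine("Second"); },
    PrintThird);

private static void PrintThird()
{
    Thread.Sleep(100);
    Console.WriteLine("Third");
}
```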

Again, for demonstrative purposes, three different ways of passing in an Action parameter are used. If you are running this code yourself you can easily change the parameter inside the Thread.Sleep methods to change the order in which the words are printed out, thus demonstrating the non-sequential nature of this code.

All in all the Parallel class makes it easy to kick off multithreaded code, though care must still be taken to avoid traditional multi-threading pitfalls.

.Net Code Contracts

One of the new features of .Net 4.0 is code contracts. Code contracts are a way for programmers to define how methods and classes should behave in a more detailed way than simply their signature. A contract, for example, could specify that a method's parameters should not be null, or that the value it returns should always be positive.

Code contracts themselves are broken into two parts: the library itself, which is included in .Net 4.0, and the Visual Studio add-on (downloadable here). The two need to be paired together to do useful work (though no doubt third parties will introduce their own products in the future).

Code Contracts do two things. Firstly they replace existing contract code of the form:
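The "before" and "after" shapes are roughly as follows (a reconstruction, not the original listing; Contract.Requires lives in System.Diagnostics.Contracts):

```csharp
// The traditional guard clause...
public int Divide(int value, int divisor)
{
    if (divisor <= 0)
        throw new ArgumentOutOfRangeException("divisor");
    return value / divisor;
}

// ...becomes a declarative precondition:
public int Divide(int value, int divisor)
{
    Contract.Requires(divisor > 0);
    return value / divisor;
}
```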


Obviously a slightly nicer syntax isn't going to convince many people to change their patterns. No, the real meat comes with static analysis. Static analysis allows Code Contracts to examine your code at compile time and attempt to determine if your code obeys the contracts specified. Static analysis is only available with the "Premium Edition" (free, but it only runs on Visual Studio 2008 Team System, Visual Studio 2010 Premium Edition, or Visual Studio 2010 Ultimate Edition).

To enable static analysis you must install Code Contracts Premium, and in the properties of your project enable Perform Static Contract Checking. When you compile you will start to get warnings. Let's look at some examples.
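The method in question is along these lines (a reconstruction):

```csharp
public static int Divide(int value, int divisor)
{
    // Never accept a negative divisor, and never divide by zero.
    Contract.Requires(divisor > 0);
    return value / divisor;
}
```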

In the above (trivial) method we want to ensure that the divisor is never a negative number, and that we never encounter a divide-by-zero error. If we then make the two following calls:
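Something like:

```csharp
int ok = Divide(10, 5);   // satisfies the contract
int bad = Divide(10, -1); // violates divisor > 0
```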

When we compile we get a nice blue squiggly line under our second call to Divide, along with the following warning:

Warning    1   CodeContracts: requires is false: divisor > 0   

This works pretty nicely; let's try something more complex:
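A sketch of the shape I mean (names are my own):

```csharp
public static int Constrained(int i)
{
    // The value must lie strictly between 100 and 110.
    Contract.Requires(i > 100);
    Contract.Requires(i < 110);
    return i;
}

Constrained(95);  // violates i > 100
Constrained(115); // violates i < 110
```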

As expected we get errors for the values 95 and 115.

Warning    1   CodeContracts: requires is false: i > 100   
Warning    1   CodeContracts: requires is false: i < 110   

Puzzlingly, only one error shows; the second remains hidden until the first is resolved. What happens when, rather than hardcoding a value, we pass in a variable?
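That is, something like (GetValueFromSomewhere is a stand-in for any runtime source):

```csharp
int userValue = GetValueFromSomewhere(); // unknown at compile time
Constrained(userValue); // the analyser cannot prove the contract holds
```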

Warning    3   CodeContracts: requires unproven: i > 100   
Warning    5   CodeContracts: requires unproven: i < 110   

Which we can resolve by including a check:
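Roughly:

```csharp
int userValue = GetValueFromSomewhere();
if (userValue > 100 && userValue < 110)
{
    // The analyser can now prove both preconditions along this path.
    Constrained(userValue);
}
```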

So that's pretty solid, we can write contracts, and the static analysis will give us a good idea if there are any problems. We can also use Code Contracts to specify rules for the return value using Contract.Ensures(), as well as object invariants which are rules that should always be true for an object using Contract.Invariant().

This is all great, but who is it useful for? The answer is any large team using a well-designed interface. Currently you would either use exceptions, which are only useful at runtime, or documentation, which, let's face it, is unlikely to be read when, 4 years on, the maintenance programmer makes a change 3 functions up the call stack. Care is going to be needed though. It looks like it would be very easy to begin burying business logic in your contracts, or hardcoding values. A project where the specifications are more fluid is likely to see a lot of pain if the contracts are constantly being revised.

It's not all roses however. The static analyser seems more like a beta product than anything. It had problems with int? and the warnings produced were often unhelpful. Still, you can use the Code Contracts library right now; undoubtedly the static analysis will continue to evolve.

All in all this looks like a pretty nice addition to .Net 4.0. You can read far more about it here.

Using WiX to install SQL databases and execute SQL scripts

In my last post on WiX I described how to set up a simple installation project that installed four files and created a shortcut in the start menu. Today we will look at installing databases and running SQL against them during installation. Installing or configuring a database is not an unusual task during installation, especially with programs that roll out with SQL Express.

When we left off we had a wxs file that looked like this:

Let's look at how we add SQL support. The first thing we need to do is add references to the WixSqlExtension and WixUtilExtension DLLs. You do this just like you would add any other DLL to a Visual Studio project, by using the References folder.

The second thing we need to do is add references to the XML schema namespaces we are going to use. Add the util and sql namespaces as follows:
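In WiX 3 the declarations look like this (namespace URLs as I have them; double-check against your WiX version):

```xml
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"
     xmlns:util="http://schemas.microsoft.com/wix/UtilExtension"
     xmlns:sql="http://schemas.microsoft.com/wix/SqlExtension">
  <!-- ... existing content ... -->
</Wix>
```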

We now need to do three things:

  • Create a User element to provide credentials to the database.
  • Create SqlDatabase and SqlScripts elements to define the database and link to the script file.
  • Create a Binary element to reference a SQL script file.

The User element is a generic element used for storing user information that is consumed by a variety of other elements (SQL, services, etc).
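A sketch of the User element (the Id is my own; the property names match the msiexec invocation later in this post):

```xml
<util:User Id="SQLUser" Name="[SQLUSER]" Password="[SQLPASSWORD]" />
```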

In the above code the Name and Password attributes are set to properties, variables that can be set by command line, the UI (which I'll be making a post about in the future) and custom actions.

The SqlDatabase element specifies the SQL server, database and user, essentially the connection string. This is where the User is linked in by specifying its ID against the User attribute. As well as this you can specify whether to create and drop the database during certain install events. Specifically the options are:

* CreateOnInstall
* CreateOnReinstall
* CreateOnUninstall
* DropOnInstall
* DropOnReinstall
* DropOnUninstall

This gives a lot of control over how the database is created or recreated during install and uninstall events. Care needs to be taken when dropping databases that users may have stored information in.

The SqlDatabase element contains one or more SqlScript elements. SqlScript simply links the SqlDatabase with a script file and specifies on which installation events it should be run (Execute and Rollback versions of the SqlDatabase options).
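Putting SqlDatabase and SqlScript together looks roughly like this (Ids and attribute choices are illustrative):

```xml
<sql:SqlDatabase Id="TestDB"
                 Database="WixTestDB"
                 Server="[SQLSERVER]"
                 User="SQLUser"
                 CreateOnInstall="yes"
                 DropOnUninstall="yes">
  <sql:SqlScript Id="CreateTables"
                 BinaryKey="CreateTablesSql"
                 ExecuteOnInstall="yes" />
</sql:SqlDatabase>
```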

Finally we need to specify the SQL file that we need to run. The Binary element allows us to embed files that can be used by the installer but are not meant to be installed.

A Binary simply has an Id (referenced by SqlScript) and a SourceFile attribute which references the local file on your machine. In this case our test.sql file contains a simple command to create a table:
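The Binary element and the table it creates might look like this (names are my own):

```xml
<Binary Id="CreateTablesSql" SourceFile="test.sql" />
```

With test.sql containing something along the lines of:

```sql
CREATE TABLE TestTable
(
    Id INT PRIMARY KEY,
    Name NVARCHAR(50)
);
```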

Obviously this is a trivial example. We could easily invoke scripts to construct many tables and populate them; more importantly, we can use properties to conditionally select which files to run.

So let's look at what we have done. We've added user credentials, created a way for WiX to communicate with a database, and specified a file to run against this database. The end result, merged with our previous wxs file, looks like this:

Now all we need to do is run the installer. Because we haven't provided a UI we have to specify the properties manually when running the msi file. To get more control over the install we use the MSIEXEC.EXE tool. The following shows how to execute our .msi:

msiexec /i WixTest.msi /log log1.log SQLUSER="SA" SQLPASSWORD="Password" SQLSERVER="(local)\SQLExpress"

The /i flag tells the Microsoft Installer which file to run, /log specifies where to log to, and finally the properties are set. This gives sys-admins a lot of power, as they don't need to use a UI to install your application across a network of computers. To uninstall, run the same as above but with /uninstall instead of /i.

To conclude, WiX offers a powerful way to create new, and update existing, databases during installation in a transactional manner.

How to use WiX to configure XML files during installation.

One of WiX's powerful features is XML interaction. WiX can create and delete elements or change values and attributes inside XML files. To do this we use either the util:XmlFile or util:XmlConfig elements. The two elements differ in subtle ways: util:XmlFile does not let you specify that an action run on uninstall, though it does allow you to set the Permanent flag, which will undo the action on uninstall; util:XmlConfig, meanwhile, only allows you to delete or create an element, not update one.

To use either you must include a reference to WixUtilExtension.dll and add the namespace as follows:

Both elements contain the Action attribute. This specifies the type of XML modification to be made. For util:XmlFile the options are:

  • createElement
  • deleteValue
  • setValue
  • bulkSetValue

util:XmlConfig does not have setValue or bulkSetValue. setValue matches the first element and updates its value, while bulkSetValue updates all elements that are matched.

The other important attribute is ElementPath. This is an XPath expression used to match the element (or elements, in the case of bulkSetValue) that needs to be updated. For more information on XPath view this tutorial. I also recommend this XPath evaluator for testing your expressions.

So let's have a look at an example. In this case we will alter an app.config file using util:XmlFile. This is an app.config file that we are installing into the default installation directory.
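The shape of the component is roughly as follows (Ids, the GUID placeholder, and the connection string value are all illustrative):

```xml
<Component Id="AppConfig" Guid="PUT-GUID-HERE">
  <File Id="AppConfigFile" Name="app.config" Source="app.config" KeyPath="yes" />
  <util:XmlFile Id="SetConnectionString"
                File="[INSTALLDIR]app.config"
                Action="setValue"
                ElementPath="/configuration/connectionStrings/add"
                Name="connectionString"
                Value="Data Source=[SQLSERVER];Initial Catalog=WixTestDB" />
</Component>
```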

What is happening here? Firstly, the ElementPath XPath is set to match <configuration><connectionStrings><add connectionString="" />. Secondly, because the Name attribute is set, it will set the value of that attribute rather than the element's inner value. If and when this node is matched, the Value will be used to populate the attribute. Note that this util:XmlFile is nested inside the same component that contains the config file; it does, however, execute after the file is installed.

Good photography blogs that I follow.

A colleague of mine asked me what photography blogs I subscribe to. So as follows are the blogs I read regularly relating to photography.

Berlin Guide

Several times a week Grapf posts a photograph from around Berlin. A mixture of architecture and street photography.


The Big Picture

The Boston Globe's website publishes groups of photographs from recent news several times a week. One of my favorite photography sites.


absolutely nothing

Awesome landscape photography.


The Digital Journalist

The Digital Journalist is published monthly, incorporating articles as well as galleries of photographs.


the impossible cool

Classic portraits of actors, writers, musicians, poets, photographers and other famous people.


Beyond Phototips

Tips about photography, with plenty of awesome photographs.


Beyond the Obvious

Paul Indigo's discussions regarding photography.


The Sartorialist

Photographs of people wearing cool clothes, primarily street shots.


Astronomy Picture of the Day

Photographs of astronomical phenomena.


Daily Dose of Imagery

A photograph every day, a very good photograph every day.


Damn Cool Pics

Lots and lots of interesting photos, posted regularly in sets.


Earth Shots

Landscape photography published daily. Submitted from many users this is a competition site with a winner a day.


Ed Z Studios

Photographs and discussion on photography by Ed Zawadzki.


Hitesh Sawlani Photography

Discussion and photographs by Hitesh Sawlani.


LeggNet's Digital Capture

Rich Legg, a photographer from Salt Lake City shares his photographs and opinions. Often backgrounds behind his stock photography shoots.



Howard Grill talks about photography, few photographs, lots of insight.


Luminous Landscape

Luminous Landscape has been around for a long time now; plenty of discussion around photography, tutorials and reviews.


The Photographic World of Drew Gardner

Professional photographer discusses his work.


The Work of Daniel Hellerman

Daniel Hellerman posts his many Photographs. No discussion.


Thomas Hawk's Digital Connection

Great photographs and extensive discussion.


Stuck in Customs

Travel photography by Trey Ratcliff. Plenty of HDR.


Installing and Starting Windows Services with WiX

An important part of many application installs is configuring Windows services. WiX has the ability to install and uninstall, as well as start and stop, services during installation.

The first element we need to set up is the ServiceInstall element. ServiceInstall controls how the service will start and what user and authentication are to be used. Note that the ServiceInstall element does not specify a file; rather, the file that has KeyPath="yes" set in the same component is considered the executable to use as the service.

The ServiceInstall element lets you configure the following:


Type specifies how the service should run, either as its own process or as a shared process. Note that while kernelDriver and systemDriver are allowed values, they are not currently supported by Windows Installer.

  • ownProcess
  • shareProcess
  • kernelDriver
  • systemDriver


The ErrorControl enumeration determines what the installer should do if the service fails to start.

  • ignore
  • normal
  • critical


The start enumeration determines how the service should be started. These are the standard service start up options (as found in the services applet).

  • auto
  • demand
  • disabled
  • boot
  • system

A sample ServiceControl appears below:
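A sketch of a ServiceInstall and ServiceControl pairing (Ids, names, and the GUID placeholder are illustrative):

```xml
<Component Id="ServiceComponent" Guid="PUT-GUID-HERE">
  <File Id="ServiceExe" Name="MyService.exe" Source="MyService.exe" KeyPath="yes" />
  <ServiceInstall Id="MyServiceInstall"
                  Name="MyService"
                  DisplayName="My Sample Service"
                  Type="ownProcess"
                  Start="auto"
                  ErrorControl="normal"
                  Account="LocalSystem" />
  <ServiceControl Id="MyServiceControl"
                  Name="MyService"
                  Start="install"
                  Stop="both"
                  Remove="uninstall"
                  Wait="yes" />
</Component>
```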

Both the ServiceControl and ServiceInstall elements are nested within a Component element. The service is installed and started (or stopped and uninstalled) at the time that the component is installed.

To conclude, WiX provides a simple way of installing and setting up Windows services, and of ensuring they are cleanly uninstalled along with the rest of the program.



Four and a half years ago I bought my second digital camera, the Canon 300d. At that time the 300d was already a year and a half old, and along with the Nikon D70 was one of the first quality, "low cost" entry-level DSLRs.

The 300d impressed me greatly. Previously I had been using a 4MP point-and-shoot which I had grown to loathe. The SLR brought so much to the table: fast focusing, low light shooting. Paired with my favorite lens, the Canon 50mm f1.4, I could shoot in low light environments and really isolate my subject. Almost everything I loathed about the point-and-shoot was resolved with the DSLR.

Over time I acquired a number of lenses. When I bought the camera I also acquired the kit lens. Later I would buy the Canon 50mm 1.8 II, then the Canon 20mm f2.8, before upgrading my 50mm to the Canon 50mm 1.4. Finally I purchased a Sigma 70-300 f4.0-5.6.


Each lens had its own personality, its own feel. The 50mm was by far my favorite - so much so, in fact, that I eventually upgraded to the faster f1.4. On the APS-C sized sensor of the 300d, the 50mm lens acts as a short portraiture lens. I dragged that camera and those lenses all over the world with me: twice to Germany and the Netherlands, and earlier this year to the United States and Canada.


But lately all has not been well between me and my Canon. I've yearned to take shots in lower light; the 300d struggles past ISO 400. I have also been noticing the weight of my kit more often. Walking through Yosemite through snow and ice for many hours, even with the well-made Lowepro bag, was uncomfortable. Therefore I have been looking for a new camera. I have two requirements:

1) My new camera should offer superior low light capabilities compared to the 300d.

2) My new camera should be more compact and lighter than the 300d.

Initially my mind went to the new Canon 5d Mark II. This camera has vastly superior low light capabilities. The weight would still be an issue, though replacing my 20mm and 50mm primes with a high-quality 24-70mm lens seemed like it might do the trick. But when the E-P1 was announced I knew I had found a winner. The high ISO range may not have the quality of the Canon 5d, but the E-P1 has 4 stops of image stabilization built into the sensor, available for all lenses.

Another option I considered was getting one of the top-of-the-line point-and-shoot cameras, either the Canon G10 or the Lumix LX3. Both these cameras would offer image quality better than what I have now, and offer image stabilization. However they both fall apart from ISO 400 onwards, and I really want better low light photography. The smaller size would be delicious but is offset by the restriction of not having interchangeable lenses. I think the E-P1 is small enough for my needs, balancing image quality against size.

It's not just the E-P1 which is compact though; the Micro Four Thirds lenses are small too. Small lenses, a compact body, strong low light capabilities: this seems like a dream camera. I will write a review when I get my hands on it.

Scite Preferences

SciTE preferences can be opened and edited by selecting Options->Open Global Options File.

Some preferences you may wish to change are:
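The first line worth changing (assuming the property name hasn't changed in recent SciTE releases) is:

```
check.if.already.open=1
```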
This will cause all new files to open in the current instance of SciTE. If this line is set to 0, or is commented out (#), opening a new file will start a new instance of SciTE. Since I often find myself editing tens of files I prefer to keep them in a single instance.
# Indentation

There is an entire section on indentation and tabbing behaviour. For some reason many programmers are very particular about exactly how their tabbing is done. The defaults work for me.

# Scripting

Use this setting to specify a start-up script that holds all your own functions. This is the file that contains the functions described on the rest of this site.
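The relevant property is ext.lua.startup.script; a typical value points at a Lua file in SciTE's home directory:

```
ext.lua.startup.script=$(SciteDefaultHome)/startup.lua
```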


Setting this value to 1, or un-commenting it, will make the status bar display at start-up.
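The property being described here is statusbar.visible:

```
statusbar.visible=1
```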

# Status Bar
statusbar.text.1=li=$(LineNumber) co=$(ColumnNumber) $(OverType) ($(EOLMode)) $(FileAttr)
statusbar.text.2=$(BufferLength) chars in $(NbOfLines) lines. Sel: $(SelLength) chars.
statusbar.text.3=Now is: Date=$(CurrentDate) Time=$(CurrentTime)
statusbar.text.4=$(FileNameExt) : $(FileDate) - $(FileTime) | $(FileAttr)


You can also change what the status bar displays under the # Status Bar section. Note that on Windows you can click the status bar to cycle through the different displays.

This setting auto-closes XML/HTML tags when you press ">" (thereby deprecating my Tag Complete script).
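The property is xml.auto.close.tags:

```
xml.auto.close.tags=1
```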


Setting this to 1 and un-commenting it will cause SciTE to re-open the files it had open last session. If you start SciTE by double-clicking a file, it will only open that file and discard the previous session state.
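The property in question is save.session:

```
save.session=1
```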


Before a file is saved (or a language is chosen) SciTE does not know what language the file is. Use this setting to set the default language. Since I primarily use SciTE for HTML/XML I set mine to HTML.
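This is controlled by the default.file.ext property, which maps unsaved buffers to a file extension and therefore a lexer:

```
default.file.ext=.html
```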

Bold/Italicise Selection

In this tutorial we are going to quickly cover using text selection in our Lua scripts. Our first example will be an italicise-selection command for HTML. Basically, if we select a region of text, we want to be able to hit Ctrl-i and insert <i> at the beginning of the selection and </i> at the end. This is really easy to do in SciTE so let's get to it.
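A minimal version of the function looks like this (the exact original listing is not shown here, but this sketch matches the description below; the function name is my own):

```lua
function italiciseSelection()
    local sel = editor:GetSelText()
    editor:ReplaceSel("<i>" .. sel .. "</i>")
end
```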

Well there you are, four lines of code. editor:GetSelText() returns a string containing the currently selected text. We then pass this string into editor:ReplaceSel([string]), a function that replaces the selected text with a string. In this case the string we pass is the original selection concatenated with the <i> and </i> tags. In Lua the .. operator concatenates strings.

We can of course replace the <i> with <b>, or any arbitrary string we choose. All you need to do now is bind a suitable keyboard shortcut to this function (see binding functions for more details). I personally use Ctrl+Shift+i for italics and Ctrl+Shift+b for bold.

Using editor:ReplaceSel([string]) we can insert text at the start and end of the selection and replace all or part of the selected text. We still have more tools at our disposal, however. editor.SelectionStart and editor.SelectionEnd give us the absolute positions of the start and end of the selection, while editor.Anchor gives the position where the selection was started (which is always equal to either the start or the end, depending on which way the selection occurred).
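A trivial example that prints these positions to the output pane might look like this (the function name is my own):

```lua
function printSelectionPositions()
    print("Selection start: " .. editor.SelectionStart)
    print("Selection end:   " .. editor.SelectionEnd)
    print("Anchor:          " .. editor.Anchor)
end
```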

The above trivial example prints the positions to the output frame. A non-trivial use might involve sending these positions to another function, or using them to calculate an offset with which to insert the selected text.

Let's look at another example: a ROT13 encoder. ROT13 is a Caesar cipher that works by shifting letters by 13 positions, meaning "ROT13 converter" becomes "EBG13 pbairegre" and vice versa.
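A sketch of the encoder follows; the a-m and n-z cases follow the description below, while the upper-case handling is an addition of mine for completeness:

```lua
-- ROT13 the current selection and write it back into the document
function rot13Selection()
    local sel = editor:GetSelText()
    local tempString = ""
    for i = 1, string.len(sel) do
        local tempChar = string.sub(sel, i, i)
        if (tempChar >= "a" and tempChar <= "m") or (tempChar >= "A" and tempChar <= "M") then
            tempChar = string.char(string.byte(tempChar) + 13)
        elseif (tempChar >= "n" and tempChar <= "z") or (tempChar >= "N" and tempChar <= "Z") then
            tempChar = string.char(string.byte(tempChar) - 13)
        end
        tempString = tempString .. tempChar
    end
    editor:ReplaceSel(tempString)
end
```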

As previously, we use editor:GetSelText() to retrieve the selected text. We then loop through the resulting string: for every char between a-m we add 13 to its machine value. To do so we use string.byte(tempChar) to get the current char's machine code, then string.char(x + 13) to get the char whose machine code is 13 higher. For n-z we subtract 13. On most modern operating systems the machine code in question will be ASCII, in which case this arithmetic works nicely. Finally we concatenate each new char onto a temporary string and use editor:ReplaceSel(tempString) to write it back into the document.

The final example we will look at is a function that converts special characters into their XML/XHTML named entities. I wrote this primarily on account of being sick of going through the example code in this document by hand and changing < to &lt;, > to &gt; and so on. A list of special characters can be found at webmonkey. Note the use of [[ ]] to quote the ". Lua supports three types of quotes: [[ ]], "" and ''. [[ ]] can contain " and ' as well as further nested [[ ]] brackets.
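A sketch of such a function, shortened here to the most common characters (the full version would cover the complete entity list):

```lua
-- Replace special characters in the selection with XML/XHTML entities.
-- The ampersand must be converted first, so that it does not mangle
-- the entities inserted by the later substitutions.
function cleanChars()
    local sel = editor:GetSelText()
    sel = string.gsub(sel, "&", "&amp;")
    sel = string.gsub(sel, "<", "&lt;")
    sel = string.gsub(sel, ">", "&gt;")
    sel = string.gsub(sel, [["]], "&quot;")
    editor:ReplaceSel(sel)
end
```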

The functions listed here can be found in htmlBold.lua, rot13.lua and cleanChars.lua.

An implementation of a TrackBack listener in ASP.NET MVC.

Please note: this article was written for the original MVC CTP1 preview in 2007 and is horrifically out of date.

Trackbacks are a form of linkback, a way of notifying a site that your site has made a reference to it. Trackbacks are slowly being deprecated in favor of pingbacks. The spec can be found at Six Apart.

To provide trackback support we need to do 3 things:

  • Create a trackback Controller and Action
  • Set up an appropriate Route
  • Provide a trackback URL

Let's start by creating our Controller.
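A hypothetical sketch of the action follows. The LinkBack model class and the BuildSuccessXml()/BuildErrorXml() helpers are assumptions of mine, and the code uses post-CTP1 MVC conventions rather than the original CTP1 API:

```csharp
public class LinkBackController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult ProcessRequest(int trackBackID)
    {
        // Create the model and extract the routing data
        LinkBack linkBack = new LinkBack();
        linkBack.PostID = trackBackID;

        // Extract the trackback parameters from the HTTP request
        linkBack.Url = Request.Form["url"];             // required
        linkBack.Title = Request.Form["title"];         // optional
        linkBack.BlogName = Request.Form["blog_name"];  // optional
        linkBack.Excerpt = Request.Form["excerpt"];     // optional

        if (string.IsNullOrEmpty(linkBack.Url))
            return Content(BuildErrorXml("A url parameter is required."), "text/xml");

        linkBack.Save();
        return Content(BuildSuccessXml(), "text/xml");
    }
}
```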

The ProcessRequest action (part of the LinkBack controller) processes HTTP POSTs and saves the trackback data to the LinkBack model (in this case a LinkBack class, the implementation of which is not relevant here). The action first creates a new model, extracts the routing data, then extracts the trackback parameters from the HTTP request before saving them back to the model. If the action is successful a success response is sent, otherwise an error. The title, blog_name and excerpt parameters are all optional; the url parameter is required.

Successful trackback requests must return the following XML in their response:
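Per the Six Apart specification, the success response looks like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<response>
  <error>0</error>
</response>
```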

Errors are generated as follows:

An unsuccessful request must return the following XML:
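Again per the specification, with the message text being whatever error description you choose:

```xml
<?xml version="1.0" encoding="utf-8"?>
<response>
  <error>1</error>
  <message>The reason the request failed</message>
</response>
```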

Note that in writeXmlToStream() we ensure that the Byte Order Mark is disabled to ensure that it does not cause a parsing error.

Having created an appropriate Controller and Action we now need to hook them up to be routed correctly. Inside the Global.asax file, inside RegisterRoutes() we need to add the following:
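The route might look like the following sketch; the URL pattern is an assumption of mine, while the parameter names match those used in ProcessRequest():

```csharp
routes.MapRoute(
    "TrackBack",                          // route name
    "trackback/{trackBackID}",            // URL pattern (assumed)
    new { controller = "LinkBack", action = "ProcessRequest", trackBackID = "" }
);
```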

The first parameter is the name, the second is the pattern to match, and the third is the anonymous type that contains the route data. See MVC URL Routing for more details. Note that trackBackController, trackBackAction and trackBackID are used inside the ProcessRequest() action.

Our final task is to add a link to the trackback in our web page. This is a simple matter of generating an appropriate link; this page, for example, has a trackback of MVC TrackBack Implementation. The specification states that auto-discovery can be used: a block of XML is embedded in the page so that the trackback URL can be found programmatically. We generate the XML:

Which produces the following:
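The embedded auto-discovery block follows the shape given in the Six Apart specification; the URLs and title here are placeholders:

```xml
<!--
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/">
  <rdf:Description
      rdf:about="http://example.com/my-post"
      dc:identifier="http://example.com/my-post"
      dc:title="My Post"
      trackback:ping="http://example.com/trackback/42" />
</rdf:RDF>
-->
```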

Note that the XML is enclosed in HTML comments to help HTML validators out. So what is the workflow for this system? First, a user on another website creates a new post linking to a page on your site. Their site either queries your page and extracts the trackback URL, or is provided the trackback URL by the user. Their site sends an HTTP POST to your site; MVC receives the request and routes it to the ProcessRequest action in the LinkBack controller. ProcessRequest extracts the trackback data and saves it to the model.

Finally, test trackbacks can be sent using either Simpletracks or the Trackback Test Form at the RSS Blog (Internet Explorer only) to confirm your system is operating correctly.

Using msiexec to manipulate msi files

As I have mentioned in a previous post, an msi file generated by WiX can either be installed by double-clicking it (optionally interacting with a UI) or from the command line.

The Windows Installer tool msiexec.exe (located in the System32 folder) can be used to install, modify and uninstall msi packages. Today we are going to look at the common ways of using this tool.


Installation is relatively simple. Simply pass a /i flag along with the msi name:

msiexec /i myApp.msi

This will install the application but will also display the UI if there is one. To suppress the UI use either /QB or /QN. /QB will display a non interactive UI (progress bars etc) while /QN completely suppresses all UI.

Properties can also be supplied at run time; you get this functionality for free with WiX. It's a good idea to provide a list of the properties in a readme file supplied along with the msi file so that admins can easily do network installs. Properties are paired with their values as follows:

msiexec /i myApp.msi  PROPERTY1="propertyValue" PROPERTY2="anotherPropertyValue"


Uninstallation is just as easy as installation. Simply use:

msiexec /x myApp.msi

Once more this will display a UI which can be suppressed with /QB or /QN. Additionally if the UI is suppressed properties may need to be supplied.


msiexec offers a logging facility. My personal experience with it has been mixed. In many cases error messages are overly cryptic. To log everything use:

msiexec /i myApp.msi /l myLog.log

In general I have found logging all messages to be too verbose, especially for complicated WiX projects that install many components. You can use the following flags to reduce the messages to more manageable levels:

  • i - Status messages
  • w - Nonfatal warnings
  • e - All error messages
  • a - Start up of actions
  • r - Action-specific records
  • u - User requests
  • c - Initial UI parameters
  • m - Out-of-memory or fatal exit information
  • o - Out-of-disk-space messages
  • p - Terminal properties
  • v - Verbose output
  • x - Extra debugging information
  • + - Append to existing log file
  • ! - Flush each line to the log
  • * - Log all information, except for the v and x options

So you can call:

msiexec /i myApp.msi /le myLog.log

This will log only error messages.


Occasionally you will download an msi file and want to extract the files without installing them. This can be done easily with the /a (administrative install) flag while setting TARGETDIR:

msiexec /a myApp.msi TARGETDIR="c:\temp"

The files will be extracted to the specified directory. Note that I have experienced errors on occasion where TARGETDIR is set to a very long path.

To conclude, msiexec is a useful tool for manipulating msi files. As it allows log files to be generated, it is a key tool for any WiX developer.

Using WiX to create registry values

A common installation requirement is to set registry values. WiX makes this a simple task. Let's add a RegistryKey element with a nested RegistryValue element:
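A sketch of what such a fragment looks like inside a Component; the key path, value and GUID are placeholders, and the Action value is taken from the enumeration listed below:

```xml
<Component Id="RegistryEntries" Guid="PUT-GUID-HERE">
  <RegistryKey Root="HKCU"
               Key="Software\MyCompany\MyApp"
               Action="createKeyAndRemoveKeyOnUninstall">
    <RegistryValue Name="InstallPath" Type="string" Value="[INSTALLDIR]" />
  </RegistryKey>
</Component>
```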

The RegistryKey element defines the specific key that should be created. The Action attribute is an enumeration. The allowable values are specified below:

  • create
  • createKeyAndRemoveKeyOnUninstall
  • none

RegistryKey creates a key if it does not exist or edits it if it already does.

The root attribute is an enumeration for the registry root. Allowable values are:

  • HKMU
  • HKCR
  • HKCU
  • HKLM
  • HKU

The RegistryValue represents the data that is to be written to the parent RegistryKey. The Type attribute defines the registry data type while the Value attribute stores the actual data. The Name attribute stores the name of the value.

There are also RemoveRegistryValue and RemoveRegistryKey elements that can be used to remove keys and values in the same manner that they are added.

To conclude, WiX provides tools for adding, removing and editing registry values, and for resetting those values during uninstall.

SciTE/Lua Word Count Tutorial

In this tutorial we will be creating a Lua script that will count the number of characters, words and lines in the current document.

First up let's look at some of the functions and strategies we will be using.

editor.Length This property stores the number of chars in the current document. Be wary: on Windows a newline is stored as \r\n (carriage return, line feed), making up two chars, whereas Linux uses only \n and the classic Mac OS only \r.

editor.LineCount This property stores the number of lines in the document.

editor.CurrentPos This property stores the position of the caret (cursor). We'll be using it during the tutorial for completeness, but not in the final version.

editor:LineFromPosition([char position]) This is a function that takes one argument (a char position), and returns the line number of that position. If we give it editor.CurrentPos as an argument then it will return the current line number. Once more we will be using it for completeness, but not in the final version.

Right, first some declarations.
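Something like the following, using the variable names referenced in the rest of the tutorial:

```lua
-- Counters used throughout the script
local whiteSpace = 0    -- newline characters
local nonEmptyLine = 0  -- lines containing at least one alphanumeric char
local wordCount = 0
```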

Now we want to count the number of newline characters. We aren't directly interested in this value, but we will use it to find the number of standard characters. for m in editor:match("\n") do iterates through the entire document; each time the text is found, the code inside the do .. end block is executed. In this case, every time we find a newline we increment whiteSpace by 1.
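That step in code:

```lua
-- Count the newline characters in the document
for m in editor:match("\n") do
    whiteSpace = whiteSpace + 1
end
```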

Now we are going to calculate the number of non-empty lines, that is, lines that contain at least one alphanumeric character. We iterate through the document from the first line to the last, i.e. editor.LineCount, using a while loop: while itt < editor.LineCount do. We then get the current line with editor:GetLine(itt) and store it in the line variable. The string.find(line, '%w') function takes the line as input and searches for alphanumeric chars; '%w' is a search pattern that tells the function to look for any alphanumeric char. For more on patterns consult the Lua Reference Manual. If the line contains alphanumeric characters, we increment nonEmptyLine by one.
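In code this looks like:

```lua
-- Count lines containing at least one alphanumeric character
local itt = 0
while itt < editor.LineCount do
    local line = editor:GetLine(itt)
    if line then
        if string.find(line, '%w') then
            nonEmptyLine = nonEmptyLine + 1
        end
    end
    itt = itt + 1
end
```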

Now we want to calculate the number of words in our document. We'll use another while loop to iterate through the entire document. This loop is not strictly necessary (the logic could be placed in the previous loop) but for the purposes of this tutorial a second loop adds clarity; the Lua source file contains the merged loops. We use for word in string.gfind(line, "%w+") do, where gfind returns each word in the line. The definition of a word in this case is one or more alphanumeric chars (%w+) separated by a non-alphanumeric character, like a space. Each time gfind finds a word we increment wordCount by 1.
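The word-counting loop:

```lua
-- Count the words in the document
itt = 0
while itt < editor.LineCount do
    local line = editor:GetLine(itt)
    if line then
        for word in string.gfind(line, "%w+") do
            wordCount = wordCount + 1
        end
    end
    itt = itt + 1
end
```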

Note that in both the above loops we place the gfind and find calls inside an if line then block. These functions will complain if you pass nil as an argument, and the if block ensures that never happens. The Lua error message you would otherwise get is: "bad argument #1 to `gfind' (string expected, got nil)".

Finally all we need to do is print out the values we have calculated. In the Lua script file I only print the number of chars, words, lines and non-empty lines, but this expanded view should give a more informative idea of how SciTE and Lua work.
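The expanded report, using editor.CurrentPos and editor:LineFromPosition() for completeness, might look like:

```lua
-- Report everything to the output pane
print("Chars (including EOL): " .. editor.Length)
print("Chars (excluding EOL): " .. (editor.Length - whiteSpace))
print("Words: " .. wordCount)
print("Lines: " .. editor.LineCount)
print("Non-empty lines: " .. nonEmptyLine)
print("Current line: " .. editor:LineFromPosition(editor.CurrentPos))
```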

If you have any questions in regard to this tutorial, email me. A nicer version of the above code is available in this Lua file.

Canonical URLs with ASP.NET MVC

Please note: this article was written for the original MVC CTP1 preview in 2007 and is horrifically out of date.

The canonical URL of a site is the URL that search providers consider definitive. For example, the www and non-www versions of a domain are technically different URLs, so search providers may consider the two sites to be duplicate content.

This issue can be resolved in ASP.NET MVC by using a 301 Moved Permanently HTTP redirect. Using Application_BeginRequest in Global.asax we can catch URLs and redirect them as needed.
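A sketch of the idea, with error handling omitted:

```csharp
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Redirect non-www requests to the canonical www host with a 301
    if (!Request.Url.Host.StartsWith("www."))
    {
        string canonical = "http://www." + Request.Url.Host + Request.RawUrl;
        Response.StatusCode = 301;
        Response.Status = "301 Moved Permanently";
        Response.AddHeader("Location", canonical);
        Response.End();
    }
}
```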

In the above code, any URLs coming in that aren't prefixed with www will have www. prepended to the domain and be redirected to the appropriate URL. Of course the code can be extended to redirect in any arbitrary manner.

A gentle introduction to the Windows Installer XML toolset.

The WiX 3.0 release candidate has just been released, so let's have a look at it. Installing WiX sets up the compiler and linker along with the Visual Studio tools. Create a new WiX project using the new project type that has been installed.

A new wxs file will be created as below:
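The generated skeleton looks roughly like this (the GUID placeholders are for you to fill in; exact attributes vary slightly between WiX versions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="PUT-GUID-HERE" Name="WixTest" Language="1033"
           Version="1.0.0.0" Manufacturer="WixTest"
           UpgradeCode="PUT-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" />
    <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />
    <Directory Id="TARGETDIR" Name="SourceDir" />
    <Feature Id="ProductFeature" Title="WixTest" Level="1" />
  </Product>
</Wix>
```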

The Product section contains all the main elements for the installer; there can only be one Product section in a wxs file. The Directory section is where files and other components are defined for installation. WiX doesn't specify how to install files; rather, you define the structure you want and WiX installs files based on that structure.

We will now make some changes to the wxs file and add some dummy files and a shortcut.
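A sketch of the modified fragment; the file names and GUID are placeholders, and the file ids match those discussed below:

```xml
<Directory Id="TARGETDIR" Name="SourceDir">
  <Directory Id="ProgramFilesFolder">
    <Directory Id="INSTALLDIR" Name="WixTest">
      <Component Id="MainComponent" Guid="PUT-GUID-HERE">
        <File Id="testFile1" Name="test1.txt" Source="files\test1.txt">
          <Shortcut Id="startMenuShortcut" Directory="StartMenuFolder"
                    Name="WixTest" Advertise="yes" />
        </File>
        <File Id="testFile2" Name="test2.txt" Source="files\test2.txt" />
        <File Id="testFile3" Name="test3.txt" Source="files\test3.txt" />
        <File Id="testFile4" Name="test4.txt" Source="files\test4.txt" />
      </Component>
    </Directory>
  </Directory>
  <Directory Id="StartMenuFolder" />
</Directory>

<Feature Id="ProductFeature" Title="WixTest" Level="1">
  <ComponentRef Id="MainComponent" />
</Feature>
```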

This code installs four files and sets a shortcut to the first of them in the Start menu.

The <Directory Id="TARGETDIR" Name="SourceDir"> line specifies a virtual directory. It contains the directory elements that define the actual structure of the install. StartMenuFolder is defined so that the shortcut can reference it; ProgramFilesFolder is where the program will be installed. Both are system folder properties defined by the Windows Installer engine. The system folders available are:

  • AdminToolsFolder
  • AppDataFolder
  • CommonAppDataFolder
  • CommonFiles64Folder
  • CommonFilesFolder
  • DesktopFolder
  • FavoritesFolder
  • FontsFolder
  • LocalAppDataFolder
  • MyPicturesFolder
  • PersonalFolder
  • ProgramFiles64Folder
  • ProgramFilesFolder
  • ProgramMenuFolder
  • SendToFolder
  • StartMenuFolder
  • StartupFolder
  • System16Folder
  • System64Folder
  • SystemFolder
  • TempFolder
  • TemplateFolder
  • WindowsFolder
  • WindowsVolume

Components are small groups of files/registry settings/directories/shortcuts etc. They are the smallest unit that can be conditionally installed. Each Component must be specified in a Feature for it to be installed.

Inside the Component we have defined four Files. Each File has a name and a source; the name can differ from the source file, allowing WiX to rename at install time. File elements may contain Shortcut elements, which, as their name suggests, install shortcuts. Alternatively, rather than being nested inside a File, a Shortcut can have its Target attribute set, e.g. Target="[#testFile1]". This allows you to create a shortcut in one component that points to a file in another component.

If we build the wxs file now we end up with a file called WixTest.msi. When we run this msi file the four files will be installed to C:\Program Files\WixTest, and a shortcut will be created in the Start menu. If you look in Add/Remove Programs, WixTest will appear and can be uninstalled from there.

So now we have an installer that will silently install several files and a shortcut. WiX provides a simple way to quickly build an installer that gets all the benefits of the Windows Installer framework. Future posts will cover adding a UI and interacting with the registry, SQL Server and XML files.

URL Routing with the ASP.NET MVC Framework

Please note: this article was written for the original MVC CTP1 preview in 2007 and is horrifically out of date.

Unlike standard ASP.NET, MVC does not use a directory-and-file system for URLs. Rather, it maps URLs to controller classes in a RESTful manner. Routing is defined in the Global.asax file.

Route Mapping

Let's start by looking at the default route created when starting a new MVC project.
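The default route looks like this, matching the MapRoute() signature described below:

```csharp
routes.MapRoute(
    "Default",                     // route name
    "{controller}/{action}/{id}",  // URL pattern
    new { controller = "Home", action = "Index", id = "" }  // defaults
);
```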

RouteCollection.MapRoute() is used to map a route to a controller class. MapRoute() takes a name, a parametrized URL and a parameter object. The parametrized URL represents a pattern. Each URL parameter in the pattern is separated by a constant (in this case '/', though you could use any character or characters).

A URL coming in will be matched against the pattern. So a URL such as /category/purchase/203 will be matched as follows:

  • controller: category
  • action: purchase
  • id: 203

Controller and action are mapped by default to the controller class and action methods. Any other arbitrary URL parameter can be used, but must be managed manually in code. Inside the controllers the RouteData can be investigated as follows:
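For example, inside a controller action (the key name matches the URL parameter):

```csharp
// Pull the raw route value out of the RouteData dictionary
string id = (string)RouteData.Values["id"];
```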

Thus the id of '203' can be extracted inside the controller.

Ignoring URLs

Because the MVC application intercepts and routes all URLs, it is necessary to have a way to ignore certain URLs. RouteCollection.IgnoreRoute() can be used to achieve this.
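For example, the following line, placed in RegisterRoutes() before any MapRoute() calls, ignores aspx requests:

```csharp
routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
```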

This code tells MVC to ignore all requests for WebForms aspx pages.


When the MVC application begins, the RegisterRoutes() method is called by Application_Start() to build the route table. When a URL request is made, MVC iterates through the routes in the route table until a match is found. If no match is found, a 404 error is returned.