I Do Imaging has been completely rewritten in the past year. Here’s what is used in the creation and production of the site.

The site as a whole comprises several components: the site itself is a Rails application, while the blog and wiki are separate static websites.

Main Site
The I Do Imaging site is built on Rails. Not the newest technology in 2017, but still a solid performer with a massive base of stable extensions. It runs on an Nginx server on an AWS EC2 reserved virtual instance. This does mean still having to run a Linux server, as opposed to a fully-hosted service like Heroku or a container on AWS, but it has the advantage of providing a platform to run the PACS and other services.
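As a sketch of what this kind of setup involves, a minimal Nginx server block proxying to a Rails application server might look like the following. The upstream socket path, domain, and directory layout here are placeholder assumptions, not the site’s actual configuration:

```nginx
# Hypothetical Nginx server block for a Rails app behind a Unix-socket
# app server (e.g. Puma). Paths and domain are placeholders.
upstream rails_app {
  server unix:///var/www/app/shared/tmp/sockets/puma.sock;
}

server {
  listen 80;
  server_name example.com;
  root /var/www/app/current/public;

  # Serve precompiled static files directly; hand everything else to Rails.
  try_files $uri/index.html $uri @rails_app;

  location @rails_app {
    proxy_pass http://rails_app;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```

The main point of this arrangement is that Nginx answers requests for static assets itself and only forwards dynamic requests to the Rails process.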

Initially I tried using the web framework Foundation, just to be a little different from the default choice of Bootstrap. However I had difficulty with some of the customisations, and with the Foundation volume on Stack Overflow being only a few percent that of Bootstrap, I bowed to the inevitable and changed to Bootstrap.

Other gems used in the Rails app are CanCanCan and Devise for authorization and authentication, Kaminari for pagination, Carrierwave for uploading the static image files, the awesome Ransack for database searching, Haml for markup, and Capistrano for deployment to the production site. ActiveAdmin is used for all the back-end work.
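For illustration, the Gemfile for a stack like this might include entries along these lines. The grouping and the absence of version pins are assumptions for the sketch, not the app’s actual Gemfile:

```ruby
# Hypothetical Gemfile excerpt; not the site's actual dependency list.
source 'https://rubygems.org'

gem 'rails'
gem 'devise'       # authentication
gem 'cancancan'    # authorization
gem 'kaminari'     # pagination
gem 'carrierwave'  # image file uploads
gem 'ransack'      # database searching
gem 'haml-rails'   # Haml templating
gem 'activeadmin'  # back-end admin interface

group :development do
  gem 'capistrano', require: false  # deployment
end
```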

JavaScript libraries used are jQuery (of course), Slick for the image carousel, and star-rating-svg for the star ratings. Icons are from Font Awesome. Fonts are Google Fonts: currently Fjalla One for titles and Open Sans for text.

The database is a small PostgreSQL RDS instance at AWS. Having the database hosted, rather than installed and running on the local server, is one less thing to go wrong, and it seems faster. RDS is continually kept up to date and has all sorts of handy features. It also keeps the EC2 load to a minimum - the average load is very low, but when the server is busy assembling a page, it’s nice that it doesn’t also have to run dozens of database queries at the same time.
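Pointing Rails at a hosted database is just a matter of configuration. A hypothetical `config/database.yml` for an RDS-backed production environment might look like this, with the host and credentials pulled from environment variables (all names here are placeholders):

```yaml
# Hypothetical config/database.yml; host and credentials are placeholders.
production:
  adapter: postgresql
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: 5432
  database: app_production
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  pool: 5
```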

Code and text are written in Sublime Text, which after a couple of years I’m starting to get OK with (Emacs got me through the previous 20 years). Git repositories are now on AWS CodeCommit, though I’m not particularly wedded to it; it just seemed sensible as the rest of the site is all on AWS. Shared code is on GitHub, and I’ve had good experiences with GitLab.

The old site was a collection of Perl scripts dating back to the late 1990s, subsequently hacked into an unmaintainable (though still functioning) mess. It ran on an Apache server and stored everything in a MySQL database, all hosted on the local machine. The site started out on a small ISP, then moved to a shared server at Pair Networks, then to a Mac Mini at Mac Mini Colo in Las Vegas, and eventually to an EC2 instance at AWS. All were perfectly fine services, but technology has moved on.

Blog
This blog is a stand-alone site for now. Perhaps later it will be neatly incorporated into the main site.

It’s built on the Jekyll framework. Jekyll produces static HTML websites from source files written in a markup language. I chose Jekyll for its speed and simplicity, and (having used WordPress in the past) its security. Not much bad you can do with a static site.

The content is written in the Markdown markup language (actually kramdown, a superset of Markdown). The plain text source files are stored in a Git repo under version control.
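A post in this setup is a plain text file with a YAML front matter block that Jekyll reads, followed by the kramdown body. This is a hypothetical example (the layout name, title, and date are invented), showing two kramdown extensions beyond standard Markdown - definition lists and attribute lists:

```markdown
---
layout: post
title: "Example post"
date: 2017-06-01
---

Body text in kramdown, which extends standard Markdown with features
such as definition lists and attribute lists:

PACS
: A picture archiving and communication system.

This paragraph gets a CSS class attached to it.
{: .highlight}
```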

Several Jekyll plugins help things along. The s3_website gem delivers the site contents to S3. jekyll-picture-tag generates responsive images: pre-rendered versions of images are uploaded, and the image size appropriate for the viewer window is sent to the user.
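A hypothetical `s3_website.yml` for this kind of deployment might look like the following; the bucket name and region are placeholders, and the credentials are assumed to come from environment variables:

```yaml
# Hypothetical s3_website.yml; bucket, region, and credential
# sources are placeholders.
s3_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
s3_secret: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
s3_bucket: blog.example.com
s3_endpoint: us-east-1
gzip: true
```

On the image side, jekyll-picture-tag is used in templates as a Liquid tag, along the lines of `{% picture my-image.jpg %}`, which expands to a responsive `<picture>` element referencing the pre-rendered sizes.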

The site is hosted on AWS S3, a scalable object store service from Amazon Web Services. It’s optimized for web delivery and can scale to any size. As with any AWS service, the initial setup takes some learning, but it’s not difficult to use if you’re used to web services. I mostly use it through its API and command-line tools - there is a web console, but S3 is designed for programmatic access. In this case the S3 bucket is configured as a static website, so no web server is needed.
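As a rough sketch, configuring a bucket as a static website and pushing a built Jekyll site to it can be done with a few AWS CLI commands like these (the bucket name is a placeholder, and credentials are assumed to be configured already):

```shell
# Hypothetical commands; bucket name is a placeholder.
aws s3 mb s3://blog.example.com
aws s3 website s3://blog.example.com \
    --index-document index.html --error-document 404.html
aws s3 sync _site/ s3://blog.example.com --delete
```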

Delivery is through AWS CloudFront. CloudFront provides local caches all over the world for static content, in this case the website files from the S3 bucket. Once you’ve set it up, each user gets their content from the nearest cache, significantly speeding page load.

Wiki
The original I Do Imaging wiki runs on Tiki. I chose Tiki for its massive feature set and no-nonsense approach, and from having seen too many default MediaWiki installations. Since then, however, I’ve developed several sites in Semantic MediaWiki and have become a big fan. Semantic MediaWiki is an extension to MediaWiki that allows semantic data to be added to each page and subsequently used to automatically create content and links. In particular, ‘list of X’ pages are built from a few lines of code that assemble the list and keep it up to date. The I Do Imaging wiki will be moved to SMW to take advantage of this hugely powerful feature.
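To give a sense of how those ‘list of X’ pages work, here is a hypothetical SMW inline query using the `#ask` parser function; the category and property names are invented for illustration, not taken from the actual wiki:

```
{{#ask:
 [[Category:Program]] [[Has modality::PET]]
 |?Has license
 |?Has platform
 |sort=Modification date
 |order=desc
}}
```

A query like this renders as a table of every program page tagged with the given semantic properties, and the table updates itself as pages are added or changed.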

Demo Programs

Most of the programs that can readily be demonstrated on the site are image viewers deployed either from an HTTP server or a PACS.