A place in Wikipedia

For years I have been reading and writing in Wikipedia. Some time ago I created a page for my home village Kilcloon. Village, or parish or maybe census town? I revisited the Wikipedia article numerous times and was keen to expand it. During my research about the history of Kilcloon it became obvious that Kilcloon could refer to many things, the most common of which is the parish of Kilcloon as stated at the beginning of the Wikipedia article.

There are other definitions, such as the postal town of Kilcloon which applies to some, but not all, of the parish. For me, growing up near the centre of the parish, the postal town was synonymous with the parish name, but apparently this is not so for everybody. Do people still identify themselves as living in Kilcloon if they have a different postal town in their address? Nowadays people moving into an area do not automatically associate themselves with the parish they are in. Parishes and parish boundaries are managed by the Catholic Church, not the state.

More definitions

So how does the Irish state define Kilcloon? That depends on which authority you ask, and the answers are many! The postal service is run by An Post, and Kilcloon is the name of the postal town covering just part of the parish, as mentioned above. A direct question to An Post about which townlands were part of the Kilcloon postal town did not provide a very satisfactory answer, but all was not lost.

Ireland has recently introduced postal codes (eircodes), unique to each address, and these will replace the existing address system of townlands and postal towns. The two systems are aligned for the time being, since it is not yet mandatory to include an eircode when writing an address. It turns out that the areas covered by each of the eircode routing keys have been published on Google Maps. Kilcloon is now part of the A85 (Dunshaughlin) routing key and is in fact a very distinct appendage to this routing key, as seen on the map. This, I believe, provides a definitive answer to what Kilcloon is from the postal service's point of view.

Kilcloon also features in the Central Statistics Office (CSO) statistics as a “census town” or “settlement”. The Kilcloon settlement can be seen clearly on the CSO Small Area Population (SAP) map. This can be compared to Meath County Council’s definition of Kilcloon, which takes the form of four physical signposts centred around Ballynare Crossroads. This is the geographically smallest definition of Kilcloon and could be called simply the “village” of Kilcloon; it is much smaller than the census town and contains only a fraction of the people who consider themselves to be living in “Kilcloon”.

Some history

And so back to the parish of Kilcloon. Despite its history, the parish of Kilcloon is a modern parish that comprises several smaller medieval parishes, one of which was called “Kilclone”. My research shows that this medieval parish was often referred to as “Kilcloon”, and this spelling was used to name the modern parish. Every medieval parish was made up of townlands, one of which bore the same name as the parish; thus there is a townland of Kilclone in the medieval parish of Kilclone. While the medieval parish names have disappeared, the townlands prevail and are a central part of the postal address system mentioned above. The local post office, for instance, is called Kilclone Post Office precisely because it is in the townland of Kilclone.

The townlands themselves have also been transformed through the ages, and the modern townland boundaries differ to varying degrees from the boundaries as they were when the parishes were first formed. This is the subject of some amazing research, the results of which are available on townlands.ie. It also provided the inspiration to create the maps I would use to illustrate the multitude of definitions of the place known as Kilcloon.

Maps

Based on all of this research, there were five definitions of Kilcloon that I wanted to create maps for: the parish, the townland (Kilclone), the postal town, the census town and the village!

The townlands website uses the fantastic OpenStreetMap and the Leaflet JavaScript library to create maps of all of the Irish townlands, baronies and much more! The data is publicly available, and I could extract the coordinates from the web page to create unique maps for the Kilcloon Wikipedia article. These first maps showed which townlands the modern parish of Kilcloon includes, as well as which baronies the townlands were originally part of.
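As a rough illustration of the approach (the element ID, centre point and polygon coordinates below are placeholders, not the real townland data), a map like this can be rendered with Leaflet along these lines:

// Assumes leaflet.js and leaflet.css are already loaded on the page.
var map = L.map('kilcloon-map').setView([53.46, -6.57], 12); // centre and zoom level are illustrative
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);
// Each boundary is a polygon of [lat, lng] pairs extracted from the townlands.ie data.
var kilcloneTownland = [[53.47, -6.58], [53.46, -6.55], [53.45, -6.57]]; // placeholder coordinates
L.polygon(kilcloneTownland, { color: 'green' })
  .addTo(map)
  .bindPopup('Kilclone townland');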

Medieval parishes and their associated baronies

The Routing Key map data could also be downloaded and used to render the Kilcloon postal area. Leaflet could overlay the A85 routing key onto the parish to see how they lined up!
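Sketching the idea (again with placeholder coordinates, and reusing the map object from the snippet above), the routing key boundary can be overlaid as a GeoJSON layer:

// Assuming the downloaded A85 routing key boundary has been converted to GeoJSON.
// Note that GeoJSON coordinates are [lng, lat]; Leaflet handles the conversion.
var a85RoutingKey = {
  "type": "Feature",
  "properties": { "name": "A85 (Dunshaughlin)" },
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-6.60, 53.44], [-6.52, 53.44], [-6.52, 53.50], [-6.60, 53.50], [-6.60, 53.44]]] // placeholders
  }
};
L.geoJSON(a85RoutingKey, { style: { color: 'blue', fillOpacity: 0.1 } }).addTo(map);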

Leaflet naturally allows points of interest to be displayed, so I created several maps showing the most important features of the parish. Finally, the trickiest maps to create were the parish and census town maps. The Kilcloon census town boundary is shown on the CSO SAP map, but the underlying data is not published; still, I managed to extract it through visual inspection. The village is defined only by the physical signposts on the roads leading into Ballynare Crossroads, but I combined the positions of the signposts with property boundaries in the area to create a theoretical village boundary and added the coordinates to a Leaflet map.

Maintenance

Creating the maps required some straightforward JavaScript. I wanted the code to be open source, since the maps must be maintained along with the Wikipedia page, so I added a simple index page to the code base that renders each map in turn and checked everything into GitHub.

Links

Kilcloon on Wikipedia
Kilcloon maps on GitHub

Scalable Observer Pattern

When developers talk about publish-subscribe design patterns I immediately think of the newspaper analogy. As described in Head First Design Patterns:

  1. A newspaper goes into business and begins publishing newspapers.
  2. You subscribe to a particular publisher, and every time there’s a new edition it gets delivered to you. As long as you remain a subscriber you get new newspapers.
  3. You unsubscribe when you don’t want papers anymore, and they stop being delivered.
  4. While the publisher remains in business, people, hotels, airlines, and other businesses constantly subscribe and unsubscribe to the newspaper.

As a software design pattern, this is known as the Observer Pattern. In this pattern the publisher is called the Subject and the subscribers the Observers.
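A minimal sketch of the Observer Pattern in JavaScript might look like this (the class and method names are illustrative, not taken from any particular library):

class NewspaperPublisher {                      // the Subject
  constructor() { this.observers = []; }
  subscribe(observer)   { this.observers.push(observer); }
  unsubscribe(observer) { this.observers = this.observers.filter(o => o !== observer); }
  publishEdition(edition) {                     // notify every registered Observer directly
    this.observers.forEach(o => o.update(edition));
  }
}

class Subscriber {                              // an Observer
  constructor(name) { this.name = name; }
  update(edition) { console.log(this.name + ' received edition ' + edition); }
}

const publisher = new NewspaperPublisher();
const alice = new Subscriber('Alice');
publisher.subscribe(alice);
publisher.publishEdition(42);                   // "Alice received edition 42"
publisher.unsubscribe(alice);                   // no more deliveries

Note that the Subject holds direct references to every Observer and notifies them all itself, which is exactly where the scalability and coupling limitations discussed below come from.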

Comparison of Observer and Pub-Sub patterns

The Observer Pattern has some limitations, notably around scalability and the tight coupling between the Subject and its Observers. Unlike the physical world of newspapers, it is possible to build an improved subscription service that does scale and is loosely coupled. This improved pattern is called the Publish-Subscribe Pattern (or “pub-sub”).
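By contrast, a minimal (and equally illustrative) broker-based sketch looks like this; publishers and subscribers know only a topic name and the broker, never each other, and the broker could be a separate, scalable piece of infrastructure:

class Broker {
  constructor() { this.topics = {}; }
  subscribe(topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  }
  publish(topic, message) {
    (this.topics[topic] || []).forEach(handler => handler(message));
  }
}

const broker = new Broker();
broker.subscribe('news/sport', msg => console.log('Sports desk got: ' + msg));
broker.publish('news/sport', 'Match report');   // delivered via the broker, not directly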

Now you’re wondering: why name the pattern “publish-subscribe” when it does not behave like the newspaper analogy? This has caused a lot of consternation in my discussions with other system architects. Unless everyone is aware of the naming convention used for these patterns, it can happen that one person is talking about pub-sub while the other thinks they are talking about newspapers.

It would have been more intuitive to have called pub-sub something like the Scalable Observer Pattern.

Information model vs. data model

As a software developer or architect you will probably have had at least one discussion about the difference between information models and data models. Why do we want to make this distinction? In practice, drawing an information model is much the same as drawing a data model; both use the entity-relationship model for describing the world. ER diagrams are easily transformed into the SQL used to create the table structures in relational databases (MySQL, MSSQL, etc.). So when do we need to create information models? Let’s look at an example.

Startup

ACME Trading has started a business selling pencils to its customers. They have set up a very basic ordering system to handle orders and ship goods to their customers. They designed a data model to support the business software by examining the process (reality) of ordering goods and came up with the following:

The model has just two entities, one for the customer and one for the orders. These entities contain all the attributes needed to fulfil an order.

Growing

ACME is doing alright, but they want to grow the business faster, so they try doing some marketing. Again they build a simple application to support this business function. Examining the real world again, they design the following data model.

The model contains just two entities: the customer again, this time with different attributes, and an entity called Contact Method.

Boom times

The marketing strategy is a success and ACME soon have to expand their operations and need to develop their existing systems to better handle the increased volume of customers and orders for pencils.

But now it’s becoming a hassle to have to create the customer in two systems and wouldn’t it be great if all customers created in the ordering system were also added to the marketing system automatically?

This shouldn’t be a problem as long as the two systems have compatible data models. In other words, a customer entity in the ordering system can map to a customer entity in the marketing system. But if it’s not possible, which system do we change? The ordering system is business critical so we may not want to mess with that one too much. However, ACME are thinking long-term and realise that they need a more robust representation of reality, one that the company can grow into.

At this point they go back to their view of reality and create a model that is independent of any system, a reference model if you will. This is called an information model. Or as Wikipedia explains:

An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas.

The information model now serves two purposes. First, to aid future software design in creating robust data models, for example by supporting different customer address types. Second, to enforce a common terminology across the system landscape and in the documentation, e.g. a mobile phone number is to be called “Mobile number” when writing user stories and test cases, defining class names and methods, creating database tables, etc.

In order for the ordering system and the marketing system to exchange information, they can each map their data models to the information model. Both the data models and the information model are models of reality, so the differences really arise from how faithfully or granularly each data model represents it.
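As a sketch of what such a mapping could look like in code (the field names below are invented for illustration; they are not from any real ACME system):

// Each system maps its own customer record onto the shared information-model
// "Customer" entity before exchanging it with the other system.
function orderingCustomerToIM(record) {
  return { customerName: record.name, mobileNumber: record.mobilePhone };
}

function marketingCustomerToIM(record) {
  return { customerName: record.fullName, mobileNumber: record.contactMethod.mobile };
}

// A customer created in the ordering system can now be handed to the marketing
// system (and vice versa) via the common, information-model representation.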

An organisation can have many data models, usually one per system, but should only have one information model. Different parts of the organisation may only be interested in certain entities and relationships and may create an information model for the parts of reality they are interested in, but these partial information models are really all part of the same organisation-wide information model, even if a complete information model does not yet exist. In very large companies this may not be practical or desirable especially where autonomy between divisions is encouraged.

An information model is almost never implemented as-is in a system. Firstly, an information model will often contain more entities and attributes than any one system needs to implement. The reverse is also true: data models contain application-specific artefacts as well, such as the entities needed to handle many-to-many relationships. Secondly, data models are optimised for the specific system that uses them, meaning the developers have combined entities and attributes in ways that improve the performance of the database. Again, the information model should not constrain the implementation of the data model.

Going global

ACME have now decided to establish operations in Europe and have opened a sales and support office in Sweden. The company is now multilingual. While the reality of ordering, shipping and marketing goods is the same globally, each country uses its own language to describe it.
So when the Swedish sales office starts sending Requests for Change back to HQ, they are using words like Kund for Customer and Beställning for Order. They are referring to the same things, but it is hard for the Swedish salespeople to discuss the changes needed with the English-speaking developers.

The different language groups need to agree on a common terminology, and this can be neatly reflected in the information model (which also does not expose implementation details the way a data model does):

We can generalise and say that if English is the lingua franca of programming and programming languages, then in non-English-speaking countries there will always be a need to agree on the terminology in more than one language. Put another way, the information model provides a useful bridge between the technical and business sides of the organisation, which often use different languages. While there are many tools that can be used to create information models, unfortunately few support multiple languages in the same model.

Conclusion

The difference between information models (IMs) and data models (DMs) can be summarised as follows:

  • IMs provide a formal description of the organisation’s view of reality.
  • There should only be one IM per organisation, but there can be many DMs, usually one per system.
  • IMs define the terminology that should be used in documentation and software development.
  • DMs are optimised for the application that needs them. IMs help future-proof the solution but should not constrain the DM.
  • IMs can support multilingual organisations where the business units use a language other than English.

In future articles I hope to discuss how information models can be used in integration platforms to aid the definition of canonical data formats when performing data mapping, and to enforce data access controls. Another area where information models are very important is Master Data Management and the use of Data Standards.

Information models are also a visualisation of ubiquitous language which is an important part of Domain-driven design (DDD) and Behaviour Driven Development (BDD).

Business Process Modelling with BPMN

Having moved away from software development and design towards the management of IT processes and services, I have found that Business Process Modelling is more applicable than UML for describing the kinds of processes I am encountering. This is not surprising, as UML is more IT-centric and I needed more flexibility to capture the realities of how things work in real life. Yes, you can use a combination of UML diagrams to capture a real-world process, but this is not as intuitive to non-IT people, whom I now encounter more often.

My first attempts at modelling a business process used activity diagrams, sequence diagrams and use case models. The use case model defined all of the actors involved – both people and systems – and the sequence diagram showed the message flow between them.

Figure 1 – Use Case Model Diagram
Figure 2 – UML Activity Diagram

However, this was still too low-level and I needed something that would capture the “big picture”. After all, a high-level process (e.g. a sales process) can naturally be broken down into sub-processes. Each level of detail provides meaning to the different layers of the organization as appropriate. Of course, UML is still important for helping to formally describe the resulting IT systems implementation.

The nice thing about BPMN is that you can practice it all the time. With UML you generally want to be working on something IT related, but BPM can be applied to any process. For instance, how do people get something to eat for lunch? Do they eat out or have they brought a lunch box? This process can be described using BPMN.

Figure 3 – Process for eating lunch using BPMN

If BPM interests you and you are reading this article, the chances are that you are a pioneer in your organization. BPMN is an industry-standard notation, so if you are learning it, the quicker you learn the rules and follow best practice, the more rewarding the result will be. I highly recommend the following two books:

Spending time formally documenting a process may seem like a waste of time in some ways. In the real world, situations change and people adapt or take shortcuts and the process model may be out-of-date in no time, but your BPMN model should not try to capture every detail or variation. More importantly, modelling a process using BPMN is an excellent aid to understanding how a given process currently works (even if it is dysfunctional). This process analysis can be much more complete when using a comprehensive notation like BPMN – if it can’t be modelled in BPMN then there is probably some wrong assumption or something hidden in the process that needs to be investigated. BPMN gives you the confidence to pursue a process analysis to its proper conclusion.

I will finish with an example of a process model I was grappling with recently. Systems integration is often done using messaging, typical of a Service Oriented Architecture. Files are transferred from one server to another and then imported into the recipient software system. (As this is an IT-centric problem I could of course have used UML to model it.) File transfer is either push or pull; in this case it is push. The sender places files on the recipient’s file system. The receiver checks for new files every few seconds and, if it finds any, processes them.
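For concreteness, the receiver-side behaviour being modelled is roughly the following (a Node.js sketch; the directory path and polling interval are illustrative, not the actual integration code):

// Poll the inbox directory every few seconds; if new files have arrived,
// hand them on for processing while the polling itself keeps running.
const fs = require('fs');
const inbox = '/data/inbox';                    // illustrative path on the recipient's file system

setInterval(() => {
  fs.readdir(inbox, (err, files) => {
    if (err || files.length === 0) return;      // nothing new, keep polling
    files.forEach(processFile);                 // hand the files on - this is the escalation
  });
}, 5000);                                       // check every few seconds

function processFile(file) {
  console.log('Importing ' + file);             // import into the recipient software system
  // (a real implementation would move or delete the file once processed)
}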

Modelling system interaction in BPMN is called a collaboration. The collaboration is named after the process, in this case “File transfer”, and the lanes are named after the actors. The first thing I had to figure out was whether to use events to show that a message had arrived; at the same time, the recipient is busy polling the directory looking for files and will continue to do so as long as the service is available.

The sender and receiver are modelled as two separate processes. The sender sends the file using a message activity with a message flow symbol attached.

Figure 4 – File transfer using BPMN

The message is sent to the recipient’s polling subprocess which can generate a non-interrupting escalation event (ooh!) (the little arrow in the dotted circle) to trigger the next activity that processes the files. The subprocess is looped (the little circular arrow), so it will continue to run after the escalation occurs (forever in this case).

So how did I know to use a non-interrupting escalation? Well, the non-interrupting part just says that the event does not interrupt the subprocess flow, i.e. polling will continue when files have been found. The escalation part just means that the polling process has found files and needs someone else to deal with them, so it notifies the parent process (the escalation).

The diagrams were produced using Visio Professional 2016 which includes a function to validate the diagram according to BPMN 2.0 (“Check diagram”).

The agile way to migrate from Gmail to Office 365

I was recently working on a migration from Google Apps to Office 365 and was not happy with the big-bang approach to migrating email suggested by Microsoft. This is just too big a risk, since email is a critical service for communication within the company – and with customers. It also meant that everyone would start using Office 365 at the same time, which provided no opportunity to improve the migration process once it was set in motion.

So I worked out a way to do an agile migration, where users could be migrated in batches and the administrator could refine the migration process with each iteration Kaizen-style. I decided to publish a generalised procedure that hopefully could be of use to others looking for a better way. At the very least, it should provide some insights into how to plan your own Office 365 migration.

Thanks to Finn McCann for reviewing the document and providing valuable insights. Enjoy!

Retro games

So I bought a Raspberry Pi 3 and installed OpenELEC’s implementation of Kodi, the media centre application. This would finally replace the Windows Media Center (WMC) PC that I’d mothballed some time ago. Back then I had decided to convert my DVDs into ISOs in order to capture any extra stuff that came with the film, and (apart from WMC) Kodi was the only mainstream app I could find that could play back ISOs.

I have had a Synology DS412+ for a while now to back up files, photos and home videos, and I had also transferred my ISO collection to it. The Synology does have DLNA support and I can navigate the video/music libraries on it from my Samsung TV. However, the DS412+ with its four bays is more for business users, and has limited transcoding support compared to the Synology “Play” variants. But even the Play devices cannot compare to Kodi’s transcoding capabilities, and Synology cannot play back ISOs. Converting to some other container format seemed like the wrong way to solve the problem.

Kodi

Once the OpenELEC bundle was installed on the SanDisk 32GB micro SD card and the Pi was connected to the TV, Kodi started up automatically. Kodi can be navigated using the TV’s remote control thanks to HDMI-CEC, eliminating the need for an extra remote control. The setup was fairly straightforward; I needed to do the following:

  1. Make my Synology media available in Kodi. There are some default sources set up in Kodi that point to the local filesystem; I edited these to point to the relevant folders on the NAS using the Synology’s NFS service.
  2. Get Kodi to fit properly on the screen. On larger screens Kodi can be too big but there is an option to resize it to fit the screen called Zoom. I set this to -4% which was perfect.
  3. Display the time and date correctly. Firstly, Kodi needs to be synced with an NTP server so that it displays the correct time and date. Then I also wanted it to display both the time and the date in the correct format. I navigated to System -> OpenELEC -> Network and added the standard three NTP servers to the list of Timeservers:
0.pool.ntp.org
1.pool.ntp.org
2.pool.ntp.org

After that, everything was set up and ready to play.

Arcade console

The Raspberry Pi is a general-purpose computer, and a media centre is just one of the uses it can be put to. I had played old 80s arcade games on MAME about 15 years ago on my PC, and thought: why not use the Pi for that now?

There are a couple of ways to turn a Pi into an arcade game emulator. One is to use RetroPie, a dedicated arcade game Linux setup; however, that would mean replacing OpenELEC, which I didn’t want to do for obvious reasons. The other option is to use RetroArch, which plugs nicely into Kodi – in fact RetroPie is built on RetroArch. RetroArch works as a launcher for many different emulators, including MAME. The emulators are included in the RetroArch distribution, but the game ROMs themselves are not.

RetroArch

I installed RetroArch and tested the one game that was included (a Sega Genesis game), which worked fine. To start a game, go to Program -> Advanced Launcher -> Default and select an emulator and then a game to play. Before we go any further, I will explain the parts of the RetroArch filesystem that were most relevant to my setup:

/storage/emulators/RetroArch/config/retroarch.cfg

This is where all of the many configuration options of RetroArch are stored. There is also a GUI (called RGUI) which can be used to edit these settings. More on that later.

/storage/emulators/RetroArch/roms

This is where the ROMs go. In Kodi select the emulator you want to use to run the new game(s) and use the context menu to “Add items”. I use the option to scan for new items which are then automatically added to the list of games under the emulator. The scan will also remove items whose ROMs have been deleted.

/storage/.kodi/addons/emulator.tools.retroarch/lib/libretro

Here is the list of emulators that ship with RetroArch. Only some of them are preconfigured in the Kodi Advanced Launcher menu. Set up more of these emulators in Kodi as needed.

/storage/.kodi/addons/emulator.tools.retroarch/config/retroarch.cfg

Here is the reference configuration. This is a handy cheatsheet that explains what each setting in retroarch.cfg does, as well as showing you the default value.

First ROM: Hardhat

On the MAME website there are a few free ROMs to download. So I installed Hardhat in the ROMs directory using WinSCP. Then I added the game to “MAME / iMame4All” in Kodi and that ran fine too.

When RetroArch starts from Kodi, Kodi is replaced with the emulator and the TV remote control can no longer be used. So I plugged in a USB keyboard which was all I had available. RetroArch uses default bindings for keyboards out of the box. Here are the basics:

  • Right shift: Insert coins
  • Enter: Start game
  • Left/Right arrow keys: Move left/right
  • Space: Shoot

Once I could use the keyboard to play games, I started looking for a pair of SNES joypads to make the experience more authentic. These USB joypads were a small investment. Of course RetroArch can bind to all kinds of game controllers, but for most of the early arcade games the SNES joypads have sufficient functionality. I plugged the first one in and fired up Hardhat. RetroArch found the joypad but complained that the controller was “not configured”. What to do?

RetroArch does of course have a (very large) configuration file which includes the settings for binding game controllers. RetroArch also provides a GUI (called RGUI) for editing the same settings. There is no obvious way to start RGUI from Kodi, but I accidentally stumbled across it when I renamed the “hardhat.zip” ROM to “Hardhat.zip” (Linux is case-sensitive). When Kodi tried to launch the emulator using “hardhat.zip” it failed and RGUI started instead (which is the default behaviour, I assume).

In RGUI I used the keyboard to navigate the menus. Here are the most relevant bindings:

  • Up/Down arrow keys: Move up and down the menus
  • Left/Right arrow keys: Hop up and down the menus
  • x: Enter submenu or edit value
  • z: Leave submenu or stop editing
  • Esc: Quit RGUI

SNES controller

So I navigated to Settings->Input->Input User 1 binds and bound the joypad to each control field. There were 10 in all: Left, Right, Up, Down, A, B, X, Y, Start and Select.

Super Nintendo controller

My plan was to have only the joypads plugged into the Pi; I wanted to avoid having a keyboard lying around just so I could press “Esc” to return to Kodi. This is where the RetroArch hotkeys come in. The SNES controller includes the “L” and “R” shoulder buttons, which are not needed for most early arcade games. So I bound “L” as the RetroArch hotkey enabler (Settings->Input->Input Hotkey Binds->Enable hotkeys) and “R” as the “Quit RetroArch” hotkey (…->Input Hotkey Binds->Quit RetroArch). Now when I press “L” and “R” together the game exits and Kodi is restored. Bye bye keyboard.

input_enable_hotkey_btn = "4"
input_exit_emulator_btn = "5"
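(These are the entries that ended up in retroarch.cfg; on this pad, button 4 is the “L” shoulder button and button 5 is “R”.)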

When I plugged in the second SNES joypad RetroArch automatically applied the same bindings to it which was nice.

The last problem was that the games themselves were too big for the TV screen. The top and bottom were not visible, which meant I couldn’t see vital information like the score and the number of lives left. RetroArch solved this too: changing the setting Settings->Video->Integer Scale to ON fixed it.

Finally, I changed a setting on the Advanced Launcher to activate the “Launching Application” notification. This was so that I could see that Kodi was responding even if it took a few seconds for RetroArch to warm up.

iMame4All

MAME is built for PCs, which means it expects the user to be sitting in front of the keyboard and able to type in commands or use hotkeys. iMame4All is built on MAME (currently MAME version 0.37b), is aimed at mobile phones and other touchscreen platforms, and is therefore better suited to a media centre platform like Kodi.

RetroArch ships with MAME, iMame4All and lots of other emulators but only a handful are preconfigured in Kodi. The “MAME / iMame4All” menu item is preconfigured to run the iMame4All emulator but can be changed to run one of the MAME emulators included with RetroArch if desired.

MAME 0.37b is a very old version of MAME from 2000, so finding ROMs that work with that version of the emulator via the normal ROM websites was not going to be easy. So I searched for “mame 0.37b5 roms download” instead.

Once I had a few games up and running, I added a thumbnail to each game, usually a screenshot, to give a visual clue about what type of game it is. Of course you can add more metadata to the Kodi menu items to aid filtering if you have a lot of ROMs.

And that’s it. Just got to find the time to play now.

Big Data and the new EU regulations

On Tuesday, the new EU regulations regarding Big Data came into force. They affect all companies and authorities that register and store personal data, and replace the patchwork of rules and regulations that exists today:

On 4 May 2016, the official texts of the Regulation and the Directive have been published in the EU Official Journal in all the official languages. While the Regulation will enter into force on 24 May 2016, it shall apply from 25 May 2018. The Directive enters into force on 5 May 2016 and EU Member States have to transpose it into their national law by 6 May 2018. (Read more)

The major points of the legislation are (source: Wikipedia):

  1. Responsibility and accountability: controllers have much more responsibility for the proper management of personal data.
  2. Consent: Valid consent must be explicit for data collected. Consent for children under 16 must be given by the child’s parent or custodian.
  3. Data Protection Officer: A person with expert knowledge of data protection law and practices should assist the controller.
  4. Data breaches: Breaches must be reported to the Supervisory Authority as soon as the controller becomes aware of the data breach.
  5. Right to erasure: The data subject has the right to request erasure of personal data related to him.
  6. Data portability: A person shall be able to transfer their personal data from one electronic processing system to another.

Further reading: The EU Data Protection Reform and Big Data Factsheet (PDF)

With regard to exporting data outside the EU, the now-invalid Safe Harbour agreement has been replaced with the new EU-U.S. Privacy Shield, which promises to improve the handling of EU citizens’ data by U.S. authorities and companies.

Further reading: EU-U.S. Privacy Shield (PDF)

What is an IT Manager?

About five months ago, just before Christmas, I started looking for a new job. I was working as an IT Manager and find this type of role very enjoyable, with its combination of strategic and operative responsibilities. I have a very broad IT background from smaller companies (<100 employees) with a focus on process development and automation.

For the first month or so I focused purely on applying for IT Manager jobs. Later I broadened my horizons to include System Architect, Requirements Analyst and Project Manager roles. This was for several reasons. One was that there are only a few IT Manager roles advertised at any one time that match my salary expectations, travel time limits and required experience. Applying for other types of roles also meant more interview practice, but more importantly I would rather be in a job sooner, with the potential to advance, than later.

It turned out that “IT Manager” can mean many different things depending on the industry or just the company in question: from purely internal back-office IT management, to a more organisational development role, to product development. Managing the IT systems of a retail company does not exploit much of my experience from IT product management and delivery, for instance, and the salary was, accordingly, not that exciting. But then again, I was more interested in moving away from tech stuff and working with a larger organisation that wanted to leverage outsourced, offshored and cloud services. More “what can IT do for you” rather than “what I can do with IT”.

So I applied for all types of advanced IT roles like architect and analyst, usually in bigger companies where the work would be at least as challenging as being IT Manager in a smaller company (for which I also applied). Some sectors were just a no-go, it seemed: the banking industry requires financial systems experience, and government agencies want experience of working in the public sector and with public tenders. In short, the path to my next IT job was tough going. Every time I applied for jobs that were not an exact match, there were always other candidates better suited and I never got an interview.

A changing role

The problem as I see it is that the traditional IT Manager role is changing radically, mainly due to outsourcing and pay-as-you-go cloud services. IT Managers need fewer technical skills and more business knowledge nowadays. So either the IT Manager adapts or the role becomes diminished, as strategic IT decisions happen elsewhere. Either way this affects IT Manager salaries negatively.

At the same time there were oodles of consultancies looking for architects, analysts and project managers. So there is obviously work in this area, with the chance of a decent salary too.

Is there a connection between the two? I speculate that companies have access to more high-quality IT products than ever in a pay-as-you-go model, and that this requires more analysis, architecture and integration expertise than classical nuts-and-bolts IT department know-how. That’s not to say that the IT Manager couldn’t do the job; it just means that as the use of IT grows, it is not increasing the status of, and resources available to, the IT Manager, but rather the opposite.

Go with the flow

One of my principles as a system integrator and IT Manager has always been to phase myself out by helping the organisation to help itself. It is the job of IT to help the organisation become more efficient and to scale. Well, maybe that is just what happened, so about two months ago I started contacting consultancy companies.

(In Sweden the consultancy market is very well developed. This is because Sweden has very strict employment laws but companies still need or want to be flexible. In my home country, Ireland, a consultant was always a specialist, someone you called in to do a specific job. In Sweden consultants (“konsulter”) are mostly manpower, though there are of course still consultants who are specialists. More and more large companies now have frame agreements with consultancy firms to provide resources at pre-negotiated rates. And sometimes it is hard for companies to understand why they must pay more for consultants who actually are specialists.)

So, being a consultant will give me the chance to find out what the market for IT competence looks like nowadays, and to find out what my market worth is. Consultancy companies work in specific niches that are good to be familiar with. For instance, ework and ZeroChaos function as de facto recruitment departments for some companies and are very good at pressing down prices for consultants. Nox, on the other hand, works as an umbrella organisation for small consultancies or independent consultants and works for them instead.

Polar Cape

In the end I went to work for Polar Cape, who rang me up and made me feel right at home. It is a small company, but with colleagues with a similar level of IT industry experience. This is not as daring as being an independent consultant, but I feel I have a lot to learn about marketing and promoting myself, getting assignments and building my network. So now I have the chance to work on interesting IT projects in different industries while leveraging my broad technical experience and observing the rapid transformation of the IT landscape.

At a recent CIO Excellence conference, the final debate was about IT management’s role. My argument was that once you strip away all the back office IT management and maintenance activities, the company will still need IT governance regardless of whether IT services are provided internally or externally. Specifically, IT security will be a central part of IT governance in this future scenario and I am working towards a CISSP certification.

So what of the IT Manager? Well, as a consultant helping companies with their IT transformation processes, I will be in a position to see whether this role still exists in 5-10 years’ time. Interesting times indeed.

A year with OneDrive for Business

As a completely cloud-based organisation, we had no backup service in place; instead an ad hoc Dropbox solution was used to store files off-site. Each user simply created a free personal account, which usually had sufficient capacity. It was time to migrate to something better: OneDrive for Business, the final piece in Microsoft’s cloud puzzle that is Office 365.

We were really looking forward to rolling out OneDrive and we started with a few pilot users. Here are some of the use cases that came up as part of the general rollout.

Backup

At a minimum, OneDrive functions as a backup for files that otherwise only exist on employees’ computers and laptops. All business-related files were moved from My Documents or the Dropbox folder to the new OneDrive folder. The OneDrive application then automatically uploads the files to the user’s 1TB personal storage space in the corporate Office 365 environment. This storage is part of the SharePoint Online file system, and version control can be enabled to provide even more security in case of accidental changes or deletion.

Normally, OneDrive maintains synchronisation in the background, completely transparently to the user, and Dropbox works the same way. When users were using Dropbox they were working on local copies of their Word and Excel files; when a file was saved it was synced automatically to the cloud. (This is similar to classic version control systems like Subversion and CVS, except that there synchronisation (“checking in”) is done manually.)

However, Microsoft Office turns this principle on its head. When the user opens an Office file, such as a Word document located in the OneDrive folder, what really happens is that Word fetches the server (cloud) copy of the file and opens that instead. When the file is saved, it is saved to the cloud and then the OneDrive client updates the local copy. In other words, rather than letting OneDrive do its job, Office is also getting involved. Paul Thurrott’s blog describes the behaviour more exactly, along with how to work around some of the excesses.

Normally all of this does not concern the user, as it is completely transparent. Unfortunately, OneDrive for Business turned out to be not so robust, and there were frequent problems with files being stuck out of sync and other generic “server errors” that defied analysis. Our road warriors could be in a 3G brown spot, and the slow network connection could play havoc with the OneDrive/Office acrobatics described above. From an administration point of view this was difficult to troubleshoot until we understood what was happening with the files. But for most users, who had never worked with version-control-like systems before, it was almost impossible to explain.

These recurring problems and the lack of understanding of what was happening caused a real crisis of confidence with some users. File synchronisation just cannot fail to work or it is worse than useless. There were calls to roll back to Dropbox. I explained that Microsoft just has to fix these stability issues since OneDrive is an essential component in the Office 365 service suite, and also that we gain so much functionality with OneDrive, such as integration with SharePoint.

Syncing other libraries

Once we got going with synchronising personal files using OneDrive, it was possible to start leveraging all the other features. SharePoint document libraries can also be synchronised to the local computer – a big step up from the limited file management functions in the SharePoint library web view. However, there are some limitations on the libraries that could be synced which we managed to work around.

Publishing on SharePoint

Now that any SharePoint document library could be synchronised, users could also update local copies of documents that were embedded in a SharePoint web part or webpage. For example, the user could update a local copy of a synchronised Excel spreadsheet and the web part or webpage would immediately be updated with the new table values or graphs.

Project collaboration

SharePoint websites are a great way to manage projects and a document library is often used to store the project documents. With OneDrive, all of the team members can synchronise with a common document library for the project. That way when one member adds or updates a file, the local copy for all the other team members is updated as well. No more emailing documents! There is even a OneDrive feature to allow multiple users to simultaneously edit the same file if needed.

External collaboration

Customers and suppliers can also use OneDrive for Business to access document libraries in the corporate SharePoint Online. This is an extension of the project collaboration use case above where the project team comprises both employees and external users. Just be very sure to restrict the privileges of the external users to just the document library or at most the project sub-site.

Mobility

There is of course a mobile app that is handy for viewing your OneDrive files. However, it only shows files from the user’s personal OneDrive space and not any other SharePoint document libraries that were synced to the user’s computer.

Document templates

This is one of my favourite applications for OneDrive. The company has many document templates for various types of Word and PowerPoint documents. Normally in Office it is possible to configure a location for custom templates; this had to be a folder on the computer or a file share and this is still true for the Office 365 apps.

With OneDrive, it was possible to create a document library in SharePoint dedicated to storing document templates. This library was set up with a folder hierarchy for categorising the templates. Then every employee could simply synchronise the document library containing the templates. The local copy of the library could then be set as the custom template location in Office. So now, users can start Word or PowerPoint and select the correct template from within the application as normal.

Finally, using the SharePoint library permission settings, write access could be restricted to the template administrators, and all other employees were given read-only access which allowed them to use the templates but not to be able to change or delete them. Furthermore, when the administrators made updates to the templates, OneDrive would automatically sync the changes to every user’s local copy so that new documents would always be created using the latest templates.

Summary

OneDrive for Business is an essential tool for all Office 365 customers. It still has some robustness issues, but it delivers huge productivity benefits in project collaboration, web publishing and template portfolio management.

Thinking about migrating to SaaS

In my last article we looked at what factors influence a company’s choice of IaaS solution. A more advanced strategy would be to migrate to a SaaS with the potential for even bigger savings.

If a company is looking to upgrade an existing IT system, then some research should always be done into what cloud alternatives are available. More and more companies are offering cloud versions of their services, or someone else is offering an equivalent competing SaaS.

Compared to IaaS, SaaS takes management of the IT systems completely out of the hands of the IT department. So much so that any executive with purchasing power can start paying as they go for a cloud service. It requires no IT expertise to register for a Salesforce subscription, for instance.

However, this is a flawed approach for two reasons. The first is that it undermines whatever IT strategy the company may have and can lead to a proliferation of SaaS subscriptions that provide overlapping functionality and are difficult to integrate. What we are talking about is IT governance: while SaaS simplifies the business case for using a new IT system (i.e. zero CAPEX, aka “pay-as-you-go”), adoption still needs to be coordinated with the IT function (e.g. the CIO). The second aspect of IT governance is security. While any decent SaaS provides good security functionality, it still requires the application of a security posture that is in line with the organisation’s security policies and standards.

To rephrase, if a company uses only SaaS solutions for its IT needs, then IT governance is reduced to managing the SaaS portfolio (which functionality is available where and how they could or should be integrated) and maintaining the organisation’s IT security posture.

Subscribing to a new SaaS is easy, as it should be. The pay-as-you-go model simplifies testing a service, and the setup and roll-out of the service in the organisation is not under the same time pressure as one that has required a huge upfront CAPEX. However, it is a different proposition if the company needs to migrate from an existing legacy system.

There are two types of migration. One is from an on-premise product to the cloud version of the same product; this can happen because it is cheaper and/or the vendor has phased out the server version of the product in favour of the cloud version. The other type of migration is to a cloud service based on a different product.

Regardless of which type of migration is being performed, there are some challenges (I hesitate to say limitations, read on) with leaving a legacy server-based solution. When a company owns and manages its own copy of a product, it has complete control over how it is deployed and integrated into the corporate IT environment. The product may provide APIs (or not) and there is the possibility to customise the product to meet the organisation’s needs. But the same product delivered as a SaaS will not allow the same customisation. And here is where SaaS really comes into its own I believe.

SaaS is very attractive from a licensing and management point-of-view, provided the company does not want to do a lot of customisations. Vendors, however, understand that one-size-fits-all will limit the number of customers they will have, so vendors invest heavily in providing lots of configuration possibilities. In the extreme, they can provide layers of abstraction and deliver what is essentially a toolbox of functionality that the customer can use to build their equivalent proprietary functionality. Jira Cloud is an example of a service that provides enormous flexibility when building issue-tracking workflows for instance.

Vendors will provide this toolbox-like functionality as long as there is a market willing to pay for it. However, this may still not be enough for customers with very specific needs. But cloud vendors are not done yet. They can also provide APIs such as REST to allow the customer to fulfil its requirements by encapsulating the custom functionality in a separate service. Jira Cloud and Salesforce Force.com provide this type of integration for instance.
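As a rough sketch of what such an encapsulation might look like (the domain, credentials and issue key below are placeholders; I am assuming Jira Cloud's standard REST issue resource here):

// A company-specific service fetching an issue from Jira Cloud over REST and
// applying the proprietary logic outside the SaaS itself. Runs on Node.js 18+.
const auth = Buffer.from('user@example.com:API_TOKEN').toString('base64');

fetch('https://your-domain.atlassian.net/rest/api/2/issue/PROJ-123', {
  headers: { 'Authorization': 'Basic ' + auth, 'Accept': 'application/json' }
})
  .then(response => response.json())
  .then(issue => console.log(issue.fields.summary))    // hand over to the custom business logic here
  .catch(err => console.error('Jira call failed:', err));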

And so here it is, the customer can migrate to the cloud using a standardised, configurable SaaS with an integration to a company-specific service that meets all of their requirements. Now, suddenly you have cost visibility! On one side you have a standard SaaS that probably provides 95% of the functionality for a very reasonable monthly cost, and on the other side the customisations that deliver 5% of the functionality but probably cost more per month.

But the whole point is not for the customer to have to migrate to the cloud in this way. SaaS makes the real cost of maintaining proprietary solutions painfully visible to management, with the result that there is more incentive to analyse why they are needed in the first place. And guess what, the organisation can often adapt their business processes to behave in a more standard fashion; after all cloud services exist because they are a successful way for lots of companies to leverage IT in their businesses.

In summary, when an organisation has to choose between making a work-process change and making proprietary changes to an on-premise IT system, it is IT that most often gets the job. This creates a legacy that is dragged into the light when the company wants to leverage the benefits of very economical pay-as-you-go services. These customisations acquire a very real maintenance cost, and companies will only retain those that are essential. The IT department’s budget will start to correlate more with maintaining these customisations.

What does tomorrow’s IT department look like? I will explore this topic in another article.