Tuesday, April 21, 2015

Theory of Everything

These days I watch more movies on a plane than anywhere else. One of those I saw yesterday, on a trip to Boston, was The Theory of Everything. Stephen Hawking has been a hero of mine since before I started my physics degree at university. I read a number of his papers, and a fair amount about him, before I read his seminal book A Brief History of Time. Several times.

Despite moving more into computing over the years I've continued to track physics, with books and papers by the likes of Penrose, Hawking and others, but obviously time gets in the way, which, if you've read some of the same material, could be said to be quite ironic! Anyway, when I heard they were making a film of Hawking's life I was a little nervous and decided it probably wasn't going to be something I'd watch. However, stuck on a seven-hour flight with not much else to do (my laptop couldn't function in cattle class because the person in front had reclined), I decided to give it a go.

It's brilliant! Worth watching several times. The acting, the dialogue, the story, all come together. There's a sufficient mix of the human and the physics to make it appeal to a wide audience. A singular film! (Sorry, a really bad pun!)

I also managed to watch The Imitation Game, which I think is probably just as good but for other reasons.

Saturday, April 11, 2015

Transactions and microservices

Sometimes I think I have too many blogs to cover with articles. However, this time what I was writing about really did belong more in the JBoss Transactions team blog, so if you're interested in my thoughts on where transactions (ACID, compensation etc.) fit into the world of microservices, then check it out.

Thursday, April 09, 2015

Containerless Microservices?

A few weeks ago I wrote about containerless development within Java. I said at the time that I don't really believe there's ever a complete absence of a container in one form or another; your operating system is a container, for example. Then I wrote about containers and microservices. Different containers from those we're used to in the Java world, but definitely something more and more Java developers are becoming interested in. However, what I wanted to make clear here is that in no way are microservices (aka SOA) tied to (Linux) containers, whether Docker-based or some other variant.

Yes, those containers can make your life easier, particularly if you're in that DevOps mode and working in the Cloud. But just as not all software needs to run in the Cloud, neither do all services (whether micro or macro) need to be deployed into a (Linux) container. What I'm going to say will be in terms of Java, but it's really language agnostic: if you're looking to develop a service within Java then you already have great tools and third-party components to help you get there. The JVM is a wonderful container for all of your services, one that has had many years of skilled development put into it to make it reliable and performant. It also runs on pretty much every operating system you can mention today, and on hardware ranging from constrained devices such as the Raspberry Pi up to and including mainframes.

In the Java world we have a rich heritage of integration tools, such as Apache Camel and HornetQ. The community has built some of the best SOA tools and frameworks anywhere, so of course Java should be one of the top languages in which to develop microservices. If you're thinking about developing microservices in Java then you don't have to worry about using Linux containers: your unit of failure is the JVM. Start there and build upward, and don't throw in things (software components) that aren't strictly necessary. Remember what Einstein said: "Everything should be made as simple as possible, but not simpler."
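To make that concrete, here's a minimal sketch of the sort of thing I mean (the service and all of its names are purely illustrative, not from any framework): a tiny service built with nothing but the HTTP server that ships in the JDK, so the JVM really is the only container involved.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A minimal "containerless" microservice: plain JDK, no application
// server and no Linux container. The JVM is the unit of failure.
public class GreetingService {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/greet", exchange -> {
            byte[] body = "Hello from a plain JVM service\n".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // runs until the JVM exits
    }
}
```

Compile it and run it with nothing more than a JDK and you have a deployable service; anything beyond that is an optimisation, not a requirement.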

However, as I said earlier, the Linux container can help you once you've got your microservice(s) up and running. Developing locally on your laptop, say, is unlikely to be the place where you want to start with something like Docker, especially if you're just learning how to create microservices in the first place. (Keep the number of variables to a minimum.) But once you've got the service tested and working, Docker and its like represent a great way of packaging it up and deploying it elsewhere.

Monday, April 06, 2015

Microservices and state

In a previous entry I talked about the natural unit of failure for a microservice(s) being the container (Docker or some other implementation). I touched briefly on state, but only to say that we should assume stateless instances for the purposes of that article and that we'd come back to state later. Well, it's later, and I have a few thoughts on the topic. First, it's worth noting that if the container is the unit of failure, such that all services within an image fail together, then it can also be the unit of replication. Again, let's ignore durable state for now; it still makes sense to spin up multiple instances of the same image to handle load and provide fault tolerance (increased availability). In fact this is how cluster technologies such as Kubernetes work.

I've spent a lot of time over the years working in the areas of fault tolerance, replication and transactions; it doesn't matter whether we're talking about replicating objects, services, or containers, the principles are the same. This got me thinking that something I wrote over 22 years ago might have some applicability today. Back then we were looking at distributed objects and strongly consistent replication using transactions. Work on weakly consistent replication protocols was in its infancy; despite the fact that today everyone seems to be offering them in one form or another and trying to use them within their applications, their general applicability is not as wide as you might believe, and pushing the problem of resolving replica inconsistencies up to the application developer isn't the best thing to do! However, this is getting off topic a bit, and is perhaps something I'll come back to in a different article. For now let's assume that if there are replicas then their states will be kept in sync (that doesn't require transactions, but they're a good approach).

In order to support a wide range of replication strategies, ranging from primary-copy (passive) through to available copies (active), we created a system whereby the object methods (the code) were replicated separately from the object state. In this way the binaries representing the code were immutable, and when activated they'd read their state from elsewhere; in fact, because the methods could be replicated to a different degree from the state, it was possible for multiple replicated methods (servers) to read their state from an unreplicated object state (store). I won't spoil the plot too much, so the interested reader can take a look at the original material. However, I will add that there was a mechanism for naming and locating these various server and state replicas; we also investigated how you could place the replicas (dynamically) on various machines to obtain a desired level of availability, since availability isn't necessarily proportional to the number of replicas you've got.
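To put that in more modern dress, here's a rough sketch of the separation (all of the names are hypothetical, not from the original system): the server side is immutable code that only acquires state when activated, and the state lives in a store that can be replicated to a completely different degree, or not at all.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: immutable server code, state held elsewhere.
// Many server replicas can be activated against one (possibly
// unreplicated) state store.
interface StateStore {
    byte[] read(String objectId);          // load state on activation
    void write(String objectId, byte[] s); // save state back out
}

// Trivial stand-in for the "object store"; a real one might be
// replicated, durable, or both.
class InMemoryStateStore implements StateStore {
    private final Map<String, byte[]> states = new ConcurrentHashMap<>();
    public byte[] read(String objectId) { return states.get(objectId); }
    public void write(String objectId, byte[] s) { states.put(objectId, s); }
}

// The replicated "method" side: the binary is immutable, the state is not.
class ServerReplica {
    private final StateStore store;
    private byte[] state;

    ServerReplica(StateStore store) { this.store = store; }

    void activate(String objectId) {
        state = store.read(objectId);  // state comes from elsewhere
    }

    void passivate(String objectId) {
        store.write(objectId, state);  // push any changes back out
    }
}
```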

If you're familiar with Kubernetes (other clustering implementations are possible) then hopefully this sounds familiar. There's a strong equivalence between the components and approaches that Kubernetes uses and what we had in 1993; of course other groups and systems are also similar, and for good reason - there are some fundamental requirements that must be met. Let's get back to how this all fits in with microservices, if that wasn't already obvious. As before, I'll talk about Kubernetes, but if you're looking at some other implementation it should be possible to do a mental substitution.

Kubernetes assumes that the images it can spin up as replicas are immutable and identical, so it can pull an instance from any repository and place it on any node (machine) without having to worry about inconsistencies between replicas. Docker doesn't prevent you making changes to the state within a specific image, but doing so results in a different image instance. Therefore, if your microservice(s) within an image maintain their state locally (within the image), you would have to ensure that this new image instance was replicated to the repositories that something like Kubernetes has access to when it creates the clusters of your microservices. That's not an impossible task, of course, but it does present some challenges. First, you have to distribute the updated image amongst the repositories in a timely manner: you wouldn't want a strongly consistent cluster to be created from different versions of the image, because different versions mean different states and hence no consistency. Second, you have to ensure that the state changes which happen at each Docker instance, each resulting in a new image, stay in lock-step. One of the downsides of active replication is that it assumes determinism for the replica, i.e., given the same start state and the same set of messages in the same order, the same end state will result; that's not always possible if you have non-deterministic elements in your code, such as reading the time of day. There are a number of ways in which you can ensure consistency of state, but now we're not just talking about the state of your service: it's also got to include the entire Docker image.
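To make the determinism point concrete, here's a contrived sketch (mine, not anything from Docker or Kubernetes): two replicas applying the same messages in the same order only finish in the same state if nothing non-deterministic, like the clock, leaks into the computation.

```java
// Contrived sketch of why active replication assumes determinism.
class CounterReplica {
    private long state = 0;

    // Deterministic: same start state + same messages in the same
    // order => same end state on every replica.
    void applyDeterministic(long increment) {
        state += increment;
    }

    // Non-deterministic: each replica reads its own clock, so
    // identical message streams can still diverge.
    void applyNonDeterministic(long increment) {
        state += increment + (System.currentTimeMillis() % 2);
    }

    long state() { return state; }
}
```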

Therefore, overall it can be a lot simpler to factor the binary that implements your microservices' algorithms (aka the Docker image, or the 1993 object) away from the state, and to consider the images within which the microservices reside to be immutable. Any state changes that do occur must be saved (made durable) "off image" or be lost when the image is passivated; that could be fine for your services, of course, if there's no need for state durability. If you're using active replication then you still have to worry about determinism, but now we're only considering the state and not the entire Docker image binary too. The ways in which the states are kept consistent are covered by a range of protocols, which are well documented in the literature. Where the state is actually saved (the state store, object store, or whatever you want to call it) will depend upon your requirements for the microservice. There are the usual suspects, such as an RDBMS, the file system, a NoSQL store, or even highly available (replicated) in-memory data stores, which have no persistent backup and rely on the slim chance that a catastrophic failure won't wipe out all of the replicas (even persistent stores have a finite probability of failing and losing your data). And of course the RDBMS, file system etc. should themselves be replicated, or you'll be putting all of your eggs in the same basket!
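As a sketch of the "off image" rule (the class and the directory name are hypothetical, and a real implementation would also need to worry about fsync, transactions and replicating the store itself): state lives somewhere durable outside the image, so the image can remain immutable and disposable.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: state is saved "off image" so the image
// itself stays immutable. The directory here stands in for a
// mounted volume, an RDBMS, a NoSQL store, etc.
public class DurableStateStore {
    private final Path root;

    public DurableStateStore(Path root) { this.root = root; }

    public void write(String objectId, byte[] state) throws IOException {
        Files.write(root.resolve(objectId), state);
    }

    public byte[] read(String objectId) throws IOException {
        return Files.readAllBytes(root.resolve(objectId));
    }

    public static void main(String[] args) throws IOException {
        Path volume = Files.createDirectories(Paths.get("state-volume"));
        DurableStateStore store = new DurableStateStore(volume);
        store.write("order-42", "pending".getBytes());
        System.out.println(new String(store.read("order-42")));
    }
}
```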

One final note (for now): so far we've been making the implicit assumption that each Docker image that contains your microservices in a cluster is identical and immutable. What if we relaxed the identical aspect slightly and allowed different implementations of the same service, written by different teams and potentially in different languages? For simplicity we should assume that these implementations can all read and write the same state (though even that limitation could be relaxed with sufficient thought). Each microservice in an image could be performing the same task, written against the same algorithm, but with the hope that bugs or inaccuracies produced by one team were not replicated by the others, i.e., this is n-version programming. Because these Docker images contain microservices that can deal with each other's states, all we have to do is ensure that Kubernetes (for instance) spins up sufficient versions of these heterogeneous images to give us a desired level of availability in the event of coding errors. That shouldn't be too hard to do, since it's something research groups were doing back in the 1990's.
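As a toy illustration of that n-versioning idea (the interface and both classes are mine, purely for illustration): independently written implementations agree on the same contract and state format, so a cluster can run a mixture of them and, with luck, their bugs won't coincide.

```java
// Toy n-version programming sketch: two independently developed
// implementations of the same contract, sharing one state format.
interface TaxCalculator {
    long taxOn(long amountInPence);
}

// Team A's implementation.
class TaxCalculatorA implements TaxCalculator {
    public long taxOn(long amountInPence) {
        return amountInPence * 20 / 100;
    }
}

// Team B's implementation, written separately. Note the subtly
// different rounding: exactly the kind of divergence that running
// several versions side by side is meant to surface.
class TaxCalculatorB implements TaxCalculator {
    public long taxOn(long amountInPence) {
        return (amountInPence / 100) * 20;
    }
}
```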

Saturday, April 04, 2015

Microservices and the unit of failure

I've seen and heard people fixating on the "micro" bit of "microservices". Some people believe that a microservice should be no larger than "a few lines of code" or a "few megabytes", and there has been at least one discussion about nanoservices! I don't think we should fixate on the size, but rather on that old Unix adage from Doug McIlroy: "write programs that do one thing and do it well". Replace "programs" with "services". It doesn't matter if that takes 100 lines of code or 1000 (or more, or less).

As I've said several times, I think the principles behind microservices aren't that far removed from "traditional" SOA, but what is driving the former is a significant change in the way we develop and deploy applications, aka DevOps, or even NoOps if you're tracking Netflix and others. Hand in hand with these changes come new processes, tools, frameworks and other software components, many of which are rapidly becoming part of the microservices toolkit. In some ways it's good to see SOA evolve in this way and we need to make sure we don't forget all of the good practices that we've learnt over the years - but that's a different issue.

Anyway, chief amongst those tools is the rapidly evolving range of container technologies, such as Docker (other implementations are available, of course!). For simplicity I'll talk about Docker in the rest of this article, but if you're using something else then you should be able to do a global substitution and get the same result. Docker is great at creating stable deployment instances for pretty much anything (as long as it runs on Linux, at the moment). For instance, you can distribute your product or project as a Docker image and the user can be sure it'll work as you intended, because you went to the effort of ensuring that any third-party dependencies were taken care of at the point you built it; so even if that version of Foobar no longer exists in the world, if you had it and needed it when you built your image then that image will run just fine.

So it should be fairly obvious why container images, such as those based on Docker, make good deployment mechanisms for (micro) services. In just the same way as technologies such as OSGi did (and still do), you can package up your service and be sure it will run first time. But if you've ever looked at a Docker image you'll know that they're not exactly small; depending upon what's in them, they can range from hundreds of megabytes to gigabytes in size. Now of course if you're creating microservices and are focussing on the size of the service, then you could be worried about this. However, as I mentioned before, I don't think size is the right metric on which to base the conclusion of whether a service fits into the "microservice" category. Furthermore, you've got to realise that there's a lot more in that image than the service you created, which could in fact be only a few hundred lines of code: you've got an entire operating system, for a start!

Finally, there's one very important reason why, despite Docker images being rather large, you should still consider them for your (micro) service deployments: they make a great unit of failure. We rarely build and deploy a single service when creating applications. Typically an application will be built from a range of services, some built by different teams. These services will have differing levels of availability and reliability. They'll also have different levels of dependency between one another. Crucially, there will be groupings of services which should fail together, or at least, if one of them fails, the others may as well fail too, because they can't be useful to the application (clients or other services) until the failed service has recovered.

In previous decades, and even today, we've seen middleware systems that would automatically deploy related services on to the same machine and, where possible, into the same process instance, such that the failure of the process or machine would fail them as a unit. Furthermore, if you didn't know or understand these interdependencies a priori, some implementations could dynamically track them and migrate services closer to each other, maybe even on to the same machine eventually. That kind of dynamism is still useful in some environments, but with containers such as Docker you can now create those units of failure from the start. If you are building multiple microservices, or using them from other groups and organisations, within your applications or composite service(s), then do some thinking about how they are related; if they should fail as a unit, pull them together into a single image.

Note I haven't said anything about state here. Where is state stored? How does it remain consistent across failures? I'm assuming statelessness at the moment, so technologies such as Kubernetes can manage the failure and recovery of immutable (Docker) images. Once you inject state then some things may change, but let's cover that at another date and time.

Saturday, March 21, 2015

Six Years as JBoss CTO

It's coming up to six years since Sacha asked me to take over from him as CTO. It was a privilege and it remains so today. Although Arjuna was involved with JBoss for many years beforehand, and my good friend/colleague Bob Bickel was a key member of JBoss, I didn't officially join until 2005. I found it to be a great company and a lot more than I ever expected! A decade on, and now part of Red Hat, it remains an exciting place to work. I wake up each morning wondering what's in store for me as CTO and I still love it! Thanks Marc, Sacha and Bob!

Sunday, March 15, 2015

Retrospecting: Sun slowly setting

I remember writing this entry a few years back (2010), but it appears I forgot to hit submit. Therefore, in the interests of completeness I'm going to post it even though it's five years old!

--

So the deal is done and Sun Microsystems is no more, and I'm filled with mixed emotions. Let's ignore my role within Red Hat/JBoss for now, as there are certainly some emotions tied up in that. It's sad to see Sun go, and yet I'm pleased they haven't gone bust. I have a long history with Sun as a user and, through Arjuna, as a prospective partner. When I started my PhD back in the mid 1980's my first computer was a Whitechapel, the UK equivalent of the Sun 3/60 at the time. But within the Arjuna project the Sun workstation was king, and to have one was certainly a status symbol. I remember each year when the new Sun catalogue came out, or we had a new grant, we'd all look around for the next shiny new machine from Sun. We had 3/80s, SPARC, UltraSPARC and others all the way through the 1990's (moving from SunOS through to the Solaris years). The Sun workstation and associated software was what you aspired to get, either directly or when someone left the project! In fact the Arjuna project was renowned within the Computing Department for always having the latest and greatest Sun kit.

The lustre started to dim in the 1990's when Linus put out the first Linux distribution and we got Pentium 133s for a research grant into distributed/parallel computing (what today people might call Grid or Cloud). When a P133 running Linux was faster than the latest Sun we knew the writing was on the wall. By the end of the decade most Sun equipment we had was at least 5 years old and there were no signs of it being replaced by more Sun machines.

Throughout those years we were also in touch with Sun around a variety of topics. For instance, we talked with Jim Waldo about distributed systems and transactions, trying to persuade them not to develop their own transaction implementation for Jini but to use ours. We also got the very first Spring operating system drop, along with associated papers. We had invitations to speak at various Sun sites, as well as the usual job offers. Once again, in those days Sun was the cool place to be seen and to work.

But that all started to change as the hardware dominance waned. It was soon after Java came along, though I think that is coincidental. Although Solaris was still the best Unix variant around, Linux and NetBSD were good enough, at least in academia. Plus they were a heck of a lot cheaper and offered easier routes to do some interesting research and development, e.g., we started to look at reworking the Newcastle Connection in Linux, which would have been extremely difficult to do within Solaris.

Looking back I can safely say that I owe a lot to Sun. They were the best hardware and OS vendor in the late 1980's and early 1990's, providing me and others in our project with a great base on which to develop Arjuna and do our PhDs. In those days before Java came along they were already the de facto standard for academic research and development, at least within the international communities with which we worked. I came to X11, InterViews, network programming, C++ etc. all through Sun, and eventually Java of course. So as the Sun sinks slowly in the west I have to say thanks to Sun for 20 years of pleasant memories.

Chromebook?

I travel a lot, and until a couple of years ago I lugged around a 17" laptop. It was heavy, but it was also the only machine I used, and since I don't use an external monitor at home, just in the office, I needed the desktop space. But it was also almost unusable on a plane, especially if the person in front decided to recline their seat! Try coding when you can't see the entire screen!

In the past I'd tried having a second, smaller laptop for travelling, but that didn't work for me. Essentially the problem was one of synchronisation: making sure my files (docs, source, email etc.), which mainly resided on my 17" (main) machine, were copied across to the smaller (typically 13") machine and back again once I was home. Obviously it's possible to do - I did it for a couple of years. But it's a PITA, even when automated. So eventually I gave up and went back to having just one machine.

Enter the tablet age. I decided to try to use a tablet for doing things when on a plane, but still just have a single machine and travel with it. That worked, but it was still limiting, not least because I don't like typing quickly on a touch screen - too many mistakes. Then I considered getting a keyboard for the tablet and suddenly thought about a Chromebook, which was fortunate because I happened to have one that was languishing unused on a shelf.

Originally I wasn't so convinced about the Chromebook. I'd lived through the JavaStation era and we had one in the office - one of the very first ever in the UK. Despite our being heavy users of Java, the JavaStation idea never appealed to me, and it didn't take off despite a lot of marketing from Sun. There were a number of problems, not least of which were the lack of applications and the fact that the networks at the time really weren't up to the job. However, today things are very different - I use Google Docs a lot from my main development machine as well as from my phone.

Therefore, although the idea of using a Chromebook hadn't appealed initially, it started to grow on me. What also helped push me over the tipping point was that I grew less and less interested in my tablet(s). I don't use them for playing games or social media; typically they are (were) used for email or editing documents, both of which I can do far better (for me at least) on a Chromebook.

So I decided that I could probably get away with a Chromebook on a plane, especially since it has offline capabilities - not many planes have wifi yet. But then I began to think that perhaps for short trips I could get away with it for everything, i.e., leave the main machine at home. Despite the fact I don't get a lot of time to spend coding, I still do it as much as I can. But for short trips (two or three days) I tend to only be working on documents or reading email. If I didn't take my main machine I wouldn't be able to code, but it wouldn't be such a problem. I also wouldn't need to sync code or email between the Chromebook and my main machine.

Therefore, I decided to give it a go earlier this year. I'd used the Chromebook at home quite extensively but never away from there. This meant there were a few teething problems, such as ensuring the work VPN worked smoothly and installing a few more applications that allowed for offline editing or video watching instead of real-time streaming. It wasn't a perfect first effort, but I think it was successful enough for me to give it another go next time I travel and know I won't need to code. I've also pretty much replaced my tablets with the Chromebook.