Let me start by saying I’m a big fan of Docker, but a conversation with another developer who was advocating the benefits of unified development environments got me wondering whether they are always such a good idea.
On our project we have a four-developer team. Three of the devs work on a centralised VM (Debian wheezy), where they share common core libraries (Python, Django, Matplotlib, etc.) while all other code and dependencies are installed locally via Fabric and virtualenv. Every day they ssh into this box (some mount remote drives locally) and work in their separate spaces. The fourth, me, develops code locally on my PC (Linux Mint). We all use the same PostgreSQL instance to host our databases, but when running tests we can use either PostgreSQL or SQLite (mostly because PostgreSQL via the Django testing framework is incredibly slow when running isolated tests). Due to firewalls and other issues, when working on my laptop at home I create Docker images to host our dependencies.
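As a minimal sketch of how the PostgreSQL-or-SQLite split can work in a Django project: the settings file can switch the database backend when a test run is detected. The argv check and the database/host names below are assumptions for illustration, not our actual configuration; the `ENGINE` strings are Django's own backend paths.

```python
# settings.py fragment (hypothetical): use in-memory SQLite for test
# runs, PostgreSQL everywhere else. Names below are illustrative.
import sys

if 'test' in sys.argv:
    # Fast, file-free database for isolated test runs.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }
else:
    # The backend the development VM and production both use.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'projectdb',
            'HOST': 'localhost',
        }
    }
```

Because the switch lives in settings rather than in individual tests, `manage.py test` picks up SQLite automatically while normal runs stay on PostgreSQL.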
The case for homogeneity
The arguments for unified development environments are that it’s much easier for all developers to get up to speed: no unexpected issues can arise from inconsistencies between setups, so time is saved. In addition, you are presumably developing on infrastructure identical (or as near as possible) to that of your production environment, again minimising any potential issues that might arise when deploying your code. I’m not arguing against these points. For instance, in our setup the development VM is a near copy of our production VM, and since we are all using *nix systems our underlying platforms are all very similar.
A little bit of diversity is a good thing
However, we still have some diversity, and it’s in those gaps between different environments that we’ve discovered useful things. For instance, I’ll occasionally hit a version compatibility issue before the other devs because my OS is pulling down the latest and greatest packages. Although it’s frustrating when this happens, it lets us identify potential version conflicts early and adapt to them ahead of time (i.e. before they hit our prod machines).
Another area where this has proved useful is testing: using SQLite alongside PostgreSQL forces us to provide SQLite-compatible alternatives whenever we’ve used non-standard SQL. Without a second database to test against, it becomes all too easy for raw SQL to end up in your code (even when your application uses a database abstraction layer). When this happens it’s usually to leverage some db-specific performance query, which instantly ties you to a specific db vendor (and possibly even a specific version).
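As a sketch of the kind of divergence involved (the table, function name, and vendor strings here are illustrative assumptions, not code from our project): PostgreSQL offers `ILIKE` for case-insensitive matching, while SQLite’s plain `LIKE` is already case-insensitive for ASCII, so code that must run on both has to branch on the backend.

```python
import sqlite3

def case_insensitive_filter(vendor):
    """Return a WHERE clause for a case-insensitive name match.

    PostgreSQL has ILIKE; SQLite's plain LIKE is already
    case-insensitive for ASCII, so each backend gets its own SQL.
    """
    if vendor == 'postgresql':
        return "name ILIKE ?"
    return "name LIKE ?"

# Exercise the SQLite branch against an in-memory database.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('Alice')")
rows = conn.execute(
    "SELECT name FROM users WHERE " + case_insensitive_filter('sqlite'),
    ('alice',),
).fetchall()
```

Running the test suite against both backends is what catches the case where someone writes the `ILIKE` variant only and never notices until deployment.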
Does this extend to your IDE?
I think the heterogeneity argument applies equally to IDEs (although your mileage may vary depending on your programming language and IDE). Amongst our team we have used Eclipse+PyDev, Sublime, Emacs and KomodoIDE. I’m all for developer choice: letting individuals use whichever tools they feel most comfortable with (and thereby ensuring maximum productivity). One example of this working in our favour is that having an Emacs user on the team encourages the other devs to adhere to 80-character line limits. Using a CI server also ensures that your code can be built independently of any IDE, which in turn frees developers to choose any IDE they like.
I personally believe that a little diversity (in moderation) is a good thing to have in your development environments. Encountering inconsistencies or edge cases earlier in the development life-cycle can help you produce more adaptable, resilient code.