Quit Covering Up Your Toxic Hellstew with Docker

  • I read the opening paragraph and thought 'ah well, another boring didactic angry developer rant.' Just as I was about to close the tab, my eye caught the start of the second paragraph:

    This reminds me of my days in the Space Shuttle program.

    Which, to put it mildly, is something of a credibility boost. So I finished the article.

  • Yes, simplicity leads to understanding, and I don't understand why more people don't get this simple concept. I've dealt with codebases with such a horrendous build process that it doesn't matter what kind of sugar you sprinkle on top, because making any change is practically impossible. That complexity has to live somewhere; offloading it into a Dockerfile doesn't remove it, it just moves it. The problem at the end of the day comes down to the fact that most developers either don't understand enough to build proper build pipelines, or they are lazy, or they don't think complexity in the build pipeline is anything to worry about. Docker does not change those things.

  • This reminds me of the "Every dev should be senior" mindset that is all too common in this industry.

    I don't see how assuming every DevOps specialist is replaceable by an average engineer is a real solution.

  • The article is essentially discussing frustration at the hiding - by abstraction - of technical debt incurred in adopting poor software architecture and/or development process.

    Some potentially relevant quotes[1]:

    Zymurgy's First Law of Evolving Systems Dynamics: Once you open a can of worms, the only way to recan them is to use a larger can.

    Ducharm's Axiom: If you view your problem closely enough you will recognize yourself as part of the problem.

    The organization of the software and the organization of the software team will be congruent. (paraphrasing of Conway's Law)

    Separation of concerns ... a necessary consequence of loss of resolution due to scale ... a strategy for staying sane. (Mark Burgess, In Search of Certainty: The Science of Our Information Infrastructure, 2013)

    [1] Taken from my fortune clone https://github.com/globalcitizen/taoup

  • Actually the more complex it is, the more beneficial encapsulation can be.

    I think maybe I know what his actual problem is. It didn't detect a change to the requirements.txt, or it is _always_ detecting a change.

    In the Dockerfile you want to ADD your requirements.txt first, then RUN the pip install, and only then ADD the rest of the source with "ADD ."

    http://stackoverflow.com/questions/25305788/how-to-avoid-rei...

    Also check whether the build is being run with --no-cache, which would defeat layer caching entirely.
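
    The ordering described above can be sketched as follows (the base image and paths are illustrative, not taken from the article):

    ```dockerfile
    FROM python:2.7

    # Add only the dependency manifest first, so this layer (and the
    # pip install below) stays cached until requirements.txt changes.
    ADD requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt

    # Adding the rest of the source last means ordinary code edits
    # invalidate only this layer, not the dependency install above.
    ADD . /app
    ```

    If the cache still misbehaves, running the build once with --no-cache forces a clean rebuild of every layer.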

  • I have been working recently with a PHP application whose installation instructions are "Run this VirtualBox image inside your network, and without a proxy in front of it because we are not properly configured".

    This PHP application is not trivial, but also not very complex. Yet its developers do not provide this application as a PEAR package, and replied that this kind of thing is superfluous these days; VMs are simpler.

    People like these developers fail to understand that there is a lot more in a VM than just their software (from the kernel to all the exposed services), and that by distributing a VM they are also becoming the maintainers of a very complex set of dependencies. Not that they care: it took them two months to release a VM not vulnerable to Heartbleed. Let's see how long it takes before they release a VM not vulnerable to Shellshock.

  • This makes sense; I work in a pretty convoluted SharePoint environment. It is completely impossible to spin up a development environment without dozens of scripts and without knowing exactly which lists to manually create and what data must be present inside them.

    This means that new-hires are handed a cryptic and seriously out-of-date document with instructions on how to setup a proper VM environment.

    They check out their code.

    They deploy...but wait the deploy fails because of missing data, missing document types, missing lists...

    Open up SharePoint, enable some doc types not turned on by default, turn on some more features, add a list. Deploy again; no, wait, it died again, and now it's a different doc type and a different feature that the deployment doesn't turn on by default... etc.

    The end result is a mess that requires days to get up and running, not hours.

  • Very valuable concept regurgitated with flavour-of-the-year Docker as the focus.

    The article seems to portray Docker as the _cause_ of this anti-pattern without any sort of context for why Docker and not xyz, or why not your implementation of xyz.

  • I realize a lot of people use Docker for a lot of different reasons. But the technology seems more geared toward solving deployment concerns (and seeing as dotCloud, not Docker, is the PaaS company, that fits).

  • I don't know much about Docker. What is he saying it makes easier?

  • The problem is the carpenter, not the hammer. Please stop blaming the hammers in your headlines.

    The tool was used incorrectly and it resulted in badness. Bringing Docker into it is pointless. This would have happened with Salt, Fabric, Chef, Puppet, etc. with the same team.

  • Testing of environments is needed on / with docker as well...