<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>DevOps &#8212; StealthyCoder</title>
    <link>https://stealthycoder.writeas.com/tag:DevOps</link>
    <description>Making code ninjas out of everyone</description>
    <pubDate>Wed, 29 Apr 2026 05:23:10 +0000</pubDate>
    <item>
      <title>It depends...</title>
      <link>https://stealthycoder.writeas.com/it-depends?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[  You manage the dependencies or the dependencies will manage you.&#xA;&#xA;So I had a recent discussion on what dependency management is. It turns out there are a lot of dependencies, or in other words you are dependent on a lot of different things. First we defined what a dependency is. If you like it to a machine then any moving part big or small is a dependency. From the operating system to the version of a library to version of tools being used. As you see that is quite the number of moving parts. !--more--&#xA;&#xA;The goal therefore is to minimize moving parts. Fewer parts is less to manage. Now I am going into software development but I think you can apply the lesson to any discipline.&#xA;&#xA;Start with using kubernetes or Docker. Then have the entire Docker image be ready to run your application. This might entail getting one for cache, one for a database etc. All that is missing is the source code of the application you are developing and maybe some seed data. When that is the case you vastly reduced the moving parts already. Now all developers need is Docker tooling to get it up and running locally. Also when using Docker wherever you will run it you know it will work if it also works locally. &#xA;&#xA;This is all and well but what if you do not do this? What happens then? Let us make it so you do not use Docker but install everything locally. A certain runtime and some framework. In a few months a new developer joins. He has to install the runtime and framework but runs into a problem. The version the original developer has running is outdated and no longer available. So a new version is installed and everything still runs fine. Mind you we already have a schism. We add a new framework and a couple of libraries and in another few months two new developers start. At this time another runtime is introduced and some libraries and frameworks for this runtime. 
Now all developers have to install two runtimes and their respective libraries and frameworks. At this moment some developers cannot install the new runtime and the existing runtime has a new version again and that new version does not play well with what has been developed so far. You can quickly see this getting out of hand. &#xA;&#xA;Therefore if you had two Docker images each with one of the runtimes and its respective libraries and frameworks, all developers only need is Docker tooling. &#xA;&#xA;It might also occur that you never update and then you will become more and more insecure. Furthermore the time , effort and infeasibility all grow exponentially towards infinity. &#xA;&#xA;Try to keep up to date once a month. Do audits, upgrades and all of that goodness to not sway too far from the normal line. That way if correction is needed it is never too invasive. &#xA;&#xA;\ Certain mistakes and errors aside that can occur. &#xA;&#xA;\\ Certain special situations aside&#xA;&#xA;devops]]&gt;</description>
      <content:encoded><![CDATA[<blockquote><p>You manage the dependencies or the dependencies will manage you.</p></blockquote>

<p>I recently had a discussion about what dependency management actually is. It turns out there are a lot of dependencies, or in other words, you are dependent on a lot of different things. First we defined what a dependency is: if you liken your project to a machine, then any moving part, big or small, is a dependency. From the operating system to the version of a library to the version of the tools being used. As you can see, that is quite a number of moving parts. </p>

<p>The goal, therefore, is to minimize moving parts. Fewer parts means less to manage. I will use software development here, but I think you can apply the lesson to any discipline.</p>

<p>Start with Kubernetes or Docker. Then have the Docker images be entirely ready to run your application. This might entail one image for the cache, one for the database, and so on. All that is missing is the source code of the application you are developing and maybe some seed data. When that is the case, you have vastly reduced the moving parts already. Now all developers need is the Docker tooling to get everything up and running locally. Also, when using Docker, wherever you run it you know it will work if it also works locally. *</p>
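
<p>As a sketch of such a setup (the image names, versions and mount paths here are assumptions, not a prescription), a compose file could wire the application up with its cache and database:</p>

<pre><code class="language-yaml"># Illustrative only: adjust images, versions and paths to your stack.
version: "3"
services:
  app:
    build: .
    volumes:
      - ./src:/app/src      # only the source code comes from the host
    depends_on:
      - cache
      - db
  cache:
    image: redis:5-alpine
  db:
    image: postgres:11-alpine
    environment:
      POSTGRES_PASSWORD: example
</code></pre>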

<p>This is all well and good, but what if you do not do this? What happens then? Say you do not use Docker but install everything locally: a certain runtime and some framework. In a few months a new developer joins. He has to install the runtime and framework but runs into a problem: the version the original developer has running is outdated and no longer available. So a newer version is installed and everything still runs fine. Mind you, we already have a schism. We add a new framework and a couple of libraries, and in another few months two new developers start. At this time another runtime is introduced, along with some libraries and frameworks for it. Now all developers have to install two runtimes and their respective libraries and frameworks. At this point some developers cannot install the new runtime, and the existing runtime has a new version again that does not play well with what has been developed so far. You can quickly see this getting out of hand.</p>

<p>Therefore, if you had two Docker images, each with one of the runtimes and its respective libraries and frameworks, all developers would need is the Docker tooling.</p>

<p>It might also happen that you never update, and then you become more and more insecure. Furthermore, the time, effort and infeasibility of catching up all grow without bound.</p>

<p>Try to bring things up to date once a month. Do audits, upgrades and all of that goodness so you do not stray too far from the baseline. That way, if a correction is needed, it is never too invasive. **</p>

<p>* Setting aside certain mistakes and errors that can occur.</p>

<p>** Certain special situations aside</p>

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/it-depends</guid>
      <pubDate>Mon, 26 Nov 2018 17:54:50 +0000</pubDate>
    </item>
    <item>
      <title>Being wrapped feels good</title>
      <link>https://stealthycoder.writeas.com/being-wrapped-feels-good?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So why am I a big pro on using Docker or in larger projects/companies even Kubernetes? Well the biggest advantage I see and experience is that you take away dependencies from developers and non developers alike. Can you run Docker, if yes you can run the apps and the database and the frontend. What is left to be done by you? Write the code. That is it. !--more--&#xA;&#xA;My strategy is to have your own images. Built upon a solid base image. For example Alpine Linux Docker image with nodejs and npm installed. Maybe even pnpm and Parcel. Then next image might be Angular, React or Vue. It will install the CLI tooling and make sure the package.json stays fresh. You mount the source code at a consistent and predictable location inside the Docker container. This means any person can now run the frontend as long as they have Docker. &#xA;&#xA;This can be done for any language. PHP I am looking at you with your weird images. Also the community of PHP please embrace this practice. Ditch Vagrant.&#xA;&#xA;Why?!?! Well the following reason. Vagrant gives you a full Virtual Machine. This means that Virtual Machine needs to be the exact same as the server that will be running the code. Which it won&#39;t. It lives on your developers&#39; machines. So a MacOSX virtualized it slightly different from a Linux. You get errors. Also drifts in versions of applications. System tooling. You name it. &#xA;&#xA;With Docker it will be run the same everywhere. Added benefit is that Docker is way easier to run for non tech people. &#xA;&#xA;Is there a cool hip term for a non tech person? &#xA;&#xA;devops]]&gt;</description>
      <content:encoded><![CDATA[<p>So why am I such a big proponent of using Docker, or in larger projects/companies even Kubernetes? Well, the biggest advantage I see and experience is that you take away dependencies from developers and non-developers alike. Can you run Docker? If yes, you can run the apps and the database and the frontend. What is left for you to do? Write the code. <em>That is it.</em> </p>

<p>My strategy is to have your own images, built upon a solid base image. For example, an Alpine Linux Docker image with nodejs and npm installed. Maybe even pnpm and Parcel. The next image might be for Angular, React or Vue: it installs the CLI tooling and makes sure the <code>package.json</code> stays fresh. You mount the source code at a consistent and predictable location inside the Docker container. This means anyone can now run the frontend as long as they have Docker.</p>
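
<p>A minimal sketch of such a base image, assuming Alpine and the package names below (versions are illustrative, pin your own):</p>

<pre><code class="language-dockerfile"># Sketch: a base image with nodejs and npm from the Alpine repositories.
FROM alpine:3.9

RUN apk add --no-cache nodejs npm

# Optional extras mentioned above; parcel-bundler was the npm package
# name for Parcel 1.x.
RUN npm install --global pnpm parcel-bundler

# Consistent, predictable location to mount the source code into.
WORKDIR /app
</code></pre>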

<p>This can be done for any language. PHP, I am looking at you with your weird images. Also, PHP community, please embrace this practice. <strong>Ditch Vagrant.</strong></p>

<p>Why?!?! Well, for the following reason. Vagrant gives you a full Virtual Machine. That Virtual Machine needs to be exactly the same as the server that will be running the code. Which it won&#39;t be, because it lives on your developers&#39; machines. macOS virtualizes it slightly differently from Linux. You get errors. Also drift in application versions. System tooling. You name it.</p>

<p>With Docker it will run the same everywhere. An added benefit is that Docker is way easier to run for non-tech people.</p>

<p>Is there a cool, hip term for a non-tech person?</p>

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/being-wrapped-feels-good</guid>
      <pubDate>Tue, 18 Dec 2018 18:58:02 +0000</pubDate>
    </item>
    <item>
      <title>CRLF</title>
      <link>https://stealthycoder.writeas.com/crlf?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[What now?! Another abbreviation in this gotta type fast with fewest characters possible but still convey the most information possible? Well for us developers the chance is high that you had to search something with those four chars and an Operating System as the search term. !--more--&#xA;&#xA;Meaning&#xA;The 4 chars are actually two pairs of two chars. The first is CR and the second is LF. CR stands for Carriage Return and LF stands for Line Feed.&#xA;&#xA;Typewriter&#xA;Remember those old physical machines where you put whitened wood pulp in and using keys ran into that pulp some iron oxide on a ribbon. Yes, typewriters indeed. Wonderful mechanical machines that made life easier for humans. Now on those machines the sheet of paper would move side to side and up and down. The act of moving back to the beginning is called Carriage Return. The act of moving the sheet of paper is called Line Feeding. See how the terms translate well into modern computers and the terminals they are being run in. &#xA;&#xA;Side note&#xA;So on modern Operating Systems there exists a number of terminals or shells. Underwater they are called tty devices, or TeleTypewriter devices. That was because the first type of digital typewriters had a monitor and a typewriter interface. So even more evidence of history and how it all ties together.&#xA;&#xA;CRLF cont.&#xA;Ok sorry for the detour but it serves a purpose. Operating Systems interpret those characters differently. On Windows you have to do the same action as on a physical typewriter. Return the carriage to the beginning and feed in a new line. Therefore they use CRLF as the ending. On GNU/Linux and derivatives only feeding a line is necessary. Therefore they have LF only line endings. What does it mean then when you run CRLF file on GNU/Linux and co? Well the shell will do the following. 
Move the cursor back to the beginning and then execute that line and go to the next but there is no more line endings at the end there remains your command as a garbage input. Garbage in means garbage out. So the scripts usually fail. On Windows the same can happen but the system itself is very good at making sure there never is a singular LF if it can help it. &#xA;&#xA;Docker&#xA;What has Docker got to do with this I hear you ask? Ever noticed how most Dockerfile files have got all kinds of RUN commands and not just copy in a shell script and run that? I thought it was much simpler to maintain and read and so I did it. Thinking of how clever I was I suddenly had a team member with Windows and Docker did not work?!?! NANI?!?! After debugging the crap out of it I realized it was CRLF related. Solution is fairly simple though, not use shell scripts unless you copy them in already from a GNU/Linux system or download then from the web. So fellow human software engineers, when using Docker use RUN commands and maybe some staged builds. More on that in a future post/ future posts.&#xA;&#xA;#thoughts #devops ]]&gt;</description>
      <content:encoded><![CDATA[<p>What now?! Another abbreviation in this gotta-type-fast, convey-the-most-with-the-fewest-characters world? Well, for us developers the chances are high that you have had to search for those four chars together with an Operating System as the search term. </p>

<h2 id="meaning">Meaning</h2>

<p>The 4 chars are actually two pairs of two chars. The first is <em>CR</em> and the second is <em>LF</em>. <em>CR</em> stands for <em>Carriage Return</em> and <em>LF</em> stands for <em>Line Feed</em>.</p>

<h2 id="typewriter">Typewriter</h2>

<p>Remember those old physical machines where you put whitened wood pulp in, and keys pressed iron oxide from a ribbon into that pulp? Yes, typewriters indeed. Wonderful mechanical machines that made life easier for humans. On those machines the carriage would move the sheet of paper side to side and up and down. The act of moving back to the beginning of the line is called a Carriage Return. The act of advancing the sheet of paper is called a Line Feed. See how well the terms translate into modern computers and the terminals they run.</p>

<h3 id="side-note">Side note</h3>

<p>So on modern Operating Systems there exist a number of terminals or shells. Under the hood they are called tty devices, or TeleTypewriter devices. That is because the first digital typewriters had a monitor and a typewriter interface. So even more evidence of history and how it all ties together.</p>

<h2 id="crlf-cont">CRLF cont.</h2>

<p>Ok, sorry for the detour, but it serves a purpose. Operating Systems interpret those characters differently. On Windows you have to perform the same actions as on a physical typewriter: return the carriage to the beginning and feed in a new line. Therefore it uses CRLF as the line ending. On GNU/Linux and derivatives only feeding a line is necessary, so they use LF-only line endings. What does it mean, then, when you run a CRLF file on GNU/Linux and co? The shell splits on the LF, but the stray carriage returns remain part of your commands, and that garbage input produces garbage output. So the scripts usually fail. On Windows the same can happen, but the system itself is very good at making sure there never is a lone LF if it can help it.</p>
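
<p>To see the difference for yourself, here is a small shell sketch (the file names are made up) that writes a CRLF-terminated line and then strips the carriage returns, the way dos2unix would:</p>

<pre><code class="language-shell"># Write the same command once with a CRLF ending and once with LF only.
printf 'echo hello\r\n' &gt; crlf.txt
printf 'echo hello\n' &gt; lf.txt

# Delete every carriage-return byte.
tr -d '\r' &lt; crlf.txt &gt; fixed.txt

# The stripped file is now byte-for-byte identical to the LF version.
cmp -s fixed.txt lf.txt &amp;&amp; echo 'line endings now match'
</code></pre>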

<h2 id="docker">Docker</h2>

<p>What has Docker got to do with this, I hear you ask? Ever noticed how most Dockerfiles have all kinds of RUN commands instead of just copying in a shell script and running that? I thought a script was much simpler to maintain and read, so that is what I did. While I was thinking how clever I was, I suddenly had a team member on Windows for whom Docker did not work?!?! NANI?!?! After debugging the crap out of it I realized it was CRLF related. The solution is fairly simple though: do not use shell scripts unless you copy them in from a GNU/Linux system or download them from the web. So, fellow human software engineers, when using Docker use RUN commands and maybe some staged builds. More on that in a future post or posts.</p>
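
<p>In Dockerfile terms, the difference looks roughly like this (the script and package names are invented for illustration):</p>

<pre><code class="language-dockerfile"># Fragile on Windows checkouts: setup.sh may arrive with CRLF endings
# and then fail inside the Linux-based image.
#   COPY setup.sh /setup.sh
#   RUN /setup.sh

# Robust: inline the same steps as RUN instructions instead.
RUN apk add --no-cache curl git
</code></pre>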

<p><a href="https://stealthycoder.writeas.com/tag:thoughts" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">thoughts</span></a> <a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/crlf</guid>
      <pubDate>Thu, 10 Jan 2019 09:23:21 +0000</pubDate>
    </item>
    <item>
      <title>Windows Docker === headache</title>
      <link>https://stealthycoder.writeas.com/windows-docker-headache?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[To start off this adventure I reiterate that making your applications run in docker everywhere is a good thing. So the process of getting applications in docker is called dockerisation or to dockerise your apps. Or for you other English speakers dockerization and to dockerize. The reason for this is that in principle if you can run docker you can run the application. Now this is where Windows comes in. !--more--&#xA;&#xA;This is where Windows comes in. Windows being the host and it being so different from OSX and GNU/Linux systems it brings it&#39;s own quirks with it when running docker on Windows. One of the things I explored in a previous post where you copy in files from Windows into the docker container and the line endings being different resulting in not working scripts. Now I will explore a small problem running a front-end application in docker and Windows.&#xA;&#xA;Mounting&#xA;The concept of mounting  means you have a device or folder and you want to make it available at a specific location. In most cases you would mount a hard disk or maybe a USB stick to a location you can access the folder and inspect the contents. Well you can also mount files onto a location or a whole filesystem even. When using docker you can also mount directories or files on your host system into the docker application. This creates a nice two way street of manipulating files and directories. Whatever you change on the host system will be reflected inside the docker container and vice versa. &#xA;&#xA;Dependencies&#xA;In the front-end world of development dependencies are generally managed through a package.json and a package-lock.json, for you Yarn fanboys a yarn.lock file. Meaning when you run your installer of choice command to install the packages it will use these files as input and put everything in directory nodemodules . 
This is all fine and dandy but what I discovered is that if you mount your application&#39;s source code in the docker and therefore have docker create the nodemodules directory synced with Windows host it will go awry. &#xA;&#xA;Solution&#xA;What seems to be the problem is that Windows cannot handle the vast amount of writes and operations inside the nodemodules directory. Solution is to create a volume using docker volume create and then using that named volume as the mount for your nodemodules . Actual example for those who are using docker-compose :&#xA;&#xA;volumes:&#xA;  nodemodules:&#xA;&#xA;services:&#xA;  volumes:&#xA;     nodemodules:/var/www/html/node_modules&#xA;Also place this solution in a file called docker-compose.windows.yaml. Then you only have the override for Windows and not for all other users. &#xA;&#xA;devops]]&gt;</description>
      <content:encoded><![CDATA[<p>To start off this adventure, I reiterate that making your applications run in Docker everywhere is a good thing. The process of getting applications into Docker is called <em>dockerisation</em>, or to <em>dockerise</em> your apps. Or for you other English speakers, <em>dockerization</em> and to <em>dockerize</em>. The reason is that, in principle, if you can run Docker you can run the application. </p>

<p>This is where Windows comes in. Windows, being the host and being so different from OSX and GNU/Linux systems, brings its own quirks when running Docker. One of the things I explored in a previous post is copying files from Windows into the Docker container, where the different line endings result in scripts that do not work. Now I will explore a small problem running a front-end application in Docker on Windows.</p>

<h1 id="mounting">Mounting</h1>

<p>The concept of mounting means you have a device or folder and you want to make it available at a specific location. In most cases you would mount a hard disk or maybe a USB stick to a location where you can access the folder and inspect its contents. But you can also mount files onto a location, or even a whole filesystem. When using Docker you can likewise mount directories or files from your host system into the Docker container. This creates a nice two-way street for manipulating files and directories: whatever you change on the host system will be reflected inside the Docker container, and vice versa.</p>

<h1 id="dependencies">Dependencies</h1>

<p>In the front-end world, dependencies are generally managed through a <code>package.json</code> and a <code>package-lock.json</code>, or for you Yarn fanboys a <code>yarn.lock</code> file. When you run your installer of choice to <code>install</code> the packages, it uses these files as input and puts everything in the <code>node_modules</code> directory. This is all fine and dandy, but what I discovered is that if you mount your application&#39;s source code into Docker, and therefore have Docker create the <code>node_modules</code> directory synced with the Windows host, it will go awry.</p>

<h1 id="solution">Solution</h1>

<p>The problem seems to be that Windows cannot handle the vast number of writes and operations inside the <code>node_modules</code> directory. The solution is to create a <em>volume</em> using <code>docker volume create</code> and then use that <em>named volume</em> as the mount for your <code>node_modules</code>. An actual example for those who are using <code>docker-compose</code>:</p>

<pre><code class="language-yaml">volumes:
  node_modules:

services:
  frontend:   # use the service name from your own compose file
    volumes:
      - node_modules:/var/www/html/node_modules
</code></pre>

<p>Also, place this override in a file called <code>docker-compose.windows.yaml</code> and load it with <code>docker-compose -f docker-compose.yaml -f docker-compose.windows.yaml up</code>. Then you only have the override for Windows and not for all other users.</p>

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/windows-docker-headache</guid>
      <pubDate>Sun, 10 Mar 2019 15:09:13 +0000</pubDate>
    </item>
    <item>
      <title>Communication is key</title>
      <link>https://stealthycoder.writeas.com/communication-is-key?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In our lives we generally suck at communication or in active sense communicating our hopes, dreams, wishes, and desires to other human beings. This is because it is difficult to get communication right. Not to mention how to do it right with software. !--more--&#xA;&#xA;Social&#xA;You talk and speak and neither of those are communication. You can say words but that does not mean you communicate your idea or end result to the other party. The thing that makes it so tricky is two or more parties are involved each with their own central processing unit that has a language center that each has its own interpreter. That means the same message has different results. Take a simple sentence like : I am happy to write this article . People can read it and go well he sure is not happy to write that article or go he wrote this article not being happy first but the act of writing made him happy. Or maybe some other interpretation all together. &#xA;&#xA;The thing is that the receiver dictates how the message is received and thus how the communication will continue.&#xA;&#xA;In software this means pick a correct protocol and a correct message structure and potentially mime type. For protocol not really a lot of choice except UDP and TCP. Then message structure and mine type can be essential. The structure is not whether it is JSON or XML or something else. It is what is inside. How is it ordered. What does each property contain. Do both parties agree on it? That is essential in communication . &#xA;&#xA;Language&#xA;The language used can also be a barrier. The Lingua Franca these days is English everywhere. Yet that still poses problems because not everyone is a native speaker and even if you are you still run into dialects or other subtle nuances that allow for miscommunication. 
&#xA;I myself worked in a team with 9 different nationalities and that means 9 different cultures and 9 different people all trying to speak English and communicate their thoughts and ideas. Needless to say this caused some confusion and miscommunication was rampant throughout. We had to do even more communication and trying different ways of explaining the same concept because it had the best way of making sure everyone was talking and understanding the same thing. In this case it also helps to have the receiver echo back his understanding to you or the rest of the group to get a consensus. &#xA;&#xA;In this case for software you have to choose how you want to communicate the structure. So will it be JSON, XML or ProtoBuff? This will dictate what will be send and how to process it. I think in general don&#39;t go with XML anymore because it is so verbose. &#xA;&#xA;Context&#xA;The context matters greatly in communication. Are you communicating about Apple or the fruit? Are you trying to convince someone to go somewhere or just sharing information? The context dictates how the receivers will process the information. Filters will be applied. Information will be enhanced or enriched in certain cases even. &#xA;&#xA;In software you want to make sure the context is agreed upon as well. Is it a REST API ? Or is there a state in the backend or will there be something else going on? How will you authenticate to become a known entity ? How will you authorize accessing information? How do you forward this information? What do you let the end user know about itself ? All these questions you can define from the start and sometimes as you go along as well. &#xA;&#xA;In the end communication is hard and difficult and error prone but important to get right. Try in your conversations with people to make sure they definitely understand you. In software write down how you want to communicate aforehand and eliminate misconceptions. &#xA;&#xA;#thoughts #devops]]&gt;</description>
      <content:encoded><![CDATA[<p>In our lives we generally suck at communication, or in the active sense, at communicating our hopes, dreams, wishes, and desires to other human beings. This is because it is difficult to get communication right. Not to mention how to do it right with software. </p>

<h2 id="social">Social</h2>

<p>You talk and speak, and neither of those is communication. You can say words, but that does not mean you communicate your idea or intended result to the other party. What makes it so tricky is that two or more parties are involved, each with their own central processing unit that has a language center with its own interpreter. That means the same message has different results. Take a simple sentence like: <em>I am happy to write this article</em>. People can read it and think, well, he sure is not happy to write that article; or, he was not happy at first but the act of writing made him happy. Or maybe some other interpretation altogether.</p>

<p>The thing is that the receiver dictates how the message is received and thus how the communication will continue.</p>

<p>In software this means picking a correct protocol, a correct message structure and potentially a mime type. For the protocol there is not really a lot of choice beyond UDP and TCP. The message structure and mime type, however, can be essential. The structure is not whether it is JSON or XML or something else; it is what is inside. How is it ordered? What does each property contain? Do both parties agree on it? That is what is essential in communication.</p>
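
<p>As a toy illustration (every field here is invented), such an agreement could be written down as a documented message shape that both parties validate against:</p>

<pre><code class="language-json">{
  "type": "order.created",
  "version": 1,
  "payload": {
    "orderId": "a1b2c3",
    "amountCents": 1999,
    "currency": "EUR"
  }
}
</code></pre>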

<h2 id="language">Language</h2>

<p>The language used can also be a barrier. The lingua franca these days is English, everywhere. Yet that still poses problems, because not everyone is a native speaker, and even if you are, you still run into dialects and other subtle nuances that allow for miscommunication.
I myself worked in a team with 9 different nationalities, which means 9 different cultures and 9 different people all trying to speak English and communicate their thoughts and ideas. Needless to say, this caused confusion, and miscommunication was rampant. We had to communicate even more and try different ways of explaining the same concept, because that was the best way of making sure everyone was talking about and understanding the same thing. It also helps to have the receiver echo their understanding back to you or the rest of the group to reach a consensus.</p>

<p>In this case, for software, you have to choose how you want to communicate the structure. So will it be JSON, XML or Protobuf? This dictates what will be sent and how to process it. In general I would not go with XML anymore, because it is so verbose.</p>

<h2 id="context">Context</h2>

<p>The context matters greatly in communication. Are you communicating about Apple the company or apple the fruit? Are you trying to convince someone to go somewhere, or just sharing information? The context dictates how the receivers will process the information. Filters will be applied. In certain cases information will even be enhanced or enriched.</p>

<p>In software you want to make sure the context is agreed upon as well. Is it a REST API? Is there state in the backend, or will there be something else going on? How will you authenticate to become a known entity? How will you authorize access to information? How do you forward this information? What do you let end users know about themselves? All these questions you can define from the start, and sometimes as you go along as well.</p>

<p>In the end, communication is hard, difficult and error prone, but important to get right. In your conversations with people, try to make sure they definitely understand you. In software, write down beforehand how you want to communicate, and eliminate misconceptions.</p>

<p><a href="https://stealthycoder.writeas.com/tag:thoughts" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">thoughts</span></a> <a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/communication-is-key</guid>
      <pubDate>Mon, 29 Apr 2019 18:05:18 +0000</pubDate>
    </item>
    <item>
      <title>Get your Thanos on</title>
      <link>https://stealthycoder.writeas.com/get-your-thanos-on?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So once in a while you feel like decluttering your life or maybe just in technical areas remove unneeded apps and libraries and clean up projects of unwanted cruft. When talking about the cloud I generally have the most experience with AWS. !--more--&#xA;&#xA;On AWS we created an account for all our developers to play around and experiment. That goes excellent. People are learning and loving it and everyone knows more and more each time. Like children though they don&#39;t clean up after themselves so I made use of a little helper called aws-nuke. &#xA;&#xA;This little helper has an excellent sane default of not actually applying anything. It only does so called dry runs. You give it a config file where you specify what to keep and what to target and so on. Then however if you are satisfied with the results you let it loose and snap your fingers for added dramatical effect and gone are your AWS resources. So you safe costs and be a bad ass while doing it. Every time someone will hear your snap your fingers they will fear that their cloud environment is gone. &#xA;&#xA;One little thing though in the config file please specify for each region that you don&#39;t want to delete the default VPC. I forgot and all of a sudden all cloudformation and terraform actions did not work anymore. &#xA;&#xA;devops]]&gt;</description>
<content:encoded><![CDATA[<p>So once in a while you feel like decluttering your life, or at least the technical parts of it: removing unneeded apps and libraries and cleaning projects of unwanted cruft. When it comes to the cloud, I have the most experience with AWS. </p>

<p>On AWS we created an account for all our developers to play around and experiment in. It works excellently: people are learning, loving it, and know a little more each time. Like children, though, they don&#39;t clean up after themselves, so I made use of a little helper called <em>aws-nuke</em>.</p>

<p>This little helper has an excellent, sane default of not actually applying anything: it only does so-called dry runs. You give it a config file where you specify what to keep and what to target. Then, if you are satisfied with the results, you let it loose, snap your fingers for added dramatic effect, and gone are your AWS resources. So you save costs and get to be a badass while doing it. Every time someone hears you snap your fingers, they will fear their cloud environment is gone.</p>

<p>One little thing though: in the config file, please specify for each region that you don&#39;t want to delete the default VPC. I forgot, and all of a sudden none of our CloudFormation and Terraform actions worked anymore.</p>
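<p>A sketch of what such a config file could look like (account IDs are placeholders, and the resource and property names are from memory; double-check them against the aws-nuke documentation before trusting this):</p>

<pre><code class="language-yaml">regions:
  - eu-west-1
  - global

account-blocklist:
  - "111111111111"    # production account, never touch

accounts:
  "222222222222":     # the developer playground account
    filters:
      EC2VPC:
        - property: IsDefault
          value: "true"   # keep the default VPC in every region
</code></pre>

<p>Everything not matched by a filter is fair game for deletion, so keep the filter list short and explicit.</p>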

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/get-your-thanos-on</guid>
      <pubDate>Fri, 07 Jun 2019 06:35:54 +0000</pubDate>
    </item>
    <item>
      <title>Environmentally friendly programming</title>
      <link>https://stealthycoder.writeas.com/environmentally-friendly-programming?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So in line with previous post, there is something to be said about thinking about the environment when developing any piece of software. Most of the time we all take for granted the nearly limitless amount of resources to our disposal. Memory, CPU cycles and so on. The thing is that all costs energy to use. Energy has to come from somewhere and in general is not good for the environment, not to mention the heat you produce. !--more--&#xA;&#xA;When talking about deploying somewhere , let us say AWS. Then it makes sense to carefully think about autoscaling, initial size of the instances, and seriously think about reserved instances or even spot instances. To lower costs but also to lower used resources. &#xA;&#xA;When programming there is a balance between pre mature optimization and effective resource usage. There is the advice don&#39;t optimize too early because you might throw away the piece of code you wrote or you only use it once in the program and then it does not matter either. If you have the code in and even though you call it only once. You still need to make that pretty. Not only to avoid broken window theory but also to not use more resources than necessary. Even if it is once at bootup, you might start or boot the application a considerable amount of times before the lifetime of the software is over. All wasted energy if you optimised it. Large numbers apply of course in general but it is the same for response times. You want the shortest response time possible because the user is waiting too. All wasted energy. &#xA;&#xA;In a cynical way you could say the servers are running anyway so what does it matter. It matters because if we effectively consume resources we can achieve more with the same. Minimal amount of material and maximum results.&#xA;&#xA;devops]]&gt;</description>
<content:encoded><![CDATA[<p>So in line with the previous post, there is something to be said for thinking about the environment when developing any piece of software. Most of the time we take for granted the nearly limitless resources at our disposal: memory, CPU cycles and so on. The thing is, all of it costs energy to use. That energy has to come from somewhere, and generating it is in general not good for the environment, not to mention the heat you produce. </p>

<p>When deploying somewhere, let us say AWS, it makes sense to think carefully about autoscaling and the initial size of your instances, and to seriously consider reserved or even spot instances. Not only to lower costs, but also to lower the resources you use.</p>

<p>When programming there is a balance between premature optimization and effective resource usage. The usual advice is not to optimize too early, because you might throw the code away, or it only runs once and then it does not matter either. But even code that stays in and is called only once deserves to be made efficient. Not only to avoid the broken window theory, but also to not use more resources than necessary. Even if it runs once at bootup, you might boot the application a considerable number of times before the software&#39;s lifetime is over; all of that is wasted energy if you never optimised it. The law of large numbers applies here, and the same goes for response times: you want the shortest response time possible, because the user is waiting too. Otherwise it is all wasted energy.</p>

<p>In a cynical way you could say the servers are running anyway, so what does it matter? It matters because if we consume resources effectively, we can achieve more with the same: minimal material, maximum results.</p>

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/environmentally-friendly-programming</guid>
      <pubDate>Fri, 07 Jun 2019 07:03:01 +0000</pubDate>
    </item>
    <item>
      <title>Priorities should be explicit</title>
      <link>https://stealthycoder.writeas.com/priorities-should-be-explicit?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[You might think what will this post be about? Well it is inspired by a tool I used called terraform . What happened was we configured it a specific way and then it still did not work in one environment but it did in another environment. Same config file but different environments? It must be doing something else under the hood. What did we do to help? Crank up the log level to the max of course. !--more--&#xA;&#xA;Configuration should be higher priority&#xA;So turns out, the tool still read the environment variables first even though we explicitly configured another credentials provider. That is annoying to say the least. &#xA;&#xA;Please remember people, whenever you write a tool or application with a configuration. The configuration takes precedence. If there is nothing in the configuration you are welcome to use so called sane defaults . This means you take the option that makes the most sense or is the best for the developer. Examples are that certain security features are on by default instead of off . Features like encryption, or good random entropy or password protected. Other features can be the need for  an explicit parameter to not execute a so called dry run . In a dry run nothing happens but you see what would happen.&#xA;&#xA;Issues&#xA;It seems the terraform tool has a lot of issues. I do not mean it does not work well and every other second we run into a problem. I mean their GitHub page has 1500+ issues reported (at the time of writing 1560240633 UTC ). If anyone of HashiCorp reads this, first thank you for reading this and your name reminds me of BlastCorps for the N64 somehow, and secondly please manage those issues. It is very difficult to find stuff that will be fixed, won&#39;t be fixed, is being fixed and any other state in between. &#xA;&#xA;devops]]&gt;</description>
<content:encoded><![CDATA[<p>You might wonder what this post will be about. Well, it is inspired by a tool I used called <em>Terraform</em>. What happened was that we configured it a specific way, and it worked in one environment but not in another. Same config file, different environments? It must be doing something else under the hood. What did we do to find out? Crank up the log level to the max, of course. </p>

<h1 id="configuration-should-be-higher-priority">Configuration should be higher priority</h1>

<p>So it turns out the tool still read the environment variables first, even though we had explicitly configured another credentials provider. That is annoying, to say the least.</p>

<p>Please remember, people: whenever you write a tool or application with configuration, the configuration takes precedence. If there is nothing in the configuration, you are welcome to use so-called <em>sane defaults</em>: the option that makes the most sense or is safest for the developer. Examples are security features that are <strong>on</strong> by default instead of <strong>off</strong>, like encryption, good random entropy or password protection. Another is requiring an explicit parameter to skip a so-called <em>dry run</em>. In a <em>dry run</em> nothing happens, but you see what <em>would</em> happen.</p>
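<p>As a sketch, the precedence rule boils down to a lookup order: explicit configuration first, then the environment, then a sane default. The names below are made up for illustration:</p>

<pre><code class="language-javascript">// Explicit config beats environment variables, which beat the default.
function resolveSetting(configValue, envVarName, saneDefault) {
  if (configValue !== undefined) {
    return configValue;           // the user configured it: always wins
  }
  const fromEnv = process.env[envVarName];
  if (fromEnv !== undefined) {
    return fromEnv;               // nothing configured: fall back to env
  }
  return saneDefault;             // nothing anywhere: the sane default
}

// The dry run stays on unless someone explicitly turns it off.
process.env.APP_DRY_RUN = "false";
console.log(resolveSetting("true", "APP_DRY_RUN", "true"));     // "true"
console.log(resolveSetting(undefined, "APP_DRY_RUN", "true"));  // "false"
console.log(resolveSetting(undefined, "NOT_SET_VAR", "true"));  // "true"
</code></pre>

<p>Had Terraform resolved things in this order, our explicitly configured credentials provider would have won over the environment variables.</p>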

<h1 id="issues">Issues</h1>

<p>It seems the Terraform tool has a lot of issues. I do not mean it works badly and we run into a problem every other second; I mean their GitHub page has 1500+ issues reported (at the time of writing, 1560240633 UTC). If anyone at HashiCorp reads this: first, thank you for reading, and your name somehow reminds me of Blast Corps for the N64; secondly, please manage those issues. It is very difficult to tell what will be fixed, won&#39;t be fixed, is being fixed, or is in any state in between.</p>

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/priorities-should-be-explicit</guid>
      <pubDate>Tue, 11 Jun 2019 08:09:01 +0000</pubDate>
    </item>
    <item>
      <title>No, .I. don&#39;t overReact </title>
      <link>https://stealthycoder.writeas.com/no-i?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This post is about something that is trending and me and a close friend of mine who were discussing this were both baffled by this trend. We simply don&#39;t get it. There was once the web in all it&#39;s glory in the 90s. It was only HTML. That was it. There was a markup language that showed you what needed to be put where. Then there was some styling but not a lot really. !--more--&#xA;&#xA;So that had to change and in came CSS. Now you could better style the webpages. HTML for structure and CSS for look and feel. Perfect. However, this all was still static. It would be nice to add something to it that could dynamically update a page without reloading or just fetch data dynamically. In comes JavaScript. So we got the holy trifecta, HTML, CSS and JS. &#xA;&#xA;Nowadays though, they feel everything should be able to be done from within JS ?!?!?!?! That is what React feels like to us.&#xA;&#xA;render( {}, { todos, text} ) {&#xA;    return (&#xA;         form onSubmit={this.addTodo} action=&#34;javascript:&#34;&#xA;             input value={text} onInput={this.setText} /&#xA;            button type=&#34;submit&#34;Add/button&#xA;            ul&#xA;                { todos.map( todo =  ( li{todo.text}/li ) ) }&#xA;            /ul&#xA;        /form&#xA;  ); &#xA;}&#xA;&#xA;So we just wrote JS that contains HTML that contains JS that contains HTML. That has got to be the ugliest way to write HTML. &#xA;&#xA;In addition to all this, is the fact that React is a library . It is not a framework . Always a fun question to ask during interviews, what is the difference between them? Well a library is code you use, a framework uses your code. That might sound vague, but in essence this is correct. You would use code from a library that does something specific. For example query DuckDuckGo and return the first result. A framework is something that is built for you to place your specific code in but it handles everything else. 
Angular is a framework in contrast to React. &#xA;&#xA;What we like about Angular is it splits everything up nicely. You have HTML files for the structure, (S)CSS for the styling and then JS for making it dynamic. Although you only provide your code, Angular makes sure it works. There is a whole bunch of extra layers around it you don&#39;t see and know about in order to make it all work. That is beauty. &#xA;&#xA;In the similar line exists the Polymer framework, from Google. It is based around the idea of Web Components and uses the template and slot tags from the HTML5 spec. Definitely worth a look.&#xA;&#xA;The other issue that arises from using a library in contrast to a framework is that you have no clear same structure. You are left to choose your structure. However you want, as long it works it will be okay. In Angular you don&#39;t have that luxury as much and that is why we like it more. It is less gun-ho and cowboy style compared to React. You have to follow the structure they dictate in order to make it work. That means Angular codebases over all are predictable, uniform and that in turn makes it so Angular developers can switch projects relatively easily. In React we don&#39;t have that guarantee, so it is wildly different. Unless you had a disciplined dev, the odds are some corners were cut or some breaking away of the mold was done in order to make something work. &#xA;&#xA;So our advice, write HTML, CSS and JS separately and do not try to smoosh them all together. Along a similar vein I will leave you with this.&#xA;&#xA;#devlife #devops #thoughts]]&gt;</description>
<content:encoded><![CDATA[<p>This post is about a current trend that a close friend and I, while discussing it, were both baffled by. We simply don&#39;t get it. Once upon a time there was the web in all its 90s glory. It was only HTML. That was it: a markup language that described what needed to go where. There was some styling, but not a lot really. </p>

<p>That had to change, and in came CSS. Now you could style webpages properly: HTML for structure, CSS for look and feel. Perfect. However, this was all still static. It would be nice to add something that could dynamically update a page without reloading, or fetch data on the fly. In came JavaScript. So we got the holy trifecta: HTML, CSS and JS.</p>

<p>Nowadays, though, people feel everything should be doable from within JS?! That is what React feels like to us.</p>

<pre><code class="language-js">render(props, { todos, text }) {
    return (
        &lt;form onSubmit={this.addTodo} action=&#34;javascript:&#34;&gt;
            &lt;input value={text} onInput={this.setText} /&gt;
            &lt;button type=&#34;submit&#34;&gt;Add&lt;/button&gt;
            &lt;ul&gt;
                { todos.map( todo =&gt; &lt;li&gt;{todo.text}&lt;/li&gt; ) }
            &lt;/ul&gt;
        &lt;/form&gt;
    );
}
</code></pre>

<p>So we just wrote JS that contains HTML that contains JS that contains HTML. That has got to be the ugliest way to write HTML.</p>

<p>On top of all this, React is a <em>library</em>, not a <em>framework</em>. Always a fun interview question: what is the difference between the two? A library is code you use; a framework uses your code. That might sound vague, but in essence it is correct. You use code from a library that does something specific, for example querying DuckDuckGo and returning the first result. A framework is built for you to place your specific code in, while it handles everything else. Angular is a framework, in contrast to React.</p>

<p>What we like about Angular is that it splits everything up nicely: HTML files for structure, (S)CSS for styling, and JS for the dynamic parts. You only provide your code; Angular makes sure it runs. There is a whole bunch of extra layers around it that you never see or need to know about in order for it all to work. That is beauty.</p>

<p>Along similar lines exists the Polymer framework from Google. It is built around the idea of Web Components and uses the <code>&lt;template&gt;</code> and <code>&lt;slot&gt;</code> tags from the HTML5 spec. Definitely worth a look.</p>

<p>The other issue with using a library instead of a framework is that there is no single, shared structure: you are left to choose your own, and however you do it, as long as it works it is okay. In Angular you don&#39;t have that luxury as much, and that is why we like it more. It is less gung-ho and cowboy-style than React: you have to follow the structure it dictates to make things work. That means Angular codebases are overall predictable and uniform, which in turn means Angular developers can switch projects relatively easily. In React you don&#39;t have that guarantee, so codebases differ wildly. Unless the devs were disciplined, odds are some corners were cut or the mold was broken somewhere to make something work.</p>

<p>So our advice: write HTML, CSS and JS separately and do not try to smoosh them all together. In a similar vein, I will leave you with <a href="http://www.commitstrip.com/en/2019/03/15/css-css-everywhere/" rel="nofollow">this</a>.</p>

<p><a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a> <a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a> <a href="https://stealthycoder.writeas.com/tag:thoughts" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">thoughts</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/no-i</guid>
      <pubDate>Mon, 01 Jul 2019 21:17:27 +0000</pubDate>
    </item>
    <item>
      <title>Downloading the internet</title>
      <link>https://stealthycoder.writeas.com/downloading-the-internet?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Just a quick post that used to be about something only applicable to Java ecosystem. Nowadays it seems to be prevalent everywhere. What it is I am referring to, well we dubbed it “Downloading the internet”. It is all the dependencies and libraries and the dependencies and libraries of those dependencies and libraries ad nauseum when you are making an application that every developer seems to put in there. !--more--&#xA;&#xA;Across the board we can look at the frontend eco system with their NPM that just gets the world in packages. Then there is Maven in Java that just seems to go on endlessly, or SBT in Scala. I recently discovered another one, in the golang eco system. Wow... It downloaded so many dependencies and they are all globally stored as well. Fantastic. No virtual environments for golang, oh no. Of course you can set the GOPATH environment variable to sort of install them in different locations, but really come on.&#xA;&#xA;I feel Ruby and PHP also suffer from this but depends on the framework of choice. For example in PHP using Phalcon it is a compiled module and therefore you install it once and that is it. No extra stuff needed. Then in C# you usually have everything in the .NET framework already. In Python the standard library is so vast it most likely is in there and otherwise you have a couple of specific dependencies that are fairly contained.&#xA;&#xA;devops]]&gt;</description>
<content:encoded><![CDATA[<p>Just a quick post about something that used to apply only to the Java ecosystem but nowadays seems prevalent everywhere. What am I referring to? Well, we dubbed it “downloading the internet”: all the dependencies and libraries, and the dependencies and libraries of those dependencies and libraries, ad nauseam, that every developer seems to put into every application. </p>

<p>Across the board: the frontend ecosystem with NPM just pulls in the world in packages. Then there is Maven in Java, which seems to go on endlessly, or SBT in Scala. I recently discovered another one, in the Go ecosystem. Wow... It downloaded so many dependencies, and they are all stored globally as well. Fantastic. No virtual environments for Go, oh no. Of course you can set the GOPATH environment variable to sort of install them in different locations, but really, come on.</p>
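<p>For what it is worth, the GOPATH workaround boils down to pointing the variable at a per-project directory before fetching anything, a poor man&#39;s virtualenv (the directory name below is just an example, and newer Go versions with modules handle this differently):</p>

<pre><code class="language-shell"># Keep this project's Go dependencies out of the shared, global tree.
export GOPATH="$PWD/.gopath"
mkdir -p "$GOPATH"
# From here on, 'go get' in this shell downloads under .gopath/
# instead of the global GOPATH.
</code></pre>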

<p>I feel Ruby and PHP also suffer from this, though it depends on the framework of choice. In PHP with Phalcon, for example, the framework is a compiled extension: you install it once and that is it, no extra packages needed. In C# you usually have everything in the .NET framework already. In Python the standard library is so vast that what you need is most likely in there, and otherwise you have a couple of specific dependencies that stay fairly contained.</p>

<p><a href="https://stealthycoder.writeas.com/tag:devops" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devops</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/downloading-the-internet</guid>
      <pubDate>Thu, 11 Jul 2019 09:16:19 +0000</pubDate>
    </item>
  </channel>
</rss>