<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>100DaysToOffload &#8212; StealthyCoder</title>
    <link>https://stealthycoder.writeas.com/tag:100DaysToOffload</link>
    <description>Making code ninjas out of everyone</description>
    <pubDate>Wed, 29 Apr 2026 03:03:51 +0000</pubDate>
    <item>
      <title>What a fantastic ride</title>
      <link>https://stealthycoder.writeas.com/what-a-fantastic-ride?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I was put in charge to write some extra tests in our framework covering our Docker registry endpoints. We created a framework around Locust. !--more-- Naturally, I first started learning the framework and it is pretty nice to use. You create simple classes that house the flow of the requests you want to execute and you call them one by one, stating what should be the success and what the failure. &#xA;&#xA;The goal was to prove that our Python code was horrible and needed to be switched to Golang implementation ASAP. &#xA;&#xA;Rough start&#xA;&#xA;I could not even start our Docker container because some Werkzeug, Flask and Locust combo made it all not work anymore. So I first had to untangle that mess. It turned out that some older code of Flask used a specific call to a function that does not exist anymore at the provided location. &#xA;&#xA;  For all who are interested, the actual error is: cannot import name &#39;BaseResponse&#39; from &#39;werkzeug.wrappers&#39;. &#xA;&#xA;After that initial rough start I started out by mapping out how Docker actually works. What happens when you do docker pull or docker login for example. Turns out they are all just HTTP calls to a REST API backend. That returns some data and with that data we continue onward to more calls until all the data has been gotten for docker to actually create the containers and/or images. &#xA;&#xA;Docker API&#xA;&#xA;I wrote the simple Python PoC code for the DockerAPI client. In principle I can use that code now to get any image I want, but I do not use that. So I included that whole code into our Locust framework to make sure the test was always set up correctly, and that subsequent images were deleted. &#xA;&#xA;I ran into the second problem. Images cannot be removed from a Docker registry by default. You have to enable that feature. So when I started talking to our devs, they said just forget about it. 
Do the setup code once, so that the image exists that is needed in a shared test repository and continue onward.&#xA;&#xA;So I scrapped the entire code out of Locust and began again anew.&#xA;&#xA;Concurrent issues&#xA;&#xA;Next up came the problem that I wanted to only get credentials once, and share those credentials amongst the distributed workers. There were several hosts that each run multiple workers as separate processes. I wanted on each of those hosts, that one call got made by the worker process to get a nice token and share that token in memory with the rest. In comes SharedMemory by Python. I got it to finally work after fixing all my concurrent race condition failures, where there was no synchronised flag to make sure everybody waited on each other. &#xA;&#xA;After all that code, the rest of the devs were that is cool but we do not need it. Just call the login at each start of the flow, it will create credentials and if there are already credentials it will return them. So again rip out the code written so far and start anew.&#xA;&#xA;Finally on my way&#xA;&#xA;Started again with the new flow and now I got a nice test up and running. The data returned was a bit baffling and showed our Python code was not the bottleneck as previously thought, hoped for. It was our Nginx reverse proxy setup. Split out the nginx pods unto their own and updated the config to handle things a bit better and give more threads and workers basically. &#xA;&#xA;Okay after fixing the nginx pods, then ran the tests again and it turned out the Docker registry itself was a bottleneck. It just could not cope in terms of memory usage and freeing up stuff. We use Redis as our cache layer and Google Cloud Storage (GCS) as our bucket to actually store the data retrieved by Docker registry. &#xA;&#xA;Breathing room&#xA;&#xA;We had so much services jammed together in one pod it was crazy. 
Basically one pod ran the following services:&#xA;Nginx&#xA;Redis&#xA;Docker registry&#xA;Flask app&#xA;&#xA;Then there was no control of what pod ran what services, so it could be that one pod ran 2 nginx + redis + docker registry + flask, whilst another ran only docker registry + flask. So back to basics, get one service per pod and split off the docker registry unto it&#39;s own node. Now we have the following setup:&#xA;&#xA;Nodepool A:&#xA;   Three nodes&#xA;        running one pod each of Nginx&#xA;        running one pod each of Flask&#xA;        running one pod total of Redis&#xA;Nodepool B:&#xA;   Three nodes&#xA;        running one pod each of Docker Registry&#xA;&#xA;Now that that was cleared up, the next bottleneck seemed to be Redis? So I turned to Redis and it&#39;s config and found out we actually were not using the staging Redis but the production Redis ?!?!?!&#xA;&#xA;I quickly changed that config and made it so there was one node running a dedicated Redis. So the full situation becomes:&#xA;&#xA;Nodepool A:&#xA;   Three nodes&#xA;        running one pod each of Nginx&#xA;        running one pod each of Flask&#xA;Nodepool B:&#xA;   Three nodes&#xA;        running one pod each of Docker Registry&#xA;Nodepool C:&#xA;   One node&#xA;        running one pod total of Redis&#xA;&#xA;Okay, now can we finally move onward to find out that the Python code itself is so slow?&#xA;&#xA;gunicorn&#xA;&#xA;Well not so fast. Turns out that gunicorn was behaving badly and might do with some optimisation. gunicorn uses different worker classes and if we do not feed it the right ones with the right parameters it might actually be blocking. The reason I started looking down this rabbit hole was because of the gunicorn logs stating they ran out of workers. 
&#xA;&#xA;After much experimenting on what parameters work best, turns out the best one that worked for us was the following:&#xA;&#xA;CONCURRENCY_SETTING=$(python3 -c &#39;import multiprocessing as mp; print(mp.cpu_count() * 2)&#39;)&#xA;exec /usr/local/bin/gunicorn -n internal_auth_secret -w${CONCURRENCY_SETTING} -k gevent --worker-connections=1000 -b 0.0.0.0:8000 internal_auth_secret:app -t 180&#xA;Meaning use the gevent type worker class, with 1000 worker connections. Also use a total amount of workers to twice the amount of cores available to us in whatever host we are running as. This also meant it is dynamic to the point where if we would ever upgrade the hardware of the node underlying the pod it will grow with it automatically without us having to make sure we also update the amount of workers. &#xA;&#xA;Conclusion&#xA;&#xA;After fixing all the infrastructure setup of correctly allocating memory and CPU to each of the services, coupled with separating them out to make sure each of them gets the appropriate amount needed. Making sure our nginx was configured correctly. Followed by actually configuring the services in staging correctly to point at services in staging rather than production, followed by configuring the gunicorn service and fine-tuning it, there was still a slight bottleneck. &#xA;&#xA;Yeey, finally Python code is slow and dumb and move on to Golang. Hold on, let us first see what is being the bottleneck. I made some call graphs using the following module https://github.com/daneads/pycallgraph2. It showed that the bottleneck was partly in our shared library code that handled authentication and also the way we were determining when to call that particular function. Finally the culprit has been located.&#xA;&#xA;To fix the shared library code was easy, just improved the for loops and small optimizations in terms of what to store so we do not do a constant looking up of the same values. 
Cache more in Redis, then also use a Redis connection pool rather than starting up a new connection every time for each query. &#xA;&#xA;To fix the problem of knowing when to call the function in the shared code was a literal one if else statement added to the previously declaring of the variable logic. It was a code fix of 44 characters that resulted in an improvement of the total time spent. The longest before this fix was 465ms on the shared library code path. After fixing both it was only around 60ms. So instead of the code being able to handle roughly 2 per second we could now handle roughly 15 per second per worker per worker_connection. &#xA;&#xA;After that roller-coaster of a ride, I made sure we could handle millions of requests coming in rather than just a couple of hundred. The next optimisations lie in Network I/O and other factors. Even if we would move towards Golang implementation it might gain us 1ms max in terms of code maybe, that is even highly optimistic and probably not even realistic. The rest lies in the fact that we have an nginx going to a docker registry talking to another service running somewhere else again on the network that talks to Redis. Those round trip times are starting to add up.&#xA;&#xA;However that is for another time. Right now we got enough to make sure we can get through the next years of running our service. If we need more, just scale the entire setup to include more nodes, until the bottleneck is network throughput/bandwidth. Then we will revisit this. &#xA;&#xA;#100DaysToOffload #DevOps #python ]]&gt;</description>
      <content:encoded><![CDATA[<p>I was put in charge of writing some extra tests in our framework covering our Docker registry endpoints. We created a framework around <a href="https://locust.io/" rel="nofollow">Locust</a>. Naturally, I first started learning the framework, and it is pretty nice to use. You create simple classes that house the flow of the requests you want to execute and call them one by one, stating what counts as success and what as failure.</p>

<p>The goal was to prove that our Python code was horrible and needed to be switched to a Golang implementation ASAP.</p>

<h2 id="rough-start">Rough start</h2>

<p>I could not even start our Docker container, because some combination of Werkzeug, Flask and Locust made everything stop working. So I first had to untangle that mess. It turned out that some older Flask code called a function that no longer exists at the provided location.</p>

<blockquote><p>For all who are interested, the actual error is: <code>cannot import name &#39;BaseResponse&#39; from &#39;werkzeug.wrappers&#39;</code>.</p></blockquote>

<p>After that initial rough start, I began mapping out how Docker actually works: what happens when you run <code>docker pull</code> or <code>docker login</code>, for example. It turns out they are all just HTTP calls to a REST API backend. Each call returns some data, and with that data we continue on to more calls until everything has been retrieved that <code>docker</code> needs to actually create the containers and/or images.</p>
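<p>As a sketch of what a <code>docker pull</code> boils down to (the endpoint paths follow the registry v2 HTTP API; the helper below only builds the request sequence for illustration and is not our actual client):</p>

<pre><code class="language-python"># Illustrative only: the sequence of HTTP calls behind a "docker pull".
# No I/O is performed; we just build the URLs a client would hit.

def pull_requests(registry, name, tag, layer_digests):
    calls = ['GET {}/v2/'.format(registry)]  # version/auth check
    # Fetch the manifest for the tag, then each blob (config and layers) by digest.
    calls.append('GET {}/v2/{}/manifests/{}'.format(registry, name, tag))
    for digest in layer_digests:
        calls.append('GET {}/v2/{}/blobs/{}'.format(registry, name, digest))
    return calls
</code></pre>

<p>Feed the digests listed in the manifest back into the blob calls and you have the whole pull flow.</p>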

<h2 id="docker-api">Docker API</h2>

<p>I wrote a simple Python PoC for the Docker API client. In principle I could now use that code to pull any image I want, but that is not what it is for. So I folded that code into our Locust framework to make sure the test was always set up correctly, and that the images it created were deleted afterwards.</p>

<p>Then I ran into the second problem: images cannot be removed from a Docker registry by default; you have to enable that feature. When I brought this up with our devs, they said to just forget about it: run the setup code once, so that the needed image exists in a shared test repository, and continue onward.</p>

<p>So I ripped the entire setup code out of Locust and began anew.</p>

<h2 id="concurrent-issues">Concurrent issues</h2>

<p>Next up came the problem that I wanted to fetch credentials only once and share them amongst the distributed workers. There were several hosts, each running multiple workers as separate processes. On each of those hosts I wanted a single call, made by one worker process, to get a token and share it in memory with the rest. In comes Python&#39;s <a href="https://docs.python.org/3/library/multiprocessing.shared_memory.html" rel="nofollow">SharedMemory</a>. I finally got it to work after fixing all my race conditions, which came down to having no synchronised flag to make sure everybody waited on each other.</p>
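<p>A minimal sketch of that idea with the stdlib <code>SharedMemory</code>, using a length-prefixed slot (the segment name and slot size are made up for illustration; real code also needs a readiness flag, e.g. a <code>multiprocessing.Event</code>, so readers wait until the write has actually happened):</p>

<pre><code class="language-python"># Sketch: one worker publishes an auth token into shared memory,
# the other worker processes attach by name and read it back.
from multiprocessing import shared_memory

SLOT = 256  # fixed-size slot for the token (illustrative)

def publish_token(name, token):
    # First worker creates the segment and writes a length-prefixed token.
    shm = shared_memory.SharedMemory(name=name, create=True, size=SLOT)
    data = token.encode('utf-8')
    shm.buf[0] = len(data)            # 1-byte length prefix
    shm.buf[1:1 + len(data)] = data   # token bytes
    return shm

def read_token(name):
    # Other workers attach to the existing segment (create=False is the default).
    shm = shared_memory.SharedMemory(name=name)
    length = shm.buf[0]
    token = bytes(shm.buf[1:1 + length]).decode('utf-8')
    shm.close()
    return token
</code></pre>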

<p>After all that code, the reaction from the rest of the devs was: that is cool, but we do not need it. Just call the login at the start of each flow; it will create credentials, and if credentials already exist it will return them. So once again: rip out the code written so far and start anew.</p>

<h2 id="finally-on-my-way">Finally on my way</h2>

<p>I started again with the new flow, and this time I got a nice test up and running. The data it returned was a bit baffling: it showed our Python code was not the bottleneck, as previously thought (and hoped for). It was our Nginx reverse proxy setup. I split the nginx pods out onto their own nodes and updated the config to handle things a bit better, basically giving it more threads and workers.</p>

<p>After fixing the nginx pods, I ran the tests again, and it turned out the Docker registry itself was a bottleneck. It simply could not cope in terms of memory usage and freeing things up. We use Redis as our cache layer and Google Cloud Storage (GCS) as the bucket that actually stores the data served by the Docker registry.</p>

<h2 id="breathing-room">Breathing room</h2>

<p>We had so many services jammed together in one pod it was crazy. Basically one pod ran the following services:</p>
<ul><li>Nginx</li>
<li>Redis</li>
<li>Docker registry</li>
<li>Flask app</li></ul>

<p>Then there was no control over which pod ran which services, so it could be that one pod ran 2 nginx + redis + docker registry + flask, whilst another ran only docker registry + flask. So back to basics: one service per pod, and split the docker registry off onto its own node. Now we have the following setup:</p>
<ul><li>Nodepool A:
<ul><li>Three nodes
<ul><li>running one pod each of Nginx</li>
<li>running one pod each of Flask</li>
<li>running one pod total of Redis</li></ul></li></ul></li>
<li>Nodepool B:
<ul><li>Three nodes
<ul><li>running one pod each of Docker Registry</li></ul></li></ul></li></ul>

<p>Now that that was cleared up, the next bottleneck seemed to be Redis? So I turned to Redis and its config, and found out we were actually not using the staging Redis but the <strong>production</strong> Redis?!?!?!</p>

<p>I quickly changed that config and made it so there was one node running a dedicated Redis. So the full situation becomes:</p>
<ul><li>Nodepool A:
<ul><li>Three nodes
<ul><li>running one pod each of Nginx</li>
<li>running one pod each of Flask</li></ul></li></ul></li>
<li>Nodepool B:
<ul><li>Three nodes
<ul><li>running one pod each of Docker Registry</li></ul></li></ul></li>
<li>Nodepool C:
<ul><li>One node
<ul><li>running one pod total of Redis</li></ul></li></ul></li></ul>
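<p>In Kubernetes terms, pinning a service to its own pool comes down to a <code>nodeSelector</code> on the Deployment. A minimal sketch (all names here are hypothetical; the label key shown is the one GKE attaches to its node pools):</p>

<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry        # hypothetical name
spec:
  replicas: 3                  # three pods, spread across the pool
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: nodepool-b   # pin to the registry pool
      containers:
        - name: registry
          image: registry:2
</code></pre>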

<p>Okay, can we now finally move on to discovering that the Python code itself is so slow?</p>

<h2 id="gunicorn">gunicorn</h2>

<p>Well, not so fast. It turns out <code>gunicorn</code> was behaving badly and could do with some optimisation. <code>gunicorn</code> uses different worker classes, and if we do not feed it the right ones with the right parameters it may actually block. The reason I went down this rabbit hole was the <code>gunicorn</code> logs stating it had run out of workers.</p>

<p>After much experimenting with which parameters work best, this is what ended up working for us:</p>

<pre><code class="language-bash">CONCURRENCY_SETTING=$(python3 -c &#39;import multiprocessing as mp; print(mp.cpu_count() * 2)&#39;)
exec /usr/local/bin/gunicorn -n internal_auth_secret -w${CONCURRENCY_SETTING} -k gevent --worker-connections=1000 -b 0.0.0.0:8000 internal_auth_secret:app -t 180
</code></pre>

<p>Meaning: use the <code>gevent</code> worker class with 1000 worker connections, and set the total number of workers to twice the number of cores available on whatever host we are running on. This also makes it dynamic: if we ever upgrade the hardware of the node underlying the pod, the worker count grows with it automatically, without us having to remember to update it ourselves.</p>

<h2 id="conclusion">Conclusion</h2>

<p>So we fixed the infrastructure setup by correctly allocating memory and CPU to each of the services, and separated them out to make sure each gets the amount it needs. We made sure our nginx was configured correctly. We pointed the staging services at staging rather than production, and then configured and fine-tuned the <code>gunicorn</code> service. After all that, there was still a slight bottleneck.</p>

<p>Yeey, finally: the Python code is slow and dumb, on to Golang! Hold on, let us first see what the bottleneck actually is. I made some call graphs using the module <a href="https://github.com/daneads/pycallgraph2" rel="nofollow">https://github.com/daneads/pycallgraph2</a>. They showed that the bottleneck was partly in our shared library code that handles authentication, and partly in the way we were determining when to call that particular function. Finally, the culprit was located.</p>

<p>Fixing the shared library code was easy: improve the for loops, plus small optimisations in what to store so we do not constantly look up the same values. Cache more in Redis, and use a Redis connection pool rather than opening a new connection for every query.</p>
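<p>redis-py ships this as <code>redis.ConnectionPool</code>; the underlying pattern is just a queue of pre-built connections. A generic stdlib sketch of that pattern (the <code>factory</code> is a stand-in for whatever opens the real connection):</p>

<pre><code class="language-python"># Generic connection-pool sketch: hand out pre-built connections from a
# queue instead of paying the connect cost on every single query.
import queue

class ConnPool:
    def __init__(self, factory, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())   # pay the connect cost once, up front

    def acquire(self):
        return self._q.get()         # blocks if every connection is in use

    def release(self, conn):
        self._q.put(conn)            # return the connection for reuse
</code></pre>

<p>Each query then wraps its work in <code>acquire()</code>/<code>release()</code> instead of reconnecting.</p>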

<p>Fixing the problem of knowing when to call the function in the shared code was literally one if/else statement added to the existing variable-declaration logic. It was a code fix of 44 characters that improved the total time spent considerably: the worst case before this fix was 465ms on the shared library code path; after fixing both issues it was only around 60ms. So instead of the code handling roughly 2 requests per second, we could now handle roughly 15 per second, per worker, per worker_connection.</p>

<p>After that roller-coaster of a ride, I made sure we could handle millions of requests coming in rather than just a couple of hundred. The next optimisations lie in network I/O and other factors. Even if we moved to a Golang implementation, it might gain us 1ms at most in terms of code, and that is highly optimistic and probably not even realistic. The rest lies in the fact that we have an nginx going to a docker registry talking to another service running somewhere else on the network, which in turn talks to Redis. Those round-trip times start to add up.</p>

<p>However, that is for another time. Right now we have enough to get through the next years of running our service. If we need more, we just scale the entire setup to include more nodes, until the bottleneck is network throughput/bandwidth. Then we will revisit this.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:DevOps" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">DevOps</span></a> <a href="https://stealthycoder.writeas.com/tag:python" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">python</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/what-a-fantastic-ride</guid>
      <pubDate>Mon, 02 Jan 2023 21:07:24 +0000</pubDate>
    </item>
    <item>
      <title>Twisted firestarter</title>
      <link>https://stealthycoder.writeas.com/twisted-firestarter?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I was going down an avenue of seeing if we could implement a better and easier caching for Docker registry utilizing Google Firestore. !--more--&#xA;&#xA;Getting an ember&#xA;&#xA;Everyone that teaches to make/create your own fire will tell you that one of the most difficult things is getting an ember. A small hot enough material that will ignite the rest of the fuel. Well for me that was getting the source code for Docker registry (aka distribution) and getting it to compile and making sure I had the dependencies. They use an old school way using this tool called vndr which I am not familiar with since I am not an old school Golang developer. &#xA;&#xA;After making sure the vendor.conf and vndr could play nicely and getting sucked down the rabbit hole of GO111MODULE to be switched off and what that does, in order to get my development environment to be able to follow the imports. So finally I have a somewhat working thing, but I cannot make changes yet. This actually brought back a memory of long ago trying to do some early Golang development. That is I like to separate out my dependencies and code. I think having a workspace of sorts that is your Git checkout and a whole separate other static dependency location is nice. For example, let us say you are working on two projects in Python. &#xA;&#xA;One is your library you want to use and the other is a project utilizing that library. Then if you want to make a change to your library, you do not want to do it directly in the project utilizing the library. You would want to have changes being arbitrarily worked on for the library separate from the project utilizing it. On your machine you might have two Git folders, one the library and the other the project. Then the project might have a dependency file that imports the library. You could even specify the Git branch in there. So you make the changes to your library, commit and push to a new branch. 
Checkout a new branch in the project and update the dependency file to the new branch and see if it all works. &#xA;&#xA;Then just having virtual environments makes the most sense to me as you do not want everyone to use the same version of something or have to be force to use the same version. I digress.&#xA;&#xA;So since I have one folder that has my Git code, and another that is the dependency I thought I would do the naive thing and just clone the repo into where the dependency is currently held and work on the code with symlinks. Nope. That did not work. Then I tried to work directly in the dependency and that did not work. &#xA;&#xA;At some point I just gave up on this and started to work without autocomplete, syntax highlighting and any IDE features whatsoever and just a glorified text editor is all I had.&#xA;&#xA;Fanning the ember&#xA;&#xA;However small the ember, I needed to fan it in order to get the flame. Now I wanted to introduce the Firebase to our codebase that is sharing code with the Docker registry codebase. In essence we use it as a library as well. So in my project I just added the Firebase, no problem there. That took like 5 minutes. Then however came the problem that the Docker codebase had an old dependency on the GCP stuff. That messed things up. It caused a conflict I could not fix in our project codebase alone. So I had to update the GCP stuff for Docker registry (distribution codebase). &#xA;&#xA;That meant just updating the reference right? Nope. I had to refactor the GCS storage layer as well with the newer calls and make them as close as I could to being backwards compatible/feature parity. Thinking I have done so, I try to recompile my code but it still does not use the new dependency I laid out. I just did a stupid thing and forked the code into my own Github, then changed all references to point to my Github instead of Docker. I since learned you can remap this in go.mod and probably also in vendor.conf but yeah. 
My hair already looked liked the Prodigy at this point so I might as well stay committed. &#xA;&#xA;So I got a nice code base to work off of, and add my Firestore to our project using that augmented Docker codebase. Done, there is a flame going, starting to get bigger.&#xA;&#xA;Fire, fire&#xA;&#xA;Then I check the differences between this Firestore cache layer and our Redis one. It is tremendous, huge and insanely obvious what we should do after I answered the question if it is actually faster. Every call to the Firestore API to get a response takes a minimal of 1 second since that is the rate limit. So yeah, get the bucket of water and a bucket of sand to cover up this fire to put it out immediately. &#xA;&#xA;Now I will say, we could improve our algorithm in the codebase and bypass how Docker gets tags by making one giant query and sending it to Firestore making it that we only pay a cost of 1 second once to get everything and it would be blazing fast afterwards since we would have everything and it would need to be kept in memory though. So it is still faster to query and store things in Redis. &#xA;&#xA;#100DaysToOffload #docker #golang]]&gt;</description>
      <content:encoded><![CDATA[<p>I was going down an avenue of seeing if we could implement better and easier caching for the Docker registry, utilizing Google Firestore.</p>

<h2 id="getting-an-ember">Getting an ember</h2>

<p>Everyone who teaches making your own fire will tell you that one of the most difficult parts is getting an ember: a small, hot-enough material that will ignite the rest of the fuel. For me, that was getting the source code for Docker registry (aka distribution), getting it to compile, and making sure I had the dependencies. They use an old-school vendoring tool called <a href="https://github.com/LK4D4/vndr" rel="nofollow">vndr</a>, which I am not familiar with, since I am not an old-school Golang developer.</p>

<p>I made sure <code>vendor.conf</code> and <code>vndr</code> could play nicely, and got sucked down the rabbit hole of switching <code>GO111MODULE</code> off and what that does, in order to get my development environment to follow the imports. So finally I had a somewhat working thing, but I could not make changes yet. This brought back a memory of long ago, trying some early Golang development: I like to separate my dependencies from my code. I think having a workspace of sorts that is your Git checkout, plus a whole separate static dependency location, is nice. For example, say you are working on two projects in Python.</p>

<p>One is a library you want to use, and the other is a project utilizing that library. If you want to make a change to the library, you do not want to do it directly inside the project using it. You want changes to the library to be worked on separately from the project utilizing it. On your machine you might have two Git folders: one for the library, the other for the project. The project might have a dependency file that imports the library; you could even specify the Git branch in there. So you make the changes to your library, commit, and push to a new branch. Check out a new branch in the project, update the dependency file to the new branch, and see if it all works.</p>

<p>Then just having virtual environments makes the most sense to me, as you do not want everyone to use the same version of something, or to be forced to use the same version. I digress.</p>

<p>Since I have one folder with my Git code and another that holds the dependency, I thought I would do the naive thing: clone the repo into where the dependency is currently held and work on the code via symlinks. Nope, that did not work. Then I tried to work directly in the dependency, and that did not work either.</p>

<p>At some point I gave up on this and started to work without autocomplete, syntax highlighting or any IDE features whatsoever; a glorified text editor was all I had.</p>

<h2 id="fanning-the-ember">Fanning the ember</h2>

<p>However small the ember, I needed to fan it in order to get a flame. Now I wanted to introduce Firestore to our codebase, which shares code with the Docker registry codebase; in essence we use the registry as a library as well. Adding the Firestore dependency to my own project was no problem; that took like 5 minutes. Then, however, came the problem that the Docker codebase had an old dependency on the GCP libraries. That messed things up. It caused a conflict I could not fix in our project codebase alone, so I had to update the GCP dependencies for Docker registry (the distribution codebase).</p>

<p>That just meant updating the reference, right? Nope. I had to refactor the GCS storage layer as well with the newer calls, keeping them as close as I could to backwards compatibility/feature parity. Thinking I had done so, I tried to recompile my code, but it still did not use the new dependency I laid out. So I did a stupid thing and forked the code into my own GitHub, then changed all references to point to my GitHub instead of Docker&#39;s. I have since learned you can remap this in <code>go.mod</code>, and probably also in <code>vendor.conf</code>, but yeah. My hair already looked like the Prodigy&#39;s at this point, so I might as well stay committed.</p>
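<p>For anyone hitting the same wall: the <code>go.mod</code> remap is a <code>replace</code> directive, and pointing it at a local checkout needs no version at all. A sketch (the module path below is the registry&#39;s old import path; the local path is illustrative):</p>

<pre><code class="language-go">// In the consuming project's go.mod: build against a local checkout of the
// fork while the source keeps importing the original path.
replace github.com/docker/distribution =&gt; ../distribution
</code></pre>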

<p>So now I had a nice codebase to work off of, and added my Firestore code to our project using that augmented Docker codebase. Done: there is a flame going, and it is starting to get bigger.</p>

<h2 id="fire-fire">Fire, fire</h2>

<p>Then I checked the differences between this Firestore cache layer and our Redis one. The difference is tremendous, and it became insanely obvious what we should do once I answered the question of whether it is actually faster. Every call to the Firestore API takes a minimum of 1 second to respond, since that is the rate limit. So yeah: grab the bucket of water and the bucket of sand and put this fire out immediately.</p>

<p>Now, I will say we could improve the algorithm in our codebase and bypass how Docker gets tags: make one giant query to Firestore, so we pay the 1-second cost only once to get everything, and it would be blazing fast afterwards, though we would then need to keep everything in memory. Even so, it is still faster to query and store things in Redis.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:docker" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">docker</span></a> <a href="https://stealthycoder.writeas.com/tag:golang" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">golang</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/twisted-firestarter</guid>
      <pubDate>Tue, 03 Jan 2023 09:12:02 +0000</pubDate>
    </item>
    <item>
      <title>But can I write to it?</title>
      <link>https://stealthycoder.writeas.com/but-can-i-write-to-it?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I just wanted to find out if a directory was writable for the user, and it turns out it is quite difficult to get that information in Golang. How difficult could it be? !--more--&#xA;&#xA;Stat has that information&#xA;&#xA;So my first inclination was that the os.Stat call has that information. It sort of does, in a way, but only for Linux. There is a Sys() method on the fs.FileInfo which returns a specific struct on Linux. &#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;syscall&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;info, err := os.Stat(&#34;/tmp/some.file&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Printf(&#34;%s&#34;, err)&#xA;&#x9;}&#xA;&#x9;if data, ok := info.Sys().(*syscall.Stat_t); ok {&#xA;&#x9;&#x9;fmt.Printf(&#34;UID: %d\n&#34;, data.Uid)&#xA;&#x9;}&#xA;}&#xA;Is an example of how you get to that part and if you call syscall.Getuid() you can check if they are the same and therefore if you at least own the resource. However that does not mean it is writable yet. &#xA;&#xA;Permission bit logic&#xA;&#xA;I tried to finagle some bitwise logic with the permission bits, but again they only work on Linux and truth be told I never trusted myself that I got it to work. &#xA;&#xA;Sidetrack to Java&#xA;&#xA;So in Java there has been this thing since forever. You give it a path, and it tells you if it is writable. It is a static method and easy to use. Why does this not exist in Golang?!?!?&#xA;&#xA;Solution&#xA;&#xA;So I did finally create a solution that was tailored for Unix and Windows separately. &#xA;&#xA;Windows&#xA;&#xA;The Windows solution made me go down a rabbit hole, read up on Win32 API structs and methods on the Microsoft docs and dig deep down in the source of Go itself to figure out what I have access to. 
I will just show you the code:&#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;syscall&#34;&#xA;)&#xA;&#xA;// https://learn.microsoft.com/en-us/windows/win32/fileio/file-access-rights-constants&#xA;const FILE_APPEND_FILE = 0x00000002&#xA;&#xA;// https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilew#parameters at dwShareMode&#xA;const FILE_LOCK = 0x00000000&#xA;&#xA;func main() {&#xA;&#x9;// Checks if directory is writable&#xA;&#x9;if hwnd, err := syscall.CreateFile(syscall.StringToUTF16Ptr(&#34;C:\\Windows\\system32&#34;), FILE_APPEND_FILE, FILE_LOCK, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0); err == nil {&#xA;&#x9;&#x9;if err = syscall.CloseHandle(hwnd); err != nil {&#xA;&#x9;&#x9;&#x9;fmt.Printf(&#34;%s\n&#34;, err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;fmt.Printf(&#34;This directory is writable&#34;)&#xA;&#x9;}&#xA;}&#xA;So yeah, a syscall to CreateFile to open a handle to a directory. Despite the name, CreateFile with OPEN_EXISTING just opens a handle to an existing object, and FILE_FLAG_BACKUP_SEMANTICS is what makes it work on directories at all. It still confuses me that this is the call you use, even more so because CreateDirectoryW is also an actual call in the Win32 API, and that one is what actually creates directories. &#xA;&#xA;Linux / MacOSX&#xA;&#xA;The other solution was sort of similar but much easier. There is a nice syscall for this: access(2), exposed in Go as syscall.Access. &#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;syscall&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// Checks if directory is writable; access(2) takes a mode mask (W_OK = 0x2)&#xA;&#x9;if err := syscall.Access(&#34;/opt/&#34;, 0x2 /* W_OK */); err == nil {&#xA;&#x9;&#x9;fmt.Printf(&#34;This directory is writable&#34;)&#xA;&#x9;}&#xA;}&#xA;&#xA;Conclusion&#xA;&#xA;I feel like all of this code could be hidden away in the Golang standard library, giving us a nice os.IsWritable(path string) bool function signature for it. &#xA;&#xA;#100DaysToOffload #devlife #golang]]&gt;</description>
      <content:encoded><![CDATA[<p>I just wanted to find out if a directory was writable for the user, and it turns out it is quite difficult to get that information in Golang. How difficult could it be? </p>

<h2 id="stat-has-that-information">Stat has that information</h2>

<p>So my first inclination was that the <code>os.Stat</code> call has that information. It sort of does, in a way, but only on Unix-like systems. There is a <code>Sys()</code> method on <code>fs.FileInfo</code> which returns a platform-specific struct; on Linux it is a <code>*syscall.Stat_t</code>.</p>

<pre><code class="language-golang">package main

import (
	&#34;fmt&#34;
	&#34;os&#34;
	&#34;syscall&#34;
)

func main() {
	info, err := os.Stat(&#34;/tmp/some.file&#34;)
	if err != nil {
		fmt.Printf(&#34;%s\n&#34;, err)
		return // info is nil on error; using it below would panic
	}
	if data, ok := info.Sys().(*syscall.Stat_t); ok {
		fmt.Printf(&#34;UID: %d\n&#34;, data.Uid)
	}
}
</code></pre>

<p>This is an example of how you get at that struct. If you then call <code>syscall.Getuid()</code> you can check whether the two UIDs match, and therefore whether you at least own the resource. However, owning it still does not mean it is writable.</p>

<h2 id="permission-bit-logic">Permission bit logic</h2>

<p>I tried to finagle some bitwise logic with the permission bits, but again that only works on Linux, and truth be told I never trusted myself to have gotten it right.</p>
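
<p>For what it is worth, that bitwise logic is not too bad once written down. Here is a sketch of the classic Unix permission-bit check (the function name and signature are mine; it deliberately ignores ACLs, root's override, and the other niceties that make this approach untrustworthy in general):</p>

```go
package main

import (
	"fmt"
	"io/fs"
)

// writableByBits decides writability from the Unix permission bits:
// owner write (0200) if the uid matches, group write (0020) if any of
// the caller's gids match, otherwise the "other" write bit (0002).
func writableByBits(mode fs.FileMode, fileUid, fileGid, uid int, gids []int) bool {
	perm := mode.Perm()
	if uid == fileUid {
		return perm&0o200 != 0
	}
	for _, g := range gids {
		if g == fileGid {
			return perm&0o020 != 0
		}
	}
	return perm&0o002 != 0
}

func main() {
	// 0644: the owner may write, group members and others may not.
	fmt.Println(writableByBits(0o644, 1000, 1000, 1000, []int{1000})) // true
	fmt.Println(writableByBits(0o644, 1000, 1000, 2000, []int{2000})) // false
}
```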

<h2 id="sidetrack-to-java">Sidetrack to Java</h2>

<p>So in Java there has been this <a href="https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#isWritable(java.nio.file.Path)" rel="nofollow">thing</a> since forever. You give it a path, and it tells you if it is writable. It is a static method and easy to use. Why does this not exist in Golang?!?!?</p>

<h2 id="solution">Solution</h2>

<p>So I finally created a solution, tailored separately for Unix and for Windows.</p>

<h3 id="windows">Windows</h3>

<p>The Windows solution made me go down a rabbit hole: reading up on Win32 API structs and methods in the Microsoft docs and digging deep into the source of Go itself to figure out what I had access to. I will just show you the code:</p>

<pre><code class="language-golang">package main

import (
	&#34;fmt&#34;
	&#34;syscall&#34;
)

// https://learn.microsoft.com/en-us/windows/win32/fileio/file-access-rights-constants
// 0x00000002 is documented as FILE_WRITE_DATA for files and as FILE_ADD_FILE
// (the right to create files inside it) for directories.
const FILE_APPEND_FILE = 0x00000002

// https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilew#parameters at dwShareMode
// A dwShareMode of 0 means no other process can open the object while the handle is held.
const FILE_LOCK = 0x00000000

func main() {
	// Checks if directory is writable
	if hwnd, err := syscall.CreateFile(syscall.StringToUTF16Ptr(&#34;C:\\Windows\\system32&#34;), FILE_APPEND_FILE, FILE_LOCK, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0); err == nil {
		if err = syscall.CloseHandle(hwnd); err != nil {
			fmt.Printf(&#34;%s\n&#34;, err)
		}
		fmt.Printf(&#34;This directory is writable&#34;)
	}
}
</code></pre>

<p>So yeah, a syscall to <code>CreateFile</code> to open a handle to a directory. Despite the name, <code>CreateFile</code> with <code>OPEN_EXISTING</code> just opens a handle to an existing object, and <code>FILE_FLAG_BACKUP_SEMANTICS</code> is what makes it work on directories at all. It still confuses me that this is the call you use, even more so because <code>CreateDirectoryW</code> is also an actual call in the Win32 API, and that one is what actually creates directories.</p>

<h3 id="linux-macosx">Linux / MacOSX</h3>

<p>The other solution was sort of similar but much easier. There is a nice syscall for this: <code>access(2)</code>, exposed in Go as <code>syscall.Access</code>.</p>

<pre><code class="language-golang">package main

import (
	&#34;fmt&#34;
	&#34;syscall&#34;
)

func main() {
	// Checks if directory is writable. access(2) takes a mode mask, so the
	// proper constant is W_OK (0x2; unix.W_OK in golang.org/x/sys/unix), not
	// the open flag O_RDWR, which only coincidentally shares its value.
	if err := syscall.Access(&#34;/opt/&#34;, 0x2 /* W_OK */); err == nil {
		fmt.Printf(&#34;This directory is writable&#34;)
	}
}
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>I feel like all of this code could be hidden away in the Golang standard library, giving us a nice <code>os.IsWritable(path string) bool</code> function signature for it.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a> <a href="https://stealthycoder.writeas.com/tag:golang" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">golang</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/but-can-i-write-to-it</guid>
      <pubDate>Wed, 04 Jan 2023 21:56:54 +0000</pubDate>
    </item>
    <item>
      <title>Investing in the past</title>
      <link>https://stealthycoder.writeas.com/investing-in-the-past?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I quite like keeping old things alive, longer than they were meant for maybe, but also repairing things that got broken. For example toys my children play with. !--more-- Currently I am listening to music through  a setup involving a device my dad bought new in 1983 and I like the fact that he bought whilst being in his 20s and me in my 30s are still using it. I have speakers in the living room that belonged to a woman I have known in my youth that lived to be slightly over 100 years old. These speakers are from Marantz. When I took them to the shop where I bought my vinyl record player, the owner who is an expert in his field said I did not even know Marantz made speakers. They are hooked to an amplifier my parents in law bought in the late 80s. Back to the device my dad bought though.&#xA;&#xA;JVC SEA-33&#xA;&#xA;It is a JVC SEA-33 Graphic Equalizer. This thing was made in the time that everything was analogue, and when mixing was not always done correctly. So what you could do is hook this thing up in your sound flow and change the amplifying or suppressing of certain frequencies to help bring out a better sound experience. Now my laptop has a 3.5mm Jack to two RCA adapter cable, and my headphones has a 6.3mm to two RCA adapter cable to hook everything up. Of course my laptop has a digital soundcard so it is not as effective as it could be, yet still I can tweak things and I just like the fact I have it working. Furthermore I can hook it up again to my record player into my amplifier in the future and then it will all work as it should be. &#xA;&#xA;Kintsugi&#xA;&#xA;This brings me way totally into another area, but we ordered a cake platter and it got delivered shattered. Yeah I know, there is no segue into this. Instead of returning it, we heard of kintsugi which is basically using a golden glue substance to mend broken vases, porcelain- and glassware. 
I liked this idea and now have a cool set piece which looks much better than the new version we got after reporting this broken delivery to customer service. &#xA;&#xA;Limiting waste&#xA;&#xA;I like the idea of minimizing waste and not just throw out stuff that could be repaired for a smaller fee than buying it new. What I do miss sometimes though is the fact that it is not appreciated by the companies who make their products that you start repairing them instead of buying new stuff. They will make it so difficult and hard to get into stuff or have the parts you can use to repair in a catalog somewhere. &#xA;&#xA;Like the old days when the first PCs came to the market for consumers, everyone got a giant list of part numbers and you could just order more specific parts if you needed them again. Now we actually have to fight for our right to repair stuff we already bought. It should be ours to do with what we want, but that control has long slipped from us consumers. &#xA;&#xA;Knowledge&#xA;&#xA;I also have the idea a certain body of knowledge has been forgotten, or is in the process of being forgotten and that is worrying. Not only because it might set us back in development as a species but also because it might mean we are stuck in a loop or will repeat past mistakes. &#xA;&#xA;Advice&#xA;&#xA;My advice is to just buy old stuff and repurpose it if you feel like it. I still want to buy an old radio from the 60s and put in a Raspberry Pi Zero or equivalent and make it a bluetooth enabled speaker system that can also play YouTube and other songs, yet it still being a wonderful piece to look at. It will be a nice hybrid between analogue and digital, the past being brought into the future. &#xA;&#xA;I have an old PC, 486DX2/50MHz, lying here that will have a running server on the network serving a real site. Obviously behind a TLS offloading proxy but it will still be serving the webpage. 
I hooked it up to my giant 65&#34; Sony TV and I just loved the fact I held 30 year old tech that still worked and could fulfill a job today. &#xA;&#xA;I also operated a saw that belonged to my great grand father, who I&#39;ve met, and I learned that it might have been my great great grand father&#39;s even. That tool easily has been in my family&#39;s hands for more than a 100 years, and I intend to further that lifespan. It is wonderful that still exists. It works beautiful and you cannot buy a similar product anymore these days. &#xA;&#xA;There something beautiful in that old things keep finding purposes and trying to get old things to fulfill a new purpose is fun to do. I do not think my dad in his 20s ever thought my son in his 30s will use this. Who knows what kind of tech I will buy that my children will repurpose in the future. &#xA;&#xA;#100DaysToOffload #devlife&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I quite like keeping old things alive, longer than they were meant for maybe, but also repairing things that got broken. For example toys my children play with. Currently I am listening to music through a setup involving a device my dad bought new in 1983, and I like the fact that he bought it while in his 20s and that I, in my 30s, am still using it. I have speakers in the living room that belonged to a woman I knew in my youth who lived to be slightly over 100 years old. These speakers are from Marantz. When I took them to the shop where I bought my vinyl record player, the owner, who is an expert in his field, said: “I did not even know Marantz made speakers.” They are hooked up to an amplifier my parents-in-law bought in the late 80s. Back to the device my dad bought though.</p>

<h2 id="jvc-sea-33">JVC SEA-33</h2>

<p>It is a JVC SEA-33 Graphic Equalizer. This thing was made in the time when everything was analogue and mixing was not always done correctly. So what you could do is hook this thing up in your sound chain and amplify or suppress certain frequencies to bring out a better sound experience. Now my laptop has a 3.5mm jack to two RCA adapter cable, and my headphones have a 6.3mm to two RCA adapter cable to hook everything up. Of course my laptop has a digital sound card, so it is not as effective as it could be, yet I can still tweak things and I just like the fact that I have it working. Furthermore, I can hook it up to my record player and my amplifier again in the future, and then it will all work as it should.</p>

<h2 id="kintsugi">Kintsugi</h2>

<p>This brings me into a totally different area, but we ordered a cake platter and it got delivered shattered. Yeah I know, there is no segue into this. Instead of returning it, we heard of <em>kintsugi</em>, which is basically using a golden glue substance to mend broken vases, porcelain and glassware. I liked this idea and now have a cool set piece which looks much better than the new version we got after reporting the broken delivery to customer service.</p>

<h2 id="limiting-waste">Limiting waste</h2>

<p>I like the idea of minimizing waste and not just throwing out stuff that could be repaired for less than the cost of buying it new. What I do miss sometimes, though, is appreciation from the companies who make these products when you repair them instead of buying new stuff. They make it difficult to get into the devices, or to find the parts you need for a repair in a catalog somewhere.</p>

<p>Like the old days when the first PCs came to the market for consumers, everyone got a giant list of part numbers and you could just order more specific parts if you needed them again. Now we actually have to fight for our <strong>right</strong> to repair stuff we already bought. It should be ours to do with what we want, but that control has long slipped from us consumers.</p>

<h2 id="knowledge">Knowledge</h2>

<p>I also have the idea that a certain body of knowledge has been forgotten, or is in the process of being forgotten, and that is worrying. Not only because it might set us back in development as a species, but also because it might mean we are stuck in a loop or will repeat past mistakes.</p>

<h2 id="advice">Advice</h2>

<p>My advice is to just buy old stuff and repurpose it if you feel like it. I still want to buy an old radio from the 60s and put in a Raspberry Pi Zero or equivalent, making it a Bluetooth-enabled speaker system that can also play YouTube and other songs, while still being a wonderful piece to look at. It will be a nice hybrid between analogue and digital, the past being brought into the future.</p>

<p>I have an old PC, a 486DX2/50MHz, lying here that will run a server on the network serving a real site. Obviously behind a TLS-offloading proxy, but it will still be serving the webpage. I hooked it up to my giant 65” Sony TV and I just loved the fact that I held 30-year-old tech that still worked and could fulfill a job today.</p>

<p>I also operated a saw that belonged to my great-grandfather, whom I&#39;ve met, and I learned that it might even have been my great-great-grandfather&#39;s. That tool has easily been in my family&#39;s hands for more than 100 years, and I intend to further that lifespan. It is wonderful that it still exists. It works beautifully, and you cannot buy a similar product anymore these days.</p>

<p>There is something beautiful in how old things keep finding purposes, and trying to get old things to fulfill a new purpose is fun to do. I do not think my dad in his 20s ever thought his son in his 30s would still use this. Who knows what kind of tech I will buy that my children will repurpose in the future.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/investing-in-the-past</guid>
      <pubDate>Thu, 05 Jan 2023 21:39:29 +0000</pubDate>
    </item>
    <item>
      <title>Getting MS-DOS on the web</title>
      <link>https://stealthycoder.writeas.com/getting-ms-dos-on-the-web?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I love thinking back on my childhood and the PCs we had then. The new tech that got released then into our lives that made it feel like we were an instant part of the sci-fi movies and series we were watching. Something about that moment where there was made a distinction between analogue past and digital future felt definitive, surreal and invigorating. !--more-- I love that in that period of time everyone had their own custom PC with all slightly different parts and personalities. I think it must have been the same for when the automobile arrived for personal use the first time. Everyone probably could feel the difference between horse and carriage and other types of transportation and the new car they got. The feeling of the world becoming smaller. &#xA;&#xA;Uniqueness&#xA;&#xA;I love the uniqueness of those eras pertaining to the products. Also the freedom and knowledge of understanding the product, making it easy to tinker with it and naturally making it a status symbol. If you know and understand more you are smarter, better, stronger and overall a much more interesting of a human being than those others that do not even know what a motherboard is. I thoroughly enjoyed the weekends pouring over catalogues with my father and just hand picking the best items for our custom built PC. Waiting for the parts to arrive, installing them, and finally after weeks and months of this process flicking the switch and hoping you did not screw up. It lights up, it shows a boot prompt and the BIOS screen. There was a moment of pure joy there. &#xA;&#xA;Networking&#xA;&#xA;The day we spend an entire afternoon installing Novell Netware and running a special cross data cable, which was a bright pinkish/orange, from one PC in my bedroom to the computer on the same floor in my father&#39;s office to finally copy a file over and seeing it being transferred was amazing. 
What was even more amazing was that my mother and sister had no clue what we were doing, let alone sharing in our joy. &#xA;&#xA;There was also another way of networking and that was not just through the newly arriving Internet but just locally with people. Knowing where to go to get the good games and items. Secrets on floppy disks that you could loan from the public library and also people that put viruses on them. &#xA;&#xA;First virus&#xA;&#xA;My first virus I do not know exactly how I got it, but I suspect the floppy disk that held Doom I got from the public library. I had a 486DX2/66MHz, 4Mb RAM, 20Mb hard disk and, the best feature, a SoundBlaster Pro sound card. I cannot remember the graphics, but I believe it was a standard VGA setup. Video cards were not a thing back then, as it was unaffordable and only ever used for businesses that worked in an industry that needed such a thing. &#xA;&#xA;I could not play a lot of games, but I could always shuffle around some games. That is what I did most of the time. I would have a lot of files on there. Most of them pictures I created, or some game or another. Of course Windows 3.11 took a lot of space too. A whopping 8Mb. So as I was in the process of swapping out I think Jill of the Jungle for Doom, suddenly my PC would not boot anymore normally.&#xA;&#xA;I rebooted, and it loaded up my prompt and all the letters starting falling down. I could not input any commands anymore. What ended up happening was that I had to loan another HDD to install MS-DOS there and &#34;fix&#34; my HDD by copying over all the unspoiled MS-DOS files. &#xA;&#xA;I installed a anti virus program for the first time ever and I ran it. It found some more things on my hard drive and I was wondering how long I had been infected for and where it came from. &#xA;&#xA;Bringing the past with me&#xA;&#xA;I wondered if it was possible to revisit that time in the present. 
I bought a 486DX2/50MHz this time and it got a bit better specs except of course a SoundBlaster sound card. It does have networking now though and I tested it with my local LAN network. I could copy a file over FTP from my Dell XPS super charged laptop from 2021 to a PC running MS-DOS 6.22 from 1993. I will tell you, the moment I got it all working I was jumping up and down again with joy. It instantly took me back to those olden days. &#xA;&#xA;Now my project for this particular machine is to get a small screen to hook up to the VGA port and use the Molex cable to DC to power said monitor. Then it is a self contained system. Like a giant Raspberry Pi. After that I want to get a HTTP server on there to serve some files and make it the actual server for a website for a company called Studio Vlegel. Then make a game and have my friends at Studio Vlegel make a piece on the casing of the PC. After that sell this artwork as a statement piece.&#xA;&#xA;Artwork&#xA;&#xA;Something about getting the maximum with minimal resources, forgotten tech and also not discarding old tech. At the same time it is a bit of a jab at modern hardware with all their specs (especially servers) and the thing they do can be solved with a desktop computer from the 90s. &#xA;&#xA;It is also an homage to the demo scene of yore. It still exists to some degree luckily and I want to sell the PC with the only copy of the game and source code of the website with server on there. Maybe we will sell floppy disks with the game on there as well in a limited fashion. &#xA;&#xA;Coding for DOS&#xA;&#xA;This gets me to coding for DOS. I never did that way back when. Now though I have a nice setup using Open Watcom C/C++ Version 2.0, picotcp4dos and just an editor. You can use VSCode, vim or emacs or anything else. I can now easily compile for DOS and successfully done so already. 
To be fair I think most would program direct in Assembly back then and I could still do that, but I am not that far advanced....yet_. &#xA;&#xA;The first time I compiled and shipped the binary to my actual hardware machine and seeing it work was amazing. &#xA;&#xA;So stay tuned and in the future there will be a server added to the statistics that has an OS tag MS-DOS 6.22. &#xA;&#xA;#100DaysToOffload #msdos #art]]&gt;</description>
      <content:encoded><![CDATA[<p>I love thinking back on my childhood and the PCs we had then. The new tech that was released into our lives made it feel like we were an instant part of the sci-fi movies and series we were watching. Something about that moment where a distinction was made between the analogue past and the digital future felt definitive, surreal and invigorating. I love that in that period of time everyone had their own custom PC with slightly different parts and personalities. I think it must have been the same when the automobile first arrived for personal use. Everyone probably could feel the difference between horse and carriage and other types of transportation, and the new car they got. The feeling of the world becoming smaller.</p>

<h2 id="uniqueness">Uniqueness</h2>

<p>I love the uniqueness of those eras pertaining to the products. Also the freedom and knowledge of understanding the product, making it easy to tinker with and naturally making it a status symbol. If you knew and understood more, you were smarter, better, stronger and overall a much more interesting human being than those others who did not even know what a motherboard was. I thoroughly enjoyed the weekends poring over catalogues with my father and hand-picking the best items for our custom-built PC. Waiting for the parts to arrive, installing them, and finally, after weeks and months of this process, flicking the switch and hoping you did not screw up. It lights up, it shows a boot prompt and the BIOS screen. There was a moment of pure joy there.</p>

<h2 id="networking">Networking</h2>

<p>The day we spent an entire afternoon installing Novell NetWare and running a special crossover data cable, which was a bright pinkish/orange, from one PC in my bedroom to the computer on the same floor in my father&#39;s office, to finally copy a file over and see it being transferred, was amazing. What was even more amazing was that my mother and sister had no clue what we were doing, let alone shared in our joy.</p>

<p>There was also another kind of networking, not through the newly arriving Internet but locally, with people. Knowing where to go to get the good games and items. Secrets on floppy disks that you could loan from the public library, and also people who put viruses on them.</p>

<h2 id="first-virus">First virus</h2>

<p>My first virus: I do not know exactly how I got it, but I suspect the floppy disk holding Doom that I got from the public library. I had a 486DX2/66MHz, 4MB RAM, a 20MB hard disk and, the best feature, a SoundBlaster Pro sound card. I cannot remember the graphics, but I believe it was a standard VGA setup. Dedicated video cards were not a thing back then, as they were unaffordable and only ever used by businesses in an industry that needed such a thing.</p>

<p>I could not play a lot of games, but I could always shuffle some games around. That is what I did most of the time. I would have a lot of files on there. Most of them pictures I created, or some game or another. Of course Windows 3.11 took a lot of space too. A whopping 8MB. So as I was in the process of swapping out, I think, Jill of the Jungle for Doom, suddenly my PC would not boot normally anymore.</p>

<p>I rebooted, and it loaded up my prompt and all the letters started falling down. I could not input any commands anymore. What ended up happening was that I had to borrow another HDD to install MS-DOS there and “fix” my HDD by copying over all the unspoiled MS-DOS files.</p>

<p>I installed an antivirus program for the first time ever and ran it. It found some more things on my hard drive, and I wondered how long I had been infected for and where it came from.</p>

<h2 id="bringing-the-past-with-me">Bringing the past with me</h2>

<p>I wondered if it was possible to revisit that time in the present. I bought a 486DX2/50MHz this time, and it has slightly better specs, except of course for the SoundBlaster sound card. It does have networking now though, and I tested it on my local LAN. I could copy a file over FTP from my supercharged 2021 Dell XPS laptop to a PC running MS-DOS 6.22 from 1993. I will tell you, the moment I got it all working I was jumping up and down with joy again. It instantly took me back to those olden days.</p>

<p>Now my project for this particular machine is to get a small screen to hook up to the VGA port and use a Molex-to-DC cable to power said monitor. Then it is a self-contained system. Like a giant Raspberry Pi. After that I want to get an HTTP server on there to serve some files and make it the actual server for the website of a company called <a href="https://studiovlegel.nl" rel="nofollow">Studio Vlegel</a>. Then make a game and have my friends at Studio Vlegel make a piece on the casing of the PC. After that, sell this artwork as a statement piece.</p>

<h2 id="artwork">Artwork</h2>

<p>There is something about getting the maximum out of minimal resources, about forgotten tech, and about not discarding old tech. At the same time it is a bit of a jab at modern hardware with all its specs (especially servers), when what it does could be handled by a desktop computer from the 90s.</p>

<p>It is also an homage to the demo scene of yore. Luckily it still exists to some degree, and I want to sell the PC with the only copy of the game and the source code of the website and server on there. Maybe we will sell floppy disks with the game on them as well, in a limited fashion.</p>

<h2 id="coding-for-dos">Coding for DOS</h2>

<p>This gets me to coding for DOS. I never did that way back when. Now though I have a nice setup using <a href="https://open-watcom.github.io/open-watcom-v2-wikidocs/c_readme.html" rel="nofollow">Open Watcom C/C++ Version 2.0</a>, <a href="http://picotcp4dos.sourceforge.net" rel="nofollow">picotcp4dos</a> and just an editor. You can use VSCode, vim, emacs or anything else. I can now easily compile for DOS and have already done so successfully. To be fair, I think most people programmed directly in assembly back then, and I could still do that, but I am not that far advanced....<em>yet</em>.</p>

<p>The first time I compiled and shipped the binary to my actual hardware machine and saw it work was amazing.</p>

<p>So stay tuned: in the future there will be a server added to the statistics with the OS tag MS-DOS 6.22.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:msdos" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">msdos</span></a> <a href="https://stealthycoder.writeas.com/tag:art" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">art</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/getting-ms-dos-on-the-web</guid>
      <pubDate>Sat, 07 Jan 2023 23:20:48 +0000</pubDate>
    </item>
    <item>
      <title>File sharing is difficult</title>
      <link>https://stealthycoder.writeas.com/file-sharing-is-difficult?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[You would think this takes place in the early days of computing, but no this takes place in 2023. I needed to share some files to help my brother to launch a new website. !--more-- I told him just get me the files and I will put them online. So you cannot just mail them, since they could be considered harmful/malicious files, or it is too big to send through the mail provider. Then you go to services like WeTransfer but then you have a limit on time to get the files, not only that if you change one file then you have to send it again. It gets very difficult to track what file belongs to what state unless you name them like that. &#xA;&#xA;Then there are other options like running a FTP server yourself to share them, but that is not very intuitive to non-tech people. So you have to use a service of some kind, we settled on Dropbox in this case. Now getting away from just helping out a friend. &#xA;&#xA;How about helping someone in my own household. I have some videos/pictures I want to send to another machine very easily and directly. I could use Bluetooth but it is horrible at file transfer specifically. The rest it does quite nicely. So if I transfer from my PC or device to another I usually just start a python3 -m http.server in the directory holding the files and then downloading the files I need. &#xA;&#xA;Every time I run into this problem I find myself asking why are these seemingly so simple and core functionalities so difficult to implement. Even if everyone would run Windows, it is not guaranteed to work because I have a Windows machine in my network and I cannot get it to share files with any other machine.&#xA;&#xA;Casual on the go sharing&#xA;&#xA;Let us say you want to quickly share a file with a friend you happened to run into. You cannot share it via Bluetooth you will be standing there for 10 minutes. You either have to e-mail it, but then what mail address? 
You have to send it through some sort of messaging service, but then do you have them in that particular service? &#xA;&#xA;Why can I not just drag and drop a file to this person&#39;s contact information and they will get the file? This should seem very easy to attain for a mobile phone OS. &#xA;&#xA;If there is another way like NFC to share the file instantly that would be wonderful, but I have not found it yet. This is not limited to files though, what about Spotify songs, YouTube videos, webcomics, or anything else digital you can think of. &#xA;&#xA;#100DaysToOffload #devlife]]&gt;</description>
      <content:encoded><![CDATA[<p>You would think this takes place in the early days of computing, but no, this takes place in 2023. I needed to share some files to help my brother launch a new website.  I told him to just get me the files and I would put them online. It turns out you cannot just mail them: they could be flagged as harmful/malicious files, or they are too big for the mail provider. Then you go to services like WeTransfer, but there you have a time limit to get the files, and on top of that, if you change one file you have to send everything again. It gets very difficult to track which file belongs to which state unless you encode that in the names.</p>

<p>Then there are other options, like running an FTP server yourself to share them, but that is not very intuitive for non-tech people. So you have to use a service of some kind; we settled on Dropbox in this case. But let us move beyond just helping out a friend.</p>

<p>How about helping someone in my own household? I have some videos/pictures I want to send to another machine easily and directly. I could use Bluetooth, but it is horrible at file transfer specifically; everything else it does quite nicely. So when I transfer from my PC or device to another, I usually just start a <code>python3 -m http.server</code> in the directory holding the files and then download the files I need from the other machine.</p>
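<p>The one-liner above can also be done from Python itself. Here is a minimal sketch (the function name and default port are my own choices for illustration, not anything from this post) that serves a directory in a background thread:</p>

```python
# A minimal sketch of the trick described above: start the equivalent of
# `python3 -m http.server` in a background thread, so another machine on
# the LAN can download files from the shared directory.
import threading
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def serve_directory(directory: str, port: int = 8000) -> ThreadingHTTPServer:
    """Serve `directory` over HTTP; returns the server so it can be shut down."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# server = serve_directory(".")   # share the current directory
# ...download the files from the other machine...
# server.shutdown()
```

<p>Run it on the sending machine, then fetch a file from the other machine with a browser or <code>wget http://&lt;lan-ip&gt;:8000/&lt;file&gt;</code>.</p>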

<p>Every time I run into this problem I find myself asking why these seemingly simple, core functionalities are so difficult to implement. Even if everyone ran Windows, it would not be guaranteed to work: I have a Windows machine in my network and I cannot get it to share files with any other machine.</p>

<h2 id="casual-on-the-go-sharing">Casual on the go sharing</h2>

<p>Let us say you want to quickly share a file with a friend you happened to run into. You cannot share it via Bluetooth; you would be standing there for 10 minutes. You could e-mail it, but to what mail address? You could send it through some sort of messaging service, but do you even have them on that particular service?</p>

<p>Why can I not just drag and drop a file onto this person&#39;s contact information and have them receive the file? This seems like it should be easy to attain for a mobile phone OS.</p>

<p>If there is another way, like NFC, to share a file instantly, that would be wonderful, but I have not found it yet. This is not limited to files either: what about Spotify songs, YouTube videos, webcomics, or anything else digital you can think of?</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/file-sharing-is-difficult</guid>
      <pubDate>Sat, 07 Jan 2023 23:51:08 +0000</pubDate>
    </item>
    <item>
      <title>Frontend dev is best dev</title>
      <link>https://stealthycoder.writeas.com/frontend-dev-is-best-dev?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So this has been the case for many a year now, that everything in JavaScript land or in this day and age, TypeScript land, is targeting the fact that the developer can use one codebase for everything frontend and backend. !--more-- This means you would use the same file for the definitions of the data models that both the NodeJS/Deno backend will use as well as the frontend to query/reason about those objects. Use a simple REST API to ship data back and forth and so on.&#xA;&#xA;I just have one thought/question; why?&#xA;&#xA;Why indeed&#xA;&#xA;Well it seems like more and more was pulled into the frontend as time passed on, mainly to free up the backend to do the real processing. Like number crunching, data processing and so on. Which makes sense, but now it is weird to say the state of an application should be as close to the user as possible. Also servers are now more than capable of doing both work, and therefore the natural conclusion is that since most of the things were in the frontend anyway now; why not move the whole thing to the frontend?&#xA;&#xA;The thing is that it makes the codebase more difficult to manage, you introduce unwanted complexity let alone making it secure. The whole codebase has so many extra dependencies and vulnerabilities since a lot of Server Side Template Injection (SSTI) plague the landscape now and have been fixed/found already in the old backend languages. &#xA;&#xA;Of course it seems convenient, but let&#39;s take the data models that you want to use. You define them either from the frontend perspective or backend perspective first. In the frontend you might only want first name and an e-mail address, for example. Then in the backend you want to have maybe something more than just first name and an email address. How about a password to login? Or a session id, or maybe just an id in general for the databases so you can store multiple combinations of people? 
Then you have to transform schemas or build them differently. So you get views or something equivalent and suddenly everything is back to this again.&#xA;&#xA;I would steer away from having the whole application in one language with one library/framework since nothing so far checks all the boxes and covers all the bases. &#xA;&#xA;#100DaysToOffload #frontend #JavaScript #devlife&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>So this has been the case for many a year now: everything in JavaScript land, or in this day and age TypeScript land, targets the idea that the developer can use one codebase for everything, frontend and backend.  This means you would use the same file for the definitions of the data models, which both the NodeJS/Deno backend and the frontend then use to query and reason about those objects. Use a simple REST API to ship data back and forth, and so on.</p>

<p>I just have one thought/question; why?</p>

<h2 id="why-indeed">Why indeed</h2>

<p>Well, it seems like more and more was pulled into the frontend as time passed, mainly to free up the backend to do the real processing: number crunching, data processing and so on. Which makes sense, but now, strangely, the claim is that the state of an application should live as close to the user as possible. Also, servers are now more than capable of doing both kinds of work, and therefore the natural conclusion is: since most things were in the frontend anyway, why not move the whole thing to the frontend?</p>

<p>The thing is that it makes the codebase more difficult to manage: you introduce unwanted complexity, let alone the effort of keeping it secure. The whole codebase gains so many extra dependencies and vulnerabilities, since a lot of Server Side Template Injection (SSTI) issues plague the landscape now that had already been found and fixed in the old backend languages.</p>

<p>Of course it seems convenient, but let&#39;s take the data models that you want to use. You define them from either the frontend perspective or the backend perspective first. In the frontend you might only want a first name and an e-mail address, for example. Then in the backend you want maybe something more than just a first name and an e-mail address. How about a password to log in? Or a session id, or maybe just an id in general for the database, so you can store multiple combinations of people? Then you have to transform schemas or build them differently. So you get views or something equivalent, and suddenly everything is back to this again.</p>
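<p>A tiny sketch of that split, in Python rather than TypeScript just to keep it self-contained (every class and field name here is hypothetical): the backend record carries fields the frontend must never see, so a projection layer shows up whether the languages are shared or not.</p>

```python
# A sketch (not any real project's code) of the model-duplication problem:
# the backend record holds auth/database fields the UI should never receive,
# so a "view" transformation appears anyway.
from dataclasses import dataclass

@dataclass
class UserRecord:
    # Backend model: everything the database and login flow need.
    id: int
    first_name: str
    email: str
    password_hash: str
    session_id: str

@dataclass
class UserView:
    # Frontend model: only what the UI actually shows.
    first_name: str
    email: str

def to_view(record: UserRecord) -> UserView:
    # The projection step you cannot avoid, shared codebase or not.
    return UserView(first_name=record.first_name, email=record.email)
```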

<p>I would steer away from having the whole application in one language with one library/framework since nothing so far checks all the boxes and covers all the bases.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:frontend" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">frontend</span></a> <a href="https://stealthycoder.writeas.com/tag:JavaScript" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">JavaScript</span></a> <a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/frontend-dev-is-best-dev</guid>
      <pubDate>Sun, 08 Jan 2023 23:48:30 +0000</pubDate>
    </item>
    <item>
      <title>Backend dev is best dev</title>
      <link>https://stealthycoder.writeas.com/backend-dev-is-best-dev?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So I see more and more these past months that everything should be done again in the backend. Meaning that there should be no more frontend interactivity. !--more-- The same that can be said from the frontend. There is this war going on between these two camps.&#xA;&#xA;Rust&#xA;&#xA;Rustlang seems to be the culprit ever since that relatively new language came to the scene of programming languages. Nowadays everything is in Rust. Every new project at least and they are using for example yew or maybe leptos. It aims at even rendering HTML tags as some sort of bizarre JSX deal. &#xA;&#xA;So we went from having everything in C for the browser. Code that could be made to fit onto a blackboard but was difficult to be approached by everyone. Slowly but surely more languages came and some were there to try to make C easier. Like PHP, which is essentially C for the web. Now more and more processing power came to the frontend and the backend has to do more and more since it has a lot of more responsibilities. So as time went on, showing the UI went to the frontend. Then maybe interactivity went to the frontend, then also login, state and pretty much everything you can think of. &#xA;&#xA;Now we are slowly but surely swinging back the other way. Having everything in C, I mean Rust. We are back right where we were 40 years ago. &#xA;&#xA;What a day to be alive. &#xA;&#xA;Carbon&#xA;&#xA;There exists a new language in town and it is called Carbon which I think is to be a stab at Rust, and at the very least a small attack. It aims to be a better C++ that supports the full stack experience. So naturally Google has to get its paws in this pie. &#xA;&#xA;It seems very odd to me that people condemn PHP for the silly wrong language it is, but seem to take to this Rust being a full stack environment like it is the next best thing and the way forward into the future. As if it is the best idea ever and why did nobody think of it before today? 
It is just old wine in new bottles.&#xA;&#xA;#100DaysToOffload #carbon #rust #backend]]&gt;</description>
      <content:encoded><![CDATA[<p>So I see more and more these past months that everything should be done in the backend again, meaning that there should be no more frontend interactivity.  It is the mirror image of what is being said from the frontend side. There is a war going on between these two camps.</p>

<h1 id="rust">Rust</h1>

<p>Rustlang seems to be the culprit, ever since that relatively new language came onto the scene of programming languages. Nowadays everything is in Rust, every new project at least, using for example <a href="https://github.com/yewstack/yew" rel="nofollow">yew</a> or maybe <a href="https://github.com/gbj/leptos" rel="nofollow">leptos</a>. These even aim at rendering HTML tags as some sort of bizarre JSX deal.</p>

<p>So we went from having everything in C for the browser: code that could be made to fit onto a blackboard, but was difficult for everyone to approach. Slowly but surely more languages came, and some were there to try to make C easier, like PHP, which is essentially C for the web. Then more and more processing power came to the frontend, while the backend had to do more and more since it picked up a lot more responsibilities. So as time went on, showing the UI went to the frontend. Then interactivity went to the frontend, then also login, state and pretty much everything you can think of.</p>

<p>Now we are slowly but surely swinging back the other way: having everything in C, I mean Rust. We are right back where we were 40 years ago.</p>

<p>What a day to be alive.</p>

<h1 id="carbon">Carbon</h1>

<p>There is a new language in town called <a href="https://github.com/carbon-language/carbon-lang" rel="nofollow">Carbon</a>, which I think is meant as a stab at Rust, or at the very least a small jab. It aims to be a better C++ that supports the full stack experience. So naturally Google has to get its paws in this pie.</p>

<p>It seems very odd to me that people condemn PHP for being the silly, flawed language it is, but take to Rust as a full stack environment like it is the next best thing and the way forward into the future. As if it is the best idea ever, and why did nobody think of it before today? It is just old wine in new bottles.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:carbon" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">carbon</span></a> <a href="https://stealthycoder.writeas.com/tag:rust" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">rust</span></a> <a href="https://stealthycoder.writeas.com/tag:backend" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">backend</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/backend-dev-is-best-dev</guid>
      <pubDate>Sun, 08 Jan 2023 23:56:13 +0000</pubDate>
    </item>
    <item>
      <title>There exists more than REST</title>
      <link>https://stealthycoder.writeas.com/there-exists-more-than-rest?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[So these days it seems there are too many choices, and the wrong simplification is being applied. Since everything  in the frontend and backend is being consolidated to a singular language/framework to solve the entire issue. !--more-- I think it should be pulled back a little further and also taken a bit broader. &#xA;&#xA;Remember REST?&#xA;&#xA;Remember when REST first was a big thing. The idea that you would have stateless calls to the backend that just ships data? That was revolutionary. Then all sites became REST API sites. The SPA got borne out of it. Then everything became an SPA. Whether that fit or not. Handling any type of state/session data became extremely difficult and also the calls to get data from the database through some reverse proxies to ultimately end up in the frontend after several transpiling steps made everything worse in most every way.&#xA;&#xA;There are many other ways to program the application. You had the MVC pattern, MVVM or any of the MV* patterns. These still hold value to this day. I think it is being skipped because it is so old and not new. Not exciting, yeah yeah tech is boring is good post coming. Well not really. Try to make tech exciting and challenging, but not just for the sake of making it difficult for you or following a trend that seems to be forming.&#xA;&#xA;Then there is the fact you do the entire thing in either the backend or frontend that seems to be prevailing now. Which brings with it this idea of following the caravan out into the desert where everyone thinks someone else will know the way to the oasis and they all end up dying in the desert because of it.&#xA;&#xA;I have been in so many projects and every time they reached for a web application with a login screen and everything being behind REST API. The next thing that always has to happen is how to secure the REST API? 
Then we are always building everything twice, the data models get created in the backend and then have to be exposed to the frontend and because that is in TypeScript we have the data model definitions (schemas) now twice. Once in the backend for let&#39;s say Java and Hibernate and in the frontend for Angular. After pointing this out, we still keep on the same train tracks. Why? Well we can charge the customer twice for the same work.&#xA;&#xA;Pick the right tool&#xA;&#xA;Just keep your options open and choose the right tool for the job. Not just take GraphQL which is nothing more than a database explorer disguised as a RPC framework. When I am working on my personal sites, I try to make it as simple as possible and not do double work. So a MVC app works perfect for me. &#xA;&#xA;#100DaysToOffload #devlife]]&gt;</description>
      <content:encoded><![CDATA[<p>So these days it seems there are too many choices, and the wrong simplification is being applied: everything in the frontend and backend is being consolidated into a singular language/framework to solve the entire issue.  I think we should pull back a little further and also look a bit broader.</p>

<h1 id="remember-rest">Remember REST?</h1>

<p>Remember when REST first was a big thing? The idea that you would have stateless calls to a backend that just ships data? That was revolutionary. Then all sites became REST API sites. The SPA was born out of it. Then everything became an SPA, whether that fit or not. Handling any type of state/session data became extremely difficult, and the calls to get data from the database, through some reverse proxies, to ultimately end up in the frontend after several transpiling steps, made everything worse in almost every way.</p>

<p>There are many other ways to program an application. You had the MVC pattern, MVVM, or any of the MV* patterns. These still hold value to this day. I think they are being skipped because they are so old and not new. Not exciting; and yes, yes, I know a &#39;boring tech is good&#39; post is coming. Well, not really. Try to make tech exciting and challenging, but not just for the sake of making it difficult for yourself or following a trend that seems to be forming.</p>

<p>Then there is the prevailing idea that you do the entire thing in either the backend or the frontend. Which brings with it the image of following the caravan out into the desert, where everyone thinks someone else knows the way to the oasis, and they all end up dying in the desert because of it.</p>

<p>I have been in so many projects, and every time they reached for a web application with a login screen and everything behind a REST API. The next thing that always has to happen is: how do we secure the REST API? Then we are always building everything twice; the data models get created in the backend and then have to be exposed to the frontend, and because that is in TypeScript we now have the data model definitions (schemas) twice: once in the backend, for let&#39;s say Java and Hibernate, and once in the frontend for Angular. After pointing this out, we still keep on the same train tracks. Why? Well, we can charge the customer twice for the same work.</p>

<h1 id="pick-the-right-tool">Pick the right tool</h1>

<p>Just keep your options open and choose the right tool for the job. Do not just grab GraphQL, which is nothing more than a database explorer disguised as an RPC framework. When I am working on my personal sites, I try to make things as simple as possible and not do double work, so an MVC app works perfectly for me.</p>
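<p>The MVC shape I mean can be sketched in a few lines of plain Python; no framework, and every name here is illustrative rather than any real library&#39;s API:</p>

```python
# A minimal sketch of the MVC split: model (data), view (server-rendered
# HTML), controller (wires the two together). No client-side JavaScript,
# no duplicated schemas.
from dataclasses import dataclass
from string import Template

@dataclass
class Post:
    # Model: the data, normally loaded from a database.
    title: str
    body: str

def render_post(post: Post) -> str:
    # View: turns the model into server-side HTML.
    return Template("<h1>$title</h1><p>$body</p>").substitute(
        title=post.title, body=post.body)

def show_post(post_id: int) -> str:
    # Controller: look up the model and hand it to the view.
    posts = {1: Post("Hello", "First post")}  # stand-in for a database
    return render_post(posts[post_id])
```

<p>The data model exists exactly once, which is the whole point compared to the REST-plus-SPA setup above.</p>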

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/there-exists-more-than-rest</guid>
      <pubDate>Mon, 09 Jan 2023 00:07:50 +0000</pubDate>
    </item>
    <item>
      <title>So it is just like JavaScript</title>
      <link>https://stealthycoder.writeas.com/so-it-is-just-like-javascript?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I saw this thing being advertised on HackerNews or some such similar thing. After checking it out, it was just a blatant lie and more than that, it is just trying to sell JavaScript and HTML as they are intended on being used as if they thought of this. !--more-- The library/product in question is this. &#xA;&#xA;Why does it even exist&#xA;&#xA;I am having a hard time figuring out why this even exists as being a thing. It sounds like someone is thinking highly of themselves and go this thing I thought of that nobody has done before needs to be shared with the world and it will make me famous. Then it is something that not only already existed but is used globally and it is the exact same idea that already exists but now somehow transformed into their own idea.&#xA;&#xA;It claims to transform a regular html site into a realtime application by loading in some JavaScript.&#xA;&#xA;That is exactly what JavaScript already does. Not only that, it is apparently realtime, which I think is definitely misused here. &#xA;&#xA;I just do not like these frameworks that try to claim they are superior and did something amazing, where in reality they did nothing really. &#xA;&#xA;Something impressive&#xA;&#xA;If you want to make something truly impressive, use the basic tools that are there and build upon it. Like using construction materials and tools to create a doll house for example. Not try and sell to people see this wood and tools, it can turn any room into one filled with a dollhouse, you just have to create it. &#xA;&#xA;#100DaysToOffload #devlife&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I saw this thing being advertised on HackerNews or somewhere similar. After checking it out, it was just a blatant lie; more than that, it is just trying to sell JavaScript and HTML as they are intended to be used, as if they had thought of this themselves.  The library/product in question is <a href="https://cocreate.app/docs/#introductions" rel="nofollow">this</a>.</p>

<h2 id="why-does-it-even-exist">Why does it even exist</h2>

<p>I am having a hard time figuring out why this even exists. It sounds like someone thinking highly of themselves and going: this thing I thought of, which nobody has done before, needs to be shared with the world, and it will make me famous. Then it turns out to be something that not only already existed but is used globally; the exact same idea that already exists, now somehow transformed into their own invention.</p>

<p>It claims to transform a regular HTML site into a realtime application by loading in some JavaScript.</p>

<p>That is exactly what JavaScript already does. Not only that, it is apparently &#39;realtime&#39;, a term I think is definitely misused here.</p>

<p>I just do not like these frameworks that claim they are superior and did something amazing, when in reality they did nothing new.</p>

<h2 id="something-impressive">Something impressive</h2>

<p>If you want to make something truly impressive, use the basic tools that are there and build upon them. Like using construction materials and tools to create a dollhouse, for example. Not trying to sell people: see this wood and these tools? They can turn any room into one with a dollhouse in it; you just have to build it.</p>

<p><a href="https://stealthycoder.writeas.com/tag:100DaysToOffload" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://stealthycoder.writeas.com/tag:devlife" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">devlife</span></a></p>
]]></content:encoded>
      <guid>https://stealthycoder.writeas.com/so-it-is-just-like-javascript</guid>
      <pubDate>Tue, 10 Jan 2023 09:12:07 +0000</pubDate>
    </item>
  </channel>
</rss>