The hidden maintenance cost: bitrot!

Sometimes it is very hard for customers to understand the hidden costs involved when you build custom software. By a hidden cost, I mean a phenomenon that is apparently back in our industry, called BitRot.

What is BitRot?

Back in the day, BitRot was caused by the fact that the magnetic media we used to store our computer programs sometimes lost their magnetic information, causing problems when reading the data back. With the industry moving to new ways of storing data, like solid-state drives, the problem is still there but no longer as visible. We have algorithms that store our data in such a way that lost data can be recovered, and from an end-user perspective, the problem seems to have vanished completely.

Data degradation – Wikipedia

Bitrot redefined

So although the original problem has more or less gone away, we are now confronted with a completely new way in which bitrot is coming back into our industry. You might disagree with me about reusing the name BitRot for something different, but in essence, the problem we face manifests itself in exactly the same way. If we don't touch the software we write for even a few weeks, the software deteriorates!

Let me explain what is going on here.

BitRot today is the issue that when we don't touch our software for a few days or weeks, the software deteriorates, caused by a number of sources. Let me share a few of the issues you will face when building software today.

New known vulnerabilities

A known vulnerability is a weakness, found in the software you wrote or in any software you used to build your application or website, that can be exploited by someone. You might think: why would my software all of a sudden become exploitable while I haven't touched it? This is caused by the fact that attackers become smarter each day. They find new, innovative ways to exploit software, and since the software we write often contains tons of code written not only by your own company but also in open-source components, there is a significant risk of your software becoming vulnerable itself. This does not immediately mean your software will be exploited, but the likelihood of your software becoming exploitable increases almost every day. This is something you need to keep track of, and you need to make updates or changes to your software to keep up with the current state of the industry. The number of vulnerabilities found is also increasing all the time. You can see how fast this is picking up in the graph published at NVD – CVSS Severity Distribution Over Time (nist.gov). I added a capture of the graph you can find there, which shows how many known vulnerabilities are found and that the rate at which they are found is constantly increasing.

NVD – CVSS Severity Distribution Over Time (nist.gov)

Updates of used frameworks or packages

Your software is rarely built from scratch. To build software, you use other software components all the time. Depending on the technology you use to write your programs, you use NuGet packages (.NET), Maven packages (Java), Node packages (Node/web development), RubyGems (Ruby development), etc. These packages are built and maintained by others. And, coming back to the previous topic, they also need to maintain their software to keep it free from known vulnerabilities. They also want to provide new capabilities and features constantly. This implies those packages get new versions all the time, and keeping up with those more recent versions is not to be taken lightly.
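As a quick illustration, the package managers themselves can report how far behind you are; for a .NET and npm based project, the standard tooling looks like this:

    # .NET: list NuGet packages that have newer versions or known vulnerabilities
    dotnet list package --outdated
    dotnet list package --vulnerable

    # Node: list outdated packages and audit the tree for known vulnerabilities
    npm outdated
    npm audit

Running these as part of your regular build is a cheap way to notice the decay before it piles up.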

Let's assume you build a simple Hello World web application using .NET 6 and React, to give a simple example. You can see a screenshot of what I did to create the application in Visual Studio 2022:

New ASP.NET with React project

This results in an astonishing set of 1,487 dependencies from the Node.js ecosystem and 15 more from the .NET ecosystem. Starting with a clean template (I updated everything before I created the application in Visual Studio), this already resulted in 23 known vulnerabilities, of which 9 are at the level of High!

Analysis of new project with Dependabot (GitHub)
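For reference, a minimal .github/dependabot.yml that enables this kind of scanning for both ecosystems could look like this (the weekly schedule here is just an example):

    # .github/dependabot.yml - let Dependabot raise pull requests for
    # the NuGet and npm dependencies of the sample project
    version: 2
    updates:
      - package-ecosystem: "nuget"
        directory: "/"
        schedule:
          interval: "weekly"
      - package-ecosystem: "npm"
        directory: "/"
        schedule:
          interval: "weekly"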

Updates on compilers and tooling

Then there are the dependencies on Visual Studio 2022, which has updates at least once every month. Then we took a dependency on the .NET framework, which is updated at least once per year and every two years has a new stable release that is supported for a maximum of three years. And finally, we took a dependency on the Node.js toolset, which is also updated multiple times a year. These tools also tend to make breaking changes. You need to constantly keep them up to date, because they can also contain known vulnerabilities that might compromise your development environment!

Newer versions of the languages and frameworks

Finally, there are also dependencies on the languages we used. In this example, I used React, which is JavaScript/TypeScript based, and C# 10 for the .NET codebase. C# is updated on a yearly cycle, and if you look at the versions of Node, you see you need to update every six months:

Releases | Node.js (nodejs.org)

Those language and framework updates can be significant if you look at the amount of work involved in actually using the new capabilities. Not using them still makes your codebase deteriorate from a maintainability perspective, since the industry is moving on with new ways of utilizing the language and framework features. New team members on a project will have a hard time adapting to old programming styles and inefficiencies if you only ensure the code compiles.

Conclusion

The software you build is in a constant state of decay, and you need to allocate a significant portion of time to keep things evergreen and up to date. Deferring your updates will cost you significantly more time than updating constantly. Here, too, the adage "if it hurts, do it more often" applies, and it makes the software delivery cadence more predictable and the software more secure. So make updating packages, frameworks, and languages part of the standard maintenance cycle!

The real challenge lies in making our customers aware of this problem and of the fact that they need to maintain their software. You cannot leave software untouched for a few weeks, since in the meantime the software becomes outdated and vulnerable. And last but not least, it becomes more complicated each day to bring it up to date again.

Walking with RD’s

The idea of Walking with RD's is that we provide a monthly videocast in which we, the RDs in the Netherlands, go for a walk (because that's one of the typical things we all do in these COVID times) and talk about a business-related topic. During this walk, we share our insights and discuss the challenges and opportunities we have faced during our careers, in the hope this provides valuable insights to you.

This first episode introduces the RD's in the Netherlands and the Regional Director program, which is not always known to everyone in the Microsoft ecosystem.

In the Netherlands, there are five Microsoft Regional Directors: Sjoukje Zaal, CTO at Capgemini; Maarten Goet, Director at Wortell; Andre Carlucci, Global Director of Application Engineering at Kinley; Maarten Eekels, Chief Digital Officer and Managing Partner at Portiva; and myself.

You can read more about the Microsoft Regional Director Program here; a short summary is below:

“The Regional Director Program provides Microsoft leaders with the customer insights and real-world voices it needs to continue empowering developers and IT professionals with the world’s most innovative and impactful tools, services, and solutions.”

The topic we discuss in this episode is:

'Working from home' while still performing: how do we deal with this, and what has worked very well for ourselves and the teams we work with?

This walk also gave me some new insights into how we can work from home and help and support our teams to stay healthy and sane. I really hope you will enjoy this episode. Please let me know if you have additional tips and tricks you would like to share, so we can all learn from each other and come out of this pandemic better than when we started.

And I want to thank Maarten Goet for arranging the creation of the leader and the editing of this video!

Configure your ASP.NET Core application to use HTTPS that runs as a container in a Kubernetes cluster

In my course "Deploying ASP.NET Core Microservices Using Kubernetes and AKS", which you can find @pluralsight, I discuss the use of SSL inside and outside the Kubernetes cluster.

In this course, I made the choice to have no HTTPS between the services inside my cluster, but only to have SSL on the web frontend. In my demos, however, I did not address how I set up SSL on the website that I deploy to the cluster. In this article, I want to give you some pointers on how to make this work.

What do I need to get SSL on the website?

First of all, you need a certificate file that can be used. Since I host the web endpoint directly as a Kubernetes service, without the use of any ingress controller, the only way to make this work is either by putting something in front of the website that handles SSL (e.g., a web application firewall), or by handling SSL in the pod itself. You can also delegate this work to ingress in the cluster, but I did not use that in this scenario. If you want to use ingress and set up SSL that way, you can find other articles, like here, that you can follow. In this article, I will go through the steps to handle the SSL communication directly with the Kestrel server that runs inside the pod.

Setting up Kestrel to use the production certificate

Because we set up SSL without the use of the ingress controller, the requests will be handled by the actual pods that host the website. For this, we need to make some tweaks to the ASP.NET Core website startup, so it accepts the certificate file I have for the domain globoticket.com.

Get a certificate file that can be used to configure SSL for production on ASP.NET Core

ASP.NET Core runs on the Kestrel server when you build the website in a Docker container. You can configure the Kestrel server to use a certificate to host the website on the domain name you have for your production environment. For this, I used the capabilities you can find in Azure to create a so-called App Service Certificate. This is normally used to create a certificate that you bind to an App Service. In my case, I just wanted to get the certificate and export it, so I can deploy it to the Kubernetes cluster as a secret and then use it in the container to configure the server.

When you have created the App Service Certificate and imported it into a Key Vault, you can export it as a PFX file using a few lines of PowerShell. There is an article you can find here that contains a PowerShell script you can use.
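The gist of that script is as follows; a minimal sketch, assuming the Az PowerShell module, with placeholder vault, secret, and password names:

    # Key Vault stores the certificate as a base64 string without a password;
    # re-export it as a password-protected PFX file (all names below are placeholders)
    $base64 = Get-AzKeyVaultSecret -VaultName 'my-keyvault' -Name 'globoticket-cert' -AsPlainText
    $flags = [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable
    $collection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
    $collection.Import([Convert]::FromBase64String($base64), $null, $flags)
    $pfxType = [System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx
    $pfxBytes = $collection.Export($pfxType, 'YourPfxPassword')
    [IO.File]::WriteAllBytes("$PWD\globoticket.pfx", $pfxBytes)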

Now that we have a PFX file, we can use it to configure the server. For this, we have two options. One is to make a change to the codebase and configure Kestrel to use the PFX file with a password that you provide. My first idea was to use this and provide the PFX file as a file mounted to the pod, but this was more challenging than I thought. There is no simple way to just upload the PFX file to the cluster as a secret and then mount it to the pod, because you cannot set a secret to just contain a binary file. Instead, I was required to convert the PFX file to a base64-encoded string, which I can upload as a secret to the cluster.

To create a base64-encoded string from the PFX file, you can use a few lines of PowerShell like the following (assuming the exported file is called globoticket.pfx):
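    # Read the PFX as raw bytes and convert it into a base64 string that can be
    # stored as a regular (text) value in a Kubernetes secret
    $bytes  = [IO.File]::ReadAllBytes("$PWD\globoticket.pfx")
    $base64 = [Convert]::ToBase64String($bytes)
    Set-Content -Path "$PWD\globoticket.pfx.base64" -Value $base64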

Because I now have a base64-encoded string instead of a PFX file, which ASP.NET Core does not understand by default, I also needed to change a bit of startup code. At startup, I save the PFX file to local storage and then configure Kestrel, using environment variables, to pick up the certificate file and the password.

For this, I leverage the environment variables ASPNETCORE_Kestrel__Certificates__Default__Password and ASPNETCORE_Kestrel__Certificates__Default__Path. When these are configured, the server will pick up the PFX file and configure SSL automatically, without any additional code changes.

The way I solved this is by creating a new environment variable that I called base64pfxfile, giving it the value that I first pushed to the cluster as a secret. This can be done with the following configuration:
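Roughly like this, assuming the secret is named globoticket-tls with a key pfx-base64 (both names are placeholders):

    # Fragment of the container spec in the Kubernetes deployment manifest
    env:
      - name: base64pfxfile
        valueFrom:
          secretKeyRef:
            name: globoticket-tls
            key: pfx-base64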

This keeps the config file clean of any secrets, and it passes the base64-encoded string to the container as an environment variable.

I had to change the startup code of the ASP.NET website to decode the string, save it to a local file, and then configure the environment variable ASPNETCORE_Kestrel__Certificates__Default__Path to point to that file. The password is passed in the configuration in the same way: I created a secret and pass that secret as the value of the environment variable ASPNETCORE_Kestrel__Certificates__Default__Password.

The startup code was changed as follows:
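What follows is not the verbatim code, but a minimal sketch of the idea for a .NET 6 web project (implicit usings enabled): decode the base64 string from the environment, save it to local storage, and point Kestrel's default-certificate path at the resulting file before the host is built.

    // Program.cs - decode the certificate before the host reads its configuration
    var base64Pfx = Environment.GetEnvironmentVariable("base64pfxfile");
    if (!string.IsNullOrEmpty(base64Pfx))
    {
        // Decode the secret and save the certificate to local storage
        var pfxPath = Path.Combine(Path.GetTempPath(), "globoticket.pfx");
        File.WriteAllBytes(pfxPath, Convert.FromBase64String(base64Pfx));

        // Kestrel picks this up, together with the password supplied via
        // ASPNETCORE_Kestrel__Certificates__Default__Password
        Environment.SetEnvironmentVariable(
            "ASPNETCORE_Kestrel__Certificates__Default__Path", pfxPath);
    }

    var app = WebApplication.CreateBuilder(args).Build();
    app.MapGet("/", () => "Hello over HTTPS!");
    app.Run();

The order matters: the environment variable must be set before WebApplication.CreateBuilder runs, because that is when the default configuration reads the ASPNETCORE_ variables.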

The way to create the secrets is as follows:
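A sketch with kubectl, using the same placeholder names as above:

    # Secret holding the base64-encoded PFX (consumed by the base64pfxfile variable)
    kubectl create secret generic globoticket-tls --from-file=pfx-base64=./globoticket.pfx.base64

    # Secret holding the certificate password (consumed by
    # ASPNETCORE_Kestrel__Certificates__Default__Password)
    kubectl create secret generic globoticket-tls-password --from-literal=password='YourPfxPassword'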

Now we have all the ingredients to make it work.

Next, you can push the changes to the Azure DevOps or GitHub repo and let the GitHub Actions workflow or Azure DevOps pipeline run the build and deployment to the cluster.

The final step was to configure the domain name globoticket.com to point to the IP address that is used by the service endpoint and browse to the website.


Hope this helps with setting up SSL on your externally exposed website endpoint. I made the choice to do it directly in the pod; of course, there are also other ways. You can also set up the ingress controller to take care of the SSL termination and then pass the request through to the web pod.

How to Fix: Login failed for SA when running MS SQL Server on Linux in a Docker container

TL;DR: It seems the documentation on how to start the Linux-based SQL Server container contains a bug! The documentation states you need to start the container using the following command line:
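Reconstructed from the documentation of that time (the image tag may differ):

    docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest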

But this is wrong, because it will not set the password for SA.

It took me hours to discover that the environment variable actually used to set the SA password is 'MSSQL_SA_PASSWORD'.

So when you use the following command line, it just works:
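    # Identical to the documented command, with only the variable name changed
    docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest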

Hopefully this will save you from pulling your hair out and searching for hours. I filed this as a bug; hopefully the docs or the scripts are updated soon: GitHub issue.

Steps to reproduce:

  • Run the command-line as advertised in the documentation to start the container.
  • Run a docker exec command to run a bash shell in the container interactively
  • Run the following command:
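A sketch of those last two steps, assuming the container is named sql1:

    # Open an interactive bash shell in the container (the name is an assumption)
    docker exec -it sql1 bash

    # Inside the container, try to log in with the SA account
    /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'yourStrong(!)Password'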

This will give you an error along the following lines:
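    Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Login failed for user 'SA'.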

Now remove the container and try again, but replace -e 'SA_PASSWORD=yourStrong(!)Password' with -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password'.

Try the same steps again, and voilà, you are connected to the SQL Server.
