Today I learned/remembered that to format a number with a delimiter, we can use ActiveSupport::NumericWithFormat#to_s(:delimited).
```ruby
require 'active_support'
require 'active_support/core_ext'

123456789.to_s(:delimited) # => "123,456,789"
```

Not only that, but this method also provides other formats and takes options to tweak its behavior.
```ruby
123456789.to_s(:delimited, delimiter: '-') # => "123-456-789"
123456789.to_s(:currency, precision: 3)    # => "$123,456,789.000"
123456789.to_s(:human_size)                # => "118 MB"
123456789.to_s(:human)                     # => "123 Million"
```

Originally, when I needed to format a number as a delimited number, I would reach for number_to_delimited, provided by ActiveSupport::NumberHelper. But to_s is handy in places where the helper is not available by default, such as in a serializer class for ActiveModel::Serializer. Calling this method with a format is the same as calling ActiveSupport::NumberHelper.number_to_#{format}.
In software development, a feedback loop that takes time and effort can be a major bottleneck. This is especially true when we have to run things on remote servers and figuring out what's going on takes a lot of effort; debugging becomes frustrating. Today, I'm going to introduce a couple of tools that help you debug GitHub Actions workflows from your local machine.
Act

With the act command, you can run workflows on your local machine. When you run act, it reads the YAML files under .github/workflows and immediately runs the jobs in Docker containers. We can check whether there is any major problem in the YAML without committing and pushing to GitHub. Although act does not provide a 100% compatible environment with GitHub Actions, it helps a lot when you start writing workflow configuration. For example, it is useful for finding errors and mistakes in the YAML file structure, in dependencies between jobs, in calls to actions, and in steps written in shell scripts.
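To make this concrete, here is a sketch of the basic invocations (based on act's CLI; the job id "test" is a hypothetical example, and flags may differ between versions, so check `act --help`):

```shell
# Run these inside a repository that has .github/workflows/.
act -l             # list the jobs act parsed from the workflow files
act                # run jobs triggered by the default "push" event
act pull_request   # simulate a pull_request event instead
act -j test        # run only the job whose id is "test" (hypothetical id)
```

Because the jobs run in local Docker containers, a broken YAML structure or a misnamed job surfaces in seconds instead of after a commit-push-wait cycle.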
When we have to process a large number of files, we should be aware of the resources the process consumes. Here's the scenario I faced at work recently: we have to create one archive file from lots of files stored in an S3 bucket, and the created archive files must be placed in another S3 bucket. The number of files in an archive could be 700,000, the size of each file is up to 10MiB, and the size of an archive file will be less than 140GiB.
Stripe provides an easy way to implement subscription services. Their documentation explains how Stripe subscriptions work and what you can do with them. They also provide an example of implementing fixed-price subscriptions, like Netflix. You can try the example repository on GitHub and see the whole lifecycle of a subscription. You don't even have to set up an environment for the example app if you have Docker on your computer; Docker is the only dependency.
Sometimes I want to combine the outputs of multiple processes into one single output. Let's say we have to aggregate the logs of the K8s pods in a deployment. We could do that by redirecting each output to a file and following the file with tail -f, like below.
```shell
tempfile=$(mktemp /tmp/rip-example.XXXXXXXXXX)
trap "rm \"$tempfile\"" EXIT

pods=$(kubectl get pods | grep Running | grep -oe "app-[a-z0-9\-]\+")
for pod in $pods
do
  kubectl logs -f "$pod" >> "$tempfile" &
done

tail -f "$tempfile"
```

However, this method consumes disk space. Even worse, all the output data remains on the local disk if the process is killed with signal 9 (SIGKILL), because the EXIT trap never gets a chance to run. There must be a better way. That's where named pipes come into play.
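Here is a minimal sketch of the same aggregation using a named pipe (FIFO) instead of a regular file; the kubectl commands are carried over from the snippet above:

```shell
# mktemp -u generates a unique name without creating a file,
# so mkfifo can create the FIFO node at that path.
pipe=$(mktemp -u /tmp/rip-example.XXXXXXXXXX)
mkfifo "$pipe"
trap 'rm -f "$pipe"' EXIT

pods=$(kubectl get pods | grep Running | grep -oe "app-[a-z0-9\-]\+")
for pod in $pods
do
  # Each writer blocks until the reader opens the pipe,
  # then streams through a kernel buffer -- not the disk.
  kubectl logs -f "$pod" > "$pipe" &
done

# A single reader drains everything the writers send.
cat "$pipe"
```

Unlike the temp-file version, a named pipe holds no data at rest: even after a SIGKILL, at most an empty FIFO node is left behind, not gigabytes of logs.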