Author Archives: Antranig Vartanian

About Antranig Vartanian

Doing things @ illuria, Inc. Unix, BSD, InfoSec, Elixir/Erlang, DNS, XMPP. Mostly harmless.

Fortune in Times of Need

I was setting up Huginn on FreeBSD and needed to do some manual testing of commands before automating them; one of them was using twurl to tweet. When I tried to tweet in Armenian, the terminal just rang the bell at me. I realized I needed to change the locale.

When I opened another shell to change the locale, FreeBSD’s fortune printed the following:

In order to support national characters for European languages in tools like
less without creating other nationalisation aspects, set the environment
variable LC_ALL to 'en_US.UTF-8'.

Ah, thank you!
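
In my case that meant setting the variable in the shell I was about to tweet from; a quick sketch, where the exact syntax depends on your shell:

# POSIX-style shells (sh, bash, zsh)
export LC_ALL=en_US.UTF-8

# csh/tcsh
setenv LC_ALL en_US.UTF-8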

By the way, if you ever saw a fortune that you liked and need it again later, but don’t remember the details, you can run fortune -m pattern freebsd-tips. Here’s an example:

% fortune -m USB freebsd-tips
%% (freebsd-tips)
If you need to create a FAT32 formatted USB thumb drive, find out its devicename
running dmesg(8) after inserting it. Then create an MBR schema, a single slice and
format it:

# gpart create -s MBR ${devicename}
# gpart add -t fat32 ${devicename}
# newfs_msdos -F 32 -L thumbdrive ${devicename}s1

                -- Lars Engels <lme@FreeBSD.org>

Cheers.

Reply via email.

Music Monday: gorgeouz beats

I saw Rubenerd’s silly milestone of 8,000 blog posts and the feedback about it, which got me thinking: even if I don’t blog what’s on my mind due to lack of time and skill, I can still blog about things that I like. Following the concept of Music Monday, I’ll also be doing Citing Saturday, where I cite whatever I find interesting, be it technical, political or otherwise.

So, two posts a week (at minimum); that’s 2×52 = 104 posts in a year.

Am I trying to reach a number? No. I’m just trying to build a framework where I can be more… consistent.

So, Monday it is. I’d like to present an artist I found out about late last year and whose work I’ve been buying from the iTunes Store ever since. Please welcome gorgeouz beats, a musical experimental laboratory.

Here are my favorites:

Reply via email.

ZFS compression is so good that it cost me 2 hours

So we have this build machine (build0) where we build FreeBSD in Jails; we then mount the src and obj dirs via NFS, or sync them with rsync, to the destination hosts so we can run make installworld on not-so-powerful servers.

A couple of days ago we had a network issue at the data center: the switches crashed and we had to reboot them. It turned out I was running rsync on one of our servers at the time, so I decided to make sure the files had been copied.

Like a lazy sysadmin, I run the following commands on both the build0 server and the remote host.

root@build0:~ # du -h -d 0 /usr/local/jails/f130/usr/obj/
 13G    /usr/local/jails/f130/usr/obj/

root@illuriasecurity:~ # du -h -d 0 /usr/obj/
5.5G    /usr/obj/

Hmm, maybe the files were not copied properly? So I remove the obj dir and rsync again.

Looks like the size is 5.5G AGAIN!

So I do a little bit of piping!

root@build0:/usr/local/jails/f130/usr/obj # find . | sort > /tmp/obj_build0.txt

root@illuriasecurity:/usr/obj # find . | sort > /tmp/obj.txt

zvartnots:~ $ scp illuria:/tmp/obj.txt  /tmp/
zvartnots:~ $ scp build0:/tmp/obj_build0.txt /tmp/

zvartnots:~ $ diff /tmp/obj.txt /tmp/obj_build0.txt

Um, no difference?

Looks like the size reported by du was… confusing?

Okay, let’s check the manual of du(1):

     -A      Display the apparent size instead of the disk usage.  This can be
             helpful when operating on compressed volumes or sparse files.

Oops, looks like ZFS compression is enabled on my machine…

Let’s try this again!

root@build0:~ # du -h -d 0 -A /usr/local/jails/f130/usr/obj/
 12G    /usr/local/jails/f130/usr/obj/

root@illuriasecurity:~ # du -h -d 0 -A /usr/obj/
 12G    /usr/obj/

Ok! This makes more sense 🙂

Let’s also check with ZFS.

root@illuriasecurity:~ # zfs get compression zroot/usr
NAME       PROPERTY     VALUE     SOURCE
zroot/usr  compression  lz4       inherited from zroot
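
If you want to see how much space compression is actually saving, ZFS can report that directly; a quick check on the same dataset (property names are standard ZFS, the dataset is from my setup):

# "used" is on-disk (compressed) usage, "logicalused" is the uncompressed size
zfs get used,logicalused,compressratio zroot/usr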

I wonder what the build0 server is doing?

root@build0:~ # zfs get compression zroot/usr
cannot open 'zroot/usr': dataset does not exist

Huh o.O? Oh yeah, I wonder.

root@build0:~ # mount | grep ' / '
/dev/ufs/rootfs on / (ufs, local, journaled soft-updates)

Okay, this makes much more sense now 🙂

That’s all folks!

Reply via email.

Techlife Crisis

This is another migration story, like the one I wrote back in 2020. Unlike that one, though, the motivation for this migration is emotional rather than technical.

Last year a friend of mine got a new job that I referred her to. She passed the interviews and I helped her get onboarded, as the employer was a friend of mine and I was pretty familiar with their product. The job was remote and she didn’t have a good laptop. Since I have many laptops, I ended up giving her my ThinkPad T480s, on which she ran Ubuntu. As you can tell, the employer was a VERY close friend of mine 🙂

All of this meant that I moved back to my MacBook Pro running macOS. I used to like macOS; for me it was always a rock-solid UNIX system with a proper graphical interface.

Unfortunately, these days the UNIX part is not so solid anymore and the graphical interface is more iOS-y eye candy than a proper desktop interface.

But I was okay with that, since I spent most of my time in a terminal running vim, ssh, etc. I’d run typical work apps like Mail.app with GPGSuite and a Slack browser client.

But then something snapped in me. I think it was after the car accident. I spent two weeks at home, unable to work, so I started coding on my open-source projects again, writing some patches for FreeBSD, improving code in software that I like and so on.

I realized that I’ve been an Open Source advocate for years, and yet I was in the Apple ecosystem. Not that I don’t like the Apple ecosystem, don’t get me wrong, but as someone who’s been telling the government to use open source, helping them migrate, giving lectures to students about the open source movement and its history, I felt… bad.

I had this MacBook Pro and this iPhone, both of which control me more than I can control them.

I contacted my friend again, asking if we could swap laptops, and she said yes. She actually ended up working at our company, and now she has a fancy new MacBook Pro while I’m back on my lovely ThinkPad T480s running FreeBSD, like I wanted in the first place.

As I mentioned, this time it hit me hard, so I decided to escape non-OSS things completely, and now I’m running a Pixel 2 with LineageOS.

There’s a whole story about how I got that Pixel 2 in this day and age, and it’s coming soon. And the funniest thing is, as soon as I completed my migration to Open Source, I got the news that Apple Pay will finally work in Armenia.

Open Source changed my life when I was a kid in Syria; I learned more about computers because of it, and while I got distracted by the cute and shiny macOS for a while, it’s time to come back home.

Here’s a screenshot

That’s all folks!

Reply via email.

Wrong Indicators

Bryan Cantrill has this amazing talk about debugging where he tells the story of Three Mile Island.

After watching that talk all I thought was “well, let’s hope this doesn’t happen in my life”, and by “my life”, I meant my personal or work server, not my AFK life!

55 days ago my girlfriend and I moved to a new apartment downtown in the capital. I like everything about this place, especially that many things are electric, including the stove.

Like a sane person, when I see a stove with multiple levels (1, 2, 3) I assume that the lowest number is the lowest heat and the highest number is the highest.

Now you’d think and say, “Antranig, didn’t you notice that your cooking was taking 2 hours, so something must be wrong?”

Oh no, my friend, very much no. As you can see, we have two stove tops, a small one and a big one. Now, the small one works just fine: at the highest level it heats more than at the lowest level.

But the big one, the big stove top, not so much.

We thought there was a problem with that top and only used it during slow emergencies.

One day I came home from work and Lilith was laughing. I asked “what happened?” and she replied, “you’re not gonna believe this!”

Well, she was right; it’s been a couple of days now and I still can’t believe it. I mean, if both of the stove tops were in reverse order, I would understand: someone was very Unix-y and wanted to design it like nice(1), where lower numbers mean higher priority.

But when the two knobs are the exact opposite of each other, it makes you think, “why me?”

Why me indeed.

That’s all folks!

Reply via email.

2021 Retrospective

Recently Rubenerd blogged about Reading people’s blog archives, and I remembered that I used to do that when I had nothing better to do. One of my favorites was Norayr’s yearly retrospective. Let’s try doing the same here.

I’m happy that I didn’t do this last year, as 2020 was the year of pain, loss and war.

2021, on the other hand, was the year of happiness, gain and love.

Let’s start 🙂

Personal life

  • I finally had a proper vacation
  • I moved with my girlfriend, Coffee, to a new apartment!
    • It has an amazing view of our lovely city, Yerevan 🙂
  • Had a car accident around October
    • No major injuries, fully recovered
  • My ideological crisis made me rethink my technological choices

Technology

  • I moved to open-source Android, LineageOS, running on my Google Pixel 2 phone.
    • You’d think that now I’ve completely ditched Apple, but that’s not true.
    • A friend of mine needed a laptop, so I moved back to my MacBook Pro and gave her my ThinkPad T480s
  • I learned JavaScript and started writing VueJS at work
  • Wrote new FreeBSD Jail orchestration software named Jailer. I planned on open-sourcing it in the summer, but with the lack of hands I hope it will be released in Spring 2022
    • I moved all my servers to Jailer
  • I ditched music streaming services and I’m more than happy with this choice
  • I moved all my networks to WireGuard and I’m happy with that choice too!

Blogging

  • I wrote 65 entries, which makes me sad. I hope I will blog a lot more in 2022
    • I wrote a blog post about blogging regularly where I mentioned Jamie Zawinski. Then I emailed that blog post to him. When I was drunk. I also mentioned that I was drunk. Good thing he replied 🙂
  • 3 of my entries were posted on the orange site, with many people commenting on them
  • I added a comment section to my website using isso
  • Reading more about blogging led me to learn about Dave Winer
    • Which led me to learn about OPML and oldSchoolBlog, which I still don’t know how to use or even if I should
  • I redid the theme of my website, basing it on the Archie theme written by Athul
  • I automated my writing process using Shell Scripts

Community

  • I organized the first Systems We Love — Armenia meetup
    • The plan was to make it a monthly thing, but life happened
  • For years I dreamed about an Armenian Open-Source Ecosystem; as of 2021, we have one
  • We went to the city of Stepanavan and organized a GradaranCamp (LibraryCamp) at the Stepanavan public library. Unlike me, my friends continue organizing it
  • I started using Twitter Spaces, which led me to talk with my idols, including Bryan Cantrill. His talks have shaped my methodology and ideology for years. I don’t have the words to describe how happy I am about this. I kinda still don’t believe it 🙂

Talks

  • I gave 8 talks this year, including but not limited to
    • Portland Linux/Unix Group Online Meeting: 360Cloud based on FreeBSD
    • BarCamp 2021: FreeBSD Cloud
    • BarCamp 2021: Open-Source ecosystems and Armenian tech communities
    • BarCamp 2021: Information Security Panel
    • PYerevan #15 Meetup: Tracing Production
    • ArmSec 2021: Tracing hackers for fun and profit
    • ArmSec 2021: DNSSEC in Armenia

Work

I hope 2022 brings the best to all of us.

That’s all folks

Reply via email.

WireGuard “dynamic” routing on FreeBSD

I originally wrote about this on my Armenian blog when ISPs started blocking DNS queries during and after the war. I was forced to use 9.9.9.9, 1.1.1.1, 8.8.8.8 or some other major DNS resolver. For me this was a pain because I was not able to dig +trace, and I dig +trace a lot.

After some digging (as mentioned in the Armenian blog post) I was able to figure out that this affects only home users. Luckily, I also run servers at my home, and the ISPs were not blocking anything on those “server” ranges, so I set up WireGuard.

This post is not about setting up WireGuard, there are plenty of posts and articles on the internet about that.

Over time my network became larger, and I also started having servers outside of my network. One of the quick (and probably wrong) ways of restricting access to those servers was to allow traffic only from my own network.

I have a server that acts as a WireGuard VPN peer and does the NAT-ing. So the easiest way for me to start accessing my restricted servers is to run route add restricted_server_addr -interface wg0.

Turns out I needed to write some code for that, which I love to do!

Any time I need to set up a WireGuard VPN client I go back to my Armenian post and read it, so now I’ll blog about how to do dynamic routing with WireGuard so I can read it whenever I need to. I hope it comes in handy for you too!

Now, let’s assume you need to add a.b.c.d to your routes. Usually you’d do route add a.b.c.d -interface wg0, but this would not work, since your WireGuard configuration has a line that says

[Peer]
AllowedIPs = w.x.y.z/24

Which means, even if you add the route, the WireGuard application/kernel module will not route those packets.

To achieve “dynamic” routing we could do

[Peer]
AllowedIPs = 0.0.0.0/0

This, however, will route ALL your traffic via WireGuard, which is also something you don’t want; you want to add routes at runtime.

What we could do, however, is ask WireGuard (well, wg-quick) to NOT add the routes automatically. Here’s how.

[Interface]
PrivateKey      = your_private_key
Address         = w.x.y.z/32
Table           = off
PostUp          = /usr/local/etc/wireguard/add_routes.sh %i
DNS             = w.x.y.1

[Peer]
PublicKey       = their_public_key
PresharedKey    = pre_shared_key
AllowedIPs      = 0.0.0.0/0
Endpoint        = your_server_addr:wg_port

The two key points here are Table = off, which tells wg-quick not to add the routes automatically, and PostUp = /usr/local/etc/wireguard/add_routes.sh %i, which runs a script that does add them; %i is expanded to the WireGuard interface name (could be wg0, could be home0, depending on your configuration).

Now for add_routes.sh we write the following.

#!/bin/sh

# The interface name is passed in by wg-quick via %i
interface="${1}"

# One network or host per line
networks="
w.x.y.0/24
restricted_server_addr/32
another_server/32
"

for _n in ${networks}; do
    route -q -n add ${_n} -interface ${interface}
done

And we can finally do wg-quick up server0.conf
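
If you also want the routes cleaned up when the tunnel goes down, wg-quick supports a PostDown hook; you could add something like PostDown = /usr/local/etc/wireguard/delete_routes.sh to the [Interface] section, where delete_routes.sh is a hypothetical twin of add_routes.sh. A minimal sketch:

#!/bin/sh

# delete_routes.sh (hypothetical) -- remove the routes added by add_routes.sh
networks="
w.x.y.0/24
restricted_server_addr/32
another_server/32
"

for _n in ${networks}; do
    route -q -n delete ${_n}
done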

If you need to add another route while WireGuard is running, you can do

route add another_restricted_server -interface wg0

Okay, what if you need to route everything while WireGuard is running? Well, that’s easy too!

First, find your default gateway.

% route -n get default | grep gateway
    gateway: your_gateway

Next, add a route for your endpoint via your current default gateway.

route add your_server_addr your_gateway

Next, add TWO routes for WireGuard.

route add 0.0.0.0/1     -interface wg0
route add 128.0.0.0/1   -interface wg0

So it’s the two halves of the Internet 🙂
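
If you do this often, the three steps above can be wrapped into a small script; a minimal sketch, using a hypothetical route_all.sh that takes your WireGuard endpoint address and, optionally, the interface name:

#!/bin/sh
# route_all.sh -- send all traffic through an already-running WireGuard tunnel
# usage: route_all.sh your_server_addr [interface]

endpoint="${1}"
interface="${2:-wg0}"

# Find the current default gateway
gateway=$(route -n get default | awk '/gateway:/ { print $2 }')

# Keep reaching the WireGuard endpoint via the old default gateway
route -q -n add ${endpoint} ${gateway}

# Route the two halves of the Internet through the tunnel
route -q -n add 0.0.0.0/1   -interface ${interface}
route -q -n add 128.0.0.0/1 -interface ${interface}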

That’s all folks!

Reply via email.

Car accident and pain

So I had a car accident two weeks ago, and like any sane person my first action was to tweet about it. It’s good to let people know that you will be offline for a couple of weeks.

There were no serious injuries to the bone, but my muscles still hurt. The main reason I wanted to go offline was so I wouldn’t become an asshole or make mistakes while in pain.

The downtime was good. I was able to spend some time contributing to some personal and open-source projects that I care about.

At this point I’m back on track, and hopefully I’ll blog about some upcoming personal and $WORK news soon.

That’s all folks…

Reply via email.

VoidLinux in FreeBSD Jail; with init

Two important things happened this week for me.

First, Faraz asked me if I could rename my Jail manager to something other than Jailio, because he had already gotten that domain for his own Jail manager. So I renamed mine to Jailer.

Second, I was able to run a complete Linux system using Jailer. While the repo for Jailer is not released yet (we are auditing for possible security issues), I would like to share how I was able to run VoidLinux in a Jail.

Since Jailer is not announced yet, I will give the examples using jail.conf, as most people are, or should be, familiar with its concepts.

I went with VoidLinux because I can run its init process without it needing to run as PID 1.

Let’s start, shall we?

First, a ZFS dataset for our jail!

zfs create zroot/jails/voidlinux
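
The rest of this post assumes the dataset ends up mounted at /usr/local/jails/voidlinux. If your zroot/jails dataset is not mounted under /usr/local/jails, you could create it with an explicit mountpoint instead; for example:

# create the dataset with an explicit mountpoint (alternative to the command above)
zfs create -o mountpoint=/usr/local/jails/voidlinux zroot/jails/voidlinux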

Next we need to fetch the base system of VoidLinux. Luckily they do provide it on their website.

fetch https://alpha.de.repo.voidlinux.org/live/current/void-x86_64-ROOTFS-20210218.tar.xz

Now we can extract this into our dataset

tar xf void-x86_64-ROOTFS-20210218.tar.xz -C /usr/local/jails/voidlinux/

You might get an error that ./usr/bin/iputils-ping: Cannot restore extended attributes: security.capability, which is fine, I think?

If you are on FreeBSD 12.2-RELEASE or later, you now need to enable the Linuxulator.

service linux enable; service linux start
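
This enables the linux rc service (so it comes back after a reboot) and loads the Linuxulator kernel modules. Under the hood it is roughly equivalent to:

sysrc linux_enable="YES"   # what `service linux enable` does
service linux start        # loads linux.ko, linux64.ko and friends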

Now you can at least chroot into the system.

chroot /usr/local/jails/voidlinux/ /bin/bash

If everything is fine until now, perfect.

Now we need to add a root user into the system.

root@host:~ # cd /usr/local/jails/voidlinux/etc/
root@host:/usr/local/jails/voidlinux/etc # echo "root::0:0::0:0:Charlie &:/root:/bin/bash" > master.passwd
root@host:/usr/local/jails/voidlinux/etc # pwd_mkdb -d ./ -p master.passwd
pwd_mkdb: warning, unknown root shell

Execute the rest of the commands inside the Void chroot.

root@host:~ # chroot /usr/local/jails/voidlinux/ /bin/bash
bash-5.1# cd /etc/
bash-5.1# pwconv 
bash-5.1# grpconv 
bash-5.1# passwd 
New password: 
Retype new password: 
passwd: password updated successfully
bash-5.1# exit

If all went fine, then the system is ready to be run as a Jail!

First we need to make an fstab for the system.

Create a file at /usr/local/jails/voidlinux/etc/fstab.pre and insert the following inside

devfs       /usr/local/jails/voidlinux/dev      devfs           rw                      0   0
tmpfs       /usr/local/jails/voidlinux/dev/shm  tmpfs           rw,size=1g,mode=1777    0   0
fdescfs     /usr/local/jails/voidlinux/dev/fd   fdescfs         rw,linrdlnk             0   0
linprocfs   /usr/local/jails/voidlinux/proc     linprocfs       rw                      0   0
linsysfs    /usr/local/jails/voidlinux/sys      linsysfs        rw                      0   0
/tmp        /usr/local/jails/voidlinux/tmp      nullfs          rw                      0   0

Next, let’s create a loopback interface for networking. Oh yes, VNET is not supported yet, but I’m working on a patch 🙂

ifconfig lo1 create
ifconfig lo1 inet 10.10.0.1/24 up # sorry, 10.0.0.0/24 was unavailable :P
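
To make the cloned interface survive a reboot, you can persist it in rc.conf; a quick sketch using sysrc:

sysrc cloned_interfaces+="lo1"
sysrc ifconfig_lo1="inet 10.10.0.1/24"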

Okay, time to create our Jail conf!

exec.clean;
allow.raw_sockets;
mount.devfs;

voidlinux {
    $id     = "1";
    $ipaddr = "10.10.0.42";
    $mask   = "255.255.255.0";
    $domain = "srv0.bsd.am";
    devfs_ruleset  = 4;
    allow.mount;
    allow.mount.devfs;
    mount.fstab = "${path}/etc/fstab.pre";

    exec.start     = "/bin/sh /etc/runit/2 &";
    exec.stop      = "/bin/sh /etc/runit/3";


    ip4.addr      = "${ipaddr}";
    interface     = "lo1";
    host.hostname = "${name}.${domain}";
    path = "/usr/local/jails/voidlinux";
    exec.consolelog = "/var/log/jail-${name}.log";
    persist;
    allow.socket_af;
}
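
With the configuration saved (I’m assuming /etc/jail.conf here), the jail can be created with jail(8):

jail -c -f /etc/jail.conf voidlinux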

Let’s check?

# jls
   JID  IP Address      Hostname                      Path
     1  10.10.0.42      voidlinux.srv0.bsd.am         /usr/local/jails/voidlinux

And the process tree?

# ps auxd -J voidlinux
USER   PID %CPU %MEM  VSZ  RSS TT  STAT STARTED    TIME COMMAND
root 35182  0.0  0.1 2320 1428  -  SsJ  21:09   0:00.12 runsvdir -P /run/runit/runsvdir/current log: ot set SO_PASSCRED: Protocol not available\ncould not set SO_PASSCRED: Protocol
root 35190  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty6
root 35397  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty6 38400 linux
root 35191  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty1
root 35396  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty --noclear tty1 38400 linux
root 35192  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty5
root 35398  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.01 `-- agetty tty5 38400 linux
root 35193  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty2
root 35393  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty2 38400 linux
root 35194  0.0  0.1 2168 1396  -  RsJ  21:09   0:00.12 - runsv udevd
root 35195  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty3
root 35394  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty3 38400 linux
root 35196  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty4
root 35390  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty4 38400 linux

You may jexec now 🙂

# jexec voidlinux /bin/bash
bash-5.1# uname -a
Linux voidlinux.srv0.bsd.am 3.2.0 FreeBSD 12.2-RELEASE-p6 GENERIC x86_64 GNU/Linux

Let’s check networking?

bash-5.1# ping -c 1 10.10.0.1
ping: WARNING: setsockopt(ICMP_FILTER): Protocol not available
PING 10.10.0.1 (10.10.0.1) 56(84) bytes of data.
64 bytes from 10.10.0.1: icmp_seq=1 ttl=64 time=0.069 ms

--- 10.10.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms

There you go! Well, things that are related to netlink might not work, but other than that it’s okay.

I did have some problems while installing packages, something about too many levels of symbolic links. Here’s the exact output when I was trying to install the curl package

[*] Unpacking packages
libev-4.33_1: unpacking ...
ERROR: libev-4.33_1: [unpack] failed to extract file `./usr/lib/libev.so.4': Too many levels of symbolic links
ERROR: libev-4.33_1: [unpack] failed to extract files: Too many levels of symbolic links
ERROR: libev-4.33_1: [unpack] failed to unpack files from archive: Too many levels of symbolic links
Transaction failed! see above for errors.

I have not found the time to fix this yet, but if you have any ideas, please let me know or comment below 🙂

So, what do we have here? A Linux Jail running VoidLinux, with init (so you can also run services), and basic networking for it.

That’s all folks…

Reply via email.

Linux is dead, long live the Docker monoculture

Full Disclosure: while reading this blog post, please put yourself in my shoes. You’ve been looking around for a simple monitoring solution, and you found some. None of them work, because you use an operating system that Apple, WhatsApp, Netflix and many others rely on, but developers assume that everyone, everywhere, runs either macOS or Linux. And that they all use Docker.

A while back Rubenerd wrote that he’s not sure UNIX won, and how Linux created a monoculture where everything is assumed to run on Linux.

For me, this was not much of a problem: I can run Linux binaries on FreeBSD; I even watch Netflix using the Linuxulator.

But now things are on another level, WAY another level.

I have a simple monitoring setup using cron, Grafana, InfluxDB and ping. It basically pings my servers and sends me a Telegram message if they are down.
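
The ping-and-alert part can be as small as a cron-driven shell script; here’s a minimal sketch of the idea, with a hypothetical host list and placeholder Telegram bot token and chat id (the real setup also pushes results to InfluxDB for Grafana):

#!/bin/sh
# check_hosts.sh -- run from cron; ping each host and alert via Telegram on failure
# BOT_TOKEN and CHAT_ID are placeholders for your Telegram bot credentials.

BOT_TOKEN="123456:ABC-DEF"
CHAT_ID="1111111"
hosts="srv0.example.org srv1.example.org"

for h in ${hosts}; do
    if ! ping -c 3 -q "${h}" > /dev/null 2>&1; then
        curl -s "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
            --data-urlencode "chat_id=${CHAT_ID}" \
            --data-urlencode "text=${h} is down"
    fi
done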

I set that up years ago, but now I have more public facing infrastructure that other people use as well, such as an Armenian Lobsters instance, Jabber.am, a WriteFreely instance and more.

As a self-respecting Ops person, I wanted to make a simple dashboard so my users can see the uptime status of these services as well. That way they won’t bug me asking if something is not working; they will SEE that the SSL/TLS certificate is expired, or that there’s a network issue, or that the server is down.

<rant>

So I started hunting on the internet for some software that does just that.

The first one that came to mind was Gatus. I’ve used Gatus before, for one of my clients, and I like it a lot. It’s simple, and it does what it’s supposed to do.

As a sane person, I fetched the code from GitHub using fetch, extracted the tarball and ran make. Nothing happened. Let’s look at the Makefile, shall we?

Docker executed in Make

Oh boy, if only, only, I had Docker, all my problems would be solved. First of all, let’s talk about the fact that this Makefile is used as a… script. There are no dependencies in the targets!

Okay, let’s read that Dockerfile. Executing the scripts inside it should help out, aye?

# Build the go application into a binary
FROM golang:alpine as builder
RUN apk --update add ca-certificates
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -a -installsuffix cgo -o gatus .

# Run Tests inside docker image if you don't have a configured go environment
#RUN apk update && apk add --virtual build-dependencies build-base gcc
#RUN go test ./... -mod vendor

# Run the binary on an empty container
FROM scratch
COPY --from=builder /app/gatus .
COPY --from=builder /app/config.yaml ./config/config.yaml
COPY --from=builder /app/web/static ./web/static
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENV PORT=8080
EXPOSE ${PORT}

There are multiple things wrong with this, in my opinion.

First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.

Second, I thought that running go build without GOOS=linux would solve all of my problems. I was wrong, very wrong.

root@mon:~/gatus/gatus-2.8.1 # env CGO_ENABLED=0 go build -mod vendor -a -installsuffix cgo -o gatus .
package github.com/TwinProduction/gatus
        imports github.com/TwinProduction/gatus/config
        imports github.com/TwinProduction/gatus/storage
        imports github.com/TwinProduction/gatus/storage/store
        imports github.com/TwinProduction/gatus/storage/store/sqlite
        imports modernc.org/sqlite
        imports modernc.org/libc
        imports modernc.org/libc/errno: build constraints exclude all Go files in /root/gatus/gatus-2.8.1/vendor/modernc.org/libc/errno

Okay, check this out, the package is called modernc.org/sqlite and it says:

Package sqlite is a CGo-free port of SQLite.

SQLite is an in-process implementation of a self-contained, serverless, zero-configuration, transactional SQL database engine.

Of course it is. Looks like I have to port all of this to FreeBSD. Which, don’t get me wrong, I’m okay with doing, but I thought we had POSIX for a reason. Not so much.

Okay, I’m an open-source guy; I’ll spend some time this weekend porting this to FreeBSD. In the meantime, let’s look for another solution!

Here’s another one: it’s called statping, also written in Go, and the readme is so promising.

No Requirements

Statping is built in Go Language so all you need is the precompile binary based on your operating system. You won’t need to install anything extra once you have the Statping binary installed. You can even run Statping on a Raspberry Pi.

Sounds good! Let’s try it out.

Again, I fetch the tarball, I extract it and I run make.

apt executed in Make

Of course it requires apt! Because not only do we all run Linux, but we all run a specific distribution of Linux with a specific package manager.

While I was tweeting in anger, Daniel pointed out that I should tell them kindly and it’ll work out. I’m sure it will. Let’s hope I can make it work first; I don’t like just opening issues, I’d rather send a patch directly.

</rant>

Overall, now I understand why most *BSD folks use, what’s the word here? ah, yes, old-school software on their systems, like Nagios and the rest.

The developers of the New World Order will always assume you are running Linux, specifically Ubuntu, and that you always have Docker.

Hopefully this weekend I will be able to port this software to FreeBSD; otherwise I will just use the Linux compatibility layer.

Like Rubenerd said, I am thankful that the mainstream-ness of Linux has helped other Unix systems as well, but monocultures are destroying what people have spent years improving.

Hopefully, next week, I will write a blog post on how to fix these issues and how I got all of those up and running.

That’s all folks…

Reply via email.