Tag Archives: HowTo

Mirroring OmniOS: The Complete Guide; Part One

Chapter Ⅰ

I know that “Complete Guide” and “Part One” are a bit of an oxymoron, but hey, be happy that I’m publishing in parts; otherwise I’d have ignored this blog post completely.

Two weeks ago I decided to play with illumos again. I was speaking with a friend and we were sharing our frustrations regarding Open-Source contribution. We write the code, we submit, we get feedback, we submit again, and then we’re ghosted. It’s like the LinkedIn or Tinder version of Software Engineering.

Then I asked him about his best open-source experience and he told me “illumos of course!”.

I was amazed. I thought you had to be very technical in order to even build illumos, but it turns out they have amazing documentation on building illumos, and OmniOS (an illumos distribution) has done work to make sure that the system can be self-hosted (i.e. the OS can build itself).

So, I decided to fire up OmniOS in a bhyve VM on our hackerspace server running FreeBSD.

The installation went smoothly, but the IPS packages were slow to download, and I might be wrong (please correct me if I am) but IPS doesn’t seem to be keeping a local copy of the files. It always downloads. Is that configurable?

Regardless. I thought that the best way to contribute is to advocate. In order to do that I needed to make sure that IPS servers are fast in Armenia. Hence the mirroring project started.

Obey!

Requirements

Here is some terminology that I will use in this blog post, just so we are on the same page.

  • OmniOS: an illumos distribution
  • Origin: OmniOS’s IPS servers at pkg.omnios.org
  • Local: A copy of the Origin
  • Repository: A collection of software
  • Core: The Core Repository of OmniOS
  • Extras: The Extra Repository of OmniOS
  • IPS or PKG: The Image Packaging System and its utility, pkg
  • Zone: an illumos Zone (similar to FreeBSD Jails, Linux Containers, chroot) running on OmniOS

Now that we are on the same page, let’s talk about our setup and what we need.

  • An internet connection: duh!
  • A domain name: I decided to use pkg.omnios.illumos.am. Yes, I’m lucky like that.
  • A publicly accessible IP address.
  • A server: I am running OmniOS Stable (r151048) inside a VM. You can use bare-metal or a cloud VM if you want.
  • Storage: I am currently using around 50GB of storage; expect that to grow to around 300GB when we get to Part Three.

Pre-Mirroring Setup

Before we set up our mirror, let’s make sure that we have good infrastructure that we can maintain.

Here’s what we’ll create

  • A Zone that will act as the HTTP(s) server using nginx at IP address 10.10.0.80
  • A Zone that will do the mirroring using IPS tools at 10.10.0.51
  • A virtual dumb switch (etherstub) that will connect the Zones and the Global Zone (a.k.a. the Host) together. The GZ will have an address of 10.10.0.1
  • ZFS datasets for each Core and Extras Repository (for each release)

Please note that there are many ways to do this, for example, having everything in a Global Zone, running IPS mirroring and nginx in a single Zone, not using etherstub at all, etc. But I like this setup as it will allow us to “grow” in the future.

From now on, omnios# means that we’re in the Global Zone and zone0# means we’re inside a Zone named zone0.

Let’s start with setting up our etherstub and connecting our Global Zone to it

omnios# dladm create-etherstub switch0
omnios# dladm create-vnic -l switch0 vnic0
omnios# ipadm create-if vnic0
omnios# ipadm create-addr -T static -a 10.10.0.1/24 vnic0/switch0

Done!
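If you want to double-check the plumbing, dladm and ipadm can show the new link and address (purely a verification step; nothing else to configure here):

omnios# dladm show-link
omnios# ipadm show-addr vnic0/switch0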

Now we will set up our Zones using the zadm utility. Install zadm by running

omnios# pkg install zadm

After installing zadm, we’ll create a dataset for our Zones

omnios# zfs create -o mountpoint=/zones rpool/zones

This assumes that your ZFS pool is named rpool.

Finally, we can create our Zones. Running

omnios# zadm create -b pkgsrc www0

will open your $EDITOR, where you need to modify some JSON. Here’s what mine looks like:

{
   "autoboot" : "true",
   "brand" : "pkgsrc",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.80/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "www0"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "www0",
   "zonepath" : "/zones/www0"
}

After saving the file, zadm will install the Zone.

Now let’s set up our mirroring Zone. Do the same, but change the Zone name to repo, the brand to lipkg (and -b lipkg), and set the IP address to 10.10.0.51/24.
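For reference, here’s roughly what that ends up looking like (a sketch derived from the www0 config above; adjust to taste):

omnios# zadm create -b lipkg repo

{
   "autoboot" : "true",
   "brand" : "lipkg",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.51/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "repo"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "repo",
   "zonepath" : "/zones/repo"
}

Save the file and zadm will install this Zone as well.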

All we need now is to forward the HTTP/HTTPS traffic to the www0 Zone and allow all Zones to access the internet using NAT.

Create and edit IPFilter’s NAT file at /etc/ipf/ipnat.conf. Here’s an example configuration:

map vioif0 10.10.0.0/24 -> 212.34.250.10

rdr vioif0 212.34.250.10/32 port 80 -> 10.10.0.80 port 80 tcp
rdr vioif0 212.34.250.10/32 port 443 -> 10.10.0.80 port 443 tcp

Make sure you set the correct interface name and the correct external IP address.
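If IPFilter isn’t already running in the Global Zone, you’ll also need to enable its SMF service so the NAT rules actually get loaded; ipnat -l then shows what’s active (a sketch using the stock illumos FMRI):

omnios# svcadm enable network/ipfilter
omnios# ipnat -l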

Finally, we can boot our Zones!

omnios# zadm boot www0
omnios# zadm boot repo

You should see something like the following when you run zadm again:

omnios# zadm
NAME              STATUS     BRAND       RAM    CPUS  SHARES
global            running    ipkg        56G      12       1
repo              running    lipkg         -       -       1
www0              running    pkgsrc        -       -       1

Great! Let’s set up the mirroring process.

Mirroring Setup

Let’s create ZFS datasets for each release’s repos:

repo# zfs create -o mountpoint=/repo rpool/zones/repo/ROOT/repo      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/core 
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/extra

And then we use the pkgrepo command to create a repository

repo# pkgrepo create /repo/r151048/core
repo# pkgrepo create /repo/r151048/extra

And finally, we can start receiving the packages from Origin to Local

repo# pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*'
repo# pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*'

This will take a while depending on your internet connection speed and the load on OmniOS’s Origin. It’s like a good investment: we spend bandwidth and time now so we save traffic and time later 🙂

After it’s done, we need to set the publisher prefix of these repos to match the Origin.

repo# pkgrepo set -s /repo/r151048/core   publisher/prefix=omnios
repo# pkgrepo set -s /repo/r151048/extra/ publisher/prefix=extra.omnios

And we’re done!

Now we need to serve these repos using IPS’s depot server.

We will create two instances of the depot server (pkg.depotd), one for core and one for extra.

  • r151048/core will run on 5148
  • r151048/extra will run on 1148
  • (in the future) r151050/core will run on 5150
  • (in the future) r151050/extra will run on 1150

We start with core

repo# svccfg -s pkg/server add r151048_core
repo# svccfg -s pkg/server:r151048_core addpg pkg application
repo# svccfg -s pkg/server:r151048_core setprop pkg/inst_root = /repo/r151048/core/
repo# svccfg -s pkg/server:r151048_core setprop pkg/port = 5148
repo# svccfg -s pkg/server:r151048_core setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/core

And we do the same for extra

repo# svccfg -s pkg/server add r151048_extra
repo# svccfg -s pkg/server:r151048_extra addpg pkg application
repo# svccfg -s pkg/server:r151048_extra setprop pkg/inst_root = /repo/r151048/extra/
repo# svccfg -s pkg/server:r151048_extra setprop pkg/port = 1148
repo# svccfg -s pkg/server:r151048_extra setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/extra

Finally, we enable the services

repo# svcadm enable  pkg/server:r151048_core pkg/server:r151048_extra
repo# svcadm restart pkg/server:r151048_core pkg/server:r151048_extra

Let’s check!
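One way to do that (a sketch; if curl isn’t available in the Zone, pkg install web/curl first) is to make sure the SMF instances are online and that each depot answers on its port:

repo# svcs pkg/server:r151048_core pkg/server:r151048_extra
repo# curl -s http://localhost:5148/versions/0/
repo# curl -s http://localhost:1148/versions/0/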

We’re good! Now let’s set up nginx 🙂

The Web Server

This part is pretty easy: we log into www0, install nginx, and set up some paths. I will be posting a copy-pasta of my configs; I assume you can do the rest 🙂

www0# pkgin update
www0# pkgin install nginx

Thank you SmartOS! 🧡

In my nginx.conf, I added

include vhosts/*.conf;

and then in /opt/local/etc/nginx/vhosts I created a file
named pkg.omnios.illumos.am.conf, which looks like this

server {
        listen 80;
        server_name pkg.omnios.illumos.am;

        location /.well-known/acme-challenge/ {
          alias /opt/local/www/acme/.well-known/acme-challenge/;
        }

        location / {
            return 301 "https://pkg.omnios.illumos.am";
        }
}

server {
    listen       443 ssl;
    server_name  pkg.omnios.illumos.am;

    ssl_certificate      /etc/ssl/pkg.omnios.illumos.am/fullchain.pem;
    ssl_certificate_key  /etc/ssl/pkg.omnios.illumos.am/key.pem;
    location /r151048/core/ {
                proxy_pass http://10.10.0.51:5148/;
    }

    location /r151048/extra/ {
                proxy_pass http://10.10.0.51:1148/;
    }

    location / {
        # This needs to be changed, later...
        add_header Content-Type text/plain;
        return 200 "ok...";
    }
}

Finally, we just need to enable nginx

www0# svcadm enable pkgsrc/nginx

and check!
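And by “check” I mean something like the following from any machine (a sketch; it assumes DNS already points at your server and the TLS certificate from the config above is in place):

curl -s https://pkg.omnios.illumos.am/r151048/core/versions/0/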

Using the Local Repos

This part is actually pretty easy. We just need to remove the existing origins and add our own. I will be running this on a computer named dna0.

dna0# pkg set-publisher -M '*' -G '*' omnios
dna0# pkg set-publisher -M '*' -G '*' extra.omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/core omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/extra extra.omnios
dna0# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
extra.omnios                origin   online F https://pkg.omnios.illumos.am/r151048/extra/
omnios                      origin   online F https://pkg.omnios.illumos.am/r151048/core/

We’re good! 🙂

Fetching Updates

By the time I wanted to publish this I noticed that there’s a new OmniOS Weekly Update, so I thought, hey, maybe I should try updating the Local Repo as well… how do we do that?

Turns out I just need to pkgrecv again, and then run a refresh command.

pkgrecv -v -s https://pkg.omnios.org/r151048/core/ -d /repo/r151048/core/ '*'
pkgrepo -s /repo/r151048/core refresh

And it looks like we’re good! Maybe we can set up a simple cron job 🙂
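Here’s a minimal sketch of such a job (the script name, path, and schedule are my own made-up choices; adjust the release and repo paths to match your layout):

#!/bin/sh
# /root/bin/mirror-sync.sh: refresh the local r151048 repos from Origin
set -e

for repo in core extra; do
    pkgrecv -s https://pkg.omnios.org/r151048/${repo}/ -d /repo/r151048/${repo}/ '*'
    pkgrepo -s /repo/r151048/${repo} refresh
done

A root crontab entry like 0 4 * * * /root/bin/mirror-sync.sh would then run it nightly.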

Final Notes

This has been an amazing experience. Since I started using OmniOS two weeks ago, I’ve set up the mirror, installed two production OmniOS deployments for two organizations, and talked about it during our Armenian Hackers Radio podcast. With this mirror completely set up, I can advocate even more!

I’d like to send my thanks (and later, my money) to the OmniOS team for the amazing work they’re doing; special thanks to andyf for answering all of my questions, to neirac for pushing me to try more illumos in my life, and to everyone who contributed to the docs and blog posts that I used. I’ll leave some links below.

Finally, for the coming (two) posts I will talk about mirroring downloads.OmniOS.org (for ISO/USB/ZFS images) and the pkgsrc repository run by SmartOS/MNX.

Thank you for reading, and thank you, illumos community, for being so nice ^_^

That’s all folks…

Links


FreeBSD arm64.aarch64 on QEMU/UTM with better (but not perfect) graphics

A week ago I posted about Running arm64.aarch64 FreeBSD on QEMU/UTM.app on Apple Silicon, and it looks like

  1. Many people liked that post
  2. Everyone asked about running graphics (Xorg)

It took me a while but in the end it was, again, a simple change.

All you have to do is add this single line to /boot/loader.conf:

efi_max_resolution="1920x1080"

Now, QEMU’s display will not be 1080p, but it will be the following

VT(efifb): resolution 1024x768

Here are some screenshots

Here’s also Firefox doing an HTML5 test. As you can see, it passed the exam!

However, I’d like to get more resolution out of this. If you know how, please let me know.

That’s all folks…


Running arm64.aarch64 FreeBSD on QEMU/UTM.app on Apple Silicon

Around a year ago I got an M1 MacBook Air for work. At this point, a lot of people that I know use these Apple Silicon machines.

While my personal machine is running FreeBSD, many times I’ve been in a situation where I need to run FreeBSD on my M1 MacBook Air, at least as a Virtual Machine.

For 9 months I’ve been running the AMD64 version of FreeBSD on QEMU/UTM.app using emulation. It gets the job done.

But whenever I want to do FreeBSD development, I need a fast machine. While M1 is pretty fast, VM emulation is still slow.

The problem is that whenever I booted the arm64.aarch64 FreeBSD on QEMU, it would use so much CPU on the host that my battery would die in an hour or so.

After a lot of searching, I finally found this, this and this, which eventually got me to this page on the handbook

1. Set Boot Loader Variables
The most important step is to reduce the kern.hz tunable to reduce the CPU utilization of FreeBSD under the Parallels environment. This is accomplished by adding the following line to /boot/loader.conf:

kern.hz=100

Without this setting, an idle FreeBSD Parallels guest will use roughly 15% of the CPU of a single processor iMac®. After this change the usage will be closer to 5%.

Configuring FreeBSD on Parallels

So I tried that, and here you go!

Ahh, finally, I can do some work.

That’s all folks…


Meta-programming in Shell

Wikipedia defines meta-programming as:

programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyze or transform other programs, and even modify itself while running

Uncle Wiki

I had to write a “framework” at work where a shell program would run other shell programs “dynamically”. Let’s dig in!

As I mentioned in my earlier post Two Colons Equals Modules, you can “emulate” modules and functions in Shell (at least in FreeBSD’s /bin/sh) by using ::, so it would be module::function

Here we will do the same; however, we will do hook::module.

The goal is to have a Shell program that would take a pid as an argument and do something with that PID, say print a group of information, maybe use DTrace to trace it, etc.

Let’s start by writing our main program.

#!/bin/sh
set -m

usage()
{
  echo "${0##*/} pid"
}

# print usage if argc < 1
[ "${#}" -lt "1" ] && usage && exit 1

# load scripts
load_scripts()
{
  for ctl in ./*.ctl.sh;
  do
    . "${ctl}"
  done
}

# stop the runner by killing the PIDs
runner_stop()
{
  IFS=":"
  for pid in $1;
  do
    kill $pid
  done
  exit
}

# Stop the runner if user sends an input
# This is useful if the runner is executed via a controller
wait_input()
{
  read command
  runner_stop ${PIDS}
}

# a.k.a. main()
runner_start()
{
  # make sure the process exists
  _pid="$1"
  ps -p "${_pid}" 1>/dev/null
  [ $? != 0 ] && exit 2

  # initiate scripts
  load_scripts

  # change IFS to :
  # loop over $SCRIPTS and execute the add hook
  IFS=":"
  for ctl in ${SCRIPTS};
  do
    add::${ctl} "${_pid}"
  done

  # now that we know the commands, loop over them too!
  # inside the loop set IFS to "," to set args
  for cmd in ${COMMAND};
  do
    IFS=","
    set -- "${cmd}"
    run::$1 $2
  done

  # Add trap for signals
  trap "runner_stop ${PIDS}" EXIT SIGINT SIGPIPE SIGHUP 0
  # reset IFS
  unset IFS
  wait_input
}

RUNNERDIR=$(dirname "$0")
(cd $RUNNERDIR && runner_start "$1")

Let’s digest a bit of that. First, we check if the number of arguments provided is less than 1

[ "${#}" -lt "1" ] && usage && exit 1

and if so, we call usage and exit with return code 1.

The load_scripts function will load a bunch of scripts (from the same directory) as long as the scripts are suffixed .ctl.sh

Here’s an example script, say fds.ctl.sh, which will print the file descriptors used by the process; internally it uses procstat.

#!/bin/sh

add::fds()
{
  COMMAND="fds,$1:$COMMAND"
}

run::fds()
{
  procstat --libxo=xml -w 5 -f "$1" &
  PIDS="$!:$PIDS"
}

export SCRIPTS="fds:$SCRIPTS"

Here’s where meta-programming comes into use (I think): we have a variable named $SCRIPTS, which is modified to add the script name into it, $PATH-style, and two functions, add::fds and run::fds. As you may have guessed, add:: and run:: are the hook names.

I’ll add another script; it will use procstat as well, but this time we will print the resource usage.

#!/bin/sh

add::resource()
{
  COMMAND="resource,$1:$COMMAND"
}

run::resource()
{
  procstat --libxo=xml -w 5 -r "$1" &
  PIDS="$!:$PIDS"
}

export SCRIPTS="resource:$SCRIPTS"

The same applies here, one variable, $SCRIPTS and two functions, add::resource and run::resource.

Which means that, after loading our scripts, all four functions will be loaded into our program and the environment variable $SCRIPTS will have the value resource:fds:

Good? Okay let’s continue.

Since we used : to separate the names of the scripts, we must set IFS to :, and then we start looping over $SCRIPTS. Now we just run add::${ctl}, which would be add::fds and add::resource. We also pass the ${_pid} variable, in case a hook needs it.

These two functions would do more meta-programming by setting the $COMMAND variable to script_name,arguments:$COMMAND, again PATH-style.

Which means that the $COMMAND variable has the value fds,89913:resource,89913:

The next bit is a bit tricky. Since we’ve set $COMMAND to prog0,arg1:prog1,arg1,arg2: (well, not really arg2, but we could’ve), we need to

  1. Use “,” as IFS
  2. Tell sh to set the positional parameters, so prog0 becomes $1 and arg1 becomes $2, etc.

and now we execute run::$1 $2, which would be run::fds 89913 then run::resource 89913.
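Here’s a tiny standalone illustration of that IFS/positional-parameters trick (a sketch you can run with /bin/sh on its own; the values are made up):

#!/bin/sh
cmd="fds,89913"

IFS=","
# leaving ${cmd} unquoted is what lets the shell split it on IFS
set -- ${cmd}

echo "hook: $1, argument: $2"    # prints: hook: fds, argument: 89913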

I think I can make this better by running run::$@, where $@ is basically all the parameters, but will test that later

– antranigv at 6am reading the code that he wrote drunk

In the end, we add some signal trapping, we reset IFS and we just wait for an input.

Okay, so we now have a piece of software that reads other programs and modifies itself while running. We have a meta-program!

Let’s give it a run.

# ./runner.sh 89913
<procstat version="1"><files><89913><procstat version="1"><rusage><89913><process_id>89913</process_id><command>miniflux</command><user time>01:37:54.339245</user time><system time>00:19:43.630210</system time><maximum RSS>61236</maximum RSS><integral shared memory>5917491656</integral shared memory><integral unshared data>1310633336</integral unshared data><integral unshared stack>114278656</integral unshared stack><process_id>89913</process_id><command>miniflux</command><files><fd>text</fd><fd_type>vnode</fd_type><vode_type>regular</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss/usr/local/bin/miniflux</path><page reclaims>16939</page reclaims><page faults>7</page faults><swaps>0</swaps><block reads>5</block reads><block writes>1</block writes><messages sent>12603917</messages sent></files><files><fd>cwd</fd><fd_type>vnode</fd_type><vode_type>directory</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss/root</path><messages received>14057863</messages received><signals received>807163</signals received><voluntary context switches>79530890</voluntary context switches><involuntary context switches>5489854</involuntary context switches></files><files><fd>root</fd><fd_type>vnode</fd_type><vode_type>directory</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss</path></89913></rusage></procstat></files><files><fd>jail</fd><fd_type>vnode</fd_type><vode_type>directory</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss</path></files><files><fd>0</fd><fd_type>vnode</fd_type><vode_type>character</vode_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>4</ref_count><offset>0</offset><protocol>-</protocol><path>/usr/local/jails/rss/dev/null</path></files><files><fd>1</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>2</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>3</fd><fd_type>kqueue</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>4</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>5</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>1</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>6</fd><fd_type>socket</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>3</ref_count><offset>0</offset><protocol>TCP</protocol><sendq>0</sendq><recvq>0</recvq><path>192.168.10.5:63835 192.168.10.3:5432</path></files><files><fd>7</fd><fd_type>socket</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>3</ref_count><offset>0</offset><protocol>TCP</protocol><sendq>0</sendq><recvq>0</recvq><path>::.8080 ::.0</path></files></89913></files></procstat>

Why XML? Because libxo’s JSON output is not “real” JSON when procstat is running in repeat mode, but that’s a story for another day.

All code examples can be found as a GitHub Gist.

That’s all folks…


Huginn on FreeBSD

Huginn is probably the best automation software that I’ve ever seen. It’s not only easy to use, but also easy to deploy and easy to extend. Unfortunately there’s no FreeBSD port for it, but it looks like it’s something wanted by the community, at least according to WantedPorts.

I realized that I have at least 5 accounts on IFTTT, which is also an amazing automation service. However, 3/5 of these accounts were not “my own”; they belonged to our communities. You know, Meetups in Armenia and news listing websites similar to Lobste.rs. So if I get hit by a bus, it will be very hard for our community to operate these accounts; that’s why I wanted to deploy Huginn.

Like a sane person, I deploy in FreeBSD Jails (I recommend you do too!), which meant there were no official (or maybe even unofficial?) docs on how to deploy Huginn on FreeBSD.

It’s written in Ruby, which means it should work and should be very easy to port. I’ll go over the deployment needs without the actual deployment, setup of Jails, or anything similar. Let’s go!

First things first, you need Ruby thingies:

  • ruby
  • rubygem-bundler
  • rubygem-mimemagic
  • rubygem-rake
  • rubygem-mysql2

Here’s the full command for the copy/pasters:

pkg install ruby rubygem-bundler rubygem-mimemagic rubygem-rake rubygem-mysql2

Next, you’ll need gmake for makefiles and node for assets:

pkg install gmake node

This should be enough. I’m also going to install git-tiny so I can follow their updates with ease.

pkg install git-tiny

Okay, let’s make a separate user for Huginn.

pw user add huginn -s /bin/tcsh -m -d /usr/local/huginn

Let’s switch our user

su - huginn

Okay, now I’m going to clone the repo 🙂

git clone --depth=1 https://github.com/huginn/huginn/

At this point you can go do installation.md#4-databases and configure your database.

You should also do cp .env.example .env and configure the environment; make sure to set RAILS_ENV=production

Next, as root, you should execute the following

cd /usr/local/huginn/huginn/ && bundle install

You might get an error saying

In Gemfile:
  mini_racer was resolved to 0.2.9, which depends on
    libv8

Don’t panic! That’s fine; unfortunately, it’s trying to compile libv8 using Gems. Even if we install the patched version of v8 using pkg, it still doesn’t work. I’ll try to work around that later.
In an ideal world, all of these Ruby Gems would be ported to FreeBSD. I’m not sure which are ported, so I’ll just be using the bundle command to install them. And that’s why we use Jails 🙂

Anyways, the Gem that pulls it in is mini_racer; comment out its line in the Gemfile

#gem 'mini_racer', '~> 0.2.4'      # JavaScriptAgent

Now let’s run Bundle again

cd /usr/local/huginn/huginn/ && bundle install

Okay! Everything is good!

Let’s also build the assets; this one should be run as the user huginn

bundle exec rake assets:precompile RAILS_ENV=production

NOTE: If you get the following error ExecJS::RuntimeError: ld-elf.so.1: /lib/libcrypto.so.111: version OPENSSL_1_1_1e required by /usr/local/bin/node not found then you need to upgrade your FreeBSD version to the latest patch!

Aaand that’s it, everything is ready.

For the rest of the deployment process, such as the database, nginx, etc., please refer to installation.md

Currently, I’m running Huginn in a tmux session running bundle exec foreman start, but in the future, I’ll write an rc.d script and share it with you, too.
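For reference, a one-liner that does roughly that, run as the huginn user (a sketch; the session name is an arbitrary choice):

tmux new-session -d -s huginn 'cd /usr/local/huginn/huginn && bundle exec foreman start'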

That’s all folks.


WireGuard “dynamic” routing on FreeBSD

I originally wrote about this on my Armenian blog when ISPs started blocking DNS queries during and after the war. I was forced to use either 9.9.9.9, 1.1.1.1, 8.8.8.8, or any other major DNS resolver. For me this was a pain because I was not able to dig +trace, and I dig +trace a lot.

After some digging (as mentioned in the Armenian blog) I was able to figure out that this only affects home users. Luckily, I also run servers at my home, and the ISPs were not blocking anything on those “server” ranges, so I set up WireGuard.

This post is not about setting up WireGuard, there are plenty of posts and articles on the internet about that.

Over time my network became larger. I also started having servers outside of my network. One of the fast (and probably wrong) ways of restricting access to my servers was allowing traffic only from my own network.

I have a server that acts as a WireGuard VPN peer and does NAT-ing. That being said, the easiest way for me to start accessing my restricted servers is by doing route add restricted_server_addr -interface wg0.

Turns out I needed to write some code for that, which I love to do!

Any time I need to set up a WireGuard VPN client I go back to my Armenian post and read it there, so now I’m blogging about how to do dynamic routing with WireGuard so I can reread it whenever I need to. I hope it comes in handy for you too!

Now, let’s assume you need to add a.b.c.d to your routes. Usually you’d do route add a.b.c.d -interface wg0, but this will not work, since in your WireGuard configuration you have a line that says

[Peer]
AllowedIPs = w.x.y.z/24

Which means, even if you add the route, the WireGuard application/kernel module will not route those packets.

To achieve “dynamic” routing we could do

[Peer]
AllowedIPs = 0.0.0.0/0

This, however, will route ALL your traffic via WireGuard, which is also something you don’t want; you want to add routes at runtime.

What we could do, however, is to ask WireGuard to NOT add the routes automatically. Here’s how.

[Interface]
PrivateKey      = your_private_key
Address         = w.x.y.z/32
Table           = off
PostUp          = /usr/local/etc/wireguard/add_routes.sh %i
DNS             = w.z.y.1

[Peer]
PublicKey       = their_public_key
PresharedKey    = pre_shared_key
AllowedIPs      = 0.0.0.0/0
Endpoint        = your_server_addr:wg_port

The two key points here are Table = off, which asks WireGuard not to add the routes automatically, and PostUp = /usr/local/etc/wireguard/add_routes.sh %i, which is a script that does add the routes, where %i is expanded to the WireGuard interface name; it could be wg0, it could be home0, depending on your configuration.

Now for add_routes.sh we write the following.

#!/bin/sh

interface=${1}

networks="""
w.x.y.0/24
restricted_server_addr/32
another_server/32
"""

for _n in ${networks};
do
  route -q -n add ${_n} -interface ${interface}
done

And we can finally do wg-quick up server0.conf

If you need to add another route while WireGuard is running, you can do

route add another_restricted_server -interface wg0

Okay, what if you need to route everything while WireGuard is running? Well, that’s easy too!

First, find your default gateway.

% route -n get default | grep gateway
    gateway: your_gateway

Next, add a route for your endpoint via your current default gateway.

route add your_server_addr your_gateway

Next, add TWO routes for WireGuard.

route add 0.0.0.0/1     -interface wg0
route add 128.0.0.0/1   -interface wg0

So it’s the two halves of the Internet 🙂
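And if you ever want to go back to normal routing without bringing the interface down, deleting those routes again should be enough (a sketch):

route delete 0.0.0.0/1
route delete 128.0.0.0/1
route delete your_server_addr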

That’s all folks!


VoidLinux in FreeBSD Jail; with init

Two important things happened this week for me.

First, Faraz asked me if I could rename my Jail manager to something other than Jailio, because he got that domain for his own Jail manager already. So I named it Jailer.

Second, I was able to run a complete Linux system using Jailer. While the repo for Jailer is not released yet (we are auditing for possible security issues), I would like to share how I was able to run VoidLinux in a Jail.

Since Jailer is not announced yet, I will give the examples using jail.conf, as most people either are or should be familiar with its concepts.

I went with VoidLinux because I am able to run its init process without it needing to run as PID 1.

Let’s start, shall we?

First, ZFS dataset for our jail!

zfs create zroot/jails/voidlinux

Next we need to fetch the base system of VoidLinux. Luckily they do provide it on their website.

fetch https://alpha.de.repo.voidlinux.org/live/current/void-x86_64-ROOTFS-20210218.tar.xz

Now we can extract this into our dataset

tar xf void-x86_64-ROOTFS-20210218.tar.xz -C /usr/local/jails/voidlinux/

You might get an error that ./usr/bin/iputils-ping: Cannot restore extended attributes: security.capability, which is fine, I think?

If you are on FreeBSD 12.2-RELEASE or later, now you need to enable the Linuxulator.

service linux enable; service linux start

Now you can at least chroot into the system.

chroot /usr/local/jails/voidlinux/ /bin/bash

If everything is fine until now, perfect.

Now we need to add a root user into the system.

root@host:~ # cd /usr/local/jails/voidlinux/etc/
root@host:/usr/local/jails/voidlinux/etc # echo "root::0:0::0:0:Charlie &:/root:/bin/bash" > master.passwd
root@host:/usr/local/jails/voidlinux/etc # pwd_mkdb -d ./ -p master.passwd
pwd_mkdb: warning, unknown root shell

Execute the rest of the commands in Void.

root@host:~ # chroot /usr/local/jails/voidlinux/ /bin/bash
bash-5.1# cd /etc/
bash-5.1# pwconv 
bash-5.1# grpconv 
bash-5.1# passwd 
New password: 
Retype new password: 
passwd: password updated successfully
bash-5.1# exit

If all went fine, then the system is ready to be run as a Jail!

First we need to make an fstab for the system.

Create a file at /usr/local/jails/voidlinux/etc/fstab.pre and insert the following inside

devfs       /usr/local/jails/voidlinux/dev      devfs           rw                      0   0
tmpfs       /usr/local/jails/voidlinux/dev/shm  tmpfs           rw,size=1g,mode=1777    0   0
fdescfs     /usr/local/jails/voidlinux/dev/fd   fdescfs         rw,linrdlnk             0   0
linprocfs   /usr/local/jails/voidlinux/proc     linprocfs       rw                      0   0
linsysfs    /usr/local/jails/voidlinux/sys      linsysfs        rw                      0   0
/tmp        /usr/local/jails/voidlinux/tmp      nullfs          rw                      0   0

Next, let’s create a loopback interface for networking. Oh yes, VNET is not supported yet, but I’m working on a patch 🙂

ifconfig lo1 create
ifconfig lo1 inet 10.10.0.1/24 up # sorry, 10.0.0.0/24 was unavailable :P

Okay, time to create our Jail conf!

exec.clean;
allow.raw_sockets;
mount.devfs;

voidlinux {
    $id     = "1";
    $ipaddr = "10.10.0.42";
    $mask   = "255.255.255.0";
    $domain = "srv0.bsd.am";
    devfs_ruleset  = 4;
    allow.mount;
    allow.mount.devfs;
    mount.fstab = "${path}/etc/fstab.pre";

    exec.start     = "/bin/sh /etc/runit/2 &";
    exec.stop      = "/bin/sh /etc/runit/3";


    ip4.addr      = "${ipaddr}";
    interface     = "lo1";
    host.hostname = "${name}.${domain}";
    path = "/usr/local/jails/voidlinux";
    exec.consolelog = "/var/log/jail-${name}.log";
    persist;
    allow.socket_af;
}

Let’s check?

# jls
   JID  IP Address      Hostname                      Path
     1  192.168.0.42    voidlinux.srv0.bsd.am         /usr/local/jails/voidlinux

And the process tree?

# ps auxd -J voidlinux
USER   PID %CPU %MEM  VSZ  RSS TT  STAT STARTED    TIME COMMAND
root 35182  0.0  0.1 2320 1428  -  SsJ  21:09   0:00.12 runsvdir -P /run/runit/runsvdir/current log: ot set SO_PASSCRED: Protocol not available\ncould not set SO_PASSCRED: Protocol
root 35190  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty6
root 35397  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty6 38400 linux
root 35191  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty1
root 35396  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty --noclear tty1 38400 linux
root 35192  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty5
root 35398  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.01 `-- agetty tty5 38400 linux
root 35193  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty2
root 35393  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty2 38400 linux
root 35194  0.0  0.1 2168 1396  -  RsJ  21:09   0:00.12 - runsv udevd
root 35195  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty3
root 35394  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty3 38400 linux
root 35196  0.0  0.1 2168 1376  -  SsJ  21:09   0:00.02 - runsv agetty-tty4
root 35390  0.0  0.1 2412 1704  -  SsJ  21:10   0:00.00 `-- agetty tty4 38400 linux

You may jexec now 🙂

# jexec voidlinux /bin/bash
bash-5.1# uname -a
Linux voidlinux.srv0.bsd.am 3.2.0 FreeBSD 12.2-RELEASE-p6 GENERIC x86_64 GNU/Linux

Let’s check networking?

bash-5.1# ping -c 1 10.10.0.1
ping: WARNING: setsockopt(ICMP_FILTER): Protocol not available
PING 10.10.0.1 (10.10.0.1) 56(84) bytes of data.
64 bytes from 10.10.0.1: icmp_seq=1 ttl=64 time=0.069 ms

--- 10.10.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms

There you go! Well, things that are related to netlink might not work, but other than that it’s okay.

I did have some problems while installing packages, something about too many levels of symbolic links. Here’s the exact output when I was trying to install the curl package

[*] Unpacking packages
libev-4.33_1: unpacking ...
ERROR: libev-4.33_1: [unpack] failed to extract file `./usr/lib/libev.so.4': Too many levels of symbolic links
ERROR: libev-4.33_1: [unpack] failed to extract files: Too many levels of symbolic links
ERROR: libev-4.33_1: [unpack] failed to unpack files from archive: Too many levels of symbolic links
Transaction failed! see above for errors.

Now, I have not found the time to fix this yet, but if you have any idea, please let me know or comment below 🙂

So, what do we have here? A Linux Jail, running VoidLinux, with init, so you can also run services, and basic networking for it.

That’s all folks…


VNET Jail HowTo Part 2: Networking

As always, Dan has been tweeting about VNET Jail issues, which means it’s time for another VNET Jail post.

This post assumes that you’ve read the original post on VNET Jail HowTo.

In Part two we will discuss Networking.

We will use PF as a firewall to do things like NAT.

If you need more help please check the FreeBSD Handbook: Chapter – Firewalls or send me an email/tweet.

At this point (from the last post) we were able to ping from the Jail to the Host.

root@www:/ # ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.087 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.087/0.087/0.087/0.000 ms

Now we will set up PF on the host by adding the following to /etc/pf.conf

ext_if="em0"
jailnet="10.0.0.0/24"

nat pass on $ext_if inet from $jailnet to any -> ($ext_if)

set   skip on { lo0, bridge0 }
pass  inet proto icmp
pass  out all keep state

We also need to enable IP Forwarding in the kernel

Add the following in /etc/sysctl.conf

net.inet.ip.forwarding=1

And now execute

sysctl -f /etc/sysctl.conf
service pf restart

That should be it; now your Jail should be able to ping the outside world

root@zvartnots:~ # jexec -l www
You have mail.
root@www:~ # ping -c 1 9.9.9.9
PING 9.9.9.9 (9.9.9.9): 56 data bytes
64 bytes from 9.9.9.9: icmp_seq=0 ttl=61 time=2.566 ms

--- 9.9.9.9 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.566/2.566/2.566/0.000 ms
root@www:~ # 

If you set up a resolver, you should be able to ping domain names as well.

root@www:~ # echo 'nameserver 9.9.9.9' > /etc/resolv.conf 
root@www:~ # ping -c 1 freebsd.org
PING freebsd.org (96.47.72.84): 56 data bytes
64 bytes from 96.47.72.84: icmp_seq=0 ttl=53 time=133.851 ms

--- freebsd.org ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 133.851/133.851/133.851/0.000 ms

Now for a more complicated setup that assumes no firewall and multiple IP addresses, where each Jail has its own IP address. I have a similar setup at home, where my ZNC server Jail has its own IP address: the physical NIC is connected to the same bridge as the ZNC Jail.

In my rc.conf on the host

ifconfig_em0="inet 192.168.0.34 netmask 255.255.255.0"
defaultrouter="192.168.0.1"

cloned_interfaces="bridge0"
ifconfig_bridge0="addm em0"

Here’s an example with jail.conf

znc {
	$id		= "52";
	$addr		= "192.168.0.252";
	$mask		= "255.255.255.0";
	$gw		= "192.168.0.1";
	vnet;
	vnet.interface	= "epair${id}b";

	exec.prestart	= "ifconfig epair${id} create up";
	exec.prestart	+= "ifconfig epair${id}a up descr vnet-${name}";
	exec.prestart	+= "ifconfig bridge0 addm epair${id}a up";

	exec.start	= "/sbin/ifconfig lo0 127.0.0.1 up";
	exec.start	+= "/sbin/ifconfig epair${id}b ${addr} netmask ${mask} up";
	exec.start	+= "/sbin/route add default ${gw}";
	exec.start	+= "/bin/sh /etc/rc";

	exec.poststop   = "ifconfig bridge0 deletem epair${id}a";
	exec.poststop  += "ifconfig epair${id}a destroy";

	host.hostname = "${name}.bsd.am";
	path = "/usr/local/jails/${name}";
 	exec.consolelog = "/var/log/jail-${name}.log";
	persist;
}

And that’s pretty much it!

That’s all folks.


Signal-cli with scli on FreeBSD

So a couple of days ago I migrated from macOS on a MacBook Pro to FreeBSD on a ThinkPad T480s.

Unfortunately, since we are at war, I do not have the time to blog about the migration, although I’m taking notes every day about every change that I make so I can blog about it later on 🙂

However, one of the biggest concerns for me was running Signal on FreeBSD; as I understand, the Signal people are not interested in supporting the *BSDs.

Like any sane person, I started searching the internet for possible solutions, and it turns out all I need is two pieces of software: signal-cli and scli.

The installation is as easy as running

pkg install signal-cli scli

Now for the simple part.

First, you need to link your phone by running

signal-cli link -n "FreeBSD"

It will give an output that says tsdevice:/?uuid=...&pub_key=....

Copy that output, and then in another terminal run

qrencode 'tsdevice:/?uuid=...&pub_key=...' -t ANSI256

You will be presented with a QR Code in the console (cool, aye?).

Using the phone app, link the device by scanning the QR Code.

To receive the list of your contacts, run

signal-cli -u +myphonenumber receive

Now try to run the TUI interface by running

scli

Side-note: In case you are not able to send or receive messages, you might need to do some DBUS magic.

First, find if you have DBUS running

antranigv@zvartnots:~ $ ps -x -o comm,pid | grep dbus
dbus-launch         53571
dbus-daemon         54064
dbus-daemon         54963

Then, you will need to find the DBUS_SESSION_BUS_ADDRESS environment variable. This is usually set in the DBUS child process, which in our case is 54963, so we can use procstat as root

root@zvartnots:~ # procstat -e 54963
  PID COMM             ENVIRONMENT                                          
54963 dbus-daemon      SHELL=/usr/local/bin/bash DBUS_STARTER_ADDRESS=unix:path=/tmp/dbus-TaY0zoKZIb,guid=4f518f874f97170e788a94fb5fa14a3c DISPLAY=:0.0 WMAKER_BIN_NAME=wmaker PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/antranigv/bin WINDOWPATH=9 MAIL=/var/mail/antranigv GTK_THEME=Adwaita:dark DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/dbus-TaY0zoKZIb,guid=4f518f874f97170e788a94fb5fa14a3c USER=antranigv DBUS_STARTER_BUS_TYPE=session MM_CHARSET=UTF-8 WRASTER_COLOR_RESOLUTION0=4 PWD=/usr/home/antranigv BLOCKSIZE=K LANG=en_US.UTF-8 LOGNAME=antranigv HOME=/home/antranigv

Okay! We have our variable!

Now we just need to set the environment variable and we are done. If you use (t)csh, then execute

setenv DBUS_SESSION_BUS_ADDRESS unix:path=/tmp/dbus-TaY0zoKZIb,guid=4f518f874f97170e788a94fb5fa14a3c

If you are using bash, run the following

export DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/dbus-TaY0zoKZIb,guid=4f518f874f97170e788a94fb5fa14a3c

Now, you can run scli again and it will work fine 🙂

Happy Chatting!@#$%

That’s all folks! 🙂


Erlang dbg Intro

If there’s one programming language that changed my life, that’s Erlang. After using Erlang for a couple of years, I “moved” to Elixir, which is based on Erlang’s VM.

One of the most important aspects of Erlang’s VM is that it’s a “real” VM: there’s a kernel, processes, messaging facilities, and much more.

Lately I’ve been debugging a huge Erlang application whose architecture I was not very familiar with, and I needed to find a way to see what kind of messages are being sent and received, which Modules and Functions are being called, and what they are returning.

So I wanted to write a small How-To for me and you, in case we need it again in the future.

Okay, for this example I’ll be using Elixir TCP Server, a simple TCP server that gets data and sends it back to its origin.

First, let’s clone the repo.

antranigv@zvartnots:prj $ git clone https://github.com/SonaTigranyan/ElixirTcpServer

Okay, now let’s run the server

antranigv@zvartnots:ElixirTcpServer $ iex -S mix
Erlang/OTP 23 [erts-11.0] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Compiling 3 files (.ex)
Generated tcp_server app
Interactive Elixir (1.10.3) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)>

Good! By default, the TCP server listens on port 9000, as specified in the Application Tree.

Okay, we can send data now 🙂

antranigv@zvartnots:ElixirTcpServer $ echo test | nc localhost 9000
test

Or in an interactive way!

antranigv@zvartnots:ElixirTcpServer $ nc localhost 9000
First mesage!
First mesage!
Good TCP server!
Good TCP server!
bye
antranigv@zvartnots:ElixirTcpServer $

Good! As you can see, the connection is closed when the server gets bye.

Okay, say we want to trace the do_send function and observe what it gets and what it returns.

iex(2)> :dbg.start()
{:ok, #PID<0.191.0>}
iex(3)> :dbg.tracer()
{:ok, #PID<0.191.0>}
iex(4)> :dbg.tpl(TcpServer, :do_send, [{:_, [], [{:return_trace}]}])
{:ok, [{:matched, :nonode@nohost, 1}, {:saved, 1}]}
iex(5)> :dbg.p(:new_processes, :c)
{:ok, [{:matched, :nonode@nohost, 0}]}
iex(6)>
(<0.198.0>) call 'Elixir.TcpServer':do_send(#Port<0.545>,"Message from client!\n")

Okay, first we start the dbg facility, and then we start a tracing server on the local node.

After that, we use the tpl function to specify which local calls we want to trace.

And in the end we use the p function to start tracing the calls (c) of all new_processes 🙂

Now, when the do_send function is called, we see what it gets.

And when we send bye, we see the following:

(<0.198.0>) returned from 'Elixir.TcpServer':do_send/2 -> ok

And all of this is happening while the software system is running. In production, we can do the same by either attaching to the node or connecting to it!
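One last note: tracing isn’t free, so when you’re done poking around (especially on a production node) it’s a good idea to stop the tracer and clear the trace patterns; dbg has stop_clear/0 for exactly that:

:dbg.stop_clear()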

That’s all folks! 🙂
