Tag Archives: Unix

Mirroring OmniOS: The Complete Guide; Part One

Chapter Ⅰ

I know that “Complete Guide” and “Part One” are oxymorons, but hey, be happy that I’m publishing in parts, otherwise I’d completely ignore this blog post.

Two weeks ago I decided to play with illumos again. I was speaking with a friend and we were sharing our frustrations regarding Open-Source contribution. We write the code, we submit, we get feedback, we submit again, and then we’re ghosted. It’s like the LinkedIn or Tinder version of Software Engineering.

Then I asked him about his best open-source experience and he told me “illumos of course!”.

I was amazed. I thought you had to be very technical to even build illumos, but it turns out there is amazing documentation on building illumos, and OmniOS (an illumos distribution) has done the work to make sure that the system can be self-hosted (i.e. the OS can build itself).

So, I decided to fire up OmniOS inside a bhyve VM on our hackerspace server, which runs FreeBSD.

The installation went smoothly, but the IPS packages were slow to download, and I might be wrong (please correct me if I am), but IPS doesn’t seem to keep a local copy of the downloaded files; it always re-downloads. Is that configurable?
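(If the culprit is the content cache being flushed, IPS has an image property that I believe controls this, flush-content-cache-on-success; I haven’t verified OmniOS’s default, so treat this as a hint to investigate rather than a fix:)

omnios# pkg property | grep flush-content-cache
omnios# pkg set-property flush-content-cache-on-success False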

Regardless. I thought that the best way to contribute is to advocate. In order to do that I needed to make sure that IPS servers are fast in Armenia. Hence the mirroring project started.

Obey!

Requirements

Here are some terms that I will use in this blog post, just so we are on the same page.

  • OmniOS: an illumos distribution
  • Origin: OmniOS’s IPS servers at pkg.omnios.org
  • Local: A copy of the Origin
  • Repository: A collection of software
  • Core: The Core Repository of OmniOS
  • Extras: The Extra Repository of OmniOS
  • IPS or PKG: The Image Packaging System and its utility, pkg
  • Zone: an illumos Zone (similar to FreeBSD Jails, Linux Containers, chroot) running on OmniOS

Now that we are on the same page, let’s talk about our setup and what we need.

  • An internet connection: duh!
  • A domain name: I decided to use pkg.omnios.illumos.am. Yes, I’m lucky like that.
  • A publicly accessible IP address.
  • A server: I am running OmniOS Stable (r151048) inside a VM. You can use bare-metal or a cloud VM if you want.
  • Storage: I am currently using around 50GB of storage; expect that to grow to around 300GB by the time we get to Part Three.

Pre-Mirroring Setup

Before we set up our mirror, let’s make sure that we have a good infrastructure that we can maintain.

Here’s what we’ll create

  • A Zone that will act as the HTTP(s) server using nginx at IP address 10.10.0.80
  • A Zone that will do the mirroring using IPS tools at 10.10.0.51
  • A virtual dumb switch (etherstub) that will connect the Zones and the Global Zone (a.k.a. The Host) together. The GZ will have the address 10.10.0.1
  • ZFS datasets for each Core and Extras Repository (for each release)

Please note that there are many ways to do this, for example, keeping everything in the Global Zone, running the IPS mirroring and nginx in a single Zone, not using an etherstub at all, etc. But I like this setup, as it will allow us to “grow” in the future.

From now on, omnios# means that we’re in the Global Zone and zone0# means we’re inside a Zone named zone0.

Let’s start with setting up our etherstub and connecting our Global Zone to it

omnios# dladm create-etherstub switch0
omnios# dladm create-vnic -l switch0 vnic0
omnios# ipadm create-if vnic0
omnios# ipadm create-addr -T static -a 10.10.0.1/24 vnic0/switch0

Done!
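If you want to double-check what was just created, dladm and ipadm can show it (output omitted here):

omnios# dladm show-etherstub
omnios# dladm show-vnic
omnios# ipadm show-addr vnic0/switch0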

Now we will set up our Zones using the zadm utility. Install zadm by running

omnios# pkg install zadm

After installing zadm, we’ll create a dataset for our Zones

omnios# zfs create -o mountpoint=/zones rpool/zones

This assumes that your ZFS pool is named rpool.
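If you’re not sure what your pool is called, zpool will tell you:

omnios# zpool list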

Finally, we can create our Zones. Running

omnios# zadm create -b pkgsrc www0

will open your $EDITOR, where you need to modify some JSON. Here’s what mine looks like!

{
   "autoboot" : "true",
   "brand" : "pkgsrc",
   "ip-type" : "exclusive",
"dns-domain" : "omnios.illumos.am", "net" : [ { "allowed-address" : "10.10.0.80/24", "defrouter" : "10.10.0.1", "global-nic" : "switch0", "physical" : "www0" } ], "pool" : "", "scheduling-class" : "", "zonename" : "www0", "zonepath" : "/zones/www0" }

After saving the file, zadm will install the Zone.
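You can confirm that the Zone was installed with the usual zoneadm listing (or with zadm itself, as shown a bit later):

omnios# zoneadm list -cv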

Now let’s set up our mirroring Zone. Do the same, but change the Zone name to repo, the brand to lipkg (and -b lipkg), and set the IP address to 10.10.0.51/24.
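For reference, here’s a sketch of what the repo Zone’s config should roughly end up looking like. Note that the physical (VNIC) name here is just my guess, since link names have to end in a digit; go with whatever zadm puts in your editor session.

{
   "autoboot" : "true",
   "brand" : "lipkg",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.51/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "repo0"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "repo",
   "zonepath" : "/zones/repo"
}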

All we need now is to forward HTTP/HTTPS traffic to the www0 Zone and allow all Zones to access the internet using NAT.

Create and edit IPFilter’s NAT configuration file at /etc/ipf/ipnat.conf; here’s an example configuration:

map vioif0 10.10.0.0/24 -> 212.34.250.10

rdr vioif0 212.34.250.10/32 port 80 -> 10.10.0.80 port 80 tcp
rdr vioif0 212.34.250.10/32 port 443 -> 10.10.0.80 port 443 tcp

Make sure you set the correct interface name and the correct external IP address.
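NAT also needs IP forwarding and the ipfilter service enabled in the Global Zone; if they aren’t already, something along these lines should do it:

omnios# routeadm -u -e ipv4-forwarding
omnios# svcadm enable network/ipfilter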

Finally, we can boot our Zones!

omnios# zadm boot www0
omnios# zadm boot repo

You should see the following output when you run zadm again

omnios# zadm
NAME              STATUS     BRAND       RAM    CPUS  SHARES
global            running    ipkg        56G      12       1
repo              running    lipkg         -       -       1
www0              running    pkgsrc        -       -       1

Great! Let’s setup the mirroring process.

Mirroring Setup

Let’s create ZFS datasets for the repos, one per repository per release:

repo# zfs create -o mountpoint=/repo rpool/zones/repo/ROOT/repo      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/core 
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/extra

And then we use the pkgrepo command to create a repository

repo# pkgrepo create /repo/r151048/core
repo# pkgrepo create /repo/r151048/extra

And finally, we can start receiving the packages from Origin to Local

repo# pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*'
repo# pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*'

This will take a while depending on your internet connection speed and the load on OmniOS’s Origin. It’s like a good investment: we spend bandwidth and time now so that we save traffic and time later 🙂

After it’s done, we need to set the publisher prefix of these repos to match the Origin.

repo# pkgrepo set -s /repo/r151048/core   publisher/prefix=omnios
repo# pkgrepo set -s /repo/r151048/extra/ publisher/prefix=extra.omnios
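A quick sanity check with pkgrepo doesn’t hurt before we start serving these:

repo# pkgrepo info -s /repo/r151048/core
repo# pkgrepo info -s /repo/r151048/extra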

And we’re done!

Now we need to serve these repos using IPS’s depot server (pkg.depotd).

We will create two instances of the depotd server, one for core and one for extra.

  • r151048/core will run on port 5148
  • r151048/extra will run on port 1148
  • (in the future) r151050/core will run on port 5150
  • (in the future) r151050/extra will run on port 1150

We start with core

repo# svccfg -s pkg/server add r151048_core
repo# svccfg -s pkg/server:r151048_core addpg pkg application
repo# svccfg -s pkg/server:r151048_core setprop pkg/inst_root = /repo/r151048/core/
repo# svccfg -s pkg/server:r151048_core setprop pkg/port = 5148
repo# svccfg -s pkg/server:r151048_core setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/core

And we do the same for extra

repo# svccfg -s pkg/server add r151048_extra
repo# svccfg -s pkg/server:r151048_extra addpg pkg application
repo# svccfg -s pkg/server:r151048_extra setprop pkg/inst_root = /repo/r151048/extra/
repo# svccfg -s pkg/server:r151048_extra setprop pkg/port = 1148
repo# svccfg -s pkg/server:r151048_extra setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/extra

Finally, we enable the services

repo# svcadm enable  pkg/server:r151048_core pkg/server:r151048_extra
repo# svcadm restart pkg/server:r151048_core pkg/server:r151048_extra

Let’s check!
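Your output will differ, but svcs plus a plain HTTP request against each depot instance (install curl with pkg install curl if it isn’t there) is enough to confirm that they’re up:

repo# svcs pkg/server
repo# curl -I http://10.10.0.51:5148/
repo# curl -I http://10.10.0.51:1148/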

We’re good! Now let’s set up nginx 🙂

The Web Server

This part is pretty easy: we log into www0, install nginx, and set up some paths. I will be posting a copy-pasta of my configs; I assume you can do the rest 🙂

www0# pkgin update
www0# pkgin install nginx

Thank you SmartOS! 🧡

In my nginx.conf, I added

include vhosts/*.conf;

and then in /opt/local/etc/nginx/vhosts I created a file
named pkg.omnios.illumos.am.conf, which looks like this

server {
        listen 80;
        server_name pkg.omnios.illumos.am;

        location /.well-known/acme-challenge/ {
          alias /opt/local/www/acme/.well-known/acme-challenge/;
        }

        location / {
            return 301 https://pkg.omnios.illumos.am$request_uri;
        }
}

server {
    listen       443 ssl;
    server_name  pkg.omnios.illumos.am;

    ssl_certificate      /etc/ssl/pkg.omnios.illumos.am/fullchain.pem;
    ssl_certificate_key  /etc/ssl/pkg.omnios.illumos.am/key.pem;
    location /r151048/core/ {
                proxy_pass http://10.10.0.51:5148/;
    }

    location /r151048/extra/ {
                proxy_pass http://10.10.0.51:1148/;
    }

    location / {
        # This needs to be changed, later...
        add_header Content-Type text/plain;
        return 200 "ok...";
    }
}

Finally, we just need to enable nginx

www0# svcadm enable pkgsrc/nginx

and check!
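Assuming your TLS certificates are already in place at the paths referenced above (the ACME setup itself isn’t covered here), a couple of requests from a machine outside the host should confirm that nginx is proxying to the depots:

% curl -I https://pkg.omnios.illumos.am/
% curl -I https://pkg.omnios.illumos.am/r151048/core/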

Using the Local Repos

This part is actually pretty easy. We just need to remove everything that exists and add our own. I will be running this on a computer named dna0.

dna0# pkg set-publisher -M '*' -G '*' omnios
dna0# pkg set-publisher -M '*' -G '*' extra.omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/core omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/extra extra.omnios
dna0# pkg publisher
PUBLISHER       TYPE   STATUS P LOCATION
extra.omnios    origin online F https://pkg.omnios.illumos.am/r151048/extra/
omnios          origin online F https://pkg.omnios.illumos.am/r151048/core/
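To make sure the new origins actually answer, a refresh and a dry-run update are a cheap test:

dna0# pkg refresh --full
dna0# pkg update -nv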

We’re good! 🙂

Fetching Updates

By the time I wanted to publish this I noticed that there’s a new OmniOS Weekly Update, so I thought, hey, maybe I should try updating the Local Repo as well… how do we do that?

Turns out I just need to pkgrecv again, and then run a refresh command.

repo# pkgrecv -v -s https://pkg.omnios.org/r151048/core/ -d /repo/r151048/core/ '*'
repo# pkgrepo -s /repo/r151048/core refresh

And it looks like we’re good! Maybe we can set up a simple cron job 🙂
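Here’s a sketch of what that cron job could look like in root’s crontab on the repo Zone; the schedule is arbitrary, adjust to taste:

# minute hour day-of-month month day-of-week  command
0 3 * * 1 pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core/  '*' && pkgrepo -s /repo/r151048/core  refresh
0 4 * * 1 pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra/ '*' && pkgrepo -s /repo/r151048/extra refresh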

Final Notes

This has been an amazing experience. Since I started using OmniOS two weeks ago, I’ve set up the mirror, installed two OmniOS deployments in production for two organizations, and talked about it during our Armenian Hackers Radio Podcast. With this mirror completely set up, I can advocate even more!

I’d like to send my thanks (and later, my money) to the OmniOS team for the amazing work they’re doing, special thanks to andyf for answering all of my questions, neirac for pushing me to try more illumos in my life and everyone who contributed to the docs and blog posts that I used. I’ll leave some links below.

Finally, for the coming (two) posts I will talk about mirroring downloads.OmniOS.org (for ISO/USB/ZFS images) and the pkgsrc repository run by SmartOS/MNX.

Thank you for reading and thank you, illumos-community for being so nice ^_^

That’s all folks…

Links


Antranig Vartanian

February 8, 2023

Turns out when you start MariaDB for the first time it prints technical messages and theeen it says:

Please report any problems at https://mariadb.org/jira

The latest information about MariaDB is available at https://mariadb.org/.

Consider joining MariaDB's strong and vibrant community:
https://mariadb.org/get-involved/

Starting mysql.

I love this!

I think we should add something similar to FreeBSD, where after the installation is done it says something like:

Please report any problems at https://bugs.freebsd.org/
The latest Handbook is available at https://freebsd.org/handbook/

Consider joining FreeBSD’s worldwide community:
https://docs.freebsd.org/en/articles/contributing/

Thank you for choosing FreeBSD!

Wait, maybe we have such a message? I have to check and then patch if we don’t 🙂

That’s all folks…


The command command

According to the 2018 edition of The Open Group Base Specifications (Issue 7), there’s a command named command which executes commands.

Wait, macOS is OpenGroup UNIX 03 certified, right?

command running uname -a
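For context (this is an illustration, not the screenshot above): command runs its argument as a plain command, bypassing shell functions and aliases, and command -v tells you what would actually be executed. On macOS it looks roughly like this:

% command uname
Darwin
% command -v uname
/usr/bin/uname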

I tried tracing back the history. macOS’s userland is largely based on FreeBSD, as we can see in their open-source code.

So I started tracing back the FreeBSD code, and I found the current implementation.

I found the oldest commit about command in FreeBSD’s source tree, but it said

Import the 4.4BSD-Lite2 /bin/sh sources

builtins.def

So I opened up the SVN tree of CSRG, and there I found this

date and time created 91/03/07 20:24:04 by bostic

builtins.def

However, if I knew how to use SVNWeb better, I’m pretty sure I could navigate around the /old/sh directory and dig even deeper.

It’s funny how this line
# NOTE: bltincmd must come first!
is in both the macOS code AND the CSRG code from 30 years ago.

That’s all folks…


Linux is dead, long-live Docker monoculture

Full Disclosure: While reading this blog post, please put yourself in my shoes. You’ve been looking around for a simple monitoring solution, and you found some. None of them work, because you use an Operating System that is used by Apple, WhatsApp, Netflix and many more, but developers think that everyone, everywhere, runs either macOS or Linux. And they all use Docker.

A while back Rubenerd wrote that he’s not sure that UNIX won, and how Linux created a monoculture where everything is assumed to run on Linux.

For me, this was not much of a problem; I can run Linux binaries on FreeBSD, I even watch Netflix using the Linuxulator.

But now things are on another level, WAY another level.

I have a simple monitoring setup using cron, Grafana, InfluxDB and ping. It basically pings my servers and sends me a Telegram message if they are down.

I set that up years ago, but now I have more public facing infrastructure that other people use as well, such as an Armenian Lobsters instance, Jabber.am, a WriteFreely instance and more.

As a self-respecting Ops person, I wanted to make a simple dashboard for my users so they can see the uptime status of these services as well. First, they won’t bug me asking if something is not working; they will SEE that the SSL/TLS certificate has expired, or that the network has an issue, or that the server is down.

<rant>

So I started hunting on the internet for some software that does just that.

The first one that came to my mind was Gatus. I’ve used Gatus before for one of my clients, I like it a lot. It’s simple, it does what it’s supposed to do.

As a sane person, I fetched the code from GitHub using fetch, extracted the tarball, and ran make. Nothing happened. Let’s see the Makefile, shall we?

Docker executed in Make

Oh boy, if only, only, I had Docker, all my problems would be solved. First of all, let’s talk about the fact that this Makefile is used as a… script. There are no dependencies in the targets!

Okay, let’s read that Dockerfile. Executing the scripts inside it should help out, aye?

# Build the go application into a binary
FROM golang:alpine as builder
RUN apk --update add ca-certificates
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -a -installsuffix cgo -o gatus .

# Run Tests inside docker image if you don't have a configured go environment
#RUN apk update && apk add --virtual build-dependencies build-base gcc
#RUN go test ./... -mod vendor

# Run the binary on an empty container
FROM scratch
COPY --from=builder /app/gatus .
COPY --from=builder /app/config.yaml ./config/config.yaml
COPY --from=builder /app/web/static ./web/static
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENV PORT=8080
EXPOSE ${PORT}

There are multiple things wrong with this, if you ask me.

First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.

Second, I thought that running go build without GOOS=linux would solve all of my problems. I was wrong, very wrong.

root@mon:~/gatus/gatus-2.8.1 # env CGO_ENABLED=0 go build -mod vendor -a -installsuffix cgo -o gatus .
package github.com/TwinProduction/gatus
        imports github.com/TwinProduction/gatus/config
        imports github.com/TwinProduction/gatus/storage
        imports github.com/TwinProduction/gatus/storage/store
        imports github.com/TwinProduction/gatus/storage/store/sqlite
        imports modernc.org/sqlite
        imports modernc.org/libc
        imports modernc.org/libc/errno: build constraints exclude all Go files in /root/gatus/gatus-2.8.1/vendor/modernc.org/libc/errno

Okay, check this out, the package is called modernc.org/sqlite and it says:

Package sqlite is a CGo-free port of SQLite.

SQLite is an in-process implementation of a self-contained, serverless, zero-configuration, transactional SQL database engine.

Of course it is. Looks like I have to port all of this to FreeBSD. Which, don’t get me wrong, I’m okay with doing, but I thought we had POSIX for a reason. Not so much.

Okay, I’m an open-source guy, I’ll spend some time this weekend to port this to FreeBSD. Let’s look for another solution!

Here’s another one, it’s called statping, also written in Go, the readme is so promising.

No Requirements

Statping is built in Go Language so all you need is the precompile binary based on your operating system. You won’t need to install anything extra once you have the Statping binary installed. You can even run Statping on a Raspberry Pi.

Sounds good! Let’s try it out.

Again, I fetch the tarball, I extract it, and I run make.

apt executed in Make

Of course it requires apt! Because not only do we all run Linux, but we all run a specific distribution of Linux with a specific package manager.

While I was tweeting in anger, Daniel pointed out that I should tell them kindly and it’ll work out. I’m sure it will. Let’s hope I can make it work first. I don’t like just opening issues; I’d rather send a patch directly.

</rant>

Overall, now I understand why most *BSD folks use, what’s the word here? ah, yes, old-school software on their systems, like Nagios and the rest.

The developers of the New World Order will always assume that you are running Linux (as Ubuntu), and that you always have Docker.

Hopefully this weekend I will be able to port this software to FreeBSD; otherwise I will just use the Linux layer.

Like Rubenerd said, I am thankful that the mainstream-ness of Linux helped other Unix systems as well, but monocultures are destroying what people have spent years to improve.

Hopefully, next week, I will write a blog post on how to fix these issues and how I got all of those up and running.

That’s all folks…


Two Colons Equals Modules

Days ago I tweeted a shell function which is part of jailio’s code base. Jailio is a project I’ve been working on for the last 6 months. As the name implies, it’s a container management software for FreeBSD Jails.

It has two unique things compared to other Jail management software. First of all, it has no dependencies; it’s written purely in Shell. You can say the same about BastilleBSD; however, Jailio’s second unique thing is that it uses only base tools and requires only the base system. For example, with BastilleBSD you need to set bastille_enable, and it also uses its own config files, etc. With Jailio, you need to set jail_enable, because technically Jailio just automates jail.conf files. It also uses my patch to automate the jail.confs in /etc/jail.conf.d.

Anyway, back to our topic about Colons and Modules.

I like modules, I got introduced to them when I started programming in school. In Syria, we learn programming in 7th grade, but in our school we started a year early, so 6th grade. We always start with block diagrams and then Turbo Pascal!

Yes, 16-bit Turbo Pascal was my first programming language and it had the concept of modules which we called Units.

And then you have languages like C or Shell which don’t have modules. If you’ve used modules, you KNOW that it’s hard to go without them afterwards.

While reading the source code of vm-bhyve I learned that you can use two colons (::) as part of a function name, which can give you an amazing new superpower to take over the world… I mean, write cleaner code.

For me this was a life-changer. I write a LOT of Shell code. I ship it to production too. No, you don’t need to write everything in a fancy new language and run it on Kubernetes; you can always use simple languages like Shell and run them in a FreeBSD Jail. Or, in my case, write Shell to automate FreeBSD Jails.

Here’s some example code with “modules” in Shell. Note: this works in FreeBSD’s shell; I have not tested other Shells yet.

main.sh

#!/bin/sh

. ./mod1.sh

mod1::func1

mod1.sh

#!/bin/sh

mod1::func1(){
  printf "Here I am, rock you like a hurricane\n"
}

antranigv@pingvinashen:~ % ./main.sh 
Here I am, rock you like a hurricane

As you can see it all relies on the concept that the function name itself has two colons in its name.

Here’s the code from jailio that I tweeted.

jail::get_next_id(){
  expr $(
    ( grep -s '$id' /etc/jail.conf.d/* || echo '$id = "0";' ) |
    awk -F '[="]' '{print $3}' |
    sort -h |
    tail -1
  ) + 1
}

After tweeting the code above, Annatar replied that this should NOT work elsewhere, and that’s how I got introduced to The Heirloom Project, which provides traditional implementations of the original Unix tools from the original Unix source code.

Hopefully, I will see more people using “modules” in Shell scripts. Hopefully this trick works in other Shell implementations like Bash and zsh.
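A quick, unscientific way to test whether a given shell accepts colons in function names is a one-liner like this (swap in the shell you care about):

% sh -c 'mod::hello(){ echo "hello from a module"; }; mod::hello'
% bash -c 'mod::hello(){ echo "hello from a module"; }; mod::hello'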

That’s all folks.
