Tag Archives: FreeBSD

Changing FreeBSD’s rcorder without patching

I was upgrading my jails when I noticed that the WriteFreely instance for օրագիր.հայ was down. I jexec’d into the jail and saw that the writefreely process wasn’t running at all; a simple service writefreely start brought it back. Why?

Turns out that WriteFreely needs MySQL to be running when it starts, and I assume it wasn’t yet. Running rcorder let me see the boot order.

# rcorder /usr/local/etc/rc.d/* 2>/dev/null
writefreely
rsyncd
mysql-server
garb

So, my first instinct was to patch the /usr/local/etc/rc.d/writefreely script and add mysql to the REQUIRE line, but then I thought to myself: I can’t be the only person who has had this problem, right? Besides, I know that the script will be overwritten during the next upgrade. What’s the actual solution here?

After searching a bit, I found the article Override rc order in FreeBSD, so based on that, I created the following file: /usr/local/etc/rc.d/__writefreely which has the following content

#!/bin/sh

# PROVIDE: __writefreely
# REQUIRE: mysql
# BEFORE: writefreely
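
(That’s the entire file; rcorder(8) only reads these metadata comments.)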

This is a much cleaner way to do things. Let’s check the rcorder again:

# rcorder /usr/local/etc/rc.d/* 2>/dev/null
mysql-server
rsyncd
garb
__writefreely
writefreely

Much, much better. After restarting the jail, however, I noticed that WriteFreely was still not running… huh?

Oh, of course: /etc/rc only picks up executable startup scripts, so I just needed to do chmod +x /usr/local/etc/rc.d/__writefreely

And now it works.

Reply via email.

FreeBSD-Update and ~200 Jails

Initially, when I heard about freebsd-rustdate, I was very skeptical. I have a fear of “written in <new hip language>” software. I thought, however, that I’d wait, and when the time came, I’d try it and see how it works.

For the last couple of days I’ve been updating hosts and jails for my customers and my company, and one of the best resources I found was the FreeBSD Update page on FreeBSD’s Wiki, especially the “freebsd-update Reverse Proxy Cache” section. It saved me hours when updating the hosts. For some hosts we even did an NFS mount of the /var/db/freebsd-update/files directory.

But when it came to upgrading the jails, I realized that this was going to take a very long time. Each host has at least 15 jails, sometimes up to 50. One host has 100+ jails.

Upgrading all of them one by one was going to take a very, very long time, so I ended up doing some research. Here were my options:

  • Build FreeBSD once and run make install everywhere else using NFS and DESTDIR (I used to do this years ago; see the sketch after this list)
  • Migrate to PkgBase (we’ve started doing this, but we’re not done yet, and it will take a while)
  • Nuke the Jails, start fresh, and just move the data (this could work, and I will do that in the future, but now I need to update ~200 jails in the coming 3 days)
  • Somehow, make freebsd-update run faster.
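
For reference, the first option looks roughly like this (a sketch from memory; the DESTDIR path is illustrative):

# on the build host:
cd /usr/src && make -j"$(sysctl -n hw.ncpu)" buildworld

# on every other host/jail, with /usr/src and /usr/obj mounted over NFS:
cd /usr/src
make installworld DESTDIR=/jails/myjail
make delete-old BATCH_DELETE_OLD_FILES=yes DESTDIR=/jails/myjail
etcupdate -D /jails/myjail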

As you have guessed, I went for the last option. Uncle Dave reminded me of freebsd-rustdate again, and I decided to give it a try. Even before starting, my good friend Daniel wrote in our group chat:

@dch my guy. You just saved me several hours per year of flipping back and forth between terminals waiting for the next part of a freebsd-update upgrades to finish running on a million systems.

I arrived at my parents’ house, installed freebsd-rustdate on a host, and tested it on a single jail. Here was my initial reaction:

holy fuck freebsd-rustdate is fucking fast

Like I said, I hate “rewrite in <new hip language>”, but clearly, this time it’s a winner.

And frankly speaking, my Jail manager, jailer, has the same problems that freebsd-update has: it’s much, much slower when you have to manage 100+ jails. I will, however, not rewrite it in another language (for now; and if I do, it will be in Oberon). Although I might end up spending a good amount of time optimizing it 🙂

Kudos to Matthew Fuller, amazing work. And I have to mention, when I was thinking about moving to FreeBSD more than a decade ago, his rant BSD for Linux Users was the deciding factor for me, and I’ve been using FreeBSD ever since.

That’s all folks…

Reply via email.

dtrace.conf is back as dtrace.conf(24)

Woke up in the middle of the night to grab a cup of water, decided to check Mastodon, and what do I see?

dtrace.conf(24) Tickets, Wed, Dec 11, 2024 at 9:00 AM

This makes me very happy! I love seeing DTrace in the wild, and having more DTrace content out there is beneficial to everyone in the DTrace community.

Obviously, being a Syrian with passport issues, I will not be able to attend, but hopefully everything will be recorded and published online. I’ll try to make it to dtrace.conf(28).

Have fun everyone!

Reply via email.

Antranig Vartanian

October 6, 2024

Initially, Jailer had a single image format to download, the “FreeBSD base image”, also known as base.txz.

Now we’re trying to integrate PkgBase, OCI images, Jailer binary images, Jailer source images (jailerfile), Linux bootstrap images, and regular tarballs.

This is the point where I just want to kill myself. This is harder than expected.

Linux has a package management problem. I’m having a “too many registry types” problem.

Let’s see how it goes.

#Jailer #FreeBSD

Reply via email.

The FreeBSD-native-ish home lab and network

For many years my setup was pretty simple: a FreeBSD home server running on my old laptop. It runs everything I need to be present on the internet: an email server, a web server (like the one you’ve accessed right now to see this blog post) and a public chat server (XMPP/Jabber) so I can be in touch with friends.

For my home network, I had a basic Access Point and a basic Router.

Lately, my setup has become more… intense. I have IPv6 thanks to Hurricane Electric, which is routed to my home network (we’ll talk about that in a bit), and the home network itself has multiple VLANs, since friends who come over also need WiFi.

I decided to blog about the details, hoping it would help someone in the future.

I’ll start with the simplest one.

The Home Server

I’ve been running home servers for a long time. I believe that every person/family needs a home server. Forget about buying your kids iPads and smartphones. Their first devices should be a real computer (sorry Apple, iOS devices are still just toys), like a desktop/laptop, and a home server. The home server doesn’t need to be on the public internet, but mine is, for a variety of reasons. This blog being one of them.

I get a static IP address from my ISP, Ucom. After the management change a couple of years ago, Ucom has become a very typical ISP (think shitty), but they are the only ones that give you a real static IP address instead of putting it on their router and making you do port forwarding.

My home server, hostnamed pingvinashen (meaning “the town of the penguins”, named after the Armenian cartoon), runs FreeBSD. Historically this machine has run Debian, Funtoo, Gentoo and finally FreeBSD.

Hardware wise, here’s what it is:

root@pingvinashen:~ # dmidecode -s system-product-name
Latitude E5470
root@pingvinashen:~ # sysctl hw.model
hw.model: Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
root@pingvinashen:~ # sysctl hw.physmem
hw.physmem: 17016950784
root@pingvinashen:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   420G   178G   242G        -         -    64%    42%  1.00x    ONLINE  -

While most homelabbers use hardware virtualization, I think that resources are scarce and should be managed properly. Any company that markets itself as “green/eco-friendly” and uses hardware virtualization should do the math with pen and paper and prove whether going native would save power/resources or not. (Sometimes it doesn’t; usually it does.)

I use containers, the old-school ones, Jails to be more specific.

I manage jails using Jailer, my own tool, which tries to stay out of your way when working with Jails.

Here are my current jails:

root@pingvinashen:~ # jailer list
NAME        STATE    JID  HOSTNAME              IPv4               GW
antranig    Active   1    antranig.bsd.am       192.168.10.42/24   192.168.10.1
antranigv   Active   2    antranigv.bsd.am      192.168.10.52/24   192.168.10.1
git         Stopped
huginn0     Active   4    huginn0.bsd.am        192.168.10.34/24   192.168.10.1
ifconfig    Active   5    ifconfig.bsd.am       192.168.10.33/24   192.168.10.1
lucy        Active   6    lucy.vartanian.am     192.168.10.37/24   192.168.10.1
mysql       Active   7    mysql.antranigv.am    192.168.10.50/24   192.168.10.1
newsletter  Active   8    newsletter.bsd.am     192.168.10.65/24   192.168.10.1
oragir      Active   9    oragir.am             192.168.10.30/24   192.168.10.1
psql        Active   10   psql.pingvinashen.am  192.168.10.3/24    192.168.10.1
rss         Active   11   rss.bsd.am            192.168.10.5/24    192.168.10.1
sarian      Active   12   sarian.am             192.168.10.53/24   192.168.10.1
syuneci     Active   13   syuneci.am            192.168.10.60/24   192.168.10.1
znc         Active   14   znc.bsd.am            192.168.10.152/24  192.168.10.1

You already get a basic idea of how things are. Each of my blogs (Armenian and English) has its own Jail. Since I’m using WordPress, I need a database, so there’s a MySQL jail (which, ironically, runs MariaDB).

I also have a Git server, running gitea, which is down at the moment as I’m doing maintenance. The Git server (and many other services) requires PostgreSQL, hence the existence of a PostgreSQL jail. I run huginn for automation (RSS to Telegram, RSS to XMPP). My sister has her own blog, using WordPress, so that’s a Jail of its own. Same goes for my fiancée.

Other Jails are Newsletter using Listmonk, Sarian (the Armenian instance of lobste.rs) and a personal ZNC server.

As an avid RSS advocate, I also have an RSS Jail, which runs Miniflux. Many of my friends use this service.

Oragir is an instance of WriteFreely, as I advocate public blogging and ActivityPub. Our community uses that too.

The web server that forwards all this traffic from the public to the Jails is nginx. All it does is proxy_pass as needed. It runs on the host.
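
Each vhost is only a handful of lines. Here’s a minimal sketch, using the antranig jail’s address from the list above (the exact headers vary per service):

server {
    listen 80;
    server_name antranig.bsd.am;

    location / {
        proxy_pass http://192.168.10.42;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}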

Other services that run on the host are DNS (BIND9), an email service running OpenSMTPd (which will be moved to a Jail soon), the chat service running prosody (which will be moved to a Jail soon) and finally, WireGuard, because I love VPNs.

Finally, there’s an IPv6-over-IPv4 tunnel that I use to obtain IPv6, thanks to Hurricane Electric.

Yes, I have a firewall: pf(4).

For the techies in the room, here’s what my rc.conf looks like.

# cat /etc/rc.conf
# Defaults
clear_tmp_enable="YES"
syslogd_flags="-ss"
sendmail_enable="NONE"
#local_unbound_enable="YES"
sshd_enable="YES"
moused_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"

hostname="pingvinashen.am"

# Networking
defaultrouter="37.157.221.1"
gateway_enable="YES"


ifconfig_em0="up"
vlans_em0="37 1000" # 1000 -> WAN; 37 -> Home Router

ifconfig_em0_1000="inet 37.157.221.130 netmask 255.255.255.0"
ifconfig_em0_37="inet 192.168.255.2 netmask 255.255.255.0"

static_routes="home"
route_home="-net 172.16.100.0/24 -gateway 192.168.255.1"


cloned_interfaces="bridge0 bridge6 bridge10"
ifconfig_bridge10="inet 192.168.10.1 netmask 255.255.255.0"


## IPv6
ipv6_gateway_enable="YES"

gif_interfaces="gif0"
gifconfig_gif0="37.157.221.130 216.66.84.46"
ifconfig_gif0="inet6 2001:470:1f14:ef::2 2001:470:1f14:ef::1 prefixlen 128"
ipv6_defaultrouter="2001:470:1f14:ef::1"

ifconfig_em0_37_ipv6="inet6 2001:470:7914:7065::2 prefixlen 64"
ipv6_static_routes="home guest"
ipv6_route_home="-net 2001:470:7914:6a76::/64 -gateway 2001:470:7914:7065::1"
ipv6_route_guest="-net 2001:470:7914:6969::/64 -gateway 2001:470:7914:7065::1"

ifconfig_bridge6_ipv6="inet6 2001:470:1f15:e4::1 prefixlen 64"

ifconfig_bridge6_aliases="inet6 2001:470:1f15:e4::25 prefixlen 64 \
inet6 2001:470:1f15:e4::80 prefixlen 64      \
inet6 2001:470:1f15:e4::5222 prefixlen 64    \
inet6 2001:470:1f15:e4:c0fe::53 prefixlen 64 \
"


# VPN
wireguard_enable="YES"
wireguard_interfaces="wg0"

# Firewall
pf_enable="YES"

# Jails
jail_enable="YES"
jailer_dir="zfs:zroot/jails"

# DNS
named_enable="YES"

# Mail
smtpd_enable="YES"
smtpd_config="/usr/local/etc/smtpd.conf"

# XMPP
prosody_enable="YES"
turnserver_enable="YES"

# Web
nginx_enable="YES"
tor_enable="YES"

The gif0 interface is an IPv6-over-IPv4 tunnel. I have static routes to my home network, so I don’t go through the ISP every time I reach my server. This also gives my home network IPv6, routed via my home server.

As you have guessed from this config file, I do have VLANs set up. So let’s get into that.

The Home Network

First of all, here’s a very cheap diagram

I have the following VLANs set up on the switch.

VLAN ID  Purpose
1        Switch Management
1000     pingvinashen (home server) WAN
1001     evn0 (home router) WAN
37       pingvinashen ↔ evn0
42       Internal Management
100      Home LAN
69       Home Guest

Here are the active ports

Port  VLANs                               Purpose
24    untagged: 1                         Switch management, connects to Port 2
22    untagged: 1000                      pingvinashen WAN, from ISP
21    untagged: 1001                      Home WAN, from ISP
20    tagged: 1000, 37                    To pingvinashen, port em0
19    untagged: 1001                      To home router, port igb1
18    tagged: 42, 100, 69, 99             To home router, port igb2
17    untagged: 37                        To home router, port igb0
16    tagged: 42, 100, 69                 To Lenovo T480s
15    untagged: 100                       To Raspberry Pi 4
2     untagged: 99                        From Port 24, for switch management
1     untagged: 42; tagged: 100, 69; PoE  To UAP AC Pro

The home router, hostnamed evn0 (named after the IATA code of Yerevan’s Zvartnots International Airport), runs FreeBSD as well. The hardware is the following:

root@evn0:~ # dmidecode -s system-product-name
APU2
root@evn0:~ # sysctl hw.model
hw.model: AMD GX-412TC SOC                               
root@evn0:~ # sysctl hw.physmem
hw.physmem: 4234399744
root@evn0:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  12.5G  9.47G  3.03G        -         -    67%    75%  1.00x    ONLINE  -

The home router does… well, routing. It also does DHCP, DNS, SLAAC, and can act as a syslog server.

Here’s what the rc.conf looks like

clear_tmp_enable="YES"
sendmail_enable="NONE"
syslogd_flags="-a '172.16.100.0/24:*' -H"
zfs_enable="YES"
dumpdev="AUTO"

hostname="evn0.illuriasecurity.com"

pf_enable="YES"
gateway_enable="YES"
ipv6_gateway_enable="YES"

sshd_enable="YES"

# Get an IP address from the ISP's GPON
ifconfig_igb1="DHCP"

# Internal routes with pingvinashen
ifconfig_igb0="inet 192.168.255.1 netmask 255.255.255.0"
ifconfig_igb0_ipv6="inet6 2001:470:7914:7065::1 prefixlen 64"

static_routes="pingvinashen"
route_pingvinashen="-net 37.157.221.130/32 -gateway 192.168.255.2"

ipv6_defaultrouter="2001:470:7914:7065::2"

# Home Mgmt, Switch Mgmt, Home LAN, Home Guest
ifconfig_igb2="up"
vlans_igb2="42 99 100 69"
ifconfig_igb2_42="inet 172.31.42.1 netmask 255.255.255.0"
ifconfig_igb2_99="inet 172.16.99.1 netmask 255.255.255.0"

ifconfig_igb2_100="inet 172.16.100.1 netmask 255.255.255.0"
ifconfig_igb2_100_ipv6="inet6 2001:470:7914:6a76::1 prefixlen 64"

ifconfig_igb2_69="inet 192.168.69.1 netmask 255.255.255.0"
ifconfig_igb2_69_ipv6="inet6 2001:470:7914:6969::1 prefixlen 64"

# DNS and DHCP
named_enable="YES"
dhcpd_enable="YES"

named_flags=""

# NTP
ntpd_enable="YES"

# Router Advertisement and LLDP
rtadvd_enable="YES"
lldpd_enable="YES"
lldpd_flags=""

Here’s pf.conf, because security is important.

ext_if="igb1"
bsd_if="igb0"
int_if="igb2.100"
guest_if="igb2.69"
mgmt_if="igb2.42"
sw_if="igb2.99"

ill_net="172.16.0.0/16"

nat pass on $ext_if from $int_if:network to any -> ($ext_if)
nat pass on $ext_if from $mgmt_if:network to any -> ($ext_if)
nat pass on $ext_if from $guest_if:network to any -> ($ext_if)

set skip on { lo0 }

block in all

pass on $int_if   from $int_if:network   to any
pass on $mgmt_if  from $mgmt_if:network  to any
pass on $sw_if    from $sw_if:network    to any
pass on $guest_if from $guest_if:network to any

block quick on $guest_if from any to { $int_if:network, $mgmt_if:network, $ill_net, $sw_if:network }

pass in on illuria0 from $ill_net to { $ill_net, $mgmt_if:network }

pass inet  proto icmp
pass inet6 proto icmp6
pass out   all   keep state

I’m sure there are places to improve, but it gets the job done and keeps the guest network isolated.

Here’s rtadvd.conf, for my IPv6 folks

igb2.100:\
  :addr="2001:470:7914:6a76::":prefixlen#64:\
  :rdnss="2001:470:7914:6a76::1":\
  :dnssl="evn0.loc.illuriasecurity.com,loc.illuriasecurity.com":

igb2.69:\
  :addr="2001:470:7914:6969::":prefixlen#64:\
  :rdnss="2001:470:7914:6969::1":

For DNS, I’m running BIND, here’s the important parts

listen-on     { 127.0.0.1; 172.16.100.1; 172.16.99.1; 172.31.42.1; 192.168.69.1; };
listen-on-v6  { 2001:470:7914:6a76::1; 2001:470:7914:6969::1; };
allow-query   { 127.0.0.1; 172.16.100.0/24; 172.31.42.0/24; 192.168.69.0/24; 2001:470:7914:6a76::/64; 2001:470:7914:6969::/64;};

And for DHCP, here’s what it looks like

subnet 172.16.100.0 netmask 255.255.255.0 {
        range 172.16.100.100 172.16.100.150;
        option domain-name-servers 172.16.100.1;
        option subnet-mask 255.255.255.0;
        option routers 172.16.100.1;
        option domain-name "evn0.loc.illuriasecurity.com";
        option domain-search "loc.illuriasecurity.com evn0.loc.illuriasecurity.com";
}

host zvartnots {
    hardware ethernet d4:57:63:f1:5a:36;
    fixed-address 172.16.100.7;
}

host unifi0 {
    hardware ethernet 58:9c:fc:93:d1:0b;
    fixed-address 172.31.42.42;
}
[…]

subnet 172.31.42.0 netmask 255.255.255.0 {
        range 172.31.42.100 172.31.42.150;
        option domain-name-servers 172.31.42.1;
        option subnet-mask 255.255.255.0;
        option routers 172.31.42.1;
}

subnet 192.168.69.0 netmask 255.255.255.0 {
        range 192.168.69.100 192.168.69.150;
        option domain-name-servers 192.168.69.1;
        option subnet-mask 255.255.255.0;
        option routers 192.168.69.1;
}

So you’re wondering, what’s this unifi0? Well, that brings us to

T480s

This laptop was gifted to me by [REDACTED] for my contributions to the Armenian government (which means that when a server goes down and no one knows how to fix it, they call me and I show up).

Here’s the hardware

root@t480s:~ # dmidecode -s system-version
ThinkPad T480s
root@t480s:~ # sysctl hw.model
hw.model: Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
root@t480s:~ # sysctl hw.physmem
hw.physmem: 25602347008
root@t480s:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   224G   109G   115G        -         -    44%    48%  1.00x    ONLINE  -

The T480s is wired for VLANs 100, 42 and 69, but the host itself has an address only on VLAN 100 (LAN), while the jails can live on the other VLANs.

So I have a Jail named unifi0 that runs the Unifi Management thingie.

Here’s what rc.conf of the host looks like

clear_tmp_enable="YES"
syslogd_flags="-ss"
sendmail_enable="NONE"
sshd_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"

hostname="t480s.evn0.loc.illuriasecurity.com"

ifconfig_em0="up -rxcsum -txcsum"
vlans_em0="100 42 69"
ifconfig_em0_100="up"
ifconfig_em0_42="up"
ifconfig_em0_69="up"

cloned_interfaces="bridge0 bridge100 bridge42 bridge69"

create_args_bridge100="ether 8c:16:45:82:b4:10"
ifconfig_bridge100="addm em0.100 SYNCDHCP"
ifconfig_bridge100_ipv6="inet6 auto_linklocal"
rtsold_flags="-i -F -m bridge100"
rtsold_enable="YES"

create_args_bridge42=" ether 8c:16:45:82:b4:42"
create_args_bridge69=" ether 8c:16:45:82:b4:69"

ifconfig_bridge42="addm em0.42"
ifconfig_bridge69="addm em0.69"


jail_enable="YES"
jailer_dir="zfs:zroot/jailer"

ifconfig_bridge0="inet 10.1.0.1/24 up"
ngbuddy_enable="YES"
ngbuddy_private_if="nghost0"
dhcpd_enable="YES"

lldpd_enable="YES"

I used Jailer to create the unifi0 jail. Here’s what the jail.conf looks like:

# vim: set syntax=sh:
exec.clean;
allow.raw_sockets;
mount.devfs;

unifi0 {
  $id             = "6";
  devfs_ruleset   = 10;
  $bridge         = "bridge42";
  $domain         = "evn0.loc.illuriasecurity.com";
  vnet;
  vnet.interface = "epair${id}b";

  exec.prestart   = "ifconfig epair${id} create up";
  exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
  exec.prestart  += "ifconfig ${bridge} addm epair${id}a up";

  exec.start      = "/sbin/ifconfig lo0 127.0.0.1 up";
  exec.start     += "/bin/sh /etc/rc";

  exec.stop       = "/bin/sh /etc/rc.shutdown jail";
  exec.poststop   = "ifconfig ${bridge} deletem epair${id}a";
  exec.poststop  += "ifconfig epair${id}a destroy";

  host.hostname   = "${name}.${domain}";
  path            = "/usr/local/jailer/unifi0";
  exec.consolelog = "/var/log/jail/${name}.log";
  persist;
  mount.fdescfs;
  mount.procfs;
}

Here are the important parts inside the jail

root@t480s:~ # cat /usr/local/jailer/unifi0/etc/rc.conf
ifconfig_epair6b="SYNCDHCP"
sendmail_enable="NONE"
syslogd_flags="-ss"
mongod_enable="YES"
unifi_enable="YES"
root@t480s:~ # cat /usr/local/jailer/unifi0/etc/start_if.epair6b 
ifconfig epair6b ether 58:9c:fc:93:d1:0b

Don’t you love that you can see what’s inside the jail from the host? God I love FreeBSD!

Did I miss anything? I hope not.

Oh, for the homelabbers out there, the T480s is the one that runs things like Jellyfin if needed.

Finally, the tiny 

Raspberry Pi 4, Model B

I found this in a closet, so I decided to run it as a Time Machine target.

I guess all you care about is rc.conf

hostname="tm0.evn0.loc.illuriasecurity.com"
ifconfig_DEFAULT="DHCP inet6 accept_rtadv"
sshd_enable="YES"
sendmail_enable="NONE"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
growfs_enable="YES"
powerd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
rtsold_enable="YES"
samba_server_enable="YES"

And the Samba Configuration

[global]
# Network settings
workgroup = WORKGROUP
server string = Samba Server %v
netbios name = RPi4

# Logging
log file = /var/log/samba4/log.%m
max log size = 50
log level = 0

# Authentication
security = user
encrypt passwords = yes
passdb backend = tdbsam
map to guest = Bad User

min protocol = SMB2
max protocol = SMB3


# Apple Time Machine settings
vfs objects = catia fruit streams_xattr
fruit:metadata = stream
fruit:resource = stream
fruit:encoding = native
fruit:locking = none
fruit:time machine = yes

# File System support
ea support = yes
kernel oplocks = no
kernel share modes = no
posix locking = no
mangled names = no
smbd max xattr size = 2097152

# Performance tuning
read raw = yes
write raw = yes
getwd cache = yes
strict locking = no

# Miscellaneous
local master = no
preferred master = no
domain master = no
wins support = no

[tm]
comment = Time Machine RPi4
path = /usr/local/timemachine/%U
browseable = yes
read only = no
valid users = antranigv
vfs objects = catia fruit streams_xattr
fruit:time machine = yes
fruit:advertise_fullsync = true
# adjust the size according to your needs
fruit:time machine max size = 800G
create mask = 0600
directory mask = 0700

That’s pretty much it.

Conclusion

I love running homebrew servers, home networks and home labs. I love that (almost) everything is FreeBSD. The switch itself runs Linux, and the Unifi Access Point also runs Linux, both of which I’m pretty happy with.

While most homelabbers used ESXi in the past, I’m happy to see that most people are moving to open-source solutions like Proxmox and Xen, but I think that FreeBSD Jails and bhyve are much better. I still don’t have a need for bhyve at the moment, but I would use it if I needed hardware virtualization.

Most homelabbers would consider the lack of Web/GUI interfaces a con, but I think it’s a pro. If I need to “replicate” this network, all I need to do is copy some text files and modify some IP addresses / interface names.

I hope this was informative and will be useful to someone in the future.

That’s all folks… 

Reply via email.

Installing FreeBSD with Root-on-ZFS on Vultr using iPXE

The title is pretty self-explanatory, so let’s get to it, shall we?

I was configuring a server for a customer today, and one of the things I noticed was that FreeBSD is not available for bare-metal.

This got me a bit worried, because we use a lot of FreeBSD on Vultr… Well that’s a lie. We only use FreeBSD on Vultr.

I logged into our company account and noticed that our bare-metal servers do have FreeBSD as an icon for the image.

So I decided to check the docs and found this:

What operating system templates do you offer?

We offer many Linux and Windows options. We do not offer OpenBSD or FreeBSD images for Vultr Bare Metal. Use our iPXE boot feature if you need to install a custom operating system.

Well, that’s sad, but on the other hand, iPXE will be very useful. We can boot a memdisk such as mfsBSD and install FreeBSD from there.

To start, we need a VM that can host the mfsBSD img/ISO file. I spun up a VM on Vultr running FreeBSD (although it could run anything else; it wouldn’t matter), installed nginx on it, and downloaded the file so we can boot from it. Here’s the copy-pasta:

pkg install -y nginx
service nginx enable && service nginx start
fetch -o /usr/local/www/ \
https://mfsbsd.vx.sk/files/images/14/amd64/mfsbsd-se-14.0-RELEASE-amd64.img

This should be enough to get started. Oh, if you’re not on FreeBSD, then the path might be different, like /var/www/nginx or something similar. Check your nginx configuration for the details.

Now we need to write an iPXE script and add it to our Vultr iPXE scripts. Here’s what it looks like:

#!ipxe

echo Starting MFSBSD
sanboot http://your.server.ip.address/mfsbsd-se-14.0-RELEASE-amd64.img
boot

Finally, we can create a bare-metal that uses our script for iPXE boot.

Don’t forget to choose the right location and plan.

After the machine is provisioned, you need to access the console and you will see the boot process.

The default root password is mfsroot.

To install FreeBSD, you can run bsdinstall. The rest will be familiar to you. Yes, you can use Root-on-ZFS. No, it can’t boot via UEFI; you must use GPT (BIOS).

Good luck, and special thanks to Vultr for giving us the chance to use our favorite tools on the public cloud.

That’s all folks…

Reply via email.

Installing DFIR-IRIS on FreeBSD using Jails

This is a live blogging of the installation process of DFIR-IRIS on FreeBSD 14.0-RELEASE using Jails and Jailer.

The main requirements are:

  • Nginx
  • PostgreSQL
  • Python
  • Some random dependencies we saw in the Dockerfile

I assume you already have nginx up and running; we will just be setting up a vhost under the domain name dfir.cert.am. Don’t worry, this is INSIDE our infrastructure, you will not be able to connect to it 🙂

Initial Setup

First we create a jail named iris0, using Jailer:

jailer create iris0

Next we install the required software inside the jail. Looks like everything is available in FreeBSD packages:

jailer console iris0
pkg install \
    nginx \
    python39 \
    py39-pip \
    gnupg \
    7-zip \
    rsync \
    postgresql12-client \
    git-tiny \
    libxslt \
    rust \
    acme.sh

Installing DFIR-IRIS

Since we’re using FreeBSD, we’ll be doing things the right way instead of the Docker way, so we will be running IRIS as a user, not as root.

pw user add iris -m

Next, we switch to that user and check out the repo:

root@iris0:~ # su - iris
iris@iris0:~ $ git clone --branch v2.4.7 https://github.com/dfir-iris/iris-web.git iris-web

Finally, we install some Python dependencies using pip.

iris@iris0:~ $ cd iris-web/source
iris@iris0:~/iris-web/source $ pip install -r requirements.txt

Now we have to configure the .env file based on our needs. I’ll post my version of it; I hope it helps.

# -- DATABASE
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_DB=iris_db
export POSTGRES_ADMIN_USER=iris
export POSTGRES_ADMIN_PASSWORD=longpassword

export POSTGRES_SERVER=localhost
export POSTGRES_PORT=5432

# -- IRIS
export DOCKERIZED=0
export IRIS_SECRET_KEY=verylongsecret
export IRIS_SECURITY_PASSWORD_SALT=verylongsalt
export IRIS_UPSTREAM_SERVER=app # these are for docker, you can ignore
export IRIS_UPSTREAM_PORT=8000

# -- WORKER
export CELERY_BROKER=amqp://localhost
# Set to your rabbitmq instance

# Change these as you need them.
# -- AUTH
#IRIS_AUTHENTICATION_TYPE=local
## optional
#IRIS_ADM_PASSWORD=MySuperAdminPassword!
#IRIS_ADM_API_KEY=B8BA5D730210B50F41C06941582D7965D57319D5685440587F98DFDC45A01594
#IRIS_ADM_EMAIL=admin@localhost
#IRIS_ADM_USERNAME=administrator
# requests the just-in-time creation of users with ldap authentication (see https://github.com/dfir-iris/iris-web/issues/203)
#IRIS_AUTHENTICATION_CREATE_USER_IF_NOT_EXIST=True
# the group to which newly created users are initially added, default value is Analysts
#IRIS_NEW_USERS_DEFAULT_GROUP=

# -- LISTENING PORT
#INTERFACE_HTTPS_PORT=443

Configuring HTTPS

We can use acme.sh to issue a TLS certificate from Let’s Encrypt.

root@iris0:~ # acme.sh --set-default-ca --server letsencrypt
root@iris0:~ # acme.sh --issue -d dfir.cert.am --standalone
root@iris0:~ # acme.sh -i -d dfir.cert.am --fullchain-file /usr/local/etc/ssl/dfir.cert.am/fullchain.pem --key-file /usr/local/etc/ssl/dfir.cert.am/key.pem --reloadcmd 'service nginx reload'

Setup nginx

DFIR-IRIS provides an nginx configuration template, nginx.conf; we will be using that, with a few modifications.

The final nginx.conf will look like this:

#user  nobody;
worker_processes  1;

# This default error log path is compiled-in to make sure configuration parsing
# errors are logged somewhere, especially during unattended boot when stderr
# isn't normally logged anywhere. This path will be touched on every nginx
# start regardless of error log location configured here. See
# https://trac.nginx.org/nginx/ticket/147 for more info. 
#
#error_log  /var/log/nginx/error.log;
#

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    # Things needed/recommended by DFIR-IRIS
    map $request_uri $csp_header {
        default "default-src 'self' https://analytics.dfir-iris.org; script-src 'self' 'unsafe-inline' https://analytics.dfir-iris.org; style-src 'self' 'unsafe-inline';";
    }

    server_tokens off;
    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

    types_hash_max_size             2048;
    types_hash_bucket_size          128;
    proxy_headers_hash_max_size     2048;
    proxy_headers_hash_bucket_size  128;
    proxy_buffering                 on;
    proxy_buffers                   8 16k;
    proxy_buffer_size               4k;

    client_header_buffer_size   2k;
    large_client_header_buffers 8 64k;
    client_body_buffer_size     64k;
    client_max_body_size        100M;

    reset_timedout_connection   on;
    keepalive_timeout           90s;
    client_body_timeout         90s;
    send_timeout                90s;
    client_header_timeout       90s;
    fastcgi_read_timeout        90s;
    # WORKING TIMEOUT FOR PROXY CONF
    proxy_read_timeout          90s;
    uwsgi_read_timeout          90s;

    gzip off;
    gzip_disable "MSIE [1-6]\.";

    # FORWARD CLIENT IDENTITY TO SERVER
    proxy_set_header    HOST                $http_host;
    proxy_set_header    X-Forwarded-Proto   $scheme;
    proxy_set_header    X-Real-IP           $remote_addr;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;

    # FULLY DISABLE SERVER CACHE
    add_header          Last-Modified $date_gmt;
    add_header          'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since   off;
    expires             off;
    etag                off;
    proxy_no_cache      1;
    proxy_cache_bypass  1;

    # SSL CONF, STRONG CIPHERS ONLY
    ssl_protocols               TLSv1.2 TLSv1.3;

    ssl_prefer_server_ciphers   on;
    ssl_certificate             /usr/local/etc/ssl/dfir.cert.am/fullchain.pem;
    ssl_certificate_key         /usr/local/etc/ssl/dfir.cert.am/key.pem;
    ssl_ecdh_curve              secp521r1:secp384r1:prime256v1;
    ssl_buffer_size             4k;

    # DISABLE SSL SESSION CACHE
    ssl_session_tickets         off;
    ssl_session_cache           none;
    server {
        listen          443 ssl;
        server_name     dfir.cert.am;
        root            /www/data;
        index           index.html;
        error_page      500 502 503 504  /50x.html;

        add_header Content-Security-Policy $csp_header;
        
        # SECURITY HEADERS
        add_header X-XSS-Protection             "1; mode=block";
        add_header X-Frame-Options              DENY;
        add_header X-Content-Type-Options       nosniff;
        # max-age = 31536000s = 1 year
        add_header Strict-Transport-Security    "max-age=31536000; includeSubDomains" always;
        add_header Front-End-Https              on;

        location / {
            proxy_pass  http://localhost:8000;

            location ~ ^/(manage/templates/add|manage/cases/upload_files) {
                keepalive_timeout           10m;
                client_body_timeout         10m;
                send_timeout                10m;
                proxy_read_timeout          10m;
                client_max_body_size        0M;
                proxy_request_buffering off;
                proxy_pass  http://localhost:8000;
            }

            location ~ ^/(datastore/file/add|datastore/file/add-interactive) {
                keepalive_timeout           10m;
                client_body_timeout         10m;
                send_timeout                10m;
                proxy_read_timeout          10m;
                client_max_body_size        0M;
                proxy_request_buffering off;
                proxy_pass  http://localhost:8000;
            }
        }
        location /socket.io {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_http_version 1.1;
            proxy_buffering off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_pass http://localhost:8000/socket.io;
        }
    }
}

Setup PostgreSQL

I assume you know how to do this 🙂 You don’t need to configure a separate database user; by the looks of it, IRIS likes to do that itself. Thanks to Jails, I was able to run a separate PostgreSQL instance inside the iris0 jail.

P.S. If you are running PostgreSQL inside a jail, make sure that the following variables are set in your jail configuration

  sysvshm         = new;
  sysvmsg         = new;

Running DFIR-IRIS

Now that everything is up and running, we just need to run DFIR-IRIS once and it will create the database, the needed users, an administration account, etc.

su - iris
cd ~/iris-web/source
. ../.env
~/.local/bin/gunicorn app:app --worker-class eventlet --bind 0.0.0.0:8000 --timeout 180 --worker-connections 1000 --log-level=debug

Assuming everything is fine, we can now set up an rc.d service script to make sure it runs at boot.

For that I wrote two files: the service script itself and a helper start.sh script.

rc.d script at /usr/local/etc/rc.d/iris

#!/bin/sh

# PROVIDE: iris
# REQUIRE: NETWORKING
# KEYWORD: 

. /etc/rc.subr

name="iris"
rcvar="iris_enable"
load_rc_config ${name}

: ${iris_enable:=no}
: ${iris_path:="/usr/local/iris"}
: ${iris_gunicorn:="/usr/local/bin/gunicorn"}
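# rc.subr(8) exports the contents of ${name}_env into the command's
# environment; that is how start.sh below learns the gunicorn path.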
: ${iris_env="iris_gunicorn=${iris_gunicorn}"}

logfile="${iris_path}/iris.log"
pidfile="/var/run/${name}/iris.pid"

iris_user="iris"
iris_chdir="${iris_path}/source"
iris_command="${iris_path}/start.sh"

command="/usr/sbin/daemon"
command_args="-P ${pidfile} -T ${name} -o ${logfile} ${iris_command}"

run_rc_command "$1"

and the helper script at /home/iris/iris-web/start.sh

#!/bin/sh

export HOME=$(getent passwd `whoami` | cut -d : -f 6)

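# note: the rc.d script chdirs into ${iris_path}/source, so this
# relative path resolves to ${iris_path}/.env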
. ../.env

${iris_gunicorn} app:app --worker-class eventlet --bind 0.0.0.0:8000 --timeout 180 --worker-connections 128

Now we set some variables in rc.conf using sysrc:

sysrc iris_enable="YES"
sysrc iris_path="/home/iris/iris-web"
sysrc iris_gunicorn="/home/iris/.local/bin/gunicorn"

Finally, we can start DFIR-IRIS as a service.

service iris start

Aaaaand we’re done 🙂

Thank you for reading!

There are some issues that I’d like to tackle (for example, service iris stop doesn’t work), and it would be nice to port all of the dependencies into Ports, but for now, this seems to be working fine.

Special thanks to the DFIR-IRIS team for creating this cool platform!

That’s all folks…

Reply via email.

Antranig Vartanian

April 26, 2024

I love ZFS…

root@evn0:/var/log/named # du -h -d 1
1.4G    .
root@evn0:/var/log/named # du -A -h -d 1
7.4G    .
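
(For the uninitiated: plain du reports actual on-disk usage, while du -A reports the apparent size, so ZFS compression is shrinking these logs by more than 5x.)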

Reply via email.

Mirroring OmniOS: The Complete Guide; Part One

Chapter Ⅰ

I know that “Complete Guide” and “Part One” are oxymorons, but hey, be happy that I’m publishing in parts, otherwise I’d completely ignore this blog post.

Two weeks ago I decided to play with illumos again. I was speaking with a friend and we were sharing our frustrations regarding Open-Source contribution. We write the code, we submit, we get feedback, we submit again, and then we’re ghosted. It’s like the LinkedIn or Tinder version of Software Engineering.

Then I asked him about his best open-source experience and he told me “illumos of course!”.

I was amazed. I thought you had to be very technical to even build illumos, but it turns out there is amazing documentation on building illumos, and OmniOS (an illumos distribution) has done work to make sure that the system can be self-hosting (i.e. the OS can build itself).

So, I decided to fire up OmniOS inside a bhyve VM on our hackerspace server running FreeBSD.

The installation went smoothly, but the IPS packages were slow to download, and I might be wrong (please correct me if I am) but IPS doesn’t seem to be keeping a local copy of the files. It always downloads. Is that configurable?

Regardless, I thought that the best way to contribute is to advocate. In order to do that, I needed to make sure that IPS servers are fast in Armenia. Hence, the mirroring project started.

Obey!

Requirements

Here is some terminology that I will use in this blog post, just so we are on the same page.

  • OmniOS: an illumos distribution
  • Origin: OmniOS’s IPS servers at pkg.omnios.org
  • Local: A copy of the Origin
  • Repository: A collection of software
  • Core: The Core Repository of OmniOS
  • Extras: The Extra Repository of OmniOS
  • IPS or PKG: The Image Packaging System and its utility, pkg
  • Zone: an illumos Zone (similar to FreeBSD Jails, Linux Containers, chroot) running on OmniOS

Now that we are on the same page, let’s talk about our setup and what we need.

  • An internet connection: duh!
  • A domain name: I decided to use pkg.omnios.illumos.am. Yes, I’m lucky like that.
  • A publicly accessible IP address.
  • A server: I am running OmniOS Stable (r151048) inside a VM. You can use bare-metal or a cloud VM if you want.
  • Storage: I am currently using around 50GB of storage; expect that to grow to around 300GB when we get to Part Three

Pre-Mirroring Setup

Before we setup our mirror, let’s make sure that we have a good infrastructure that we can maintain.

Here’s what we’ll create

  • A Zone that will act as the HTTP(s) server using nginx at IP address 10.10.0.80
  • A Zone that will do the mirroring using IPS tools at 10.10.0.51
  • A virtual dumb switch (etherstub) that will connect the Zones and the Global Zone (a.k.a. the Host) together. The GZ will have the address 10.10.0.1
  • ZFS datasets for each Core and Extras Repository (for each release)

Please note that there are many ways to do this, for example, having everything in a Global Zone, running IPS mirroring and nginx in a single Zone, not using etherstub at all, etc. But I like this setup as it will allow us to “grow” in the future.

From now on, omnios# means that we’re in the Global Zone and zone0# means we’re inside a Zone named zone0.

Let’s start with setting up our etherstub and connecting our Global Zone to it

omnios# dladm create-etherstub switch0
omnios# dladm create-vnic -l switch0 vnic0
omnios# ipadm create-if vnic0
omnios# ipadm create-addr -T static -a 10.10.0.1/24 vnic0/switch0

Done!

Now, we will set up our Zones using the zadm utility. Install zadm by running

omnios# pkg install zadm

After installing zadm, we’ll create a dataset for our Zones

omnios# zfs create -o mountpoint=/zones rpool/zones

This assumes that your ZFS pool is named rpool.

Finally, we can create our Zones. Running

omnios# zadm create -b pkgsrc www0

will open your $EDITOR, where you need to modify some JSON. Here’s what mine looks like!

{
   "autoboot" : "true",
   "brand" : "pkgsrc",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.80/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "www0"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "www0",
   "zonepath" : "/zones/www0"
}

After saving the file, zadm will install the Zone.

Now let’s set up our mirroring Zone. Do the same, but change the Zone name to repo, the brand to lipkg (i.e. -b lipkg), and set the IP address to 10.10.0.51/24.
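
For reference, the repo Zone’s JSON should end up looking something like this (same shape as above; the physical NIC name here is my guess):

{
   "autoboot" : "true",
   "brand" : "lipkg",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.51/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "repo0"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "repo",
   "zonepath" : "/zones/repo"
}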

All we need now is to forward HTTP/HTTPS traffic to the www0 Zone and allow all Zones to access the internet using NAT.

Create and edit IPFilter’s NAT file at /etc/ipf/ipnat.conf. Here’s an example configuration:

map vioif0 10.10.0.0/24 -> 212.34.250.10

rdr vioif0 212.34.250.10/32 port 80 -> 10.10.0.80 port 80 tcp
rdr vioif0 212.34.250.10/32 port 443 -> 10.10.0.80 port 443 tcp

Make sure you set the correct interface name and the correct external IP address.

Finally, we can boot our Zones!

omnios# zadm boot www0
omnios# zadm boot repo

You should see the following output when you run zadm again

omnios# zadm
NAME              STATUS     BRAND       RAM    CPUS  SHARES
global            running    ipkg        56G      12       1
repo              running    lipkg         -       -       1
www0              running    pkgsrc        -       -       1

Great! Let’s set up the mirroring process.

Mirroring Setup

Let’s create ZFS datasets for the repos, one per release:

repo# zfs create -o mountpoint=/repo rpool/zones/repo/ROOT/repo      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/core 
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/extra

And then we use the pkgrepo command to create a repository

repo# pkgrepo create /repo/r151048/core
repo# pkgrepo create /repo/r151048/extra

And finally, we can start receiving the packages from Origin to Local

repo# pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*'
repo# pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*'

This will take a while depending on your internet connection speed and the load on OmniOS’s Origin. It’s a good investment: we spend bandwidth and time now so we save traffic and time later 🙂

After it’s done, we need to set the publisher of these repos to match the Origin.

repo# pkgrepo set -s /repo/r151048/core   publisher/prefix=omnios
repo# pkgrepo set -s /repo/r151048/extra/ publisher/prefix=extra.omnios

And we’re done!

Now we need to serve these repos using IPS’s depot server.

We will create two instances of the depotd server, one for core and one for extra.

  • r151048/core will run on 5148
  • r151048/extra will run on 1148
  • (in the future) r151050/core will run on 5150
  • (in the future) r151050/extra will run on 1150

We start with core

repo# svccfg -s pkg/server add r151048_core
repo# svccfg -s pkg/server:r151048_core addpg pkg application
repo# svccfg -s pkg/server:r151048_core setprop pkg/inst_root = /repo/r151048/core/
repo# svccfg -s pkg/server:r151048_core setprop pkg/port = 5148
repo# svccfg -s pkg/server:r151048_core setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/core

And we do the same for extra

repo# svccfg -s pkg/server add r151048_extra
repo# svccfg -s pkg/server:r151048_extra addpg pkg application
repo# svccfg -s pkg/server:r151048_extra setprop pkg/inst_root = /repo/r151048/extra/
repo# svccfg -s pkg/server:r151048_extra setprop pkg/port = 1148
repo# svccfg -s pkg/server:r151048_extra setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/extra

Finally, we enable the services

repo# svcadm enable  pkg/server:r151048_core pkg/server:r151048_extra
repo# svcadm restart pkg/server:r151048_core pkg/server:r151048_extra

Let’s check!

We’re good! Now let’s set up nginx 🙂

The Web Server

This part is pretty easy: we log into www0, install nginx, and set up some paths. I will be posting a copy-pasta of my configs; I assume you can do the rest 🙂

www0# pkgin update
www0# pkgin install nginx

Thank you SmartOS! 🧡

In my nginx.conf, I added

include vhosts/*.conf;

and then in /opt/local/etc/nginx/vhosts I created a file
named pkg.omnios.illumos.am.conf, which looks like this

server {
        listen 80;
        server_name pkg.omnios.illumos.am;

        location /.well-known/acme-challenge/ {
          alias /opt/local/www/acme/.well-known/acme-challenge/;
        }

        location / {
            return 301 "https://pkg.omnios.illumos.am";
        }
}

server {
    listen       443 ssl;
    server_name  pkg.omnios.illumos.am;

    ssl_certificate      /etc/ssl/pkg.omnios.illumos.am/fullchain.pem;
    ssl_certificate_key  /etc/ssl/pkg.omnios.illumos.am/key.pem;
    location /r151048/core/ {
                proxy_pass http://10.10.0.51:5148/;
    }

    location /r151048/extra/ {
                proxy_pass http://10.10.0.51:1148/;
    }

    location / {
        # This needs to be changed, later...
        add_header Content-Type text/plain;
        return 200 "ok...";
    }
}

Finally, we just need to enable nginx

www0# svcadm enable pkgsrc/nginx

and check!

Using the Local Repos

This part is actually pretty easy. We just need to remove everything that exists and add our own. I will be running this on a computer named dna0.

dna0# pkg set-publisher -M '*' -G '*' omnios
dna0# pkg set-publisher -M '*' -G '*' extra.omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/core omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/extra extra.omnios
dna0# pkg publisher
PUBLISHER      TYPE    STATUS  P  LOCATION
extra.omnios   origin  online  F  https://pkg.omnios.illumos.am/r151048/extra/
omnios         origin  online  F  https://pkg.omnios.illumos.am/r151048/core/

We’re good! 🙂

Fetching Updates

By the time I wanted to publish this I noticed that there’s a new OmniOS Weekly Update, so I thought, hey, maybe I should try updating the Local Repo as well… how do we do that?

Turns out I just need to pkgrecv again, and then run a refresh command.

pkgrecv -v -s https://pkg.omnios.org/r151048/core/ -d /repo/r151048/core/ '*'
pkgrepo -s /repo/r151048/core refresh

And it looks like we’re good! Maybe we can set up a simple cron job 🙂
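
Something like this in the repo Zone’s crontab would do (a sketch; the schedule is arbitrary):

# refresh the local mirror from the Origin every Sunday morning
0 3 * * 0 pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*' && pkgrepo -s /repo/r151048/core  refresh
0 4 * * 0 pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*' && pkgrepo -s /repo/r151048/extra refresh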

Final Notes

This has been an amazing experience. Since I started using OmniOS two weeks ago, I’ve set up the mirror, installed two production OmniOS deployments for two organizations, and talked about it during our Armenian Hackers Radio Podcast. With this mirror completely set up, I can advocate even more!

I’d like to send my thanks (and later, my money) to the OmniOS team for the amazing work they’re doing, special thanks to andyf for answering all of my questions, neirac for pushing me to try more illumos in my life and everyone who contributed to the docs and blog posts that I used. I’ll leave some links below.

Finally, for the coming (two) posts I will talk about mirroring downloads.OmniOS.org (for ISO/USB/ZFS images) and the pkgsrc repository run by SmartOS/MNX.

Thank you for reading and thank you, illumos-community for being so nice ^_^

That’s all folks…

Links

Reply via email.

bhyve CPU Allocation Test for 256 core machine

During the last bhyve weekly call, Michael Dexter asked me to run the bhyve CPU Allocation Test that he wrote, in order to see whether the number of CPUs in the guest makes the system take longer to boot.

Here’s a post with the details of the test and my findings.

The host machine runs the following:

# uname -a
FreeBSD genomic.abi.am 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64

# sysctl hw.model hw.ncpu
hw.model: AMD EPYC 7702 64-Core Processor
hw.ncpu: 256

# dmidecode -t processor | grep 'Socket Designation'
        Socket Designation: CPU1
        Socket Designation: CPU2

# sysctl hw.physmem hw.realmem hw.usermem
hw.physmem: 2185602236416
hw.realmem: 2200361238528
hw.usermem: 2091107983360

Basically, it’s FreeBSD 13.2, with 2TB of RAM, and 2 CPUs with 64 cores each, 2 threads per core, totaling 256 vCores.

The test boots a bhyve VM with a minimal FreeBSD built with OccamBSD. The main changes are the following:

  • /boot/loader.conf has the line autoboot_delay="0"
  • There are no services enabled
  • /etc/rc.local has the line shutdown -p now

The machine boots and then it shuts down.
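
Conceptually, the test is a loop like the following; this is my rough sketch, not Michael’s actual script (the bhyve flags match the invocation shown later in this post):

#!/bin/sh
ncpu=$(sysctl -n hw.ncpu)
echo "Host CPUs: $ncpu"
i=1
while [ "$i" -le "$ncpu" ]; do
    start=$(date +%s)
    # the guest powers itself off via rc.local, so bhyve simply returns
    bhyve -c "$i" -m 1024 -H -A \
        -l com1,stdio \
        -l bootrom,BHYVE_UEFI.fd \
        -s 0,hostbridge \
        -s 2,virtio-blk,vm.raw \
        -s 31,lpc \
        vm0 > /dev/null
    bhyvectl --destroy --vm=vm0
    echo "$i booted in $(( $(date +%s) - start )) seconds"
    i=$((i + 1))
done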

Here’s what I’ve got in the log file →

Host CPUs: 256
1 booted in 9 seconds
2 booted in 9 seconds
3 booted in 9 seconds
4 booted in 9 seconds
5 booted in 9 seconds
6 booted in 9 seconds
7 booted in 9 seconds
8 booted in 9 seconds
9 booted in 10 seconds
10 booted in 10 seconds
11 booted in 10 seconds
12 booted in 11 seconds
13 booted in 10 seconds
14 booted in 11 seconds
15 booted in 12 seconds
16 booted in 9 seconds
17 booted in 12 seconds
18 booted in 18 seconds
19 booted in 14 seconds
20 booted in 15 seconds
21 booted in 22 seconds
22 booted in 17 seconds
23 booted in 23 seconds
24 booted in 10 seconds
25 booted in 10 seconds
26 booted in 17 seconds
27 booted in 14 seconds
28 booted in 15 seconds
29 booted in 12 seconds
30 booted in 15 seconds
31 booted in 31 seconds
32 booted in 19 seconds
33 booted in 15 seconds
34 booted in 32 seconds
35 booted in 18 seconds
36 booted in 22 seconds
37 booted in 24 seconds
38 booted in 17 seconds
39 booted in 24 seconds
40 booted in 13 seconds
41 booted in 15 seconds
42 booted in 23 seconds
43 booted in 37 seconds
44 booted in 21 seconds
45 booted in 19 seconds
46 booted in 12 seconds
47 booted in 17 seconds
48 booted in 19 seconds
49 booted in 17 seconds
50 booted in 18 seconds
51 booted in 15 seconds
52 booted in 20 seconds
53 booted in 14 seconds
54 booted in 22 seconds
55 booted in 18 seconds
56 booted in 17 seconds
57 booted in 92 seconds
58 booted in 15 seconds
59 booted in 15 seconds
60 booted in 17 seconds
61 booted in 16 seconds
62 booted in 22 seconds
63 booted in 17 seconds
64 booted in 12 seconds
65 booted in 17 seconds

At the 66th vCPU, bhyve crashed with the following:

Booting the VM with 66 vCPUs
Assertion failed: (curaddr - startaddr < SMBIOS_MAX_LENGTH), function smbios_build, file /usr/src/usr.sbin/bhyve/smbiostbl.c, line 936.
Abort trap (core dumped)    

From this point on, bhyve crashed at every higher vCPU count, so I had to stop the loop from running.

I had to look into the topology of the CPUs, which FreeBSD can report using

sysctl -n kern.sched.topology_spec

<groups>
 <group level="1" cache-level="0">
  <cpu count="256" mask="ffffffffffffffff,ffffffffffffffff,ffffffffffffffff,ffffffffffffffff">0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255</cpu>
  <children>
   <group level="2" cache-level="0">

[...]

   </group>
  </children>
 </group>
</groups>

You can find the whole output here: kern.sched.topology_spec.xml.txt

The system that we need for production requires 240 vCores. The topology gave me the idea to set it manually, using the sockets, cores and threads options →

bhyve -c 240,sockets=2,cores=60,threads=2 -m 1024 -H -A \
    -l com1,stdio \
    -l bootrom,BHYVE_UEFI.fd \
    -s 0,hostbridge \
    -s 2,virtio-blk,vm.raw \
    -s 31,lpc \
    vm0

And it booted all fine! 🙂

240 booted in 33 seconds

For production, however, I use vm-bhyve, so I’ve added the following to my configuration →

cpu="240"
cpu_sockets="2"
cpu_cores="60"
cpu_threads="2"
memory="1856G"

And yes, for those who are wondering, bhyve can virtualize 1.8T of vDRAM all fine 🙂

For my debugging nerds, I’ve also uploaded the bhyve.core file to my server; you may get it at bhyve-cpu-allocation–256.tgz

As long as this is helpful for someone out there, I’ll be happy. Sometimes I forget that not everyone runs massive clusters like we do.

That’s all folks…

Reply via email.