Tag Archives: FreeBSD

The FreeBSD-native-ish home lab and network

For many years my setup was pretty simple: a FreeBSD home server running on my old laptop. It runs everything I need to be present on the internet: an email server, a web server (like the one you’re using right now to read this blog post) and a public chat server (XMPP/Jabber) so I can stay in touch with friends.

For my home network, I had a basic Access Point and a basic Router.

Lately, my setup has become more… intense. I have IPv6 thanks to Hurricane Electric, which is routed into my home network (which we’ll talk about in a bit), and the home network itself now has multiple VLANs, since friends who come over also need WiFi.

I decided to blog about the details, hoping it would help someone in the future.

I’ll start with the simplest one.

The Home Server

I’ve been running home servers for a long time. I believe that every person/family needs a home server. Forget about buying your kids iPads and Smartphones. Their first device should be a real computer (sorry Apple, iOS devices are still just a toy), like a desktop/laptop, and a home server. The home server doesn’t need to be on the public internet, but mine is, for a variety of reasons. This blog being one of them.

I get a static IP address from my ISP, Ucom. After the management change a couple of years ago, Ucom has become a very typical ISP (think shitty), but they are the only ones that provide an actual static IP address, instead of setting it on their router and leaving you to do port forwarding.

My home server, hostnamed pingvinashen (meaning “the town of the penguins”, named after the Armenian cartoon), runs FreeBSD. Historically this machine has run Debian, Funtoo, Gentoo and finally FreeBSD.

Hardware-wise, here’s what it is:

root@pingvinashen:~ # dmidecode -s system-product-name
Latitude E5470
root@pingvinashen:~ # sysctl hw.model
hw.model: Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
root@pingvinashen:~ # sysctl hw.physmem
hw.physmem: 17016950784
root@pingvinashen:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   420G   178G   242G        -         -    64%    42%  1.00x    ONLINE  -

While most homelabbers use hardware virtualization, I think that resources are tight and should be managed properly. Any company that markets itself as “green/eco-friendly” and uses hardware virtualization should do the calculations with pen and paper and prove whether going native would save power/resources or not (sometimes it doesn’t; usually it does).

I use containers, the old-school ones, Jails to be more specific.

I manage jails using Jailer, my own tool that tries to stay out of your way when working with Jails.

Here are my current jails:

root@pingvinashen:~ # jailer list
NAME        STATE    JID  HOSTNAME              IPv4               GW
antranig    Active   1    antranig.bsd.am       192.168.10.42/24   192.168.10.1
antranigv   Active   2    antranigv.bsd.am      192.168.10.52/24   192.168.10.1
git         Stopped
huginn0     Active   4    huginn0.bsd.am        192.168.10.34/24   192.168.10.1
ifconfig    Active   5    ifconfig.bsd.am       192.168.10.33/24   192.168.10.1
lucy        Active   6    lucy.vartanian.am     192.168.10.37/24   192.168.10.1
mysql       Active   7    mysql.antranigv.am    192.168.10.50/24   192.168.10.1
newsletter  Active   8    newsletter.bsd.am     192.168.10.65/24   192.168.10.1
oragir      Active   9    oragir.am             192.168.10.30/24   192.168.10.1
psql        Active   10   psql.pingvinashen.am  192.168.10.3/24    192.168.10.1
rss         Active   11   rss.bsd.am            192.168.10.5/24    192.168.10.1
sarian      Active   12   sarian.am             192.168.10.53/24   192.168.10.1
syuneci     Active   13   syuneci.am            192.168.10.60/24   192.168.10.1
znc         Active   14   znc.bsd.am            192.168.10.152/24  192.168.10.1

You already get a basic idea of how things are. Each of my blogs (Armenian and English) has its own Jail. Since I’m using WordPress, I need a database, so I have a MySQL jail (which, ironically, runs MariaDB).

I also have a Git server, running Gitea, which is down at the moment as I’m doing maintenance. The Git server (and many other services) requires PostgreSQL, hence the existence of a PostgreSQL jail. I run Huginn for automation (RSS to Telegram, RSS to XMPP). My sister has her own blog, using WordPress, so that’s a Jail of its own. Same goes for my fiancée.

Other Jails are Newsletter using Listmonk, Sarian (the Armenian instance of lobste.rs) and a personal ZNC server.

As an avid RSS advocate, I also have an RSS Jail, which runs Miniflux. Many of my friends use this service.

Oragir is an instance of WriteFreely, as I advocate public blogging and ActivityPub. Our community uses that too.

The web server that forwards all this traffic from the public internet to the Jails is nginx. All it does is proxy_pass as needed. It runs on the host.
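
Each vhost is only a few lines. As a rough sketch (not my actual config, which also has TLS on top), the vhost for the RSS jail would look something like this, using the jail IP from the table above:

server {
    listen 80;
    server_name rss.bsd.am;

    location / {
        # hand everything over to Miniflux running in the rss jail
        proxy_pass http://192.168.10.5;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}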

Other services that run on the host are DNS (BIND9), an email service running OpenSMTPD (which will be moved to a Jail soon), the chat service running Prosody (which will also be moved to a Jail soon) and finally WireGuard, because I love VPNs.

Finally, there’s an IPv6-over-IPv4 tunnel that I use to obtain IPv6, thanks to Hurricane Electric.

Yes, I have a firewall; I use pf(4).
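
I won’t paste the whole ruleset, but the shape of it is roughly this (a sketch, not my actual pf.conf; the macros are based on the rc.conf below, and the ports are the obvious ones for the services I listed):

ext_if   = "em0.1000"
jail_net = "192.168.10.0/24"

# jails reach the internet through NAT on the WAN VLAN
nat on $ext_if from $jail_net to any -> ($ext_if)

set skip on lo0
block in on $ext_if
# host services: SSH, SMTP, DNS, HTTP(S), XMPP, WireGuard
pass in on $ext_if proto tcp to port { 22, 25, 53, 80, 443, 5222, 5269 }
pass in on $ext_if proto udp to port { 53, 51820 }
pass out keep state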

For the techies in the room, here’s what my rc.conf looks like.

# cat /etc/rc.conf
# Defaults
clear_tmp_enable="YES"
syslogd_flags="-ss"
sendmail_enable="NONE"
#local_unbound_enable="YES"
sshd_enable="YES"
moused_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"

hostname="pingvinashen.am"

# Networking
defaultrouter="37.157.221.1"
gateway_enable="YES"


ifconfig_em0="up"
vlans_em0="37 1000" # 1000 -> WAN; 37 -> Home Router

ifconfig_em0_1000="inet 37.157.221.130 netmask 255.255.255.0"
ifconfig_em0_37="inet 192.168.255.2 netmask 255.255.255.0"

static_routes="home"
route_home="-net 172.16.100.0/24 -gateway 192.168.255.1"


cloned_interfaces="bridge0 bridge6 bridge10"
ifconfig_bridge10="inet 192.168.10.1 netmask 255.255.255.0"


## IPv6
ipv6_gateway_enable="YES"

gif_interfaces="gif0"
gifconfig_gif0="37.157.221.130 216.66.84.46"
ifconfig_gif0="inet6 2001:470:1f14:ef::2 2001:470:1f14:ef::1 prefixlen 128"
ipv6_defaultrouter="2001:470:1f14:ef::1"

ifconfig_em0_37_ipv6="inet6 2001:470:7914:7065::2 prefixlen 64"
ipv6_static_routes="home guest"
ipv6_route_home="-net 2001:470:7914:6a76::/64 -gateway 2001:470:7914:7065::1"
ipv6_route_guest="-net 2001:470:7914:6969::/64 -gateway 2001:470:7914:7065::1"

ifconfig_bridge6_ipv6="inet6 2001:470:1f15:e4::1 prefixlen 64"

ifconfig_bridge6_aliases="inet6 2001:470:1f15:e4::25 prefixlen 64 \
inet6 2001:470:1f15:e4::80 prefixlen 64      \
inet6 2001:470:1f15:e4::5222 prefixlen 64    \
inet6 2001:470:1f15:e4:c0fe::53 prefixlen 64 \
"


# VPN
wireguard_enable="YES"
wireguard_interfaces="wg0"

# Firewall
pf_enable="YES"

# Jails
jail_enable="YES"
jailer_dir="zfs:zroot/jails"

# DNS
named_enable="YES"

# Mail
smtpd_enable="YES"
smtpd_config="/usr/local/etc/smtpd.conf"

# XMPP
prosody_enable="YES"
turnserver_enable="YES"

# Web
nginx_enable="YES"
tor_enable="YES"

The gif0 interface is an IPv6-over-IPv4 tunnel. I have static routes to my home network, so I don’t go through the ISP every time I reach my server from home. This also gives my home network IPv6, routed via the home server.

As you have guessed from this config file, I do have VLANs set up. So let’s get into that.

The Home Network

First of all, here’s a very cheap diagram

I have the following VLANs set up on the switch.

VLAN ID  Purpose
1        Switch Management
1000     pingvinashen (home server) WAN
1001     evn0 (home router) WAN
37       pingvinashen ↔ evn0
42       Internal Management
100      Home LAN
69       Home Guest

Here are the active ports

Port  VLANs                               Purpose
24    untagged: 1                         Switch management, connects to Port 2
22    untagged: 1000                      pingvinashen WAN, from ISP
21    untagged: 1001                      Home WAN, from ISP
20    tagged: 1000, 37                    To pingvinashen, port em0
19    untagged: 1001                      To home router, port igb1
18    tagged: 42, 100, 69, 99             To home router, port igb2
17    untagged: 37                        To home router, port igb0
16    tagged: 42, 100, 69                 To Lenovo T480s
15    untagged: 100                       To Raspberry Pi 4
2     untagged: 99                        From Port 24, for switch management
1     untagged: 42; tagged: 100, 69; PoE  To UAP AC Pro

The home router, hostnamed evn0 (named after the IATA code of Yerevan’s Zvartnots International Airport), runs FreeBSD as well. The hardware is the following

root@evn0:~ # dmidecode -s system-product-name
APU2
root@evn0:~ # sysctl hw.model
hw.model: AMD GX-412TC SOC                               
root@evn0:~ # sysctl hw.physmem
hw.physmem: 4234399744
root@evn0:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  12.5G  9.47G  3.03G        -         -    67%    75%  1.00x    ONLINE  -

The home router does… well, routing. It also does DHCP, DNS, SLAAC, and can act as a syslog server.

Here’s what the rc.conf looks like

clear_tmp_enable="YES"
sendmail_enable="NONE"
syslogd_flags="-a '172.16.100.0/24:*' -H"
zfs_enable="YES"
dumpdev="AUTO"

hostname="evn0.illuriasecurity.com"

pf_enable="YES"
gateway_enable="YES"
ipv6_gateway_enable="YES"

sshd_enable="YES"

# Get an IP address from the ISP's GPON
ifconfig_igb1="DHCP"

# Internal routes with pingvinashen
ifconfig_igb0="inet 192.168.255.1 netmask 255.255.255.0"
ifconfig_igb0_ipv6="inet6 2001:470:7914:7065::1 prefixlen 64"

static_routes="pingvinashen"
route_pingvinashen="-net 37.157.221.130/32 -gateway 192.168.255.2"

ipv6_defaultrouter="2001:470:7914:7065::2"

# Home Mgmt, Switch Mgmt, Home LAN, Home Guest
ifconfig_igb2="up"
vlans_igb2="42 99 100 69"
ifconfig_igb2_42="inet 172.31.42.1 netmask 255.255.255.0"
ifconfig_igb2_99="inet 172.16.99.1 netmask 255.255.255.0"

ifconfig_igb2_100="inet 172.16.100.1 netmask 255.255.255.0"
ifconfig_igb2_100_ipv6="inet6 2001:470:7914:6a76::1 prefixlen 64"

ifconfig_igb2_69="inet 192.168.69.1 netmask 255.255.255.0"
ifconfig_igb2_69_ipv6="inet6 2001:470:7914:6969::1 prefixlen 64"

# DNS and DHCP
named_enable="YES"
dhcpd_enable="YES"

named_flags=""

# NTP
ntpd_enable="YES"

# Router Advertisement and LLDP
rtadvd_enable="YES"
lldpd_enable="YES"
lldpd_flags=""

Here’s pf.conf, because security is important.

ext_if="igb1"
bsd_if="igb0"
int_if="igb2.100"
guest_if="igb2.69"
mgmt_if="igb2.42"
sw_if="igb2.99"

ill_net="172.16.0.0/16"

nat pass on $ext_if from $int_if:network to any -> ($ext_if)
nat pass on $ext_if from $mgmt_if:network to any -> ($ext_if)
nat pass on $ext_if from $guest_if:network to any -> ($ext_if)

set skip on { lo0 }

block in all

pass on $int_if   from $int_if:network   to any
pass on $mgmt_if  from $mgmt_if:network  to any
pass on $sw_if    from $sw_if:network    to any
pass on $guest_if from $guest_if:network to any

block quick on $guest_if from any to { $int_if:network, $mgmt_if:network, $ill_net, $sw_if:network }

pass in on illuria0 from $ill_net to { $ill_net, $mgmt_if:network }

pass inet  proto icmp
pass inet6 proto icmp6
pass out   all   keep state

I’m sure there are places to improve, but it gets the job done and keeps the guest network isolated.

Here’s rtadvd.conf, for my IPv6 folks

igb2.100:\
  :addr="2001:470:7914:6a76::":prefixlen#64:\
  :rdnss="2001:470:7914:6a76::1":\
  :dnssl="evn0.loc.illuriasecurity.com,loc.illuriasecurity.com":

igb2.69:\
  :addr="2001:470:7914:6969::":prefixlen#64:\
  :rdnss="2001:470:7914:6969::1":

For DNS, I’m running BIND; here are the important parts

listen-on     { 127.0.0.1; 172.16.100.1; 172.16.99.1; 172.31.42.1; 192.168.69.1; };
listen-on-v6  { 2001:470:7914:6a76::1; 2001:470:7914:6969::1; };
allow-query   { 127.0.0.1; 172.16.100.0/24; 172.31.42.0/24; 192.168.69.0/24; 2001:470:7914:6a76::/64; 2001:470:7914:6969::/64;};

And for DHCP, here’s what it looks like

subnet 172.16.100.0 netmask 255.255.255.0 {
        range 172.16.100.100 172.16.100.150;
        option domain-name-servers 172.16.100.1;
        option subnet-mask 255.255.255.0;
        option routers 172.16.100.1;
        option domain-name "evn0.loc.illuriasecurity.com";
        option domain-search "loc.illuriasecurity.com evn0.loc.illuriasecurity.com";
}

host zvartnots {
    hardware ethernet d4:57:63:f1:5a:36;
    fixed-address 172.16.100.7;
}

host unifi0 {
    hardware ethernet 58:9c:fc:93:d1:0b;
    fixed-address 172.31.42.42;
}
[…]

subnet 172.31.42.0 netmask 255.255.255.0 {
        range 172.31.42.100 172.31.42.150;
        option domain-name-servers 172.31.42.1;
        option subnet-mask 255.255.255.0;
        option routers 172.31.42.1;
}

subnet 192.168.69.0 netmask 255.255.255.0 {
        range 192.168.69.100 192.168.69.150;
        option domain-name-servers 192.168.69.1;
        option subnet-mask 255.255.255.0;
        option routers 192.168.69.1;
}

So you’re wondering, what’s this unifi0? Well, that brings us to

T480s

This laptop was gifted to me by [REDACTED] for my contributions to the Armenian government (which means that when a server goes down and no one knows how to fix it, they call me and I show up).

Here’s the hardware

root@t480s:~ # dmidecode -s system-version
ThinkPad T480s
root@t480s:~ # sysctl hw.model
hw.model: Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
root@t480s:~ # sysctl hw.physmem
hw.physmem: 25602347008
root@t480s:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   224G   109G   115G        -         -    44%    48%  1.00x    ONLINE  -

The T480s has access to VLANs 100, 42 and 69, but the host itself lives only on VLAN 100 (LAN), while the jails can exist on the other VLANs.

So I have a Jail named unifi0 that runs the Unifi Management thingie.

Here’s what the host’s rc.conf looks like

clear_tmp_enable="YES"
syslogd_flags="-ss"
sendmail_enable="NONE"
sshd_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"

hostname="t480s.evn0.loc.illuriasecurity.com"

ifconfig_em0="up -rxcsum -txcsum"
vlans_em0="100 42 69"
ifconfig_em0_100="up"
ifconfig_em0_42="up"
ifconfig_em0_69="up"

cloned_interfaces="bridge0 bridge100 bridge42 bridge69"

create_args_bridge100="ether 8c:16:45:82:b4:10"
ifconfig_bridge100="addm em0.100 SYNCDHCP"
ifconfig_bridge100_ipv6="inet6 auto_linklocal"
rtsold_flags="-i -F -m bridge100"
rtsold_enable="YES"

create_args_bridge42=" ether 8c:16:45:82:b4:42"
create_args_bridge69=" ether 8c:16:45:82:b4:69"

ifconfig_bridge42="addm em0.42"
ifconfig_bridge69="addm em0.69"


jail_enable="YES"
jailer_dir="zfs:zroot/jailer"

ifconfig_bridge0="inet 10.1.0.1/24 up"
ngbuddy_enable="YES"
ngbuddy_private_if="nghost0"
dhcpd_enable="YES"

lldpd_enable="YES"

I used Jailer to create the unifi0 jail; here’s what the jail.conf looks like

# vim: set syntax=sh:
exec.clean;
allow.raw_sockets;
mount.devfs;

unifi0 {
  $id             = "6";
  devfs_ruleset   = 10;
  $bridge         = "bridge42";
  $domain         = "evn0.loc.illuriasecurity.com";
  vnet;
  vnet.interface = "epair${id}b";

  exec.prestart   = "ifconfig epair${id} create up";
  exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
  exec.prestart  += "ifconfig ${bridge} addm epair${id}a up";

  exec.start      = "/sbin/ifconfig lo0 127.0.0.1 up";
  exec.start     += "/bin/sh /etc/rc";

  exec.stop       = "/bin/sh /etc/rc.shutdown jail";
  exec.poststop   = "ifconfig ${bridge} deletem epair${id}a";
  exec.poststop  += "ifconfig epair${id}a destroy";

  host.hostname   = "${name}.${domain}";
  path            = "/usr/local/jailer/unifi0";
  exec.consolelog = "/var/log/jail/${name}.log";
  persist;
  mount.fdescfs;
  mount.procfs;
}

Here are the important parts inside the jail

root@t480s:~ # cat /usr/local/jailer/unifi0/etc/rc.conf
ifconfig_epair6b="SYNCDHCP"
sendmail_enable="NONE"
syslogd_flags="-ss"
mongod_enable="YES"
unifi_enable="YES"
root@t480s:~ # cat /usr/local/jailer/unifi0/etc/start_if.epair6b 
ifconfig epair6b ether 58:9c:fc:93:d1:0b

Don’t you love it that you can see what’s inside the jail from the host? God I love FreeBSD!

Did I miss anything? I hope not.

Oh, for the homelabbers out there, the T480s is the one that runs things like Jellyfin if needed.

Finally, the tiny 

Raspberry Pi 4, Model B

I found this in a closet, so I decided to run it as a TimeMachine target.

I guess all you care about is rc.conf

hostname="tm0.evn0.loc.illuriasecurity.com"
ifconfig_DEFAULT="DHCP inet6 accept_rtadv"
sshd_enable="YES"
sendmail_enable="NONE"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
growfs_enable="YES"
powerd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
rtsold_enable="YES"
samba_server_enable="YES"

And the Samba Configuration

[global]
# Network settings
workgroup = WORKGROUP
server string = Samba Server %v
netbios name = RPi4

# Logging
log file = /var/log/samba4/log.%m
max log size = 50
log level = 0

# Authentication
security = user
encrypt passwords = yes
passdb backend = tdbsam
map to guest = Bad User

min protocol = SMB2
max protocol = SMB3


# Apple Time Machine settings
vfs objects = catia fruit streams_xattr
fruit:metadata = stream
fruit:resource = stream
fruit:encoding = native
fruit:locking = none
fruit:time machine = yes

# File System support
ea support = yes
kernel oplocks = no
kernel share modes = no
posix locking = no
mangled names = no
smbd max xattr size = 2097152

# Performance tuning
read raw = yes
write raw = yes
getwd cache = yes
strict locking = no

# Miscellaneous
local master = no
preferred master = no
domain master = no
wins support = no

[tm]
comment = Time Machine RPi4
path = /usr/local/timemachine/%U
browseable = yes
read only = no
valid users = antranigv
vfs objects = catia fruit streams_xattr
fruit:time machine = yes
fruit:advertise_fullsync = true
# Adjust the size according to your needs
fruit:time machine max size = 800G
create mask = 0600
directory mask = 0700
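
One thing the config assumes is that the system user, the Samba password and the share path already exist. Setting those up looks something like this (a sketch, assuming a zroot pool and a dedicated dataset):

root@tm0:~ # zfs create -o mountpoint=/usr/local/timemachine zroot/timemachine
root@tm0:~ # pw user add antranigv -m
root@tm0:~ # smbpasswd -a antranigv
root@tm0:~ # mkdir -p /usr/local/timemachine/antranigv
root@tm0:~ # chown antranigv /usr/local/timemachine/antranigv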

That’s pretty much it.

Conclusion

I love running homebrew servers, home networks and home labs. I love that (almost) everything is FreeBSD. The switch itself runs Linux, and the Unifi Access Point also runs Linux, both of which I’m pretty happy with.

While most homelabbers used ESXi in the past, I’m happy to see that most people are moving to open-source solutions like Proxmox and Xen, but I think that FreeBSD Jails and bhyve are much better. I still don’t have a need for bhyve at the moment, but I would use it if I needed hardware virtualization.

Most homelabbers would consider the lack of Web/GUI interfaces a con, but I think it’s a pro. If I need to “replicate” this network, all I need to do is copy some text files and modify some IP addresses / interface names.

I hope this was informative and that it will be useful to someone in the future.

That’s all folks… 

Reply via email.

Installing FreeBSD with Root-on-ZFS on Vultr using iPXE

The title is pretty self explanatory, so let’s get to it, shall we?

I was configuring a server for a customer today, and one of the things I noticed is that FreeBSD was not available for bare-metal.

This got me a bit worried, because we use a lot of FreeBSD on Vultr… Well that’s a lie. We only use FreeBSD on Vultr.

I logged into our company account and noticed that our bare-metal servers do have FreeBSD as an icon for the image.

So I decided to check the docs and found this:

What operating system templates do you offer?

We offer many Linux and Windows options. We do not offer OpenBSD or FreeBSD images for Vultr Bare Metal. Use our iPXE boot feature if you need to install a custom operating system.

Well, that’s sad, but on the other hand, iPXE will be very useful. We can boot a memdisk such as mfsBSD and install FreeBSD from there.

To start, we need a VM that can host the mfsBSD img/ISO file. I have spun up a VM on Vultr running FreeBSD (although it could run anything else, it wouldn’t matter), installed nginx on it, and downloaded the file so we can boot from it. Here’s the copy-pasta

pkg install -y nginx
service nginx enable && service nginx start
fetch -o /usr/local/www/ \
https://mfsbsd.vx.sk/files/images/14/amd64/mfsbsd-se-14.0-RELEASE-amd64.img

This should be enough to get started. Oh, if you’re not on FreeBSD, then the path might be different, like /var/www/nginx or something similar. Check your nginx configuration for the details.
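
If you’d rather be explicit than rely on the package’s default server block, a minimal vhost is enough (a sketch; adjust root to wherever you fetched the image):

server {
    listen 80;
    # serve the mfsBSD image over plain HTTP so iPXE's sanboot can fetch it
    root /usr/local/www;
    autoindex off;
}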

Now we need to write an iPXE script and add it to our Vultr iPXE scripts. Here’s what it looks like

#!ipxe

echo Starting MFSBSD
sanboot http://your.server.ip.address/mfsbsd-se-14.0-RELEASE-amd64.img
boot

Finally, we can create a bare-metal that uses our script for iPXE boot.

Don’t forget to choose the right location and plan.

After the machine is provisioned, you need to access the console, where you will see the boot process.

The default root password is mfsroot.

To install FreeBSD, you can run bsdinstall. The rest will be familiar to you. Yes, you can use Root-on-ZFS. No, it can’t be UEFI; you must use GPT (BIOS).

Good luck, and special thanks to Vultr for giving us the chance to use our favorite tools on the public cloud.

That’s all folks…

Reply via email.

Installing DFIR-IRIS on FreeBSD using Jails

This is a live blogging of the installation process of DFIR-IRIS on FreeBSD 14.0-RELEASE using Jails and Jailer.

The main requirements are:

  • Nginx
  • PostgreSQL
  • Python
  • Some random dependencies we saw in the Dockerfile

I assume you already have nginx up and running; we will just be setting up a vhost under the domain name dfir.cert.am. Don’t worry, this is INSIDE our infrastructure, you will not be able to connect to it 🙂

Initial Setup

First we create a jail named iris0, using Jailer:

jailer create iris0

Next we install the required software inside the jail. Looks like everything is available in FreeBSD packages:

jailer console iris0
pkg install \
    nginx \
    python39 \
    py39-pip \
    gnupg \
    7-zip \
    rsync \
    postgresql12-client \
    git-tiny \
    libxslt \
    rust \
    acme.sh

Installing DFIR-IRIS

Since we’re using FreeBSD, we’ll be doing things the right way instead of the Docker way, so we will be running IRIS as a user, not as root.

pw user add iris -m

Next we set up some directories and check out the repo

root@iris0:~ # su - iris
iris@iris0:~ $ git clone --branch v2.4.7 https://github.com/dfir-iris/iris-web.git iris-web

Finally, we install some python dependencies using pip.

iris@iris0:~ $ cd iris-web/source
iris@iris0:~/iris-web/source $ pip install -r requirements.txt

Now we have to configure the .env file based on our needs. I will post my version of it; I hope it helps

# -- DATABASE
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_DB=iris_db
export POSTGRES_ADMIN_USER=iris
export POSTGRES_ADMIN_PASSWORD=longpassword

export POSTGRES_SERVER=localhost
export POSTGRES_PORT=5432

# -- IRIS
export DOCKERIZED=0
export IRIS_SECRET_KEY=verylongsecret
export IRIS_SECURITY_PASSWORD_SALT=verylongsalt
export IRIS_UPSTREAM_SERVER=app # these are for docker, you can ignore
export IRIS_UPSTREAM_PORT=8000

# -- WORKER
export CELERY_BROKER=amqp://localhost
# Set to your rabbitmq instance

# Change these as you need them.
# -- AUTH
#IRIS_AUTHENTICATION_TYPE=local
## optional
#IRIS_ADM_PASSWORD=MySuperAdminPassword!
#IRIS_ADM_API_KEY=B8BA5D730210B50F41C06941582D7965D57319D5685440587F98DFDC45A01594
#IRIS_ADM_EMAIL=admin@localhost
#IRIS_ADM_USERNAME=administrator
# requests the just-in-time creation of users with ldap authentification (see https://github.com/dfir-iris/iris-web/issues/203)
#IRIS_AUTHENTICATION_CREATE_USER_IF_NOT_EXIST=True
# the group to which newly created users are initially added, default value is Analysts
#IRIS_NEW_USERS_DEFAULT_GROUP=

# -- LISTENING PORT
#INTERFACE_HTTPS_PORT=443

Configuring HTTPS

We can use acme.sh to issue a TLS certificate from Let’s Encrypt.

root@iris0:~ # acme.sh --set-default-ca --server letsencrypt
root@iris0:~ # acme.sh --issue -d dfir.cert.am --standalone
root@iris0:~ # acme.sh -i -d dfir.cert.am --fullchain-file /usr/local/etc/ssl/dfir.cert.am/fullchain.pem --key-file /usr/local/etc/ssl/dfir.cert.am/key.pem --reloadcmd 'service nginx reload'

Setup nginx

DFIR-IRIS provides an nginx configuration template at nginx.conf; we will be using that, with a few modifications.

The final nginx.conf will look like this:

#user  nobody;
worker_processes  1;

# This default error log path is compiled-in to make sure configuration parsing
# errors are logged somewhere, especially during unattended boot when stderr
# isn't normally logged anywhere. This path will be touched on every nginx
# start regardless of error log location configured here. See
# https://trac.nginx.org/nginx/ticket/147 for more info. 
#
#error_log  /var/log/nginx/error.log;
#

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    # Things needed/recommended by DFIR-IRIS
    map $request_uri $csp_header {
        default "default-src 'self' https://analytics.dfir-iris.org; script-src 'self' 'unsafe-inline' https://analytics.dfir-iris.org; style-src 'self' 'unsafe-inline';";
    }

    server_tokens off;
    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

    types_hash_max_size             2048;
    types_hash_bucket_size          128;
    proxy_headers_hash_max_size     2048;
    proxy_headers_hash_bucket_size  128;
    proxy_buffering                 on;
    proxy_buffers                   8 16k;
    proxy_buffer_size               4k;

    client_header_buffer_size   2k;
    large_client_header_buffers 8 64k;
    client_body_buffer_size     64k;
    client_max_body_size        100M;

    reset_timedout_connection   on;
    keepalive_timeout           90s;
    client_body_timeout         90s;
    send_timeout                90s;
    client_header_timeout       90s;
    fastcgi_read_timeout        90s;
    # WORKING TIMEOUT FOR PROXY CONF
    proxy_read_timeout          90s;
    uwsgi_read_timeout          90s;

    gzip off;
    gzip_disable "MSIE [1-6]\.";

    # FORWARD CLIENT IDENTITY TO SERVER
    proxy_set_header    HOST                $http_host;
    proxy_set_header    X-Forwarded-Proto   $scheme;
    proxy_set_header    X-Real-IP           $remote_addr;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;

    # FULLY DISABLE SERVER CACHE
    add_header          Last-Modified $date_gmt;
    add_header          'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since   off;
    expires             off;
    etag                off;
    proxy_no_cache      1;
    proxy_cache_bypass  1;

    # SSL CONF, STRONG CIPHERS ONLY
    ssl_protocols               TLSv1.2 TLSv1.3;

    ssl_prefer_server_ciphers   on;
    ssl_certificate             /usr/local/etc/ssl/dfir.cert.am/fullchain.pem;
    ssl_certificate_key         /usr/local/etc/ssl/dfir.cert.am/key.pem;
    ssl_ecdh_curve              secp521r1:secp384r1:prime256v1;
    ssl_buffer_size             4k;

    # DISABLE SSL SESSION CACHE
    ssl_session_tickets         off;
    ssl_session_cache           none;
    server {
        listen          443 ssl;
        server_name     dfir.cert.am;
        root            /www/data;
        index           index.html;
        error_page      500 502 503 504  /50x.html;

        add_header Content-Security-Policy $csp_header;
        
        # SECURITY HEADERS
        add_header X-XSS-Protection             "1; mode=block";
        add_header X-Frame-Options              DENY;
        add_header X-Content-Type-Options       nosniff;
        # max-age = 31536000s = 1 year
        add_header Strict-Transport-Security    "max-age=31536000; includeSubDomains" always;
        add_header Front-End-Https              on;

        location / {
            proxy_pass  http://localhost:8000;

            location ~ ^/(manage/templates/add|manage/cases/upload_files) {
                keepalive_timeout           10m;
                client_body_timeout         10m;
                send_timeout                10m;
                proxy_read_timeout          10m;
                client_max_body_size        0M;
                proxy_request_buffering off;
                proxy_pass  http://localhost:8000;
            }

            location ~ ^/(datastore/file/add|datastore/file/add-interactive) {
                keepalive_timeout           10m;
                client_body_timeout         10m;
                send_timeout                10m;
                proxy_read_timeout          10m;
                client_max_body_size        0M;
                proxy_request_buffering off;
                proxy_pass  http://localhost:8000;
            }
        }
        location /socket.io {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_http_version 1.1;
            proxy_buffering off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_pass http://localhost:8000/socket.io;
        }
    }
}

Setup PostgreSQL

I assume you know how to do this 🙂 You don’t need to configure a separate user; by the looks of it, IRIS likes to do that itself. Thanks to Jails, I was able to run a separate PostgreSQL instance in the iris0 jail.

P.S. If you are running PostgreSQL inside a jail, make sure that the following variables are set in your jail configuration

  sysvshm         = new;
  sysvmsg         = new;
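
In a jail.conf(5)-style setup, that would look something like this (a sketch; the jail name and path are placeholders):

iris0 {
  path          = "/usr/local/jailer/iris0";
  host.hostname = "iris0.example.com";
  # give the jail its own SysV IPC namespaces, which PostgreSQL's
  # shared memory and message queues need (sysvsem is often needed too)
  sysvshm       = new;
  sysvmsg       = new;
  sysvsem       = new;
  persist;
}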

Running DFIR-IRIS

Now that everything is up and running, we just need to run DFIR-IRIS and it will create the database, needed users, an administration account, etc.

su - iris
cd ~/iris-web/source
. ../.env
~/.local/bin/gunicorn app:app --worker-class eventlet --bind 0.0.0.0:8000 --timeout 180 --worker-connections 1000 --log-level=debug

Assuming everything is fine, we can now set up an rc.d service script to make sure it runs at boot.

For that I wrote two files: the service itself and a helper start.sh script

rc.d script at /usr/local/etc/rc.d/iris

#!/bin/sh

# PROVIDE: iris
# REQUIRE: NETWORKING
# KEYWORD: 

. /etc/rc.subr

name="iris"
rcvar="iris_enable"
load_rc_config ${name}

: ${iris_enable:=no}
: ${iris_path:="/usr/local/iris"}
: ${iris_gunicorn:="/usr/local/bin/gunicorn"}
: ${iris_env="iris_gunicorn=${iris_gunicorn}"}

logfile="${iris_path}/iris.log"
pidfile="/var/run/${name}/iris.pid"

iris_user="iris"
iris_chdir="${iris_path}/source"
iris_command="${iris_path}/start.sh"

command="/usr/sbin/daemon"
command_args="-P ${pidfile} -T ${name} -o ${logfile} ${iris_command}"

run_rc_command "$1"

and the helper script at /home/iris/iris-web/start.sh

#!/bin/sh

export HOME=$(getent passwd `whoami` | cut -d : -f 6)

. ../.env

${iris_gunicorn} app:app --worker-class eventlet --bind 0.0.0.0:8000 --timeout 180 --worker-connections 128

Now we set some variables in rc.conf using sysrc:

sysrc iris_enable="YES"
sysrc iris_path="/home/iris/iris-web"
sysrc iris_gunicorn="/home/iris/.local/bin/gunicorn"

Finally, we can start DFIR-IRIS as a service.

service iris start

Aaaaand we’re done 🙂

Thank you for reading!

There are some issues that I’d like to tackle; for example, service iris stop doesn’t work, and it would be nice if we ported all of the dependencies into Ports, but for now this seems to be working fine.

Special thanks to the DFIR-IRIS team for creating this cool platform!

That’s all folks…

Reply via email.

Antranig Vartanian

April 26, 2024

I love ZFS…

root@evn0:/var/log/named # du -h -d 1
1.4G    .
root@evn0:/var/log/named # du -A -h -d 1
7.4G    .
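
That’s 1.4G on disk for 7.4G of apparent data: the named logs compress roughly 5x. Assuming the logs live on the usual zroot/var/log dataset, you can ask ZFS directly:

root@evn0:/var/log/named # zfs get compression,compressratio zroot/var/log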

Reply via email.

Mirroring OmniOS: The Complete Guide; Part One

Chapter Ⅰ

I know that “Complete Guide” and “Part One” are oxymorons, but hey, be happy that I’m publishing in parts, otherwise I’d completely ignore this blog post.

Two weeks ago I decided to play with illumos again. I was speaking with a friend and we were sharing our frustrations regarding Open-Source contribution. We write the code, we submit, we get feedback, we submit again, and then we’re ghosted. It’s like the LinkedIn or Tinder version of Software Engineering.

Then I asked him about his best open-source experience and he told me “illumos of course!”.

I was amazed. I thought you had to be very technical to even build illumos, but it turns out they have amazing documentation on building illumos, and OmniOS (an illumos distribution) has done work to make sure that the system can be self-hosted (i.e. the OS can build itself).

So, I decided to fire up OmniOS inside a bhyve VM on our hackerspace server running FreeBSD.

The installation went smoothly, but the IPS packages were slow to download, and I might be wrong (please correct me if I am), but IPS doesn’t seem to keep a local copy of the files. It always downloads. Is that configurable?

Regardless. I thought that the best way to contribute is to advocate. In order to do that I needed to make sure that IPS servers are fast in Armenia. Hence the mirroring project started.

Obey!

Requirements

Here is some terminology that I will use in this blog post, just so we are on the same page.

  • OmniOS: an illumos distribution
  • Origin: OmniOS’s IPS servers at pkg.omnios.org
  • Local: A copy of the Origin
  • Repository: A collection of software
  • Core: The Core Repository of OmniOS
  • Extras: The Extra Repository of OmniOS
  • IPS or PKG: The Image Packaging System and its utility, pkg
  • Zone: an illumos Zone (similar to FreeBSD Jails, Linux Containers, chroot) running on OmniOS

Now that we are on the same page, let’s talk about our setup and what we need.

  • An internet connection: duh!
  • A domain name: I decided to use pkg.omnios.illumos.am. Yes, I’m lucky like that.
  • A publicly accessible IP address.
  • A server: I am running OmniOS Stable (r151048) inside a VM. You can use bare-metal or a cloud VM if you want.
  • Storage: I am currently using around 50GB of storage; expect that to grow to around 300GB when we get to Part Three

Pre-Mirroring Setup

Before we set up our mirror, let’s make sure that we have a good infrastructure that we can maintain.

Here’s what we’ll create

  • A Zone that will act as the HTTP(s) server using nginx at IP address 10.10.0.80
  • A Zone that will do the mirroring using IPS tools at 10.10.0.51
  • A virtual dumb switch (etherstub) that will connect the Zones and the Global Zone (a.k.a. The Host) together. The GZ will have an address of 10.10.0.1
  • ZFS datasets for each Core and Extras Repository (for each release)

Please note that there are many ways to do this, for example, having everything in a Global Zone, running IPS mirroring and nginx in a single Zone, not using etherstub at all, etc. But I like this setup as it will allow us to “grow” in the future.

From now on, omnios# means that we’re in the Global Zone and zone0# means we’re inside a Zone named zone0.

Let’s start with setting up our etherstub and connecting our Global Zone to it

omnios# dladm create-etherstub switch0
omnios# dladm create-vnic -l switch0 vnic0
omnios# ipadm create-if vnic0
omnios# ipadm create-addr -T static -a 10.10.0.1/24 vnic0/switch0

Done!

Now, we will set up our Zones using the zadm utility. Install zadm by running

omnios# pkg install zadm

After installing zadm, we’ll create a dataset for our Zones

omnios# zfs create -o mountpoint=/zones rpool/zones

This assumes that your ZFS pool is named rpool.

Finally, we can create our Zones. Running

omnios# zadm create -b pkgsrc www0

will open your $EDITOR, where you need to modify some JSON. Here’s what mine looks like!

{
   "autoboot" : "true",
   "brand" : "pkgsrc",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.80/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "www0"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "www0",
   "zonepath" : "/zones/www0"
}

After saving the file, zadm will install the Zone.

Now let’s set up our mirroring Zone. Do the same, but change the Zone name to repo, the brand to lipkg (and -b lipkg) and the IP address to 10.10.0.51/24; see the sketch below.
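
For reference, the repo Zone’s JSON would look roughly like this (same as above, with the brand, name and address swapped):

omnios# zadm create -b lipkg repo

{
   "autoboot" : "true",
   "brand" : "lipkg",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.51/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "repo"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "repo",
   "zonepath" : "/zones/repo"
}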

All we need now is to forward HTTP/HTTPS traffic to the www0 Zone and allow all Zones to access the internet using NAT.

Create and edit IPFilter’s NAT file at /etc/ipf/ipnat.conf; here’s an example configuration

map vioif0 10.10.0.0/24 -> 212.34.250.10

rdr vioif0 212.34.250.10/32 port 80 -> 10.10.0.80 port 80 tcp
rdr vioif0 212.34.250.10/32 port 443 -> 10.10.0.80 port 443 tcp

Make sure you set the correct interface name and the correct external IP address.

Finally, we can boot our Zones!

omnios# zadm boot www0
omnios# zadm boot repo

You should see the following output when you run zadm again

omnios# zadm
NAME              STATUS     BRAND       RAM    CPUS  SHARES
global            running    ipkg        56G      12       1
repo              running    lipkg         -       -       1
www0              running    pkgsrc        -       -       1

Great! Let’s set up the mirroring process.

Mirroring Setup

Let’s create a ZFS dataset for the repos of each release

repo# zfs create -o mountpoint=/repo rpool/zones/repo/ROOT/repo      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/core 
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/extra

And then we use the pkgrepo command to create a repository

repo# pkgrepo create /repo/r151048/core
repo# pkgrepo create /repo/r151048/extra

And finally, we can start receiving the packages from Origin to Local

repo# pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*'
repo# pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*'

This will take a while depending on your internet connection speed and the load on OmniOS’s Origin. It’s like a good investment, we spend load and time now so we save traffic and time later 🙂

After it’s done, we need to set the publisher of these repos to the same as Origin.

repo# pkgrepo set -s /repo/r151048/core   publisher/prefix=omnios
repo# pkgrepo set -s /repo/r151048/extra/ publisher/prefix=extra.omnios

And we’re done!

Now we need to serve these repos using IPS’s depot server.

We will create two instances of the depotd server, one for core and one for extra.

  • r151048/core will run on 5148
  • r151048/extra will run on 1148
  • (in the future) r151050/core will run on 5150
  • (in the future) r151050/extra will run on 1150

We start with core

repo# svccfg -s pkg/server add r151048_core
repo# svccfg -s pkg/server:r151048_core addpg pkg application
repo# svccfg -s pkg/server:r151048_core setprop pkg/inst_root = /repo/r151048/core/
repo# svccfg -s pkg/server:r151048_core setprop pkg/port = 5148
repo# svccfg -s pkg/server:r151048_core setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/core

And we do the same for extra

repo# svccfg -s pkg/server add r151048_extra
repo# svccfg -s pkg/server:r151048_extra addpg pkg application
repo# svccfg -s pkg/server:r151048_extra setprop pkg/inst_root = /repo/r151048/extra/
repo# svccfg -s pkg/server:r151048_extra setprop pkg/port = 1148
repo# svccfg -s pkg/server:r151048_extra setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/extra

Finally, we enable the services

repo# svcadm enable  pkg/server:r151048_core pkg/server:r151048_extra
repo# svcadm restart pkg/server:r151048_core pkg/server:r151048_extra

Let’s check!

We’re good! Now let’s set up nginx 🙂

The Web Server

This part is pretty easy: we log into www0, install nginx, and set up some paths. I will be posting a copy-pasta of my configs; I assume you can do the rest 🙂

www0# pkgin update
www0# pkgin install nginx

Thank you SmartOS! 🧡

In my nginx.conf, I added

include vhosts/*.conf;

and then in /opt/local/etc/nginx/vhosts I created a file
named pkg.omnios.illumos.am.conf, which looks like this

server {
        listen 80;
        server_name pkg.omnios.illumos.am;

        location /.well-known/acme-challenge/ {
          alias /opt/local/www/acme/.well-known/acme-challenge/;
        }

        location / {
            return 301 "https://pkg.omnios.illumos.am";
        }
}

server {
    listen       443 ssl;
    server_name  pkg.omnios.illumos.am;

    ssl_certificate      /etc/ssl/pkg.omnios.illumos.am/fullchain.pem;
    ssl_certificate_key  /etc/ssl/pkg.omnios.illumos.am/key.pem;
    location /r151048/core/ {
                proxy_pass http://10.10.0.51:5148/;
    }

    location /r151048/extra/ {
                proxy_pass http://10.10.0.51:1148/;
    }

    location / {
        # This needs to be changed, later...
        add_header Content-Type text/plain;
        return 200 "ok...";
    }
}

Finally, we just need to enable nginx

www0# svcadm enable pkgsrc/nginx

and check!

Using the Local Repos

This part is actually pretty easy. We just need to remove everything that exists and add our own. I will be running this on a computer named dna0.

dna0# pkg set-publisher -M '*' -G '*' omnios
dna0# pkg set-publisher -M '*' -G '*' extra.omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/core omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/extra extra.omnios
dna0# pkg publisher
PUBLISHER     TYPE    STATUS P LOCATION
extra.omnios  origin  online F https://pkg.omnios.illumos.am/r151048/extra/
omnios        origin  online F https://pkg.omnios.illumos.am/r151048/core/

We’re good! 🙂

Fetching Updates

By the time I wanted to publish this, I noticed that there’s a new OmniOS Weekly Update, so I thought, hey, maybe I should try updating the Local Repo as well… how do we do that?

Turns out I just need to pkgrecv again, and then run a refresh command.

pkgrecv -v -s https://pkg.omnios.org/r151048/core/ -d /repo/r151048/core/ '*'
pkgrepo -s /repo/r151048/core refresh

And it looks like we’re good! Maybe we can set up a simple cronjob; see the sketch below 🙂
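
Something like this in root’s crontab (crontab -e) should do; the schedule is arbitrary, pick your own:

# refresh the Local core and extra repos nightly
0 3 * * * pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core/  '*' && pkgrepo -s /repo/r151048/core  refresh
0 4 * * * pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra/ '*' && pkgrepo -s /repo/r151048/extra refresh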

Final Notes

This has been an amazing experience. Since I started using OmniOS two weeks ago, I’ve set up the mirror, installed two OmniOS deployments in production for two organizations, and talked about it during our Armenian Hackers Radio Podcast. With this mirror completely set up, I can advocate even more!

I’d like to send my thanks (and later, my money) to the OmniOS team for the amazing work they’re doing; special thanks to andyf for answering all of my questions, to neirac for pushing me to try more illumos in my life, and to everyone who contributed to the docs and blog posts that I used. I’ll leave some links below.

Finally, in the coming (two) posts I will talk about mirroring downloads.OmniOS.org (for ISO/USB/ZFS images) and the pkgsrc repository run by SmartOS/MNX.

Thank you for reading and thank you, illumos-community for being so nice ^_^

That’s all folks…

Links

Reply via email.

bhyve CPU Allocation Test for 256 core machine

During the last bhyve weekly call, Michael Dexter asked me to run the bhyve CPU Allocation Test that he wrote, in order to see if the number of CPUs in the guest makes the system take longer to boot.

Here’s a post with the details of the test and my findings.

The host machine runs the following

# uname -a
FreeBSD genomic.abi.am 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64

# sysctl hw.model hw.ncpu
hw.model: AMD EPYC 7702 64-Core Processor
hw.ncpu: 256

# dmidecode -t processor | grep 'Socket Designation'
        Socket Designation: CPU1
        Socket Designation: CPU2

# sysctl hw.physmem hw.realmem hw.usermem
hw.physmem: 2185602236416
hw.realmem: 2200361238528
hw.usermem: 2091107983360

Basically, it’s FreeBSD 13.2 with 2TB of RAM and 2 CPUs with 64 cores each, 2 threads per core, totaling 256 vCores

The test runs a bhyve VM with a minimal FreeBSD built with OccamBSD. The main changes are the following:

  • /boot/loader.conf has the line autoboot_delay="0"
  • There are no services enabled
  • /etc/rc.local has the line shutdown -p now

The machine boots and then it shuts down.
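
I don’t have Michael’s exact script in front of me, but conceptually the loop is something like this (a sketch; the image name, UEFI firmware path and VM name are assumptions, and the bhyve flags match the invocation shown further down):

#!/bin/sh
# boot the OccamBSD VM with 1..hw.ncpu vCPUs and log the boot time
ncpu=$(sysctl -n hw.ncpu)
echo "Host CPUs: ${ncpu}"
i=1
while [ ${i} -le ${ncpu} ]; do
        start=$(date +%s)
        bhyve -c ${i} -m 1024 -H -A \
            -l com1,stdio \
            -l bootrom,BHYVE_UEFI.fd \
            -s 0,hostbridge \
            -s 2,virtio-blk,vm.raw \
            -s 31,lpc \
            vm0 > /dev/null
        bhyvectl --destroy --vm=vm0
        end=$(date +%s)
        echo "${i} booted in $((end - start)) seconds"
        i=$((i + 1))
done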

Here’s what I’ve got in the log file →

Host CPUs: 256
1 booted in 9 seconds
2 booted in 9 seconds
3 booted in 9 seconds
4 booted in 9 seconds
5 booted in 9 seconds
6 booted in 9 seconds
7 booted in 9 seconds
8 booted in 9 seconds
9 booted in 10 seconds
10 booted in 10 seconds
11 booted in 10 seconds
12 booted in 11 seconds
13 booted in 10 seconds
14 booted in 11 seconds
15 booted in 12 seconds
16 booted in 9 seconds
17 booted in 12 seconds
18 booted in 18 seconds
19 booted in 14 seconds
20 booted in 15 seconds
21 booted in 22 seconds
22 booted in 17 seconds
23 booted in 23 seconds
24 booted in 10 seconds
25 booted in 10 seconds
26 booted in 17 seconds
27 booted in 14 seconds
28 booted in 15 seconds
29 booted in 12 seconds
30 booted in 15 seconds
31 booted in 31 seconds
32 booted in 19 seconds
33 booted in 15 seconds
34 booted in 32 seconds
35 booted in 18 seconds
36 booted in 22 seconds
37 booted in 24 seconds
38 booted in 17 seconds
39 booted in 24 seconds
40 booted in 13 seconds
41 booted in 15 seconds
42 booted in 23 seconds
43 booted in 37 seconds
44 booted in 21 seconds
45 booted in 19 seconds
46 booted in 12 seconds
47 booted in 17 seconds
48 booted in 19 seconds
49 booted in 17 seconds
50 booted in 18 seconds
51 booted in 15 seconds
52 booted in 20 seconds
53 booted in 14 seconds
54 booted in 22 seconds
55 booted in 18 seconds
56 booted in 17 seconds
57 booted in 92 seconds
58 booted in 15 seconds
59 booted in 15 seconds
60 booted in 17 seconds
61 booted in 16 seconds
62 booted in 22 seconds
63 booted in 17 seconds
64 booted in 12 seconds
65 booted in 17 seconds

At the 66th vCPU, bhyve crashes with the following line

Booting the VM with 66 vCPUs
Assertion failed: (curaddr - startaddr < SMBIOS_MAX_LENGTH), function smbios_build, file /usr/src/usr.sbin/bhyve/smbiostbl.c, line 936.
Abort trap (core dumped)    

At this point, bhyve crashes with every ncpu+1, so I had to stop the loop from running.

I had to look into the topology of the CPUs, which FreeBSD can report using

sysctl -n kern.sched.topology_spec

<groups>
 <group level="1" cache-level="0">
  <cpu count="256" mask="ffffffffffffffff,ffffffffffffffff,ffffffffffffffff,ffffffffffffffff">0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255</cpu>
  <children>
   <group level="2" cache-level="0">

[...]

   </group>
  </children>
 </group>
</groups>

You can find the whole output here: kern.sched.topology_spec.xml.txt

The system that we need for production requires 240 vCores. This topology gave me the idea to run it manually, using the sockets, cores and threads options →

bhyve -c 240,sockets=2,cores=60,threads=2 -m 1024 -H -A \
    -l com1,stdio \
    -l bootrom,BHYVE_UEFI.fd \
    -s 0,hostbridge \
    -s 2,virtio-blk,vm.raw \
    -s 31,lpc \
    vm0

And it booted all fine! 🙂

240 booted in 33 seconds

For production, however, I use vm-bhyve, so I’ve added the following to my configuration →

cpu="240"
cpu_sockets="2"
cpu_cores="60"
cpu_threads="2"
memory="1856G"

And yes, for those who are wondering, bhyve can virtualize 1.8T of vDRAM all fine 🙂

For my debugging nerds, I’ve also uploaded the bhyve.core file to my server, you may get it at bhyve-cpu-allocation–256.tgz

As long as this is helpful for someone out there, I’ll be happy. Sometimes I forget that not everyone runs massive clusters like we do.

That’s all folks…

Reply via email.

FreeBSD Jail booting & running Devuan GNU+Linux with OpenRC

Two years ago I wrote a blog post named VoidLinux in FreeBSD Jail; with init, where we installed and “booted” VoidLinux in a FreeBSD Jail. I think it’s time to revise that post.

This time we will be using Devuan GNU+Linux, booting things with OpenRC, and putting some native FreeBSD binaries inside the Linux Jail.

Here’s what I’m running at the moment

root@srv0:~ # uname -v
FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC

To bootstrap the Devuan system, we need debootstrap. Specifically, debootstrap that ships with Devuan Chimaera. We can start by installing debootstrap from ports/packages, and then we can modify the rest.

pkg install -y debootstrap

Now we need to fetch Devuan’s debootstrap, extract it, put some files into our debootstrap and set some symbolic links.

# Path might change over time, check https://pkginfo.devuan.org/ for the exact link
fetch http://deb.devuan.org/merged/pool/DEVUAN/main/d/debootstrap/debootstrap_1.0.123+devuan3_all.deb

# .deb files are messy, make a directory
mkdir debootstrap_devuan
mv debootstrap_1.0.123+devuan3_all.deb debootstrap_devuan/
cd debootstrap_devuan/
tar xf debootstrap_1.0.123+devuan3_all.deb
tar xf data.tar.gz

# We need chimaera (latest, symlink) and ceres (origin)
cp usr/share/debootstrap/scripts/ceres usr/share/debootstrap/scripts/chimaera /usr/local/share/debootstrap/scripts/

Now we can bootstrap our system. I will be using a ZFS filesystem, but this can be done without ZFS as well.

Keep in mind that my Jail’s path is going to be /usr/local/jails/devuan0, modify this path as needed 🙂

zfs create zroot/jails/devuan0

debootstrap --no-check-gpg --arch=amd64 chimaera /usr/local/jails/devuan0/ http://pkgmaster.devuan.org/merged/

The installation should start now, but at some point we’ll get the following error:

I: Configuring libpam-runtime...
I: Configuring login...
I: Configuring util-linux...
I: Configuring mount...
I: Configuring sysvinit-core...
W: Failure while configuring required packages.
W: See /usr/local/jails/devuan0/debootstrap/debootstrap.log for details (possibly the package package is at fault)

DON’T PANIC! This is fine 🙂 We just need to chroot inside, fix this manually and install OpenRC


chroot /usr/local/jails/devuan0 /bin/bash
# Fix base packages
dpkg --force-depends -i /var/cache/apt/archives/*.deb
# Set Cache-Start
echo "APT::Cache-Start 251658240;" > /etc/apt/apt.conf.d/00chroot
# Install OpenRC
apt update
apt install openrc

We have almost everything ready. We just need to create a password database file that the jail(8) command uses internally.

cd /usr/local/jails/devuan0/etc/
echo "root::0:0::0:0:Charlie &:/root:/bin/bash" > master.passwd
pwd_mkdb -d ./ -p master.passwd
# Restore the Linux passwd file
cp passwd- passwd

We can also copy our statically linked FreeBSD binaries into the Linux Jail, so we can use them when needed

cp -a /rescue /usr/local/jails/devuan0/native

Now we just need our Jail configuration file. We can put that at /etc/jail.conf.d/devuan0.conf

(This assumes that your network is configured similar to “VNET Jail HowTo Part 2: Networking”.)

# vim: set syntax=sh:
exec.clean;
allow.raw_sockets;
mount.devfs;

devuan0 {
  # ID == epair index :)
  $id             = "0";
  $bridge         = "bridge0";
  # Set a domain :)
  $domain         = "bsd.am";
  vnet;
  vnet.interface = "epair${id}b";

  mount.fstab     = "/etc/jail.conf.d/${name}.fstab";

  exec.prestart   = "ifconfig epair${id} create up";
  exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
  exec.prestart  += "ifconfig ${bridge} addm epair${id}a up";

  exec.start      = "/sbin/openrc default";

  exec.stop       = "/sbin/openrc shutdown";

  exec.poststop   = "ifconfig ${bridge} deletem epair${id}a";
  exec.poststop  += "ifconfig epair${id}a destroy";

  host.hostname   = "${name}.${domain}";
  path            = "/usr/local/jails/devuan0";

  # Maybe mkdir this path :)
  exec.consolelog = "/var/log/jail/${name}.log";

  persist;
  allow.socket_af;
}

As you may have guessed, we also need an fstab file, which should go into /etc/jail.conf.d/devuan0.fstab

devfs       /usr/local/jails/devuan0/dev      devfs     rw                   0 0
tmpfs       /usr/local/jails/devuan0/dev/shm  tmpfs     rw,size=1g,mode=1777 0 0
fdescfs     /usr/local/jails/devuan0/dev/fd   fdescfs   rw,linrdlnk          0 0
linprocfs   /usr/local/jails/devuan0/proc     linprocfs rw                   0 0
linsysfs    /usr/local/jails/devuan0/sys      linsysfs  rw                   0 0
tmpfs       /usr/local/jails/devuan0/tmp      tmpfs     rw,mode=1777         0 0

Finally, let’s load some kernel modules (in case they haven’t been loaded yet)

service linux enable
service linux start
kldload netlink

Let’s start our Jail!

jail -c -f /etc/jail.conf.d/devuan0.conf

Is it running?

 # jls -N
 JID             IP Address      Hostname                      Path
 devuan0                         devuan0.bsd.am                /usr/local/jails/devuan0

Yes it is!

Now we can jexec into it and run things!

root@srv0:~ # jexec -l devuan0 /bin/bash
root@devuan0:~# uname -a
Linux devuan0.bsd.am 4.4.0 FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC x86_64 GNU/Linux

The process tree looks neat as well!

root@devuan0:~# ps f
  PID TTY      STAT   TIME COMMAND
74682 pts/1    S      0:00 /bin/bash
78212 pts/1    R+     0:00  \_ ps f
48412 ?        Ss     0:00 /usr/sbin/cron
41190 ?        Ss     0:00 /usr/sbin/rsyslogd

Let’s do some networking things! Let’s set up networking and install OpenSSH.
(This assumes that your network is configured similar to “VNET Jail HowTo Part 2: Networking”.)

# Setup network interfaces
/native/ifconfig lo0 inet 127.0.0.1/8 up
/native/ifconfig epair0b inet 10.0.0.10/24 up
/native/route add default 10.0.0.1

# Install and start OpenSSH server
apt-get --no-install-recommends install openssh-server
rc-service ssh start

You should be able to ping things now

~# ping -n -c 1 bsd.am
ping: WARNING: setsockopt(ICMP_FILTER): Protocol not available
PING  (37.252.73.34) 56(84) bytes of data.
64 bytes from 37.252.73.34: icmp_seq=1 ttl=55 time=2.60 ms

---  ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.603/2.603/2.603/0.000 ms

To make the networking configuration persistent, we can use the rc.local file, which OpenRC executes at boot.

chmod +x /etc/rc.local
echo '/native/ifconfig lo0 inet 127.0.0.1/8 up' >> /etc/rc.local
echo '/native/ifconfig epair0b inet 10.0.0.10/24 up' >> /etc/rc.local
echo '/native/route add default 10.0.0.1' >> /etc/rc.local

Do you know what this means? It means that now you can have proper ZFS, DTrace and pf firewalling with Linux. Congrats, now you have clean waters.

That’s all folks…

P.S. I would like to thank my mentor, norayr, for showing me how to start/stop OpenRC manually, and the awesome folks at #devuan for their help.

Reply via email.

Incident Postmortem: BSD.am home server @ 3-4 July 2023

Incident Information

Between Mon Jul 3 03:05:59 2023 and Tue Jul 4 01:10:15 2023, the home server named BSD.am (also known as pingvinashen.am) was completely down.

The event was triggered by a battery issue due to high temperature at the apartment where the home server resides.

A swollen battery caused the computer to shut down, as it pushed higher-than-normal heat into the system.

The event was detected by the monitoring system at mon.bsd.am, which notified the operators via email and chat (XMPP).

This incident affected 100% of the users of the following services:

  • jabber.am public XMPP server
  • conference.jabber.am public XMPP MUC server
  • օրագիր.հայ public WriteFreely instance
  • սարեան.ցանցառներ.հայ public Lobste.rs instance
  • BIND.am public DNS server and its zones
  • Multiple hosted blogs, including this one you’re reading.
  • A private ZNC server for Armenian Hackers Community
  • git.bsd.am public Gitea server
  • A matterbridge instance connecting multiple communities
  • A Huginn instance automating tasks (such as RSS to Telegram, RSS to newsletter) for Armenian Hackers Communities
  • A newsletter instance running listmonk.app
  • A private Miniflux.app server for Armenian Hackers Community
  • FreeBSD Jail users’ meetup website

Multiple community members contacted the operator (yours truly) asking for an ETA.

Response

After receiving an email at Mon Jul 3 03:06:49 2023, the Chief Debugging Officer (yours truly) started analyzing the possible issue. According to Monit (mon.bsd.am), all the services were unavailable and the server was not reachable by IP (based on ICMP).

The usual suspect, network failure at the ISP level, was ruled out, as the second home server (arnet.am) was functioning properly.

The person physically closest to the server was the operator’s sibling (lucy.vartanian.am); however, she did not have a background in Unix system administration or in hardware maintenance. Also, she was asleep.

Hours later, the siblings organized a FaceTime call to debug the issue remotely.

The system did boot the kernel properly; however, it would shut down before the services could complete their startup.

Clearly, the machine needed to be shipped to the operator (yours truly) to be debugged on the spot.

So that’s what the team did.

[Image: precise addresses are removed for privacy]

Recovery

At the operator’s (yours truly) location, the BIOS logs listed that the system had suffered an ASF2 Force Off. This usually means a thermal problem.

The operator (yours truly) disassembled the laptop, hoping the system just needed a little dust clean-up and fresh thermal paste.

Turns out the problem was actually a swollen battery.

[Photos of the swollen battery]

After removing the battery, the system booted fine. Just to be sure that the swollen battery was the root cause, a complete system stress test was run. No issues were detected (well, except “Missing Battery”).

The system was returned to its residence, connected to the internet, and all services were accessible again.

[Image: precise addresses are removed for privacy]

Next Steps

  • Install a new battery in the future, as the laptop is not connected to a UPS
  • Make sure to test the hardware during environmental changes (too cold, too hot, etc)
  • Run a simple status page with an RSS feed in a separate environment and notify users

If you’re new here, then first of all I’d like to thank you for reading this IR Postmortem article.

Yes, this was an IR Postmortem of a home server of a tiny community in a tiny country. This was not about Amazon, Google, Netflix, etc.

I wrote this for two reasons.

First, I wanted to show you how awesome the actual internet is. You see, when Amazon dies, everything dies with it. Your startup infra, your website, your hobby projects, everything.

When my server dies, only my server dies. And that’s the beauty of the internet. If you can, please, keep that beauty going.

Second, I run a small security company, illuria, Inc., where we help companies harden their environment and recover from incidents. It’s been years since I wrote an IR postmortem personally (my team members who do that are way smarter than me!), and I thought it would be a nice exercise to write it all by myself 🙂

I hope you liked this.

That’s all folks…

Reply via email.

Antranig Vartanian

July 1, 2023

A customer asked me to help them set up a tiny lab with many open-source tools. They are planning to move from corporate services to open-source alternatives such as NextCloud, Gitea, etc.

Unfortunately, they run only Linux, Ubuntu to be more specific, and as a UNIX gentleman, I didn’t want to put everything into a single host, so I decided to use containers, in this case LXC, a.k.a. Linux Containers.

How hard could it be?

Oh god. Layers upon layers of abstraction within the system that have no idea about each other.

Like, who would assume that LXC would automatically download and install dnsmasq and assign IP addresses without my knowledge, or that it would push rules into the firewall?

The more I use Linux Containers, the more I understand why FreeBSD Jails / illumos Zones didn’t win.

People don’t want automation or control, they want “please do this for me as I don’t wanna do it myself” tools.

I’d expect at least a post-installation message that says “We have installed and configured dnsmasq, reconfigured some systemd things, modified the following file (which is not mentioned in any man page, so you can use Google instead of man/apropos) and will use IP address ranges that you didn’t approve”.

Is this why Docker won? Is it because people DIDN’T want to learn how to do software packaging? I hope not. I wanna believe it’s because developers wanted to “think operationally”.

Oh, and from a FreeBSD perspective, what’s even more weird is that

  1. there are no proper manual pages.
  2. the documentation is weird. It talks about a utility named lxc, but I’m using 20 utilities named lxc-*, and I still cannot find the proper documentation for that
  3. it’s very much segmented. For example, on FreeBSD we talk about which is better: jail.conf, BastilleBSD, pot, AppJail or Jailer. Here the same utility (lxc) has multiple config files with no proper versioning, pretty complex manual pages, and not even examples or HowTos.

I’m looking at this and thinking “oh well, if we build a proper tool, I bet we can win some of the market”, until you realize, of course, that when people hear FreeBSD, they will think “it’s not Linux? maybe it’s not worth it, otherwise I would’ve heard about it”.

I’m just angry here. Please ignore my rants.

Cheers y’all.

Reply via email.

FreeBSD package repo with specific versions

illuria’s ProfilerX runs on LureOS, which is our custom operating system based on FreeBSD.

To update the operating system, we rely on two tools: pkg(8) for packages and freebsd-update for the base.

Initially, I set up our poudriere and package repo in the FreeBSD way, so our URLs look like /FreeBSD:13:amd64/devel and /FreeBSD:13:amd64/prod. This is done by expanding the ${ABI} variable, similar to what FreeBSD does in FreeBSD.conf.

This worked fine, but now that there’s a new FreeBSD out there (13.2), I didn’t want to put the new packages in the old URL, but rather have a URL for each major.minor version. This is mostly for the enterprises who take their time to upgrade software.

It turns out (after reading the pkg.conf(5) manual page) that the easiest way to do this is to use the VERSION_MAJOR and VERSION_MINOR variables.

The new LureOS will use /${ABI}/${VERSION_MINOR}/repo, which will expand to /FreeBSD:13:amd64/1/devel, making it easier for us to extend life after a new release; a client-side sketch follows.
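
On the client side, a repository config under /usr/local/etc/pkg/repos/ would then look something like this (a sketch; the URL is a placeholder):

LureOS: {
  url: "https://pkg.example.com/${ABI}/${VERSION_MINOR}/prod",
  enabled: yes
}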

That’s all folks…

Reply via email.