Tag Archives: FreeBSD

bhyve CPU Allocation Test for a 256-core machine

During the last bhyve weekly call, Michael Dexter asked me to run the bhyve CPU Allocation Test that he wrote, in order to see whether the number of CPUs in the guest affects how long the system takes to boot.

Here’s a post with the details of the test and my findings.

The host machine runs the following:

# uname -a
FreeBSD genomic.abi.am 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64

# sysctl hw.model hw.ncpu
hw.model: AMD EPYC 7702 64-Core Processor
hw.ncpu: 256

# dmidecode -t processor | grep 'Socket Designation'
        Socket Designation: CPU1
        Socket Designation: CPU2

# sysctl hw.physmem hw.realmem hw.usermem
hw.physmem: 2185602236416
hw.realmem: 2200361238528
hw.usermem: 2091107983360

Basically, it’s FreeBSD 13.2 with 2 TB of RAM and 2 CPUs with 64 cores each, 2 threads per core, totaling 256 vCores.

The test runs a bhyve VM with a minimal FreeBSD built with OccamBSD. The main changes are the following:

  • /boot/loader.conf has the line autoboot_delay="0"
  • No services are enabled
  • /etc/rc.local has the line shutdown -p now

The VM boots and then immediately shuts itself down.
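
For reference, here’s a rough sketch of what such a timing loop could look like. This is not Michael Dexter’s actual script, just a minimal reconstruction; the vm.raw disk image, the BHYVE_UEFI.fd path and the VM name vm0 are assumptions.

#!/bin/sh
# Hypothetical sketch of the CPU allocation test:
# boot the OccamBSD guest with 1..hw.ncpu vCPUs and time each boot.
ncpu=$(sysctl -n hw.ncpu)
echo "Host CPUs: $ncpu"
vcpus=1
while [ "$vcpus" -le "$ncpu" ]; do
    echo "Booting the VM with $vcpus vCPUs"
    start=$(date +%s)
    bhyve -c "$vcpus" -m 1024 -H -A \
        -l com1,stdio \
        -l bootrom,BHYVE_UEFI.fd \
        -s 0,hostbridge \
        -s 2,virtio-blk,vm.raw \
        -s 31,lpc \
        vm0 > /dev/null
    end=$(date +%s)
    echo "$vcpus booted in $((end - start)) seconds"
    # the guest powers itself off; destroy the VM instance before the next run
    bhyvectl --destroy --vm=vm0
    vcpus=$((vcpus + 1))
done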

Here’s what I’ve got in the log file →

Host CPUs: 256
1 booted in 9 seconds
2 booted in 9 seconds
3 booted in 9 seconds
4 booted in 9 seconds
5 booted in 9 seconds
6 booted in 9 seconds
7 booted in 9 seconds
8 booted in 9 seconds
9 booted in 10 seconds
10 booted in 10 seconds
11 booted in 10 seconds
12 booted in 11 seconds
13 booted in 10 seconds
14 booted in 11 seconds
15 booted in 12 seconds
16 booted in 9 seconds
17 booted in 12 seconds
18 booted in 18 seconds
19 booted in 14 seconds
20 booted in 15 seconds
21 booted in 22 seconds
22 booted in 17 seconds
23 booted in 23 seconds
24 booted in 10 seconds
25 booted in 10 seconds
26 booted in 17 seconds
27 booted in 14 seconds
28 booted in 15 seconds
29 booted in 12 seconds
30 booted in 15 seconds
31 booted in 31 seconds
32 booted in 19 seconds
33 booted in 15 seconds
34 booted in 32 seconds
35 booted in 18 seconds
36 booted in 22 seconds
37 booted in 24 seconds
38 booted in 17 seconds
39 booted in 24 seconds
40 booted in 13 seconds
41 booted in 15 seconds
42 booted in 23 seconds
43 booted in 37 seconds
44 booted in 21 seconds
45 booted in 19 seconds
46 booted in 12 seconds
47 booted in 17 seconds
48 booted in 19 seconds
49 booted in 17 seconds
50 booted in 18 seconds
51 booted in 15 seconds
52 booted in 20 seconds
53 booted in 14 seconds
54 booted in 22 seconds
55 booted in 18 seconds
56 booted in 17 seconds
57 booted in 92 seconds
58 booted in 15 seconds
59 booted in 15 seconds
60 booted in 17 seconds
61 booted in 16 seconds
62 booted in 22 seconds
63 booted in 17 seconds
64 booted in 12 seconds
65 booted in 17 seconds

At the 66th vCPU, bhyve crashes with the following output:

Booting the VM with 66 vCPUs
Assertion failed: (curaddr - startaddr < SMBIOS_MAX_LENGTH), function smbios_build, file /usr/src/usr.sbin/bhyve/smbiostbl.c, line 936.
Abort trap (core dumped)    

From this point on, bhyve crashes at every higher vCPU count as well, so I had to stop the loop.

Then I looked into the topology of the CPUs, which FreeBSD can report using:

sysctl -n kern.sched.topology_spec

<groups>
 <group level="1" cache-level="0">
  <cpu count="256" mask="ffffffffffffffff,ffffffffffffffff,ffffffffffffffff,ffffffffffffffff">0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255</cpu>
  <children>
   <group level="2" cache-level="0">

[...]

   </group>
  </children>
 </group>
</groups>

You can find the whole output here: kern.sched.topology_spec.xml.txt

The system that we need for production requires 240 vCores. The topology above gave me the idea to allocate that manually, using the sockets, cores and threads options →

bhyve -c 240,sockets=2,cores=60,threads=2 -m 1024 -H -A \
    -l com1,stdio \
    -l bootrom,BHYVE_UEFI.fd \
    -s 0,hostbridge \
    -s 2,virtio-blk,vm.raw \
    -s 31,lpc \
    vm0

And it booted all fine! 🙂

240 booted in 33 seconds

For production, however, I use vm-bhyve, so I’ve added the following to my configuration →

cpu="240"
cpu_sockets="2"
cpu_cores="60"
cpu_threads="2"
memory="1856G"
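
With that in place, starting the guest is the usual vm-bhyve flow. A quick usage sketch, assuming the VM is named vm0:

vm start vm0
# attach to the serial console to watch it boot
vm console vm0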

And yes, for those who are wondering, bhyve can virtualize 1.8T of vDRAM all fine 🙂

For my debugging nerds, I’ve also uploaded the bhyve.core file to my server, you may get it at bhyve-cpu-allocation–256.tgz
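
If you grab it, a typical way to poke at the core on FreeBSD is something like the following (a sketch; adjust the paths to wherever you extracted the tarball):

# load the bhyve binary together with the core dump and get a backtrace
lldb /usr/sbin/bhyve -c bhyve.core
(lldb) bt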

As long as this is helpful for someone out there, I’ll be happy. Sometimes I forget that not everyone runs massive clusters like we do.

That’s all folks…

Reply via email.

FreeBSD Jail booting & running Devuan GNU+Linux with OpenRC

Two years ago I wrote a blog post named VoidLinux in FreeBSD Jail; with init, where we installed and “booted” VoidLinux in a FreeBSD Jail. I think it’s time to revise that post.

This time we will be using Devuan GNU+Linux, boot things using OpenRC and put some native FreeBSD binaries inside the Linux Jail.

Here’s what I’m running at the moment

root@srv0:~ # uname -v
FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC

To bootstrap the Devuan system we need debootstrap; specifically, the debootstrap that ships with Devuan Chimaera. We can start by installing debootstrap from ports/packages, and then we can modify the rest.

pkg install -y debootstrap

Now we need to fetch Devuan’s debootstrap, extract it, put some files into our debootstrap and set some symbolic links.

# Path might change over time, check https://pkginfo.devuan.org/ for the exact link
fetch http://deb.devuan.org/merged/pool/DEVUAN/main/d/debootstrap/debootstrap_1.0.123+devuan3_all.deb

# .deb files are messy, make a directory
mkdir debootstrap_devuan
mv debootstrap_1.0.123+devuan3_all.deb debootstrap_devuan/
cd debootstrap_devuan/
tar xf debootstrap_1.0.123+devuan3_all.deb
tar xf data.tar.gz

# We need chimaera (latest, symlink) and ceres (origin)
cp usr/share/debootstrap/scripts/ceres usr/share/debootstrap/scripts/chimaera /usr/local/share/debootstrap/scripts/

Now we can bootstrap our system. I will be using a ZFS filesystem, but this can be done without ZFS as well.

Keep in mind that my Jail’s path is going to be /usr/local/jails/devuan0, modify this path as needed 🙂

zfs create zroot/jails/devuan0

debootstrap --no-check-gpg --arch=amd64 chimaera /usr/local/jails/devuan0/ http://pkgmaster.devuan.org/merged/

The installation will start, but at some point we’ll get the following error:

I: Configuring libpam-runtime...
I: Configuring login...
I: Configuring util-linux...
I: Configuring mount...
I: Configuring sysvinit-core...
W: Failure while configuring required packages.
W: See /usr/local/jails/devuan0/debootstrap/debootstrap.log for details (possibly the package package is at fault)

DON’T PANIC! This is fine 🙂 We just need to chroot inside, fix this manually, and install OpenRC:


chroot /usr/local/jails/devuan0 /bin/bash
# Fix base packages
dpkg --force-depends -i /var/cache/apt/archives/*.deb
# Set Cache-Start
echo "APT::Cache-Start 251658240;" > /etc/apt/apt.conf.d/00chroot
# Install OpenRC
apt update
apt install openrc

We have almost everything ready. We just need to create a password database file that the jail(8) command uses internally.

cd /usr/local/jails/devuan0/etc/
echo "root::0:0::0:0:Charlie &:/root:/bin/bash" > master.passwd
pwd_mkdb -d ./ -p master.passwd
# Restore the Linux passwd file
cp passwd- passwd

We can also move our statically linked FreeBSD binaries into the Linux Jail so we can use them when needed

cp -a /rescue /usr/local/jails/devuan0/native

Now we just need our Jail configuration file. We can put that at /etc/jail.conf.d/devuan0.conf

(This assumes that your network is configured similarly to “VNET Jail HowTo Part 2: Networking”)

# vim: set syntax=sh:
exec.clean;
allow.raw_sockets;
mount.devfs;

devuan0 {
  # ID == epair index :)
  $id             = "0";
  $bridge         = "bridge0";
  # Set a domain :)
  $domain         = "bsd.am";
  vnet;
  vnet.interface = "epair${id}b";

  mount.fstab     = "/etc/jail.conf.d/${name}.fstab";

  exec.prestart   = "ifconfig epair${id} create up";
  exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
  exec.prestart  += "ifconfig ${bridge} addm epair${id}a up";

  exec.start      = "/sbin/openrc default";

  exec.stop       = "/sbin/openrc shutdown";

  exec.poststop   = "ifconfig ${bridge} deletem epair${id}a";
  exec.poststop  += "ifconfig epair${id}a destroy";

  host.hostname   = "${name}.${domain}";
  path            = "/usr/local/jails/devuan0";

  # Maybe mkdir this path :)
  exec.consolelog = "/var/log/jail/${name}.log";

  persist;
  allow.socket_af;
}

As you may have guessed, we also need an fstab file, which should go into /etc/jail.conf.d/devuan0.fstab

devfs       /usr/local/jails/devuan0/dev      devfs     rw                   0 0
tmpfs       /usr/local/jails/devuan0/dev/shm  tmpfs     rw,size=1g,mode=1777 0 0
fdescfs     /usr/local/jails/devuan0/dev/fd   fdescfs   rw,linrdlnk          0 0
linprocfs   /usr/local/jails/devuan0/proc     linprocfs rw                   0 0
linsysfs    /usr/local/jails/devuan0/sys      linsysfs  rw                   0 0
tmpfs       /usr/local/jails/devuan0/tmp      tmpfs     rw,mode=1777         0 0

Finally, let’s load some kernel modules (in case they aren’t loaded yet)

service linux enable
service linux start
kldload netlink
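
The service linux enable line above already persists the Linux bits across reboots; to make the netlink module load at boot as well, something like this should do the trick:

# append netlink to the list of modules loaded at boot
sysrc kld_list+="netlink"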

Let’s start our Jail!

jail -c -f /etc/jail.conf.d/devuan0.conf

Is it running?

 # jls -N
 JID             IP Address      Hostname                      Path
 devuan0                         devuan0.bsd.am                /usr/local/jails/devuan0

Yes it is!

Now we can jexec into it and run things!

root@srv0:~ # jexec -l devuan0 /bin/bash
root@devuan0:~# uname -a
Linux devuan0.bsd.am 4.4.0 FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC x86_64 GNU/Linux

The process tree looks neat as well!

root@devuan0:~# ps f
  PID TTY      STAT   TIME COMMAND
74682 pts/1    S      0:00 /bin/bash
78212 pts/1    R+     0:00  \_ ps f
48412 ?        Ss     0:00 /usr/sbin/cron
41190 ?        Ss     0:00 /usr/sbin/rsyslogd

Let’s do some networking things! Let’s set up networking and install OpenSSH.
(This assumes that your network is configured similarly to “VNET Jail HowTo Part 2: Networking”)

# Setup network interfaces
/native/ifconfig lo0 inet 127.0.0.1/8 up
/native/ifconfig epair0b inet 10.0.0.10/24 up
/native/route add default 10.0.0.1

# Install and start OpenSSH server
apt-get --no-install-recommends install openssh-server
rc-service ssh start

You should be able to ping things now

~# ping -n -c 1 bsd.am
ping: WARNING: setsockopt(ICMP_FILTER): Protocol not available
PING  (37.252.73.34) 56(84) bytes of data.
64 bytes from 37.252.73.34: icmp_seq=1 ttl=55 time=2.60 ms

---  ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.603/2.603/2.603/0.000 ms

To make the networking configuration persistent, we can use the rc.local file that OpenRC executes at boot.

chmod +x /etc/rc.local
echo '/native/ifconfig lo0 inet 127.0.0.1/8 up' >> /etc/rc.local
echo '/native/ifconfig epair0b inet 10.0.0.10/24 up' >> /etc/rc.local
echo '/native/route add default 10.0.0.1' >> /etc/rc.local

Do you know what this means? It means that now you can have proper ZFS, DTrace and pf firewalling with Linux. Congrats, now you have clean waters.

That’s all folks…

P.S. I would like to thank my mentor, norayr, for showing me how to start/stop OpenRC manually, and the awesome folks at #devuan for their help.

Reply via email.

Incident Postmortem: BSD.am home server @ 3-4 July 2023

Incident Information

Between the hours of Mon Jul 3 03:05:59 2023 and Tue Jul 4 01:10:15 2023 the home server named BSD.am (also known as pingvinashen.am) was completely down.

The event was triggered by a battery issue, brought on by high temperature in the apartment where the home server resides.

The swollen battery caused the computer to shut down by introducing higher-than-normal heat into the system.

The event was detected by the monitoring system at mon.bsd.am, which notified the operators via email and chat (XMPP).

This incident affected 100% of the users of the following services:

  • jabber.am public XMPP server
  • conference.jabber.am public XMPP MUC server
  • օրագիր.հայ public WriteFreely instance
  • սարեան.ցանցառներ.հայ public Lobste.rs instance
  • BIND.am public DNS server and its zones
  • Multiple hosted blogs, including this one you’re reading.
  • A private ZNC server for Armenian Hackers Community
  • git.bsd.am public Gitea server
  • A matterbridge instance connecting multiple communities
  • A Huginn instance automating tasks (such as RSS to Telegram, RSS to newsletter) for Armenian Hackers Communities
  • A newsletter instance running listmonk.app
  • A private Miniflux.app server for Armenian Hackers Community
  • FreeBSD Jail users’ meetup website

Multiple community members contacted the operator (yours truly) asking for an ETA.

Response

After receiving an email at Mon Jul 3 03:06:49 2023, the Chief Debugging Officer (yours truly) started analyzing the issue. According to Monit (mon.bsd.am), all the services were unavailable and the server was not reachable over IP (based on ICMP).

The usual possibility, network failure at the ISP level, was ruled out, as the second home server (arnet.am) was functioning properly.

The person physically closest to the server was the operator’s sibling (lucy.vartanian.am); however, she had no background in Unix system administration or hardware maintenance. Also, she was asleep.

Hours later, the siblings organized a FaceTime call to debug the issue remotely.

The system did boot the kernel properly; however, it would shut down before the services could complete their startup.

Clearly, the machine needed to be shipped to the operator (yours truly) to be debugged on the spot.

So that’s what the team did.

[Image: precise addresses are removed for privacy]

Recovery

At the operator’s (yours truly) location, the BIOS logs listed that the system had suffered an ASF2 Force Off. This usually means a thermal problem.

The operator (yours truly) disassembled the laptop, hoping the system just needed a little dust clean-up and fresh thermal paste.

Turns out the problem was actually a swollen battery.

[Images: the swollen battery]

After removing the battery, the system booted fine. Just to be sure that the swollen battery was the root cause, a complete system stress test was run. No issues were detected (well, except “Missing Battery”).

The system was returned to its residence, connected to the internet, and all services were accessible again.

[Image: precise addresses are removed for privacy]

Next Steps

  • Install a new battery in the future, as the laptop is not connected to a UPS
  • Make sure to test the hardware during environmental changes (too cold, too hot, etc)
  • Run a simple status page with an RSS feed in a separate environment and notify users

If you’re new here, then first of all I’d like to thank you for reading this IR Postmortem article.

Yes, this was an IR Postmortem of a home server of a tiny community in a tiny country. This was not about Amazon, Google, Netflix, etc.

I wrote this for two reasons.

First, I wanted to show you how awesome the actual internet is. You see, when Amazon dies, everything dies with it. Your startup infra, your website, your hobby projects, everything.

When my server dies, only my server dies. And that’s the beauty of the internet. If you can, please, keep that beauty going.

Second, I run a small security company, illuria, Inc., where we help companies harden their environment and recover from incidents. It’s been years since I wrote an IR postmortem personally (my team members who do that are way smarter than me!), and I thought it would be a nice exercise to write it all by myself 🙂

I hope you liked this.

That’s all folks…

Reply via email.

Antranig Vartanian

July 1, 2023

A customer asked me to help them setup a tiny lab with many open-source tools. They are planning to move from corporate services to open-source alternatives such as NextCloud, Gitea, etc.

Unfortunately, they run only Linux, Ubuntu to be more specific, and as a UNIX gentleman I didn’t want to put everything into a single host, so I decided to use containers; in this case LXC, a.k.a. Linux Containers.

How hard could it be?

Oh god. Layers of abstraction within the system that have no idea about each other.

Like, who would assume that LXC would automatically download and install dnsmasq and assign IP addresses without my knowledge, or that it would push rules into the firewall?

The more I use Linux Containers, the more I understand why FreeBSD Jails / illumos Zones didn’t win.

People don’t want automation or control, they want “please do this for me as I don’t wanna do it myself” tools.

I’d expect at least a post-installation message that says “We have installed and configured dnsmasq, reconfigured some systemd things, modified the following files (which are not mentioned in any man page, so you’ll have to use Google instead of man/apropos) and will use IP address ranges that you didn’t approve”

Is this why Docker won? Is it because people DIDN’T want to learn how to do software packaging? I hope not. I wanna believe it’s because developers wanted to “think operationally”.

Oh, and from a FreeBSD perspective, what’s even weirder is that

  1. there are no proper manual pages.
  2. the documentation is weird. It talks about a utility named lxc, but I’m using 20 utilities named lxc-*, and I still cannot find proper documentation for those
  3. it’s very much segmented. For example, on FreeBSD, we talk about which is better: jail.conf, BastilleBSD, pot, AppJail or Jailer. Here, the same utility (lxc) has multiple config files with no proper versioning, pretty complex manual pages, and not even examples or HowTos.

I’m looking at this and thinking “oh well, if we build a proper tool, I bet we can win some of the market”, until I realize, of course, that when people hear FreeBSD, they will be thinking “it’s not Linux? maybe it’s not worth it, otherwise I would’ve heard about it”.

I’m just angry here. Please ignore my rants.

Cheers y’all.

Reply via email.

FreeBSD package repo with specific versions

illuria’s ProfilerX runs on LureOS, which is our custom operating system based on FreeBSD.

To update the operating system we rely on two tools, pkg(8) for packages and freebsd-update for the base.
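
For reference, the usual invocations look something like this:

# upgrade installed packages
pkg update && pkg upgrade
# fetch and install base system updates
freebsd-update fetch install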

Initially, I set up our poudriere and package repo the FreeBSD way, so our URLs look like /FreeBSD:13:amd64/devel and /FreeBSD:13:amd64/prod. This is done by expanding the ${ABI} variable, similar to what FreeBSD does in FreeBSD.conf.

This worked fine, but now that there’s a new FreeBSD out there (13.2), I didn’t want to put the new packages in the old URL, but rather have a URL for each major.minor version. This is mostly for enterprises that take their time upgrading software.

It turns out the easiest way to do this (after reading the pkg.conf(5) manual page) is to use the VERSION_MAJOR and VERSION_MINOR variables.

The new LureOS will use /${ABI}/${VERSION_MINOR}/repo, which will expand to, e.g., /FreeBSD:13:amd64/1/devel, making it easier for us to extend a release’s life after a new one is out.
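
A repository configuration using this scheme could look something like the following sketch; the hostname and file name here are made up for illustration:

# /usr/local/etc/pkg/repos/LureOS.conf (hypothetical)
LureOS: {
  url: "https://pkg.example.com/${ABI}/${VERSION_MINOR}/devel",
  enabled: yes
}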

That’s all folks…

Reply via email.

libucl wrapper in Oberon-2 for Vishap Oberon Compiler

Like I said in my previous post, this is a long project and it relies on a lot of things 🙂

Wrapping libxo was fun, but wrapping libucl was way more complicated. However, it is done. It’s not a complete port, but it has the basics to get started. The goal is to have all wrappers match their libraries.

The source is at antranigv/voclibucl and here’s a screenshot of what it can do.

[Screenshot taken 2023-04-08: voclibucl in action]

Next, I will be improving these wrappers and then work on lzc, a.k.a. Lib_ZFS_Core 😉

See you soon 🙂

Reply via email.

libxo wrapper in Oberon-2 for Vishap Oberon Compiler

I’m working on a new project, which is still only 10% done. For that project I chose the Oberon-2 programming language and the Vishap Oberon Compiler.

After seeing libxo on FreeBSD, I’m not sure I can go back to plain write or printf, so I decided to write an Oberon wrapper for it.
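
If you haven’t played with it, many FreeBSD base utilities expose libxo via the --libxo flag, so the same tool can emit plain text, JSON or XML; for example:

# same data, structured output
df --libxo=json,pretty /
wc --libxo=xml -l /etc/rc.conf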

I just finished the basics but it’s already usable for day-to-day outputs, containers/lists/instances and exit codes.

The source is at antranigv/voclibxo and here’s a screenshot of what it can do.

[Screenshot taken 2023-04-05: voclibxo in action]

Next, I will be wrapping libucl in Oberon.

See you soon 🙂

Reply via email.

Antranig Vartanian

March 29, 2023

After weeks of thinking, I decided that I need to fork Jailer. Yes, I want to fork my own code. There are two reasons to do this.

  1. Keep the promise of Jailer being “very compatible with FreeBSD”
  2. Have a new version that pushes the limits of compatibility.

The fork is going to be named bant, which is Armenian for jail. I think we’re all tired of Greek names at this point 🙂

I’ll share the details of bant as soon as I have a prototype, which means in at least a couple of weeks.

Meanwhile, Jailer will be the very-compatible-with-FreeBSD version that doesn’t break things and allows new users to use Jails with ease.

Fingers crossed…

Reply via email.

Design Guidelines vs Pushing The Limits

One of the design guidelines of Jailer is don’t break FreeBSD. As in: if someone installed and used Jailer, and then deleted the Jailer binary and libraries, their Jails would still run without any issues. We do this with minimal intervention. For example, jailer init patches FreeBSD’s /etc/rc.d/jail, but in a way that you wouldn’t feel much of a difference. We don’t create new rc.conf variables; we just change a couple of loops. In a way, you can keep these changes even if you delete Jailer, and your system would still be much improved. Obviously, we do send these patches to FreeBSD src.

But I’m in front of an issue right now. On one side, I want to keep these guidelines, on the other, pushing the limit will allow me to improve Jailer way more than I expected.

These are the things that I think about before sleep, or in the shower. I made a promise that I will not break the Jail ecosystem. But what if, just what if, the ecosystem was broken in the first place?

Some of you might know that we’ve been working on integrating libucl with Jail. The experiments have been going well, so much so that I want to integrate them into Jailer already, even before they get into FreeBSD (and they might not get in at all).

My dream for Jailer and its ecosystem is complex. I feel that these integrations would do good in the long term, but I want to keep the short term alive as well.

One idea is to fork Jailer and keep two versions of it: one version that’s FreeBSD compliant, and another that pushes the limits.

This is going to be an interesting week…

That’s all folks…

Reply via email.

Call For Testing: Jailer v0.1.1

Well, it’s finally here! After a week of sleepless work, I cleaned up the Jailer codebase and added many features (and removed some as well!) that I’d wanted since last year 🙂

If you are reading this, please consider testing Jailer on FreeBSD. The codebase is at illuria/jailer.

The README.md should have all the info that you need to run Jailer.

If you find any issues, please report them at illuria/jailer/issues, or you can email me personally at antranigv [at] freebsd [dot] am

Here’s the roadmap for what’s coming next:

  1. Complete NetGraph support using jng.
  2. Jailerfile, which will be something similar to Dockerfile, allowing developers to create consistent images.
  3. jailerd and jailerctl, for remote jailer automation. This will be an open-source port of what illuria has already developed.
  4. Distributed Jailer, where jailerctl list will show not just what’s on a remote machine, but on a remote datacenter, inspired by Triton. Again, we have this at illuria, but we need to create an open-source port.

This release is dedicated to

Thank you for reading 🙂

That’s all folks…

Reply via email.