MarsEdit 5.2: Search, Microposting, and Preview Improvements

A while back I asked Daniel Jalkut for a feature. Today, I saw this:

MarsEdit 5.2: Search, Microposting, and Preview Improvements:

Micropost Panel

New defaults are available under Settings -> Blogs -> Publishing to specify which Categories, Tags, and Post Kind should be used when publishing with the Micropost panel.

This made me so happy, as I’ve been loving MarsEdit for the last year or so. I know, I’m late to the party, but I can assure you, it’s still rockin’.

I might actually blog more now, but let’s not make promises that we can’t keep, shall we?


Installing FreeBSD with Root-on-ZFS on Vultr using iPXE

The title is pretty self-explanatory, so let’s get to it, shall we?

I was configuring a server for a customer today, and one of the things I noticed was that FreeBSD was not available for bare metal.

This got me a bit worried, because we use a lot of FreeBSD on Vultr… Well, that’s a lie. We only use FreeBSD on Vultr.

I logged into our company account and noticed that our bare-metal servers do have FreeBSD as an icon for the image.

So I decided to check the docs and found this:

What operating system templates do you offer?

We offer many Linux and Windows options. We do not offer OpenBSD or FreeBSD images for Vultr Bare Metal. Use our iPXE boot feature if you need to install a custom operating system.

Well, that’s sad, but on the other hand, iPXE will be very useful. We can boot a memdisk such as mfsBSD and install FreeBSD from there.

To start, we need a VM that can host the mfsBSD img/ISO file. I spun up a VM on Vultr running FreeBSD (although it could run anything else, it wouldn’t matter), installed nginx on it, and downloaded the file so we can boot from it. Here’s the copy-pasta:

# install and start nginx, then drop the mfsBSD image into the web root
pkg install -y nginx
service nginx enable && service nginx start
fetch -o /usr/local/www/ \
https://mfsbsd.vx.sk/files/images/14/amd64/mfsbsd-se-14.0-RELEASE-amd64.img

This should be enough to get started. Oh, and if you’re not on FreeBSD, the path might be different, like /var/www/nginx or something similar. Check your nginx configuration for the details.
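
Before moving on, it’s worth checking that nginx actually serves the image. A quick fetch against localhost (assuming the default vhost on port 80) should download the file:

fetch -o /dev/null http://127.0.0.1/mfsbsd-se-14.0-RELEASE-amd64.img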

Now we need to write an iPXE script and add it to our Vultr iPXE scripts. Here’s what it looks like:

#!ipxe

echo Starting MFSBSD
sanboot http://your.server.ip.address/mfsbsd-se-14.0-RELEASE-amd64.img
boot

Finally, we can create a bare-metal instance that uses our script for iPXE boot.

Don’t forget to choose the right location and plan.

After the machine is provisioned, access the console, where you will see the boot process.

The default root password is mfsroot.

To install FreeBSD, you can run bsdinstall. The rest will be familiar to you. Yes, you can use Root-on-ZFS. No, it can’t be UEFI; you must use GPT (BIOS).

Good luck, and special thanks to Vultr for giving us the chance to use our favorite tools on the public cloud.

That’s all folks…


Antranig Vartanian

May 11, 2024

We have moved the Vishap Oberon Compiler GitHub organization to vishapoberon as part of our rebranding. The new domain will be vishap.oberon.am, and we will finally have some ecosystem up and running, such as OberonByExample, an official guide, docs, and compiler internals.

As a cautious hacker, I also created another organization that uses the old org name, since GitHub still allows org/repo hijacking.

Also, we have a new library coming soon, I think the scientific community will love it, as it computes 150x faster than the most common alternative.


AI, LLMs and beginners

This AI thing has been going on for a while, especially the LLM part of it. I understand why there is hype around it, especially from VCs, and mostly from people who *checks notes* are not in high tech.

My students are using ChatGPT (and the others, too) a lot, and I keep telling them not to use it. Not because I don’t want them to use LLMs at all, but because LLMs suck. They are just an interface to a computer, and if you’ve ever done computer programming, you know that a computer does what you tell it, not what you mean.

As a beginner (in Software Engineering, System Administration, etc.) you still don’t know what you want a computer to do; that’s why you tell a program what you mean instead of what it should do. We can see this problem everywhere. Here’s a real-life example from today.

I’m using the nginx web server, I’d like to allow only the domain example.com, reject everything else

What my student meant is that if you access the nginx web server via an IP address, it should show nothing; if it’s accessed via a specific domain, such as example.com, it should show the web page.

What ChatGPT understood was access control, so it suggested the following:

location / {
    root   /usr/local/www/nginx/;
    index  index.html index.htm;
    allow example.com;
    deny all;
}

As a beginner, my student thought “well, that was easy!”, and then kept wondering why he couldn’t access his web server. For two days.
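
For the record, nginx’s allow and deny directives only take IP addresses or CIDR ranges, so the suggestion above doesn’t even do what it pretends to. What my student actually needed was name-based virtual hosting: a catch-all default server that closes the connection, plus a server block bound to the domain. A minimal sketch (paths and port assumed):

# Catch-all: requests that arrive by IP address (or any unknown Host) get the connection closed.
server {
    listen      80 default_server;
    server_name _;
    return      444;    # nginx-specific "close the connection without a response"
}

# The actual site, reachable only via its hostname.
server {
    listen      80;
    server_name example.com;
    root        /usr/local/www/nginx/;
    index       index.html index.htm;
}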

And that is why you should not use ChatGPT (or any kind of LLM) as a beginner.

As soon as you understand how a computer works, then go on, use whatever you want. Hell, even use JavaScript. But before using ChatGPT or JavaScript, please learn how a computer works first.


Antranig Vartanian

May 6, 2024

Well, Twitter is officially useless. All I get are engagement posts like “Do you use X or Y?”, where X and Y are options such as Coke or Pepsi.

I know that they are different things, but the right answer here is water, or tea, or coffee.

And I keep changing from “For You” to “Following” but for some reason Twitter (currently known as X) keeps changing it back to “For You”.

I had to log out. Sorry, Twitter: you were an important part of our lives, but not anymore.

On the other hand, my Mastodon feed is really nice. There are some political things here and there that irritate me, but I care about what my friends have to say, even if I don’t agree with everything.


Installing DFIR-IRIS on FreeBSD using Jails

This is a live blog of the installation process of DFIR-IRIS on FreeBSD 14.0-RELEASE using Jails and Jailer.

The main requirements are:

  • Nginx
  • PostgreSQL
  • Python
  • Some random dependencies we saw in the Dockerfile

I assume you already have nginx up and running; we will just be setting up a vhost under the domain name dfir.cert.am. Don’t worry, this is INSIDE our infrastructure; you will not be able to connect to it 🙂

Initial Setup

First we create a jail named iris0, using Jailer:

jailer create iris0

Next we install the required software inside the jail. Looks like everything is available in FreeBSD packages:

jailer console iris0
pkg install \
    nginx \
    python39 \
    py39-pip \
    gnupg \
    7-zip \
    rsync \
    postgresql12-client \
    git-tiny \
    libxslt \
    rust \
    acme.sh

Installing DFIR-IRIS

Since we’re using FreeBSD, we’ll be doing things the right way instead of the Docker way, so we will be running IRIS as a user, not as root.

pw user add iris -m

Next we set up some directories and check out the repo:

root@iris0:~ # su - iris
iris@iris0:~ $ git clone --branch v2.4.7 https://github.com/dfir-iris/iris-web.git iris-web

Finally, we install some Python dependencies using pip.

iris@iris0:~ $ cd iris-web/source
iris@iris0:~/iris-web/source $ pip install -r requirements.txt

Now we have to configure the .env file based on our needs. I will post my version of it; I hope it helps:

# -- DATABASE
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_DB=iris_db
export POSTGRES_ADMIN_USER=iris
export POSTGRES_ADMIN_PASSWORD=longpassword

export POSTGRES_SERVER=localhost
export POSTGRES_PORT=5432

# -- IRIS
export DOCKERIZED=0
export IRIS_SECRET_KEY=verylongsecret
export IRIS_SECURITY_PASSWORD_SALT=verylongsalt
export IRIS_UPSTREAM_SERVER=app # these are for docker, you can ignore
export IRIS_UPSTREAM_PORT=8000

# -- WORKER
export CELERY_BROKER=amqp://localhost
# Set to your rabbitmq instance

# Change these as you need them.
# -- AUTH
#IRIS_AUTHENTICATION_TYPE=local
## optional
#IRIS_ADM_PASSWORD=MySuperAdminPassword!
#IRIS_ADM_API_KEY=B8BA5D730210B50F41C06941582D7965D57319D5685440587F98DFDC45A01594
#IRIS_ADM_EMAIL=admin@localhost
#IRIS_ADM_USERNAME=administrator
# requests the just-in-time creation of users with ldap authentication (see https://github.com/dfir-iris/iris-web/issues/203)
#IRIS_AUTHENTICATION_CREATE_USER_IF_NOT_EXIST=True
# the group to which newly created users are initially added, default value is Analysts
#IRIS_NEW_USERS_DEFAULT_GROUP=

# -- LISTENING PORT
#INTERFACE_HTTPS_PORT=443

Configuring HTTPS

We can use acme.sh to issue a TLS certificate from Let’s Encrypt.

root@iris0:~ # acme.sh --set-default-ca --server letsencrypt
root@iris0:~ # acme.sh --issue -d dfir.cert.am --standalone
root@iris0:~ # acme.sh -i -d dfir.cert.am --fullchain-file /usr/local/etc/ssl/dfir.cert.am/fullchain.pem --key-file /usr/local/etc/ssl/dfir.cert.am/key.pem --reloadcmd 'service nginx reload'

Setup nginx

DFIR-IRIS provides an nginx configuration template at nginx.conf; we will be using that, with a few modifications.

The final nginx.conf will look like this:

#user  nobody;
worker_processes  1;

# This default error log path is compiled-in to make sure configuration parsing
# errors are logged somewhere, especially during unattended boot when stderr
# isn't normally logged anywhere. This path will be touched on every nginx
# start regardless of error log location configured here. See
# https://trac.nginx.org/nginx/ticket/147 for more info. 
#
#error_log  /var/log/nginx/error.log;
#

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    # Things needed/recommended by DFIR-IRIS
    map $request_uri $csp_header {
        default "default-src 'self' https://analytics.dfir-iris.org; script-src 'self' 'unsafe-inline' https://analytics.dfir-iris.org; style-src 'self' 'unsafe-inline';";
    }

    server_tokens off;
    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

    types_hash_max_size             2048;
    types_hash_bucket_size          128;
    proxy_headers_hash_max_size     2048;
    proxy_headers_hash_bucket_size  128;
    proxy_buffering                 on;
    proxy_buffers                   8 16k;
    proxy_buffer_size               4k;

    client_header_buffer_size   2k;
    large_client_header_buffers 8 64k;
    client_body_buffer_size     64k;
    client_max_body_size        100M;

    reset_timedout_connection   on;
    keepalive_timeout           90s;
    client_body_timeout         90s;
    send_timeout                90s;
    client_header_timeout       90s;
    fastcgi_read_timeout        90s;
    # WORKING TIMEOUT FOR PROXY CONF
    proxy_read_timeout          90s;
    uwsgi_read_timeout          90s;

    gzip off;
    gzip_disable "MSIE [1-6]\.";

    # FORWARD CLIENT IDENTITY TO SERVER
    proxy_set_header    HOST                $http_host;
    proxy_set_header    X-Forwarded-Proto   $scheme;
    proxy_set_header    X-Real-IP           $remote_addr;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;

    # FULLY DISABLE SERVER CACHE
    add_header          Last-Modified $date_gmt;
    add_header          'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since   off;
    expires             off;
    etag                off;
    proxy_no_cache      1;
    proxy_cache_bypass  1;

    # SSL CONF, STRONG CIPHERS ONLY
    ssl_protocols               TLSv1.2 TLSv1.3;

    ssl_prefer_server_ciphers   on;
    ssl_certificate             /usr/local/etc/ssl/dfir.cert.am/fullchain.pem;
    ssl_certificate_key         /usr/local/etc/ssl/dfir.cert.am/key.pem;
    ssl_ecdh_curve              secp521r1:secp384r1:prime256v1;
    ssl_buffer_size             4k;

    # DISABLE SSL SESSION CACHE
    ssl_session_tickets         off;
    ssl_session_cache           none;
    server {
        listen          443 ssl;
        server_name     dfir.cert.am;
        root            /www/data;
        index           index.html;
        error_page      500 502 503 504  /50x.html;

        add_header Content-Security-Policy $csp_header;
        
        # SECURITY HEADERS
        add_header X-XSS-Protection             "1; mode=block";
        add_header X-Frame-Options              DENY;
        add_header X-Content-Type-Options       nosniff;
        # max-age = 31536000s = 1 year
        add_header Strict-Transport-Security    "max-age=31536000; includeSubDomains" always;
        add_header Front-End-Https              on;

        location / {
            proxy_pass  http://localhost:8000;

            location ~ ^/(manage/templates/add|manage/cases/upload_files) {
                keepalive_timeout           10m;
                client_body_timeout         10m;
                send_timeout                10m;
                proxy_read_timeout          10m;
                client_max_body_size        0M;
                proxy_request_buffering off;
                proxy_pass  http://localhost:8000;
            }

            location ~ ^/(datastore/file/add|datastore/file/add-interactive) {
                keepalive_timeout           10m;
                client_body_timeout         10m;
                send_timeout                10m;
                proxy_read_timeout          10m;
                client_max_body_size        0M;
                proxy_request_buffering off;
                proxy_pass  http://localhost:8000;
            }
        }
        location /socket.io {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_http_version 1.1;
            proxy_buffering off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_pass http://localhost:8000/socket.io;
        }
    }
}

Setup PostgreSQL

I assume you know how to do this 🙂 You don’t need to configure a separate user; by the looks of it, IRIS likes to do that itself. Thanks to Jails, I was able to run a separate PostgreSQL instance in the iris0 jail.

P.S. If you are running PostgreSQL inside a jail, make sure the following variables are set in your jail configuration; a jail.conf sketch follows below.

  sysvshm         = new;
  sysvmsg         = new;
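
In plain jail.conf(5) syntax that would look something like this (note that sysvsem is an addition of mine, not part of the list above; PostgreSQL typically needs SysV semaphores as well):

iris0 {
    sysvshm = new;
    sysvmsg = new;
    sysvsem = new;  # my addition: PostgreSQL usually wants SysV semaphores too
}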

Running DFIR-IRIS

Now that everything is up and running, we just need to run DFIR-IRIS and it will create the database, needed users, an administration account, etc.

su - iris
cd ~/iris-web/source
. ../.env
~/.local/bin/gunicorn app:app --worker-class eventlet --bind 0.0.0.0:8000 --timeout 180 --worker-connections 1000 --log-level=debug

Assuming everything is fine, we can now set up an rc.d service script to make sure it runs at boot.

For that I wrote two files: the service itself, and a helper start.sh script.

rc.d script at /usr/local/etc/rc.d/iris

#!/bin/sh

# PROVIDE: iris
# REQUIRE: NETWORKING
# KEYWORD: 

. /etc/rc.subr

name="iris"
rcvar="iris_enable"
load_rc_config ${name}

: ${iris_enable:=no}
: ${iris_path:="/usr/local/iris"}
: ${iris_gunicorn:="/usr/local/bin/gunicorn"}
: ${iris_env="iris_gunicorn=${iris_gunicorn}"}

logfile="${iris_path}/iris.log"
pidfile="/var/run/${name}/iris.pid"

iris_user="iris"
iris_chdir="${iris_path}/source"
iris_command="${iris_path}/start.sh"

command="/usr/sbin/daemon"
command_args="-P ${pidfile} -T ${name} -o ${logfile} ${iris_command}"

run_rc_command "$1"

and the helper script at /home/iris/iris-web/start.sh

#!/bin/sh

export HOME=$(getent passwd `whoami` | cut -d : -f 6)

. ../.env

${iris_gunicorn} app:app --worker-class eventlet --bind 0.0.0.0:8000 --timeout 180 --worker-connections 128

Now we set some variables in rc.conf using sysrc:

sysrc iris_enable="YES"
sysrc iris_path="/home/iris/iris-web"
sysrc iris_gunicorn="/home/iris/.local/bin/gunicorn"

Finally, we can start DFIR-IRIS as a service.

service iris start

Aaaaand we’re done 🙂

Thank you for reading!

There are some issues that I’d like to tackle. For example, service iris stop doesn’t work, and it would be nice if we ported all of the dependencies into Ports, but for now this seems to be working fine.

Special thanks to the DFIR-IRIS team for creating this cool platform!

That’s all folks…


Antranig Vartanian

April 26, 2024

I love ZFS…

root@evn0:/var/log/named # du -h -d 1
1.4G    .
root@evn0:/var/log/named # du -A -h -d 1
7.4G    .
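
du(1)’s -A flag reports the apparent size, so ZFS compression is squeezing roughly 7.4G of query logs into 1.4G on disk. ZFS can report the same thing directly; a quick check (the dataset name here is hypothetical):

root@evn0:~ # zfs get compressratio,used,logicalused zroot/var/log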


Mirroring OmniOS: The Complete Guide; Part One

Chapter Ⅰ

I know that “Complete Guide” and “Part One” are oxymorons, but hey, be happy that I’m publishing in parts, otherwise I’d completely ignore this blog post.

Two weeks ago I decided to play with illumos again. I was speaking with a friend and we were sharing our frustrations regarding Open-Source contribution. We write the code, we submit, we get feedback, we submit again, and then we’re ghosted. It’s like the LinkedIn or Tinder version of Software Engineering.

Then I asked him about his best open-source experience and he told me “illumos of course!”.

I was amazed. I thought you had to be very technical to even build illumos, but it turns out they have amazing documentation on building illumos, and OmniOS (an illumos distribution) has done work to make sure the system can be self-hosted (i.e. the OS can build itself).

So, I decided to fire up OmniOS on our hackerspace server running FreeBSD inside a bhyve VM.

The installation went smoothly, but the IPS packages were slow to download. I might be wrong (please correct me if I am), but IPS doesn’t seem to keep a local copy of the files; it always downloads. Is that configurable?

Regardless, I thought that the best way to contribute is to advocate. In order to do that, I needed to make sure that IPS servers are fast in Armenia. Hence, the mirroring project started.

Obey!

Requirements

Here is some terminology that I will use in this blog post, just so we are on the same page.

  • OmniOS: an illumos distribution
  • Origin: OmniOS’s IPS servers at pkg.omnios.org
  • Local: A copy of the Origin
  • Repository: A collection of software
  • Core: The Core Repository of OmniOS
  • Extras: The Extra Repository of OmniOS
  • IPS or PKG: The Image Packaging System and its utility, pkg
  • Zone: an illumos Zone (similar to FreeBSD Jails, Linux Containers, chroot) running on OmniOS

Now that we are on the same page, let’s talk about our setup and what we need.

  • An internet connection: duh!
  • A domain name: I decided to use pkg.omnios.illumos.am. Yes, I’m lucky like that.
  • A publicly accessible IP address.
  • A server: I am running OmniOS Stable (r151048) inside a VM. You can use bare-metal or a cloud VM if you want.
  • Storage: I am currently using around 50GB of storage; expect that to grow to around 300GB when we get to Part Three

Pre-Mirroring Setup

Before we setup our mirror, let’s make sure that we have a good infrastructure that we can maintain.

Here’s what we’ll create

  • A Zone that will act as the HTTP(s) server using nginx at IP address 10.10.0.80
  • A Zone that will do the mirroring using IPS tools at 10.10.0.51
  • A virtual dumb switch (etherstub) that will connect the Zones and the Global-Zone (a.k.a. The Host) together. The GZ will have an address of 10.10.0.1
  • ZFS datasets for each Core and Extras Repository (for each release)

Please note that there are many ways to do this, for example, having everything in a Global Zone, running IPS mirroring and nginx in a single Zone, not using etherstub at all, etc. But I like this setup as it will allow us to “grow” in the future.

From now on, omnios# means that we’re in the Global Zone and zone0# means we’re inside a Zone named zone0.

Let’s start by setting up our etherstub and connecting our Global Zone to it:

omnios# dladm create-etherstub switch0
omnios# dladm create-vnic -l switch0 vnic0
omnios# ipadm create-if vnic0
omnios# ipadm create-addr -T static -a 10.10.0.1/24 vnic0/switch0

Done!
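
If you want to double-check the plumbing, dladm and ipadm will show what we just created:

omnios# dladm show-vnic
omnios# ipadm show-addr vnic0/switch0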

Now we will set up our Zones using the zadm utility. Install zadm by running:

omnios# pkg install zadm

After installing zadm, we’ll create a dataset for our Zones

omnios# zfs create -o mountpoint=/zones rpool/zones

This assumes that your ZFS pool is named rpool.

Finally, we can create our Zones. Running

omnios# zadm create -b pkgsrc www0

will open your $EDITOR, where you need to modify some JSON. Here’s what mine looks like!

{
   "autoboot" : "true",
   "brand" : "pkgsrc",
   "ip-type" : "exclusive",
"dns-domain" : "omnios.illumos.am", "net" : [ { "allowed-address" : "10.10.0.80/24", "defrouter" : "10.10.0.1", "global-nic" : "switch0", "physical" : "www0" } ], "pool" : "", "scheduling-class" : "", "zonename" : "www0", "zonepath" : "/zones/www0" }

After saving the file, zadm will install the Zone.

Now let’s set up our mirroring Zone. Do the same, but change the Zone name to repo, the brand to lipkg (i.e. -b lipkg), and set the IP address to 10.10.0.51/24; a sketch of the result is below.
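
Assuming the same etherstub, the repo Zone’s JSON should come out something like this (my reconstruction, not copied from a live config):

omnios# zadm create -b lipkg repo

{
   "autoboot" : "true",
   "brand" : "lipkg",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.51/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "repo"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "repo",
   "zonepath" : "/zones/repo"
}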

All we need now is to forward HTTP/HTTPS traffic to the www0 Zone and allow all Zones to access the internet using NAT.

Create and edit IPFilter’s NAT file at /etc/ipf/ipnat.conf. Here’s an example configuration:

map vioif0 10.10.0.0/24 -> 212.34.250.10

rdr vioif0 212.34.250.10/32 port 80 -> 10.10.0.80 port 80 tcp
rdr vioif0 212.34.250.10/32 port 443 -> 10.10.0.80 port 443 tcp

Make sure you set the correct interface name and the correct external IP address.
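
If IPFilter is not enabled yet, enable its service and (re)load the NAT rules; these are the stock illumos ipfilter commands:

omnios# svcadm enable network/ipfilter
omnios# ipnat -CF -f /etc/ipf/ipnat.conf
omnios# ipnat -l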

Finally, we can boot our Zones!

omnios# zadm boot www0
omnios# zadm boot repo

You should see the following output when you run zadm again

omnios# zadm
NAME              STATUS     BRAND       RAM    CPUS  SHARES
global            running    ipkg        56G      12       1
repo              running    lipkg         -       -       1
www0              running    pkgsrc        -       -       1

Great! Let’s setup the mirroring process.

Mirroring Setup

Let’s create ZFS datasets for the repos of each release:

repo# zfs create -o mountpoint=/repo rpool/zones/repo/ROOT/repo      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/core 
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/extra

And then we use the pkgrepo command to create a repository

repo# pkgrepo create /repo/r151048/core
repo# pkgrepo create /repo/r151048/extra

And finally, we can start receiving the packages from Origin to Local

repo# pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*'
repo# pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*'

This will take a while, depending on your internet connection speed and the load on OmniOS’s Origin. It’s like a good investment: we spend bandwidth and time now so we save traffic and time later 🙂

After it’s done, we need to set the publisher of these repos to match Origin’s.

repo# pkgrepo set -s /repo/r151048/core   publisher/prefix=omnios
repo# pkgrepo set -s /repo/r151048/extra/ publisher/prefix=extra.omnios
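
You can verify that the prefixes took hold with pkgrepo get:

repo# pkgrepo get -s /repo/r151048/core  publisher/prefix
repo# pkgrepo get -s /repo/r151048/extra publisher/prefix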

And we’re done!

Now need to serve these repos using IPS’s depot server.

We will create two instances of the depotd server, one for core and one for extra.

  • r151048/core will run on 5148
  • r151048/extra will run on 1148
  • (in the future) r151050/core will run on 5150
  • (in the future) r151050/extra will run on 1150

We start with core

repo# svccfg -s pkg/server add r151048_core
repo# svccfg -s pkg/server:r151048_core addpg pkg application
repo# svccfg -s pkg/server:r151048_core setprop pkg/inst_root = /repo/r151048/core/
repo# svccfg -s pkg/server:r151048_core setprop pkg/port = 5148
repo# svccfg -s pkg/server:r151048_core setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/core

And we do the same for extra

repo# svccfg -s pkg/server add r151048_extra
repo# svccfg -s pkg/server:r151048_extra addpg pkg application
repo# svccfg -s pkg/server:r151048_extra setprop pkg/inst_root = /repo/r151048/extra/
repo# svccfg -s pkg/server:r151048_extra setprop pkg/port = 1148
repo# svccfg -s pkg/server:r151048_extra setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/extra

Finally, we enable the services

repo# svcadm enable  pkg/server:r151048_core pkg/server:r151048_extra
repo# svcadm restart pkg/server:r151048_core pkg/server:r151048_extra

Let’s check!
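
Both instances should report online in svcs, and the depot ports should answer HTTP; curl is an assumption here, any HTTP client will do:

repo# svcs pkg/server
repo# curl -I http://localhost:5148/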

We’re good! Now let’s setup Nginx 🙂

The Web Server

This part is pretty easy: we log into www0, install nginx, and set up some paths. I will be posting a copy-pasta of my configs; I assume you can do the rest 🙂

www0# pkgin update
www0# pkgin install nginx

Thank you SmartOS! 🧡

In my nginx.conf, I added

include vhosts/*.conf;

and then in /opt/local/etc/nginx/vhosts I created a file named pkg.omnios.illumos.am.conf, which looks like this:

server {
        listen 80;
        server_name pkg.omnios.illumos.am;

        location /.well-known/acme-challenge/ {
          alias /opt/local/www/acme/.well-known/acme-challenge/;
        }

        location / {
            return 301 https://pkg.omnios.illumos.am$request_uri;
        }
}

server {
    listen       443 ssl;
    server_name  pkg.omnios.illumos.am;

    ssl_certificate      /etc/ssl/pkg.omnios.illumos.am/fullchain.pem;
    ssl_certificate_key  /etc/ssl/pkg.omnios.illumos.am/key.pem;
    location /r151048/core/ {
                proxy_pass http://10.10.0.51:5148/;
    }

    location /r151048/extra/ {
                proxy_pass http://10.10.0.51:1148/;
    }

    location / {
        # This needs to be changed, later...
        add_header Content-Type text/plain;
        return 200 "ok...";
    }
}

Finally, we just need to enable nginx

www0# svcadm enable pkgsrc/nginx

and check!
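
From the outside, the vhost should now proxy through to the depots (again, curl is just what I had handy):

$ curl -I https://pkg.omnios.illumos.am/r151048/core/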

Using the Local Repos

This part is actually pretty easy. We just need to remove everything that exists and add our own. I will be running this on a computer named dna0.

dna0# pkg set-publisher -M '*' -G '*' omnios
dna0# pkg set-publisher -M '*' -G '*' extra.omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/core omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/extra extra.omnios
dna0# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
extra.omnios                origin   online F https://pkg.omnios.illumos.am/r151048/extra/
omnios                      origin   online F https://pkg.omnios.illumos.am/r151048/core/

We’re good! 🙂

Fetching Updates

By the time I wanted to publish this I noticed that there’s a new OmniOS Weekly Update, so I thought, hey, maybe I should try updating the Local Repo as well… how do we do that?

Turns out I just need to pkgrecv again, and then run a refresh command.

pkgrecv -v -s https://pkg.omnios.org/r151048/core/ -d /repo/r151048/core/ '*'
pkgrepo -s /repo/r151048/core refresh

And it looks like we’re good! Maybe we can set up a simple cron job, something like the sketch below 🙂
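
A minimal root-crontab sketch, assuming the paths and URLs from above:

# refresh the local r151048 mirrors nightly at 03:00
0 3 * * * pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core/  '*' && pkgrepo -s /repo/r151048/core  refresh
0 3 * * * pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra/ '*' && pkgrepo -s /repo/r151048/extra refresh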

Final Notes

This has been an amazing experience. Since I started using OmniOS two weeks ago, I’ve set up the mirror, installed two OmniOS deployments in production for two organizations, and talked about it on our Armenian Hackers Radio podcast. With this mirror completely set up, I can advocate even more!

I’d like to send my thanks (and later, my money) to the OmniOS team for the amazing work they’re doing. Special thanks to andyf for answering all of my questions, to neirac for pushing me to try more illumos in my life, and to everyone who contributed to the docs and blog posts that I used. I’ll leave some links below.

Finally, for the coming (two) posts I will talk about mirroring downloads.OmniOS.org (for ISO/USB/ZFS images) and the pkgsrc repository run by SmartOS/MNX.

Thank you for reading and thank you, illumos-community for being so nice ^_^

That’s all folks…

Links


macOS log(1): Finding out the previous name of BT device

I got a new mouse yesterday to use with Mac Mouse Fix, an amazing application that “Makes Your $10 Mouse Better Than an Apple Trackpad!” I can assure you it does.

The mouse connects via Bluetooth, a short-range wireless technology that, even after 25 years, is either insecure, unstable, or both. Sometimes it’s neither, but only when the vendors on both sides are aware of each other.

Anyways. I connected the mouse and renamed it to “Antranig’s Mouse”; now all I need is a cat. An hour later a friend asked me which model the mouse was. I had no idea, but I thought: hey, the original name of the BT device was the model name, right? Maybe I can check that.

Luckily, macOS logs everything, and I mean everything, so I used the log(1) command to find out the previous name.

Here’s the command to run, and what the output looks like:

log show --style compact --info --last 12h --predicate 'process == "bluetoothd" && subsystem == "com.apple.bluetooth"' | grep setName
2024-01-06 18:57:38.908 Df bluetoothd[375:8d0c1] [com.apple.bluetooth:CBStackController] setName: device 01903735-1591-7A71-C597-CE40C2ACB232, 'Dell Mouse MS5120W' -> 'Antranig's Mouse'

A simple explanation:

  • style compact: log has several output styles; there’s the default, which is long, and compact, which is short. You can also set it to json.
  • info: type of information; it can also be default or debug.
  • last: time range, can be set to m, h, d for minutes, hours or days.
  • predicate: a macOS predicate, for more information check Predicate Programming Guide.
    • process: a process, in this case bluetoothd.
    • subsystem: a macOS subsystem, in this case com.apple.bluetooth. How did I know that? Not sure, but my brain contains a lot of information.
  • grep: Unix grep(1), because we party like it’s 1969.

I also don’t remember how I knew that I should look for setName, but that’s life for you.

And of course, we get the output: the device was previously named Dell Mouse MS5120W.

That’s all folks…
