Category Archives: Tech

iOS ships Dvorak, finally

I’m a huge fan of the Dvorak keyboard layout, but if there’s one thing I love more than “Evolved vs Engineered” solution debates, it’s that nothing ever wins the “standardized” debate.

That being said, the main reason I never moved to Dvorak properly was always a device not having a proper keyboard. Sometimes it was my Android phone with a weird ROM, but most of the time it was my iPhone.

However, I just learned that Apple shipped the Dvorak layout with iOS 16.

Here’s Lilith’s iPhone running iOS 15

And here’s my iPhone running iOS 16

And I’ve gotta say, it’s not bad at all

That’s all folks…

Reply via email.

Antranig Vartanian

September 23, 2022

I’m running two VMs on my M1 MacBook Air: an x86_64 FreeBSD and an x86_64 LureOS (illuria’s OS); both are emulated.

And yet, somehow, according to macOS, my browser is Using Significant Energy.

To be honest, I believe macOS, but the real question is, how did we get to a place where a piece of software is consuming more power than a complete Operating System?

Reply via email.

The command command

According to the 2018 edition of The Open Group Base Specifications (Issue 7), there’s a command named command which executes commands.

Wait, macOS is OpenGroup UNIX 03 certified, right?

command running uname -a
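As a quick stand-alone sketch of what that looks like (any POSIX shell should behave the same way):

```shell
# `command` executes its arguments as a simple command,
# bypassing any shell function of the same name.
uname() {
  echo "not today"    # a function that shadows the real uname(1)
}

uname             # runs the function
command uname     # bypasses the function and runs the real uname(1)
```

This is also why defensive scripts write `command ls` instead of `ls` when they can't trust the caller's environment.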

I tried tracing back the history. macOS is mostly based on FreeBSD, as we can see in their open-source code.

So I started tracing back the FreeBSD code, and I found the current implementation.

I found the oldest commit about command in FreeBSD’s source tree, but it said

Import the 4.4BSD-Lite2 /bin/sh sources

builtins.def

So I opened up the SVN tree of CSRG, and there I found this

date and time created 91/03/07 20:24:04 by bostic

builtins.def

However, if I knew how to use SVNWeb better, I’m pretty sure I could navigate around the /old/sh directory.

It’s funny how this line
# NOTE: bltincmd must come first!
is in both the macOS code AND the CSRG code from 30 years ago.

That’s all folks…

Reply via email.

BSDCan 2022 Talks and Scary Thumbnail

I don’t know if it’s YouTube that chose this thumbnail or if it was someone from BSDCan, but I’ve gotta say, I love it! xD

But in all seriousness, you can find my talk “Own The Stack: FreeBSD from a Vendor’s Perspective by Antranig Vartanian (ft. Faraz Vahedi)” on YouTube.

There’s a whole playlist, with each talk more interesting than the last.

Looks like I know what I will be doing this weekend ☺️

Reply via email.

Meta-programming in Shell

Wikipedia defines meta-programming as:

programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyze or transform other programs, and even modify itself while running

Uncle Wiki

I had to write a “framework” at work where a shell program would run other shell programs “dynamically”. Let’s dig in!

As I mentioned in my earlier post Two Colons Equals Modules, you can “emulate” modules and functions in Shell (at least in FreeBSD’s /bin/sh) by using ::, so it would be module::function
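As a tiny sketch (the module and function names here are made up for illustration; FreeBSD’s /bin/sh and bash accept `::` in function names, though stricter POSIX shells such as dash may not):

```shell
# A made-up "math" module, emulated purely by naming convention.
# FreeBSD's /bin/sh (and bash) accept "::" in function names;
# stricter POSIX shells may reject the definition.
math::add() {
  echo $(( $1 + $2 ))
}

math::add 2 3    # prints 5
```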

Here we will do the same, however we will do hook::module.

The goal is to have a Shell program that would take a pid as an argument and do something with that PID, say print a group of information, maybe use DTrace to trace it, etc.

Let’s start by writing our main program.

#!/bin/sh
set -m

usage()
{
  echo "${0##*/} pid"
}

# print usage if argc < 1
[ "${#}" -lt "1" ] && usage && exit 1

# load scripts
load_scripts()
{
  for ctl in ./*.ctl.sh;
  do
    . "${ctl}"
  done
}

# stop the runner by killing the PIDs
runner_stop()
{
  IFS=":"
  for pid in $1;
  do
    kill $pid
  done
  exit
}

# Stop the runner if user sends an input
# This is useful if the runner is executed via a controller
wait_input()
{
  read command
  runner_stop ${PIDS}
}

# a.k.a. main()
runner_start()
{
  # make sure the process exists
  _pid="$1"
  ps -p "${_pid}" 1>/dev/null
  [ $? != 0 ] && exit 2

  # initiate scripts
  load_scripts

  # change IFS to :
  # loop over $SCRIPTS and execute the add hook
  IFS=":"
  for ctl in ${SCRIPTS};
  do
    add::${ctl} "${_pid}"
  done

  # now that we know the commands, loop over them too!
  # inside the loop set IFS to "," to set args
  for cmd in ${COMMAND};
  do
    IFS=","
    set -- ${cmd} # unquoted on purpose, so IFS splits the name from its args
    run::$1 $2
  done

  # Add trap for signals
  trap "runner_stop ${PIDS}" EXIT SIGINT SIGPIPE SIGHUP
  # reset IFS
  unset IFS
  wait_input
}

RUNNERDIR=$(dirname "$0")
(cd $RUNNERDIR && runner_start "$1")

Let’s digest a bit of that. First, we check if the number of arguments provided is less than 1

[ "${#}" -lt "1" ] && usage && exit 1

then we call usage and exit with return code 1.

The load_scripts function will load a bunch of scripts (from the same directory) as long as the scripts are suffixed .ctl.sh

Here’s an example script, say fds.ctl.sh, which will print the File Descriptors used by the process; we will use procstat internally.

#!/bin/sh

add::fds()
{
  COMMAND="fds,$1:$COMMAND"
}

run::fds()
{
  procstat --libxo=xml -w 5 -f "$1" &
  PIDS="$!:$PIDS"
}

export SCRIPTS="fds:$SCRIPTS"

Here’s where meta-programming comes into use (I think), we have a variable named $SCRIPTS, which is modified to add the script name into it, $PATH-style, and two functions, add::fds and run::fds. As you have guessed add:: and run:: are the hook names.

I’ll add another script, it will use procstat as well, but this time we will print the resource usage

#!/bin/sh

add::resource()
{
  COMMAND="resource,$1:$COMMAND"
}

run::resource()
{
  procstat --libxo=xml -w 5 -r "$1" &
  PIDS="$!:$PIDS"
}

export SCRIPTS="resource:$SCRIPTS"

The same applies here, one variable, $SCRIPTS and two functions, add::resource and run::resource.

Which means that after loading our scripts, all four functions will be loaded into our program and the environment variable $SCRIPTS will have the value resource:fds:
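You can see the accumulation itself in a stand-alone sketch (the sourcing is elided; each assignment stands in for what one .ctl.sh script does):

```shell
# What each sourced *.ctl.sh script does to $SCRIPTS, PATH-style:
SCRIPTS=""
SCRIPTS="fds:$SCRIPTS"         # done when fds.ctl.sh is sourced
SCRIPTS="resource:$SCRIPTS"    # done when resource.ctl.sh is sourced

echo "$SCRIPTS"    # resource:fds:
```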

Good? Okay let’s continue.

Since we used : to separate the names of the scripts, we must set IFS to :, and then we start looping over $SCRIPTS. Now we just run add::${ctl}, which would be add::fds and add::resource. We also pass the ${_pid} variable, in case it’s needed.

These two functions would do more meta-programming by setting the $COMMAND variable to script_name,arguments:$COMMAND, again PATH-style.

Which means that the $COMMAND variable has the value fds,89913:resource,89913:

The next bit is a bit tricky: since we’ve set $COMMAND to prog0,arg1:prog1,arg1,arg2: (well, not really arg2, but we could’ve), we need to

  1. Use “,” as IFS
  2. Tell sh to set the positional parameters, so prog0 becomes $1 and arg1 becomes $2, etc.

and now we execute run::$1 $2, which would be run::fds 89913 then run::resource 89913.
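Stripped of the hooks, the IFS + `set --` trick looks like this (a stand-alone sketch with a hard-coded $COMMAND; the real runner dispatches to the run:: functions instead of building a string):

```shell
# COMMAND holds "name,arg" records separated by ":", PATH-style.
COMMAND="fds,89913:resource,89913:"

RESULT=""
IFS=":"
for cmd in $COMMAND; do     # ":" splits the records
  IFS=","
  set -- $cmd               # unquoted on purpose: "," splits name from argument
  RESULT="${RESULT}run::$1 $2;"   # the real runner would call run::$1 "$2" here
done
unset IFS

echo "$RESULT"    # run::fds 89913;run::resource 89913;
```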

I think I can make this better by running run::$@, where $@ is basically all the parameters, but I will test that later

– antranigv at 6am reading the code that he wrote drunk

In the end, we add some signal trapping, we reset IFS and we just wait for an input.

Okay, so we now have a piece of software that reads other programs and modifies itself while running. We have a meta-program!

Let’s give it a run.

# ./runner.sh 89913
<procstat version="1"><files><89913><procstat version="1"><rusage><89913><process_id>89913</process_id><command>miniflux</command><user time>01:37:54.339245</user time><system time>00:19:43.630210</system time><maximum RSS>61236</maximum RSS><integral shared memory>5917491656</integral shared memory><integral unshared data>1310633336</integral unshared data><integral unshared stack>114278656</integral unshared stack><process_id>89913</process_id><command>miniflux</command><files><fd>text</fd><fd_type>vnode</fd_type><vode_type>regular</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss/usr/local/bin/miniflux</path><page reclaims>16939</page reclaims><page faults>7</page faults><swaps>0</swaps><block reads>5</block reads><block writes>1</block writes><messages sent>12603917</messages sent></files><files><fd>cwd</fd><fd_type>vnode</fd_type><vode_type>directory</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss/root</path><messages received>14057863</messages received><signals received>807163</signals received><voluntary context switches>79530890</voluntary context switches><involuntary context switches>5489854</involuntary context 
switches></files><files><fd>root</fd><fd_type>vnode</fd_type><vode_type>directory</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss</path></89913></rusage></procstat></files><files><fd>jail</fd><fd_type>vnode</fd_type><vode_type>directory</vode_type><fd_flags>read</fd_flags><ref_count>-</ref_count><offset>-</offset><protocol>-</protocol><path>/usr/local/jails/rss</path></files><files><fd>0</fd><fd_type>vnode</fd_type><vode_type>character</vode_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>4</ref_count><offset>0</offset><protocol>-</protocol><path>/usr/local/jails/rss/dev/null</path></files><files><fd>1</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>2</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>3</fd><fd_type>kqueue</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>4</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>2</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>5</fd><fd_type>pipe</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>1</ref_count><offset>0</offset><protocol>-</protocol><path>-</path></files><files><fd>6</fd><fd_type>socket</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>3</ref_count><offset>0</offset><protocol>TCP</protocol><sendq>0</sendq><recvq>0</recvq><path>192.168.10.5:63835 
192.168.10.3:5432</path></files><files><fd>7</fd><fd_type>socket</fd_type><fd_flags>read</fd_flags><fd_flags>write</fd_flags><fd_flags>nonblocking</fd_flags><ref_count>3</ref_count><offset>0</offset><protocol>TCP</protocol><sendq>0</sendq><recvq>0</recvq><path>::.8080 ::.0</path></files></89913></files></procstat>

Why XML? Because libxo’s JSON output is not “real” JSON when procstat is running in repeat mode, but that’s a story for another day.

All code examples can be found as a GitHub Gist.

That’s all folks…

Reply via email.

Moving (back) to WordPress

Our story starts 2-3 weeks ago, when my younger sister asked me to open a blog for her (it runs in the family, I think). Like any sane person, I created a FreeBSD Jail, configured its networking and followed an online article on how to deploy WordPress on FreeBSD. That’s the proper way to do it, right?

And I fell in love! Last time I used WordPress was in 2018, but this time it felt different. I’m not sure why (yet), but it feels like it came back to its roots. It has a simple screen that helps you write.

Usually, I would say that tools don’t matter, and yet I (narcissistically) rant about tools, that Docker is awful and that FreeBSD Jails are amazing. But I think tools matter, they always mattered; it’s just that we say such things in order to not sound like gatekeepers to newcomers.

Next was my girlfriend Lilith: she migrated from Blogspot to WordPress, and she blogged about that too. Also deployed in a FreeBSD Jail on my home server.

Before making such decisions I look at data, so I downloaded my posts and did Unix magic.

$ xmlstarlet select -t -v '/ul/li/span' posts.txt | cut -d ' ' -f 3 | sort | uniq -c | sort
   2 2017
   2 2019
   3 2018
   7 2020
  14 2022
  29 2016
  35 2014
  47 2021
  90 2015

This comes from my Armenian blog, where I used to have WordPress, and yes, during 2015 I was very busy AND I used WordPress.

My Armenian blog is my “lab”, so I moved it not only to WordPress, but to a whole new domain, a Unicode domain with a Unicode TLD, անդրանիկ.հայ.

My English blog used to live at https://antranigv.am/weblog_en/, so I migrated it to a new subdomain, weblog.antranigv.am.

Migrating from Hugo to WordPress is not what I want to talk about, that’s a story for another day and that day is tomorrow!

I did some basic things on WordPress, such as disabling comments (which, incidentally, Rubenerd just blogged about), and I added “Reply via email” as noted in Kev’s blog.

Oh, and I added a plugin that publishes all the articles to ActivityPub, so now you can follow @antranigv@weblog.antranigv.am from the Fediverse (e.g. Mastodon, Pleroma, etc.).

In the first week I blogged more on my Armenian blog than I did in months.

Expect a flood of posts (well, not really, more like 2-3 posts a day).

That’s all folks…

Reply via email.

Blogging on static generated sites with OPML and XSLT in 2022

This is gonna be a long blog post, so bear with me.

I started blogging in February of 2014. A friend of mine, my mentor, norayr, got me my first domain, pingvinashen.am (literally means: the town of penguins), because I did not have money back then. I started hosting WordPress on it, tried to blog as much as I can, anything from technical knowledge to personal opinions.

Over the years I moved from a WordPress blog to a statically generated website using Hugo, and then I started an English blog with the same framework as well.

This worked fine for me, because everyone was doing the same and it fulfilled my needs anyway.

Until I realized that many of my title-less posts (actually, the title was just »»») were kinda “icky”.

So I started researching the origins of blogging. Now, I already knew about Adam Curry and how he started the podcasting “industry”, but I never knew about blogging itself.

Obviously, I found DaveNet, the oldest running blog.

Currently, Scripting.com (Dave Winer’s updated blog) has been running for: 27 years, 6 months, 9 days, 21 hours, 20 minutes, 42 seconds. (taken from his website).

Learning more about Dave, I learned a lot about the origin of OPML, which stands for Outline Processor Markup Language. I knew a bit about it since it’s the standard format to export and import RSS/Atom feeds into news aggregators, but I never actually KNEW what it was about. If you are interested, check out opml.org/Spec2.opml.

My interest in OPML spiked when I saw that my favorite blogger, Rubenerd, was using it for his Omake page.

Okay, so you can host OPML pages WITH styling using XSLT, the Extensible Stylesheet Language Transformations language.

As you know, I’m a huge fan of XML; while it’s not as “modern” as JSON or as “cool” as YAML, I think it has its proper use cases. This seems to be one of them.

I started copying Rubenerd’s XSL file and ended up with this, which is not close to it anymore. I learned about recursive calling, templating with matches, etc.

First, I would love to tell you that my homepage is finally made in OPML+XSLT. Here’s my process:

  • Write the content using an outliner and export as OPML
  • Use xsltproc (part of macOS base, BTW) to generate an HTML output
  • Export that HTML to wherever you want.

Does this sound similar to static site generators? Because it is, except that static site generators have their own templating language, while in this case, I’m using XSLT.

Okay, let’s talk more about the details.

First, I was using Zavala, my outliner of choice for the Apple ecosystem. The first issue was that the links in there are Markdown (which is [text](link)). The second issue was that (and I’m not sure about this) I was not able to edit the attributes of a node/outline.

I wanted to use Drummer, but I didn’t want to log in with Twitter to use it. I’ve had issues with Twitter in the past, where they deleted my 6-year-old account in 2015.

Luckily, there’s a version called Electric Drummer (hereafter D/E); it’s a bit outdated, but it was good enough for my needs.

First things first, I “converted” my homepage to OPML.

After that I wrote the XSLT code.

The xsltproc tool is actually very interesting, the usage is pretty simple and it follows the standards very well. The error messages are pretty human readable.

On my first try, I had an issue with links, as in <a href=""> tags, because XSLT does not allow a raw < in text by default. So my idea was to see what D/E would do after saving. Turns out it converts them to HTML-encoded text, i.e. &lt;a href=&quot;https://antranigv.am/&quot;&gt;antranigv&lt;/a&gt;. Which meant that I could use disable-output-escaping to achieve my <a> tag needs.

This got me thinking, maybe I could also use the HTML <img> tag?

Technically, there’s no way to add an image tag in D/E, however, you can script your way around it, so here’s what I did:

<outline text="Add image" created="Mon, 11 Apr 2022 23:37:14 GMT">
    <outline text="url = dialog.ask(&quot;Enter image URL&quot;, &quot;&quot;, &quot;&quot;)" created="Tue, 12 Apr 2022 21:45:09 GMT"/>
    <outline text="op.insert('&lt;img src=&quot;' + url + '&quot;&gt;', right)" created="Tue, 12 Apr 2022 21:46:01 GMT"/>
</outline>

Basically, I used Dialog to ask the user for the link and then paste the outline as a new first child of the bar cursor.

After that I just do xsltproc -o index.html opml.xsl index.opml. Wait, can’t I just include the XSL page in the OPML, like Rubenerd’s Omake? Yes, I can, but I’m not sure how things would work out in other people’s browsers, so I just generate the HTML file locally and publish it remotely.

In an ideal world, I would use these technologies for my day-to-day blogging with a bit of change.

  • I would either do some changes in E/D, e.g.
    • Add a dialog.form, which is similar to dialog.ask, where the input can be a text field instead of a single line (more on that later)
    • Make it understand Operating System commands using Shell (execute publish.sh) or add more Node-like JS in it.
  • Or, do changes in Zavala to support HTML links, HTML image tags. I would love this more, because it’s native to macOS. I’ve been playing around with Swift lately, I’ll try this next month.

Assuming I would use this as my day-to-day blogging software, how would that look? Well, I started experimenting; this is what I got for now.

The nice thing about Drummer is that it adds the calendarMonth and calendarDay types automatically.

The last missing piece for me would be the ability to add a code block. Ideally, I would use dialog, but oh boy it does not understand \n or \r, which meant doing a very dirty hack. If anyone knows a better way, please let me know.

First, I wrote a Drummer script that takes in the code encoded as base64, decodes it, replaces the newlines with <br/>, and puts them in a <code><pre> tag as a new first child of the bar cursor. Here’s the script:

Like I said, in an ideal world 🙂

So, here are my conclusions.

I started tinkering with all this because I wanted title-less posts like Dave (here’s an example of how that would look in RSS). I learned a lot about OPML and XSL, and I got motivated by Rubenerd to write my own XSL, which ended up looking like a mini-hugo.

I think I will spend some time making patches to Electric Drummer and Zavala, and I will try building a PoC for blogging.

I think XSLT is very interesting in this day and age, it has a huge potential when used correctly and most importantly, there’s a lot of history behind it.

The question is, where do we go from here? Should I do this because it’s old-school and cool, or should I find another way to blog more with title-less posts?

All that aside, this was very fun.

Thank you for reading.

P.S. If you have any questions, ideas, suggestions or want to chat with me, I’m always available.

That’s all folks…

Reply via email.

Git Remote URL, the lazy way

I develop and run my code on different machines. We have a centralized storage in our infrastructure which can be mounted via NFS, SSHFS or SMB.

The “problem” is that the remote servers, which also mount my remote home (automatically, thanks to AutoFS), don’t have my keys, and they never should; the keys live in only one place, my laptop. The keys I need to commit are my SSH key for Git pushes and my GPG key for signing those commits.

The usual problem was that the remote URL I git pull from on the development server needs to be an HTTP URL, internally accessible to that server. Which means I can’t git push from my laptop, because we don’t allow pushing via HTTP(S).

At the same time, if I set the URL to an SSH URL, then git pull will not work on the development server, because it does not have my SSH keys.

I know that I can set multiple remotes in git, but how about setting a different URL for a remote’s push and pull?

Turns out that’s doable!

It’s also very simple:

First, we clone using the accessible HTTP URL

git clone https://my.git.server.local/myuser/myrepo.git

Then we set the URL only for pushing

git remote set-url --push origin git@my.git.server.local:myuser/myrepo.git

And now let’s check the remote addresses

% git remote -v
origin  https://my.git.server.local/myuser/myrepo.git (fetch)
origin  git@my.git.server.local:myuser/myrepo.git (push)

Yey, exactly what I needed!

That’s all folks…

Reply via email.

Tweeting with Huginn and Twurl

After deploying Huginn I wanted to connect my Twitter accounts, so every blog post would be automatically tweeted.

The problem is, there seems to be an issue with the Twitter service, either in Huginn or at Twitter, which I’m sure someone is working on fixing.

However, I was able to find a workaround by using Twurl, a curl-like application for interacting with the Twitter API.

You will need to do a couple of things.

  • Make sure you have a Twitter app API (click here for more info)
  • Make sure you have ENABLE_INSECURE_AGENTS set to true in Huginn’s .env
  • Install twurl. It was not available as a FreeBSD package, so I installed it using RubyGems: gem install twurl

Next, you need to authorize your app (commands from Twurl’s README.md) with the same Unix user that’s running Huginn;

twurl authorize --consumer-key key       \
                --consumer-secret secret

And now we need to set up a new Shell Command Agent.

Now, I had to spend a lot of time making it work; the command-line options are very… sensitive.

This is what I ended up with;

{
  "path": "/",
  "command": [
    "/usr/local/bin/twurl",
    "/2/tweets",
    "-A",
    "Content-type: application/json",
    "-d",
    "{\"text\": \"{{title}} | {{url}}\\n{{content | strip_html | replace: '\\n', ' ' | truncate: 128}}\"}",
    "-t"
  ],
  "unbundle": "true",
  "suppress_on_failure": false,
  "suppress_on_empty_output": "false",
  "expected_update_period_in_days": 15
}

Let’s go one by one.

The path does not matter, as we’re not interacting with files.

I am running FreeBSD so my twurl command path would be /usr/local/bin/twurl. You may run which twurl to find yours.

The /2/tweets is the resource we’re sending a request to, -A is for headers, and -d specifies the body data (which implies it’s a POST request).

My sources are RSS feeds, so I’m using things like {{title}} and {{url}}; you can do whatever you want. Since I’m inserting JSON inside JSON, I had to use \\n so it converts to \n in the command. Be careful about that.

In the end, -t will “trace” the command, so we can see (if needed) the POST request as well as the result.

The unbundle parameter tells Huginn not to use the Ruby gems bundled with Huginn; instead, the command runs in a clean environment, outside of Huginn’s bundler context.

I left everything else as is.

Now, you can tweet from Huginn.

NOTE: You can use the -u flag with twurl to specify which account to use, refer to Changing your default profile for more info about that.

And now, all works fine.

That’s all folks…

Reply via email.