Addressing failed setrlimit calls in sudo

Written in the mid-afternoon in English

After installing sudo 1.8.29 from pkgsrc (security/sudo) I started seeing this warning message frequently:

sudo: setrlimit(3): Invalid argument

It took a few rounds, but eventually I applied an acceptable patch for the pkgsrc-2019Q4 release. Later an upstream workaround was committed and included in the sudo 1.8.30 release.
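
For reference, a classic way to provoke that EINVAL from setrlimit(2) is asking for a soft limit above the hard limit. I am not claiming this is the exact failure mode sudo hit, but it is easy to demonstrate in a throwaway subshell:

```shell
# Lower the hard limit for open files, then try to raise the soft limit
# above it: setrlimit(2) rejects soft > hard with EINVAL, which the
# shell reports as "Invalid argument".
bash -c 'ulimit -H -n 64; ulimit -S -n 128' \
    || echo "soft limit above hard limit was rejected"
```

The limit values are arbitrary; any pair with soft greater than hard triggers the same error.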

»
The Proxmox wiki has instructions for importing the CA certificate. Instead of following the OS X instructions to the letter and importing the host certificate of each cluster node, just import the pve-root-ca.pem file in Keychain Access (File > Import Items), then open the item and mark it trusted (e.g. Always trust).
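
The import can also be scripted from the terminal. This is a sketch, assuming a node named pve1.example.com and using security(1) against the login keychain; security may still prompt for confirmation when recording the trust setting:

```shell
# Copy the cluster CA from any node, then import it into the login
# keychain and mark it trusted (pve1.example.com is a placeholder).
scp root@pve1.example.com:/etc/pve/pve-root-ca.pem .
security add-trusted-cert -r trustRoot \
    -k ~/Library/Keychains/login.keychain-db pve-root-ca.pem
```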

Regenerating Proxmox certificates

Written early in the afternoon in English

The new requirements for trusted certificates on macOS Catalina and iOS 13 blocked me from accessing the web UI on Proxmox installations (NET::ERR_CERT_REVOKED). Fresh installations would work, as Proxmox has been updated to generate “better” certificates. Existing installations, unfortunately, are not automatically fixed on upgrading to Proxmox 6.

Certificate management on Proxmox is handled with pvenode(1) — except when it isn’t. There is no functionality there for regenerating the self-signed certificates. An older wiki page for HTTPS certificate configuration provided some useful hints: pvecm(1) has an updatecerts command. It won’t, however, regenerate existing (unexpired) certificates.

Against the warnings on the Certificate management page I thought I’d try removing the apparently relevant files manually:

cd /etc/pve
rm pve-root-ca.pem priv/pve-root-ca.key nodes/*/pve-ssl.{key,pem}

Then I regenerated the certificates and restarted pveproxy(8) on each node:

pvecm updatecerts --force
systemctl restart pveproxy

Refreshing the page in the browser restores access to the web UI.
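
To double-check that a node is serving the regenerated certificate, the issuer and validity window can be read over TLS; pve1.example.com is a placeholder for a cluster node, and this needs to run somewhere that can reach port 8006:

```shell
# Fetch the web UI certificate and print its issuer and validity dates.
openssl s_client -connect pve1.example.com:8006 </dev/null 2>/dev/null |
    openssl x509 -noout -issuer -dates
```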

Fixed configure script in tcsh

Written at lunch time in English

I noticed that tcsh 6.22.02 has a broken configure script:

./configure: gl_HOST_CPU_C_ABI_32BIT: not found

This looked like an unexpanded m4 macro to me. I was unable to reproduce the error by running autoreconf under Debian buster, so I switched to a NetBSD host and tried there. Indeed, running autoreconf there produced the same broken configure script.

Upon closer inspection, it turns out that devel/gettext-m4 had been updated to a new version in pkgsrc without anyone noticing that the generated configure scripts now throw an error. The cause was a missing file (host-cpu-c-abi.m4).

The package Makefile has a hardcoded list of files to install from gettext-tools/gnulib-m4 (as opposed to calling the install target via make), probably to avoid installing unnecessary or irrelevant files. However, this means that any relevant changes to gettext-tools/gnulib-m4/Makefile can easily go unnoticed.

I’ve added the missing file to the list, but I worry that this approach is prone to errors. Perhaps some easy check could be added and noted in the package Makefile to detect problems, e.g. generating a sample configure script before committing a version update.
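
A minimal form of that check could be a grep over the freshly generated script for macro names that survived unexpanded. The gl_/gt_ prefix pattern below is my assumption about what gettext’s gnulib macros look like, and may need tuning:

```shell
# Run after "autoreconf -fi" in the package's work directory: any
# remaining gl_*/gt_* token in configure suggests an m4 file went
# missing during macro expansion.
conf=${1:-configure}
if grep -nE '(gl|gt)_[A-Z][A-Z0-9_]+' "$conf"; then
    echo "possible unexpanded m4 macros in $conf" >&2
fi
```

In an actual pre-commit check the match would of course make the update fail; a grep like this would have flagged gl_HOST_CPU_C_ABI_32BIT before the version bump landed.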

»
I’ve released roller 1.21 for ease of packaging in pkgsrc. The only change is to match the new option names in pflogsumm 1.1.5.
»
I’ve fixed matching of IPv6 addresses in sysutils/pflogsumm and also updated it to version 1.1.5 in pkgsrc. Note that the naming of options has changed from using underscores to using hyphens.
»
I fished out a couple of upstream commits (patch #1 and patch #2) for net/mtr to silence the “Error decoding localhost address” messages.
»
I applied a small patch to mail/postgrey to silence an error about the PID file when stopping the service.
»
I fished out an upstream commit to graphics/gd to address CVE-2018-1000222. While there, I also restored the option to make linking with libtiff optional.

Network speed and IRQ affinity

Written at evening time in English

By default many Linux network interface card drivers set their SMP affinity mask to either all zeroes or all ones (“ff” — the length of the mask depends on the number of CPUs on the system). The former results in all queues and interfaces running on CPU ID 0, which can become a performance bottleneck due to insufficient computing power. The latter results in all queues and interfaces being scheduled on multiple CPUs, which can become a performance bottleneck due to increased CPU memory cache misses.
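
For reference, the mask is a hexadecimal CPU bitmap exposed per interrupt under /proc. A sketch of inspecting and pinning one interrupt, where eth0 and IRQ 24 are placeholders and writing the mask requires root:

```shell
# Find the NIC's IRQ numbers (the interface name is a placeholder).
grep eth0 /proc/interrupts || true

# A mask selecting only CPU n is 1 << n: CPU0 -> 1, CPU1 -> 2, CPUs 0-3 -> f.
printf 'mask for CPU1: %x\n' "$((1 << 1))"

# Pin IRQ 24 to CPU1; skipped here unless the file is writable (root).
if [ -w /proc/irq/24/smp_affinity ]; then
    echo 2 > /proc/irq/24/smp_affinity
fi
```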