Get the GPG key from a GPG keyserver for a Debian package repository

Let’s say you’ve upgraded to Debian Trixie.

Some apt repos are suddenly refusing to update their package lists during apt update because of missing package signing keys:

Sub-process /usr/bin/sqv returned an error code (1), error message is: Missing key 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF, which is needed to verify signature.
All packages are up to date.
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: https://download.mono-project.com/repo/debian stable-buster InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is: Missing key 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF, which is needed to verify signature.

apt-key was deprecated and isn’t on your system anymore, yet nearly all of the guides you can find on this topic for some reason involve using apt-key to fetch the key from a keyserver before exporting it to the “modern” /etc/apt/trusted.gpg.d/. But apt-key is already deprecated, so.. you don’t have apt-key.

You can just use gpg directly instead of apt-key. So here, we download the mono repo (yeah ok, I know, don’t ask) gpg key (with the fingerprint 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF) from keyserver.ubuntu.com into an intermediate temporary keyring, and then export it directly to the appropriate location in the correct binary format:

gpg --no-default-keyring --keyserver keyserver.ubuntu.com --keyring temp-keyring.gpg --recv-key 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
gpg --no-default-keyring --keyring temp-keyring.gpg --output /etc/apt/trusted.gpg.d/mono-repo.gpg --export

You can also be explicit about which architecture and key must be used by a given repo by adding the arch argument and a signed-by argument pointing at the appropriate gpg keyfile.

So, for example, the repo definition for the mono project:

deb https://download.mono-project.com/repo/debian stable-buster main

Instead would become:

deb [ arch=amd64 signed-by=/etc/apt/trusted.gpg.d/mono-repo.gpg ] https://download.mono-project.com/repo/debian stable-buster main
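Trixie’s apt also understands the newer deb822 style of sources file, so the same repo definition could instead live in a file like /etc/apt/sources.list.d/mono.sources (filename mine) looking something like this:

```
Types: deb
URIs: https://download.mono-project.com/repo/debian
Suites: stable-buster
Components: main
Architectures: amd64
Signed-By: /etc/apt/trusted.gpg.d/mono-repo.gpg
```

Either form works; the deb822 version just makes each field explicit rather than packing them into the [ ] options list.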

Working with Me, or, Why You Should Probably Stop Contacting Me Directly

This is a hopefully more ordered, more complete explanation of my rambling thread on fedi on the 1st, so that I can more easily share this “development” and my reasoning with people in different forums without writing this wall of text into some slack group or mailing list or other. While it already feels self-indulgent as it is to blog about this, if I don’t communicate it in some way I’m hardly going to get the outcome I want, and some of you might get vexed at seemingly unexpectedly not being able to reach me directly any more without understanding why.

Given how high the stakes can be when people are trying to urgently get in touch with me, I also don’t want any of this to be a surprise, so I feel obliged to explain what’s changing and why.

TL;DR Version:

If you emailed me at my Netcalibre / LCHost addresses before 19/12/2025 and haven’t had a reply you aren’t getting one, sorry.

My direct email is not the fastest or best way to get help. If it’s work related, please email a role address instead – i.e. sales, support (there’s a list of the more common ones below). Please, do not CC me on emails to role addresses. If I need to see it, the ticketer will tell me about it.

If you have a mobile number ending 80 for me, delete it – that number is no longer current, just forwards to the IVR, and I do not have, nor plan to get, a replacement “work mobile”.

If it’s urgent or out of hours, email the details to the right role address and then use the main company number 020 3026 2626 to get help – it’s a very short menu, I promise.

The Full Version:

I Am but One Flawed Human

I wouldn’t describe myself as a control freak (certainly I’ve never been tyrannical or obsessive about having control ‘in and of itself’), but I certainly obsess about things being done correctly, in a way that has previously made it harder for me to trust other people to get things done. One of my co-directors has previously – and probably somewhat charitably – described me as someone who “allows perfect to be the enemy of good”, which I will accept is as succinct an explanation as I can imagine of how my character flaws present.

The last two years have seen me add, for the first time, new director-shareholders, and with us now having an actual technical team.. well, to say that these changes have required some personal growth in being able to trust and delegate is quite the understatement. It has not been a trivial or easy task. I have not always been easy for my fellow directors to work with. I am lucky that they can see through that to the value in us working together.

The evidence to date, then, has proven that while the results weren’t always immediately perfect, ceding control and delegating more has not, in fact, resulted in the world ending, and has unarguably paid significant dividends for a business which shows no signs of slowing down. So now, really, as we approach the 25th anniversary of me starting a Linux shell hosting company in my spare time at university, I suppose I have no serious alternative but to continue leaning into this change.

Scaling into the Future

At the heart of all of this is the perhaps obvious-when-you-write-it-down principle that I, personally, do not scale.

This is not exactly news to anyone, myself included, nor is it the sort of earth-shattering business management insight upon which someone can build a career in book tours and business seminars, but it is perhaps something I had not felt able – or perhaps prepared – to accept, at least in a.. practical sense.

Swerving any serious introspection about this particular elephant in the room in the past generally involved quipping that I needed to be able to clone myself (something which I think we can probably all agree would more likely than not be a terrible development for everyone).

To continue growing like we are, we will have to build a team around our directors who can – to begin with – operate as an extension of ourselves, but in the end ultimately replace us in several areas of the business so that we can focus entirely on complex contracts, projects and actual Direction – and be able to take a holiday without feeling compelled to open a laptop every single day (this will be a very hard habit for me to break). Now then, I instead quip about working to put myself out of a job.

So: I am starting this year by being realistic about the state of my mailbox as 2025 came to a close, reflecting on how it got into that state, thinking about the current and likely future demands on my time, and ultimately communicating the changes in how I work to ensure everyone gets the best possible experience out of their dealings with Netcalibre going forward as the business continues to grow.

I recognise that part of the reason I had any customers at all before the arrival of people here who understood Sales and Marketing is because people were mad enough to want to work specifically with me. These relationships are something I value and the burden is going to be on us – me – to ensure that you don’t feel short-changed in all of this.

One of my key objectives is that you should, if anything, only notice an improvement in service by not contacting me directly – even if I ultimately end up being heavily involved in your work, there will be a team of people around me to herd cats, organise meetings, take minutes, prepare quotations, send invoices (not always a given just a few short years ago!) and just generally manage everything in a way that will ensure that things happen when they should.

You know, almost like a real business.

You Should Probably Stop Emailing Me..

If you emailed me before 19/12/2025 and haven’t had a reply you aren’t getting one, sorry.

To actually make a change meant doing something with the backlog of messages I was never going to get to, and yesterday I tossed everything pre 19/12/2025 into an archive folder “as is” (with 10,768 unreads, now that Outlook has actually counted all of them).

I’ve now cleared out any remaining “holiday period” stuff (i.e. post 19/12/2025) – taking me to the vaunted “Inbox Zero” for perhaps the first time ever – and will try to be stricter from now on in forwarding any suitable work related stuff into the ticketer rather than replying directly.

This means – more than ever before – that my direct email will not be the fastest way to get help, as I will just forward general business/service/product questions/requests into the ticketer for the team to collect – even if I know your request will need my involvement or input.

So if it’s a general work related request, please – please – email (only!) a role address instead so that the team can help you – i.e:
accounts (you have questions relating to being billed for a thing),
access (you need to get in somewhere we control in order to do a thing),
hands (you want us to go somewhere and do a thing),
sales (you want a quote for, or to buy, a thing),
support (you need help with some other thing not covered above),
(all at netcalibre dot uk)
Please, do not CC me on emails to role addresses. If I need to see it, the ticketer will tell me about it, and me being added to the CC just tends to mean I get two copies of every email for cases I am actively involved in.

In the unlikely event that you feel the need to escalate, ask for it in the ticket and it will happen. There is no KPI for “making the least escalations”, so nobody is in any way incentivised to try and prevent it.

If that doesn’t work for your case for some reason, then mail or call your account manager (this might be news to you: we have people who fill that role now!).

If that doesn’t work, or you don’t have an account manager, then email me, or call in and ask to speak with me.. which does however bring me onto..

..and You Will Probably Have to Stop Calling Me

Where e-mailing me directly is certainly bad for your request, 500+ people (a sheer guess based on the size of my own phonebook) having my mobile number and being able to call me whenever is bad for everyone’s requests.

Putting aside the fact that I am not always going to be the person on call on a given night, my work focus is too easily derailed, and for long periods of time at that, by people being able to call me directly, and so it’s had to stop. Sorry. I do honestly wish I wasn’t this person, but at this point I don’t see this changing.

A 5 panel comic from MonkeyUser.com.  The first three panels show a stick figure working at a computer, visualising in their head an increasingly complex process flow of some sort as you might when focusing on a particularly complex task.  In the third panel, a voice calls across the room - "Hey! Do you have 1 sec?"  In the fourth panel, the stick figure turns towards the voice, all of the mental visualisation disappears in a puff of smoke, and the voice then says "Nevermind".  The fifth, double wide panel, shows the stick figure in their resulting state of confusion - "What was I doing?" - floating in a void surrounded by elements of their mental flow chart, their screen, keyboard, chair, and a cat, door and cardboard box.
Basically this, but with a mobile phone instead of a voice from across the room.
Focus on MonkeyUser.com

If you still have a mobile number ending 80 for me in your phonebook, delete it. It’s no longer current and isn’t coming back; for a couple of months now it has just forwarded to our main office number. I might very occasionally – like, once a month – read messages that are sent to it, as I gradually ‘exit’ from using that number and unpick it from my life after probably 20 years of use.

In short, I do not have a ‘work mobile’ and will likely not have one again.

If it’s work related (or an out of hours emergency), please call the main number 020 3026 2626 – our IVR is very short day or night and you’ll be guided to the right help in one step.

How This Helps My Team

Our team is already very good, but the only way they get better is by maximising their training opportunities, which means everything has to go through them so that they get every possible chance to learn something new – either shadowing as we handle your case, or in my weekly teaching sessions where we work through the team’s list of topics they want to learn more about.

These sessions are open to everyone in the business, not just people with engineer in their job title, because I am a big believer in trying to ensure everyone in the business understands as much as possible about the products and services they are working with. Levelling up everyone’s expertise makes it much more likely that errors and omissions can be avoided or at least detected by anyone, and allows everyone to transact with our customers more confidently.

How This Helps You

Our support team in particular has a 30 minute target SLA during working hours for “triage-by-human” as a minimum for all tickets, an SLA they have hit 99.35% of the time over the last 90 days at the time of writing this post.

I think it’s fair to say that this in itself is a point of some pride for our team and I guarantee that this is much quicker than the average response time for an email sent to me.

Three graphical semi-circular dials with a red-yellow-amber-green colour scale with the "needle" firmly planted in the green zone in each case, showing Response SLA statistics for 7, 30 and 90 days, at 99.54%, 99.68% and 99.35% respectively.
Response SLA dials from our helpdesk, as at 02/01/2026 18:45.

I am hopeful this stops you being dissuaded from bringing stuff to us out of worry about “bothering me”, which I heard a few times from people towards the end of 2025. I don’t want to inadvertently drive business away because people think the only way to transact with us is to actually “bother me” – that hasn’t been the case for the last two years.

We have built a great team of people who clearly enjoy their work and who genuinely want to help and – frankly – are always going to be better as a team than me individually at answering emails. I implore you to send your emails to them, or to call them.

What Isn’t Changing

Some of you have specifically contracted me through Netcalibre to fulfil a role inside your business. That does not change, and particularly anything “private and confidential” should still flow through my direct mailbox.

Anything complicated or high-stakes will still have my involvement, at the very least in the background.

I’m not going to become a Director of PowerPoint and people will still have to pry my terminal client out of my cold, dead hands.

The team can (almost always) reach me if they need my help.

My commitment to “do it right or make it right” was – and is – of course, the shared commitment of our entire team.

If you know me personally and need my number for social reasons, then private mention me on fedi or drop me an email, of course.

Don’t Suffer in Silence

If you’re worried or not sure about something, or in the unlikely event that something doesn’t seem to be going right, you can always ask whoever’s dealing with your case to double-check with me so that we can put those concerns to rest. Nobody will be offended.

If anything at all is worse from these changes, then please let me know, and we will fix it.

List all Proxmox VM snapshots on a given Ceph pool

It’s useful to occasionally review all of the snapshots that exist on your Ceph pool so that you can identify old ones that can probably be deleted – not just to free up space, but to ensure you are not wasting performance by having too many layers of unneeded snapshots hanging around.

The below script will make this easy and fast to do.

It is a refinement of Proxmox Forum user 43n12y’s excellent suggestion, and it looks like this in use – shown here in human-readable table mode rather than the default JSON output. The JSON output is intended to be fed to some other script to carry out automated follow-up actions, like sending an email and/or deleting snapshots over a certain age automatically:

root@vm00:~/scripts# ./snapshots.sh -p MYCEPHPOOL -t
Node VMID VM Name Snapshot Dates
vm01 1033 cpanel1.XXXXXXX.net-CL8 "2024-06-07"
vm01 121 fw0.XXXXXX.YYYYYYY.net "2024-08-27","2024-08-27","2024-08-27","2024-08-27","2024-08-27"
vm00 2004 unifi.ZZZZZZZ.co.uk "2024-09-14"
vm01 2007 vm-XXXXX.YYYYY.net "2024-07-19"
#!/bin/bash

# Help Function
Help()
{
        echo
        echo "Show information about VM snapshots that exist on a given Ceph pool"
        echo
        echo "Syntax: $0 -p POOLNAME"
        echo
        echo "Required:"
        echo "-p NAME   Specify the pool name"
        echo
        echo "Options:"
        echo "-h        Print this help message"
        echo "-t        Print out as a table - i.e. don't output as the default JSON"
        echo
        echo "To get a list of pool names, you can use:"
        echo "ceph osd pool ls"
        echo
}

unset -v pool
unset -v text

while getopts hp:t opt; do
        case $opt in
                h) Help ; exit ;;
                p) pool=$OPTARG ;;
                t) text=true ;;
                *) Help ; exit ;;
        esac
done

: ${pool:?-p is required: You must specify the Ceph pool with -p NAME}

tmpfile=$(mktemp "/tmp/snapshot-search-${pool}.XXXXXX")

pvesh get /cluster/resources --type vm --output-format json-pretty >> "${tmpfile}"

if [ -n "$text" ]; then
        unset -v tabledata
fi

for vmid in $(rbd ls -l -p "$pool" | grep "^vm-.*@" | cut -f 2 -d "-" | uniq); do
        vmname=$(jq -r ".[] | select ( .vmid == ${vmid}) | .name" ${tmpfile})
        node=$(jq -r ".[] | select ( .vmid == ${vmid}) | .node" ${tmpfile})
        filter=".[] | select ( .name != \"current\" ) + {\"vmname\": \"${vmname}\",\"vmid\": \"${vmid}\"} | .snaptime |= strftime(\"%Y-%m-%d\")"
        if [ -n "$text" ]; then
                unset -v snapdates
                snapdates=$(pvesh get "/nodes/$node/qemu/$vmid/snapshot" --output-format=json | jq -r '[.[] | select(.snaptime) | (.snaptime | tonumber | todate[:10])] | @csv');
                tabledata="$tabledata$node!$vmid!$vmname!$snapdates\n"
        else
                pvesh get /nodes/${node}/qemu/${vmid}/snapshot --output-format json-pretty | jq "${filter}"
        fi
done

if [ -n "$text" ]; then
        printf "%b" "$tabledata" | column -t -s "!" -N "Node,VMID,VM Name,Snapshot Dates" -T "Node","VM Name" -W "Snapshot Dates"
fi

rm "${tmpfile}"
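As a sketch of the sort of automated follow-up the default JSON output is intended for, here’s one way to pick out snapshots older than a cutoff date with jq. The field names match the jq filter used in the script above, and the sample objects below stand in for real ./snapshots.sh -p MYCEPHPOOL output (ISO dates compare correctly as plain strings):

```shell
#!/bin/bash
# Sample objects standing in for real script output:
sample='{"name":"pre-upgrade","snaptime":"2024-06-07","vmname":"cpanel1","vmid":"1033"}
{"name":"nightly","snaptime":"2025-12-01","vmname":"fw0","vmid":"121"}'

cutoff="2024-12-31"

# jq happily consumes a stream of objects (no array wrapper needed);
# print vmid, VM name, snapshot name and date for anything pre-cutoff:
echo "$sample" | jq -r --arg cutoff "$cutoff" \
        'select(.snaptime < $cutoff) | [.vmid, .vmname, .name, .snaptime] | @tsv'
```

In real use you would pipe the script’s output straight in, then feed the result to whatever deletion or notification step suits your environment.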

Mount a Proxmox Backup Server using a non-standard TCP port

Let’s say, for the sake of argument, you have multiple PBS hosts behind a NAT router, and for whatever reason you’re not going to, or don’t need to, run a VPN between every PVE host that needs to push backups to a PBS behind that NAT router.

By default, PBS runs on port tcp/8007 and is architected such that PVE hosts “push” (connect) to the PBS host.

If you want to mount a PBS backup location using a port other than tcp/8007, you will need to use the command line on the PVE host to do so.

Hypothetical Scenario

Let’s say we have a PBS host (simply called “pbs”) that is accessible – using a NAT port forward – via the IP 100.64.100.10 using port tcp/8099.

On that PBS host I have a datastore named NVME0, and I have created a namespace called “mynamespace”.

The PBS host fingerprint (viewable at Configuration > Certificates > and double click the cert, or Datastore > NVME0 > “Show Connection Information”) is a0:b1:c2:d3:e4:f5:a6:b7:c8:d9:e0:d1:e2:f3:a4:b5:c6:d7:e8:f9:a0:b1:c2:d3:e4:f5:a6:b7:c8:d9:e0:f1.

On that PBS host, I have created a user “myuser” with an api token named “backups” which gave me the token secret “12345678-1234-1234-abcd-1a2b3c4d5e6f” – and granted permissions to the namespace mynamespace on the datastore NVME0.

I want the PBS storage to appear as “pbs-NVME0-mynamespace” on the PVE interface.

Mounting your non-standard PBS storage

Using a root shell on the PVE host you want to mount the PBS storage on, use pvesm like so:

pvesm add pbs pbs-NVME0-mynamespace \
--fingerprint "a0:b1:c2:d3:e4:f5:a6:b7:c8:d9:e0:d1:e2:f3:a4:b5:c6:d7:e8:f9:a0:b1:c2:d3:e4:f5:a6:b7:c8:d9:e0:f1" \
--server 100.64.100.10 \
--port 8099 \
--datastore NVME0 \
--namespace mynamespace \
--username myuser@pbs\!backups \
--password 12345678-1234-1234-abcd-1a2b3c4d5e6f

(NB: all the elements you should change to suit your environment are in bold, and pay special attention to the use of a backslash “\” before the “!” token delimiter given for the username to prevent “!” being interpreted as a special character by the bash shell – you will need to add the backslash in to your token username yourself)
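As a rough way to check the command worked, a successful pvesm add should leave an entry in /etc/pve/storage.cfg on the PVE host looking something like the below (exact option order may vary, and the password is not stored here – it goes into a separate file under /etc/pve/priv/):

```
pbs: pbs-NVME0-mynamespace
        datastore NVME0
        server 100.64.100.10
        port 8099
        username myuser@pbs!backups
        namespace mynamespace
        fingerprint a0:b1:c2:d3:e4:f5:a6:b7:c8:d9:e0:d1:e2:f3:a4:b5:c6:d7:e8:f9:a0:b1:c2:d3:e4:f5:a6:b7:c8:d9:e0:f1
```

Note that inside the config file the “!” in the token username appears unescaped – the backslash was only ever there to protect it from the shell.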

Grant a Proxmox Backup Server user and API token access only to a specific namespace

If, like us, you have multiple namespaces on a single PBS instance, you will want to be able to create user and token rights that grant access only to the specific namespace that token actually needs, in order to properly follow the principle of least access.

Once you have created a user and the API token for that user you’re going to use to authenticate with, you need to create the permissions to grant access only to the target namespace.

Let’s say you have a Datastore named “NVME0”. The user and token will need (non-propagated!) DatastoreAudit on the Datastore itself:

As will their token:

You then need to add DatastoreBackup on the namespace. You will have to type the namespace in manually after the /datastore/NVME0 path, so if your namespace was called.. “namespace”, then the permissions would be granted on /datastore/NVME0/namespace:

You’re now ready to mount your namespace “namespace” directly on your PVE host using your API token.

(It’s probably worth mentioning that these permissions will *only* give the PVE permissions to write new backups and restore from existing backups, but not to delete/prune backups that are on the PBS. We use scripts / policy on the PBS itself for deleting backups to prevent an attacker that gets elevation / VM escape on the PVE cluster from being able to wipe the backups on the PBS systems, which run on separate hardware. If you are in an environment where this isn’t as important, you might grant more than “DatastoreBackup” on /datastore/NVME0/namespace to allow pruning/deletion to be managed directly from the PVE interface).

Migrate a VM between separate Proxmox hosts/clusters

You want to move a VM between cluster A and cluster B, or standalone host A to standalone host B?

As of PVE 7.3:

qm remote-migrate

Now, this is documented here, but the documentation on how to specify the API token argument is honestly not at all clear. So here’s a quick howto page with how to use the command and some gotchas you might trip over on the way like I did.

Make an API token on the target host:

Datacenter -> Permissions -> API Tokens

I have not researched what the minimum privileges are, and so here am going with a non-privilege-separated root API key.

“Token ID” is just any text string. I have just used TOKENID in this screenshot to match my notes below.
You should consider setting an expiry date so that you can’t accidentally leave a ‘forever key’ with full root access enabled.

Proxmox will then present you with what I guess you would call your full token ID (user and Token ID from the previous screen combined together) and secret:

Then on the shell of the source host, you do the below

NB: the “apitoken” value you have to provide to the CLI tool should take the form of an entire HTTP header for whatever reason, and you combine the user, token id and secret as shown below (Bits in bold are what you need to set for your environment – I have constructed the APIToken below to match the screenshot above so you can see where the elements came from).

qm remote-migrate SOURCEVMID TARGETVMID apitoken='Authorization: PVEAPIToken=root@pam!TOKENID=c362d275-5e68-4482-a4c1-a0114c2ea408',host=TARGETIP,port=8006,fingerprint='HOSTFINGERPRINT' --target-bridge=vmbr0 --target-storage=ZFS-Mirror --online=true
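To make the structure of that apitoken value explicit, here is how the pieces from the token creation screen combine (the values are the hypothetical ones used in this example, not anything you should copy verbatim):

```shell
#!/bin/bash
# The pieces Proxmox shows you when creating the token:
user='root@pam'                                # the user the token belongs to
tokenid='TOKENID'                              # the "Token ID" text string
secret='c362d275-5e68-4482-a4c1-a0114c2ea408'  # the generated secret

# qm remote-migrate wants the whole thing shaped like an HTTP header;
# inside a script the '!' needs no escaping:
apitoken="Authorization: PVEAPIToken=${user}!${tokenid}=${secret}"
echo "$apitoken"
# → Authorization: PVEAPIToken=root@pam!TOKENID=c362d275-5e68-4482-a4c1-a0114c2ea408
```

Quoting the whole value when you pass it on the command line (as in the example above) keeps the spaces and “!” away from the shell.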

NB: I am doing an online migration because, as of writing (Proxmox 8.2.2):

  • Ceph source does not support offline migration, period (ERROR: no export formats for 'SOURCESTORAGE:vm-SOURCEVMID-disk-0' - check storage plugin support!), and
  • when I tried instead to use an NFS volume as the source, I found that a ZFS target does not support offline migration unless it is from a ZFS source: (ERROR: migration aborted (duration 00:00:01): error - tunnel command '{"export_formats":"qcow2+size","format":"qcow2","volname":"vm-SOURCEVMID-disk-0.qcow2","migration_snapshot":0,"storage":"local-lvm","allow_rename":"1","with_snapshots":1,"cmd":"disk-import"}' failed - failed to handle 'disk-import' command - unsupported format 'qcow2' for storage type lvmthin)
  • while I suppose I probably could have exported from NFS to LVM or another NFS store on the target, my default local-lvm storage volume on the target host does not have enough storage for the VM to go on there, all of the storage is tied up in the ZFS-Mirror volume.

Then I hit a new problem:

ERROR: online migrate failure - error - tunnel command '{"migrate_opts":{"remote_node":"TARGETVMHOST","type":"websocket","spice_ticket":null,"migratedfrom":"SOURCEVMHOST","nbd":{"scsi0":{"drivestr":"TARGETSTORAGE:vm-TARGETVMID-disk-0,aio=native,cache=writeback,discard=on,format=raw,iothread=1,size=32G,ssd=1","volid":"TARGETSTORAGE:vm-TARGETVMID-disk-0","success":true}},"storagemap":{"default":"TARGETSTORAGE"},"network":null,"nbd_proto_version":1},"start_params":{"skiplock":1,"forcemachine":"pc-i440fx-9.0+pve0","statefile":"unix","forcecpu":null},"cmd":"start"}' failed - failed to handle 'start' command - start failed: QEMU exited with code 1

Not very descriptive, but fortunately you can get the /reason/ qemu exited with code 1 from the target host – checking the output of the ‘qmtunnel’ task in the task history on the target host, I see that the combination of disk options which are set on my source ceph storage are not valid with the target ZFS storage, and so the task is aborting:

QEMU: kvm: -drive file=/dev/zvol/TARGETSTORAGE/vm-TARGETVMID-disk-0,if=none,id=drive-scsi0,cache=writeback,aio=native,discard=on,format=raw,detect-zeroes=unmap: aio=native was specified, but it requires cache.direct=on, which was not specified.

To fix this, I switched aio back to the default io_uring on the source, restarted the VM to make that take effect, and then restarted the migrate.

You see the output of the copy progress on the shell window as it runs, but it also shows up in the Proxmox web UI as a regular migrate task which you can monitor through the task viewer as normal:

Once the task is done, if you did not specify --delete=true then you will need to issue qm unlock SOURCEVMID to unlock the VM on the source host in order to be able to delete it.

VCSA “User name and password are required” (Expired STS Certificates)

(also, Replacing Expired STS Certificates, Replacing Expired VMWare Service Certificates)

If your VMWare vCenter Server Appliance (VCSA) (at least versions 6.0, 6.5, 6.7, 7.0) web interface is suddenly telling you that “User name and password are required” no matter what credentials you fill out, then it’s quite likely your STS certificates are expired.

Check out the VMWare KB articles on checking and replacing these certificates, which, IMO – given they are self-signed in nearly every environment – should have been replacing themselves automatically since the invention of VCSA but, for some reason, have not.

The TL;DR is this; (NB that these instructions are from the above linked articles, which at time of writing explicitly state they are for 6.0, 6.5, 6.7 and 7.0 – do not blindly attempt this stuff on newer VCSA versions)

  1. SSH to your VCSA as root
    (you did make a note of that password, right?)
  2. If you’re presented with the appliance shell (The prompt says “Command>“) rather than a bash prompt, type ‘shell‘ and press enter to get into bash.
    If you got a bash prompt right away, you can skip to step 4
  3. In order to SCP scripts to your VCSA, you’ll need to change the shell to regular bash instead of the appliance shell;
    root@VCSA# chsh -s /bin/bash root
  4. from your device, scp checksts.py to your VCSA’s tmp dir;
    you@YourPC$ scp checksts.py root@YOURVCSA:/tmp
  5. on the VCSA, execute the checksts script to see if your STS certificates are expired;
    root@VCSA# python /tmp/checksts.py
  6. If your STS certs are expired, SCP fixsts.sh from your device to your VCSA’s tmp dir;
    you@YourPC$ scp fixsts.sh root@YOURVCSA:/tmp
  7. on the VCSA, execute the fixsts.sh script to generate new self-signed STS certificates and install them;
    root@VCSA# bash /tmp/fixsts.sh
    (you will be prompted for the password for administrator@YOURSSODOMAIN – NB that if you make a typo when entering it in this script you will need to re-run the script, backspace does not work here)
  8. Restart the many services on your VCSA (each of these will likely take a while to complete, be patient);
    root@VCSA# service-control --stop --all
    root@VCSA# service-control --start --all
    If you do NOT get errors about services failing to start, skip to step 12.
  9. If you get errors about services not being able to start (timed out, crashed on start, etc), it is likely that your appliance / service certificates have also expired. You can run the following one-liner to output all of the certificates and their expiry dates to see if their expiry dates are in the past;
    root@VCSA# for i in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list); do echo STORE $i; sudo /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $i --text | egrep "Alias|Not After"; done
  10. If you have expired service certificates, run the VCSA certificate manager;
    root@VCSA# /usr/lib/vmware-vmca/bin/certificate-manager
  11. Choose option 4 or 8 – they both do the same thing, except in the event the services won’t start after replacing the certs, option 4 will roll back certificates afterwards (which will of course, under the current circumstances, likely still leave you with a broken state anyway).
  12. If you had to chsh to /bin/bash and you now want to switch it back to the ‘factory supplied’ appliance shell;
    root@VCSA# chsh -s /bin/appliancesh root

You should now be able to access and log in to your VCSA as normal – NB that if you’ve pinned the old service / CA certificates anywhere you may need to remove/update those pins.

Please Tell your MP to vote against the passage of Police, Crime, Sentencing and Courts Bill 2021

I’m not going to go into the whys and wherefores here of why the above bill is bad; there’s plenty of resources on the Internet for that already.

I was asked for a text version of the email I wrote to my MP so that others might find it easier to email their MP even if they were short on time or free hands.

My partner also wrote her own email which covers some of the points I omitted – such as women’s suffrage only being won through the – doubtless “annoying” at the time – protests of many women, and the appalling state of our criminal justice system today meaning that regulatory reform for headlines is ultimately meaningless anyway.

I provide both for you here to make it easier to rework into your own email to your MP, which you must send asap given that the second reading of this bill, published less than a week ago, is tomorrow (the 15th of March).

PLEASE REMEMBER: email *your* MP, and include at the bottom your name and postal address so that they can clearly see you are their constituent, otherwise your email will likely be disregarded.

It is important that wherever possible you adjust these to write personally about why this is important to you rather than just cutting and pasting our entire emails. See the RESULTS guide: How to write to your MP | RESULTS UK

V’s email:

Dear SIR / MADAM,

I write to you to register my protest against the above bill, in particular the arbitrary, ill-defined and disproportionate controls on the right to protest. 

I am deeply concerned by the thought that my and my fellow citizens’ rights to protest could be arbitrarily curtailed on the basis that a protest makes noise or causes disruption. The existing legislative framework has not been proven to be faulty or flawed; it has allowed demonstrations on all manner of topics, and I’ve heard and seen arguments that have altered my way of thinking. This is the entire point of a demonstration or protest – to be heard, for my voice and my argument to be brought to the attention of others, so that they can decide whether I have a point and whether they wish to agree and support it. My right of suffrage stems from such acts of protest and demonstration, and I am now using my voice to ask that you intervene to prevent this manifestly unjust intervention.

The outrageous scenes of last night’s policing of the vigil in Clapham contrasted against those of the football fans in Scotland earlier in the week have focussed my mind on what many minorities in our country already know to be true: the police and state scarcely need any more controls or power to curtail ordinary people making their feelings known.  Your government have been clear that any “unpalatable” opinions from minorities are to be silenced. This must stop. There was no public health threat last night, not until the police acted. This was inexcusable and must not be repeated. 

The bill as proposed also includes sentencing changes, including changes that could make damaging statues carry heavier sentences than violence against women. I am aghast and astounded that such changes can be rushed through in this bill without proper systematic review to ensure that the protection of humans over property is reflected appropriately in sentencing.

I know that the Conservative rhetoric against anyone voicing concern over the bill and its passing will be that we are soft on crime and want criminals to “get away with it”.  I also know that whatever sentences are set out in this bill are going to be undermined and shortened because of the near-collapse of the criminal justice system due to the chronic underfunding and cuts instigated and pushed by successive Conservative governments. Sentences written into law are meaningless when there are no resources to investigate, gather evidence, prosecute or hear cases. The delays in justice should be a priority for resolution. I urge you to ask the Government to focus on this in the coming months.

I ask you to please vote against the passage of this bill – even if that might mean voting against the whip – until such time as it has been properly scrutinised and revised as necessary so as to maintain our right to protest.

I look forward to your thoughts on this bill and the personal actions that you will take to support women in your constituency to tackle the misogyny that is so deeply engrained in our country. 

Yours faithfully,

YOUR NAME

YOUR HOME ADDRESS

My email:

Dear SIR / MADAM,

I write to you despite knowing there is little upon which we are likely to agree in general when it comes to our politics but the situation is so dire that it would be unacceptable for me not to register my protest against the above bill.

You and I both know very well that no matter how this bill is dressed up, whatever populist vote the home secretary and prime minister are currently chasing, its effect will be inevitable: any subject matter the government of the day declares to be ‘annoying’ will suddenly become prohibited to protest.

The outrageous scenes of last night’s policing of a mere vigil in Clapham threw into sharp relief what many minorities in our country already know to be true: the police and state scarcely need any more controls or power to curtail ordinary people making their feelings known to those who might otherwise choose to ignore them.

Were this legislation tabled in far away countries which we take an almost perverse pleasure in lecturing as to democracy, we could reasonably expect our foreign minister to describe it for what it is – an unacceptably broad and vaguely worded overreach designed to limit people’s democratic right to protest in the name of avoiding “annoyance”, with completely outsized maximum penalties intended to have a chilling effect on the public’s free speech.

That the govt seeks to rush through a ~300 page bill with scarcely any time to digest and properly debate it betrays the fact that they know what they are up to is wrong and ultimately indefensible. The government wishes to dress this up as fulfilling a manifesto commitment – but with a little over 3 years remaining for this government, there is no excuse for this not to be done in a proper and considered manner.

There are many other areas that appear problematic with this bill which I am not qualified to speak on, but on the right to protest alone I ask you to please vote against the passage of this bill – even if that might mean voting against the whip – until such time as it has been properly scrutinised and revised as necessary so as to address its fundamental flaws.

Yours faithfully,

YOUR NAME

YOUR HOME ADDRESS

Disable Mouse in Vim System-wide

For reasons unknown the Vim project decided to switch mouse support *on* by default, and the Debian package maintainer decided to just flow that downstream in Deb 10. It was a… contentious change. I have no idea what other distros are doing. You’ll know you have this if you go to select text in your terminal window and suddenly Vim is in “visual” mode.

Supposedly the Vim project can’t change the default (again) because “people are used to it” (uhhhh).

The issue is that while /etc/vim/vimrc does include /etc/vim/vimrc.local, creating that file apparently either disables every other default option completely (turning off, for example, syntax highlighting), or – if the user doesn’t have their own vimrc – the defaults will load after it (!?) and override your changes.

Mostly I’m happy with the defaults; I usually just want to change this one thing without having to set up a ~/.vimrc in EVERY HOME DIRECTORY.

Anyway, here, as root:

cat > /etc/vim/vimrc.local <<EOF
" Include the defaults.vim so that the existence of this file doesn't stop all the defaults loading completely
source \$VIMRUNTIME/defaults.vim
" Prevent the defaults from loading again later and overriding our changes below
let skip_defaults_vim = 1
" Here's where to unset/alter stuff you do/don't want
set mouse=
EOF
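Conversely, a user who actually *wants* the mouse back can override the system-wide default from their own vimrc, since the per-user file is sourced after the system one. A minimal sketch (the append-if-missing guard is just illustrative):

```shell
# Re-enable the mouse for one user only, leaving the system default alone.
USER_VIMRC="${USER_VIMRC:-$HOME/.vimrc}"
# Append 'set mouse=a' unless the file already contains it.
grep -qs '^set mouse=a' "$USER_VIMRC" || echo 'set mouse=a' >> "$USER_VIMRC"
```

You can confirm which file a setting came from inside Vim with `:verbose set mouse?`.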

pfSense: Suricata Sync Causes XMLRPC Failures and CARP Backup Events

Observed with pfSense 2.4.5p1 and Suricata 5.0.3 (and presumably older versions of both)

Once you enable Suricata config sync, any configuration change takes *ages* to save because syncs basically start failing to complete – eventually falling through to timeouts.

You might start to see synchronisation errors like this on the master (which will get flagged up as notifications):

/rc.filter_synchronize: A communications error occurred while attempting to call XMLRPC method host_firmware_version:

and/or

/suricata/suricata_logs_mgmt.php: A communications error occurred while attempting to call XMLRPC method exec_php:

and/or

/rc.filter_synchronize: New alert found: A communications error occurred while attempting to call XMLRPC method restore_config_section:

You might also notice that CARP is – to put it mildly – freaking out:

Carp backup event
Carp backup event
Carp backup event

And that OVPN / other packages likewise are having problems, stopping/starting/restarting because pfSense thinks the WAN IP has changed as the CARP state flaps back and forth, with the system logging stuff like:

/rc.newwanip: rc.newwanip: Info: starting on ovpns1.
/rc.newwanip: rc.newwanip: on (IP address: ) (interface: []) (real interface: ovpns1).
/rc.newwanip: rc.newwanip called with empty interface.
/rc.newwanip: pfSense package system has detected an IP change or dynamic WAN reconnection - -> - Restarting packages.

The issue is likely that you have promiscuous mode enabled on your Suricata interfaces (because it is the *default* to enable it).

The kernel toggling promiscuous mode off and on as Suricata reloads during sync causes carnage for the sync TCP connection, CARP, and in turn, everything else.

Promiscuous mode should not be required if you are using Suricata in-line at layer 3 (i.e. on the firewall which is hosting your default gateway which is probably why CARP is running to begin with).

Simply disable promiscuous mode (at the very *least* on any interfaces you’re running CARP on – which is probably all of them in an HA setup) and you’ll find things behave much better, and config syncs complete nice and fast again.
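If you want to double-check from a shell which interfaces are actually still in promiscuous mode before and after the change, something like this works – a sketch that just parses standard BSD `ifconfig` output, with the interface names in the example being placeholders:

```shell
# Print the names of interfaces whose flags line contains PROMISC.
# Feed it `ifconfig` output on stdin, e.g.:  ifconfig | check_promisc
check_promisc() {
  awk -F: '/flags=.*PROMISC/ { print $1 }'
}
```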