All posts by Phil

Disable Mouse in Vim System-wide

For reasons unknown, the Vim project decided to switch mouse support *on* by default, and the Debian package maintainer decided to flow that downstream into Debian 10. It was a… contentious change. I have no idea what other distros are doing. You’ll know you have this if you go to select text in your terminal window and suddenly Vim is in “visual” mode.

Supposedly the Vim project can’t change the default (again) because “people are used to it” (uhhhh).

The issue is that although /etc/vim/vimrc sources /etc/vim/vimrc.local, creating that file either disables every other default option completely (turning off, for example, syntax highlighting) or, if the user doesn’t have their own vimrc, the defaults will load *after* it (!?) and override your changes.

Mostly I’m happy with the defaults; I usually just want to change this one thing without having to set up a ~/.vimrc in EVERY HOME DIRECTORY.

Anyway, here, as root:

cat > /etc/vim/vimrc.local <<EOF
" Include the defaults.vim so that the existence of this file doesn't stop all the defaults loading completely
source \$VIMRUNTIME/defaults.vim
" Prevent the defaults from loading again later and overriding our changes below
let skip_defaults_vim = 1
" Here's where to unset/alter stuff you do/don't want
set mouse=
EOF
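One detail worth noting in that heredoc: the backslash in \$VIMRUNTIME stops the shell expanding the variable, so the literal string $VIMRUNTIME lands in the file for Vim to expand later. A quick way to sanity-check the escaping (writing to a throwaway file in /tmp rather than /etc/vim for demonstration):

```shell
# Write the same heredoc to a scratch file and confirm the escaping survived.
cat > /tmp/vimrc.local.test <<EOF
source \$VIMRUNTIME/defaults.vim
let skip_defaults_vim = 1
set mouse=
EOF
# Should print the source line with a literal, unexpanded $VIMRUNTIME:
grep 'VIMRUNTIME' /tmp/vimrc.local.test
```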

pfSense: Suricata Sync Causes XMLRPC Failures and CARP Backup Events

Observed with pfSense 2.4.5p1 and Suricata 5.0.3 (and presumably older versions of both)

Once you enable Suricata config sync, any configuration change takes *ages* to save, because syncs basically start failing to complete – eventually falling through to timeouts.

You might start to see synchronisation errors like this on the master (which will get flagged up as notifications):

/rc.filter_synchronize: A communications error occurred while attempting to call XMLRPC method host_firmware_version:

and/or

/suricata/suricata_logs_mgmt.php: A communications error occurred while attempting to call XMLRPC method exec_php:

and/or

/rc.filter_synchronize: New alert found: A communications error occurred while attempting to call XMLRPC method restore_config_section:

You might also notice that CARP is – to put it mildly – freaking out:

Carp backup event
Carp backup event
Carp backup event

You may also find that OpenVPN and other packages are having problems, stopping/starting/restarting because the system thinks the WAN IP has changed as the CARP state flaps back and forth, logging things like:

/rc.newwanip: rc.newwanip: Info: starting on ovpns1.
/rc.newwanip: rc.newwanip: on (IP address: ) (interface: []) (real interface: ovpns1).
/rc.newwanip: rc.newwanip called with empty interface.
/rc.newwanip: pfSense package system has detected an IP change or dynamic WAN reconnection - -> - Restarting packages.

The issue is likely that you have promiscuous mode enabled on your Suricata interfaces (because it is the *default* to enable it).

The kernel toggling promiscuous mode off and on as Suricata reloads during a sync causes carnage with the sync’s TCP connection, with CARP, and in turn with everything else.

Promiscuous mode should not be required if you are using Suricata in-line at layer 3 (i.e. on the firewall hosting your default gateway – which is probably why CARP is running to begin with).

Simply disable promiscuous mode (at the very *least* on any interfaces you’re running CARP on – which in an HA setup is probably all of them) and you’ll find things behave much better, with config syncs completing nice and fast again.

Creating a VRF and Running Services Inside it on Linux

This was remarkably difficult to find a simple explanation for on one page, and whilst it is not all that complex to achieve if you understand all of the component parts, it is sometimes useful to have a complete explanation – so hopefully someone will find this howto useful.

There are a number of reasons to have one or more VRFs (VRF stands for Virtual Routing and Forwarding) added to a system – researching and discussing the *why* of doing this is not in scope for this article – I’m going to assume you know why you’d want to do this.

If you somehow don’t really know what a VRF is beyond suspecting it’s what you want: in essence, each VRF has its own routing table, and this allows you to partition a system – in networking terms – to serve two or more entirely different networks, each with its own routing table (e.g. each can have its own default route, and its own routes to what would otherwise be overlapping IP ranges).

NB: It’s important to note that the work you’re doing here can break your existing management access, if you’re already relying on the interface you want to move into the VRF to access the server in the first place. Ensure you can access the server over an interface OTHER than the one you want to move into the VRF – be it over a different NIC or using the local console / IPMI / ILO / DRAC etc.

Example environment

Let’s say you have a Linux box with two interfaces, eth0 and eth1 (even if systemd’s “predictable” naming is more common now).

eth0 carries your production traffic. This has a default gateway to reach the Internet, or whatever production network you have, and its configuration is ultimately irrelevant here.

eth1 faces your management network. For demonstration purposes, our IP is 10.0.0.2/24, the default gateway we want to use for management traffic will be 10.0.0.1, and this is the interface you want to be in a separate VRF to completely segment out your management traffic.

All of the below instruction takes place as root – prepend commands with sudo if you prefer to sudo.

How do I create a VRF?

In Linux, VRFs have a name and an associated routing table number. Let’s say we want to create a VRF called Mgmt-VRF using table number 2 (the name and number are up to you – the number just needs to be unused, and if you don’t currently have any VRFs then 2 will be fine), and set it “up” to actually enable it.

ip link add Mgmt-VRF type vrf table 2
ip link set dev Mgmt-VRF up

Verify your VRF exists

ip vrf show

Which should show you:

Name              Table
-----------------------
Mgmt-VRF             2

Add your interface(s) to the new VRF (this will break your connection if you’re currently using them – exercise caution!). Here we add eth1 to Mgmt-VRF:

ip link set dev eth1 master Mgmt-VRF

You can now add routes to your new VRF. Here we’re adding the default gateway of 10.0.0.1 to the routing table for our new VRF:

ip route add table 2 0.0.0.0/0 via 10.0.0.1

You can then validate that the default route exists in that table:

ip route show table 2

You should see something like:

default via 10.0.0.1 dev eth1
broadcast 10.0.0.0 dev eth1 proto kernel scope link src 10.0.0.2
10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.2
local 10.0.0.2 dev eth1 proto kernel scope host src 10.0.0.2

At this point you could add any more static routes your new VRF might require, and you’re essentially done with configuring the VRF. The interface eth1 now exists in our new VRF.
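Note that the ip commands above do not survive a reboot; how you persist them depends on your distro. As a sketch, on a Debian-style system using classic ifupdown you could fold the same commands into /etc/network/interfaces with pre-up/post-up hooks – the stanza below is an assumption about your setup, not something from a live config, so adapt it accordingly:

```
auto Mgmt-VRF
iface Mgmt-VRF inet manual
        pre-up ip link add Mgmt-VRF type vrf table 2 || true
        up ip link set dev Mgmt-VRF up

auto eth1
iface eth1 inet static
        address 10.0.0.2/24
        post-up ip link set dev eth1 master Mgmt-VRF
        post-up ip route add table 2 0.0.0.0/0 via 10.0.0.1
```

The `|| true` guard just stops ifup failing if the VRF device already exists.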

Okay, how do I *use* the VRF?

Any tinkering will quickly reveal that your services which were bound to (or accessible over) the IP on eth1 don’t work anymore, at least if they only bind by IP and not by device.

You’ll also notice that when you use ping or traceroute or whatever it’ll run with the default routing table – even if you set the source IP to 10.0.0.2, it won’t work. This is because, like sshd, ping (and bash, and anything else) will run in the context of the default VRF unless you specifically request otherwise. Those processes will use the default routing table and will only have access to listen to IPs that are on interfaces also in that same VRF.

If a process or service can be configured to bind to an interface, however, it will operate in the VRF that the interface belongs to. A good example of a command with native support for binding to interfaces rather than IPs is traceroute:

traceroute -i eth1 8.8.8.8

But if you just want a generic way to execute commands inside a particular VRF, ip vrf exec makes it fairly easy. Here is the same traceroute without the need to specify an interface:

ip vrf exec Mgmt-VRF traceroute 8.8.8.8

If you’re going to be doing a lot of work in a particular VRF, you will probably find it most convenient to start your preferred shell (e.g. bash) using ip vrf exec – all child processes you start from that shell will also operate in that VRF – then exit the shell once you want to return to the default routing table:

ip vrf exec Mgmt-VRF /bin/bash
# do your work now, eg
traceroute 8.8.8.8
# time to go back to the default routing table
exit

Great, I can run traceroute. But what about my SERVICES?

For Linux distributions running systemd, shifting services to run inside a VRF is actually relatively straightforward.

systemd calls the processes and services under its purview “units”, and has so-called unit files that describe services: how and when (using dependencies and targets) they should be started, and so on.

If you want to run a single instance of a service across all VRFs for some reason this is possible though beyond the scope of this article (look up net.ipv4.tcp_l3mdev_accept and net.ipv4.udp_l3mdev_accept).

Alternatively you might choose to have several copies of the service running, each in a different VRF (make sure they use different sockets/pipes/pid files etc.!), which is also beyond the scope of this article. It’s up to you to decide what suits your environment best.

However – if you only want your one existing copy of the service to run in a VRF, you just have to specify the new command that systemd executes in a so-called override file.

You should use override files rather than modifying the main unit file because, in general, the distribution-provided package for your service will not ship an override file, so package upgrades won’t collide with your modified copy and your modifications will be preserved. That said, you will have to keep an eye on whether the packaged ExecStart command changes in a breaking way between releases, and update your override to match (check this first if a service you have overridden starts misbehaving after package updates!).

First you need to look in the unit file to get the current command that is executed to start the service:

systemctl cat sshd

You should see something like this (taken from a Debian 10 x64 system):

# /lib/systemd/system/ssh.service
[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run

[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/usr/sbin/sshd -t
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartPreventExitStatus=255
Type=notify
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
Alias=sshd.service

The key configuration variable here is “ExecStart”. We need to modify ExecStart so that our sshd starts via ip vrf exec. Do so by creating (or opening, if you already have one!) the override file for sshd:

systemctl edit sshd

This will dump you into the default editor – probably nano unless you changed it – with either your existing override file if you have one, or a blank one if you don’t.

Due to the way systemd sanity-checks your unit files, you have to deliberately *unset* ExecStart by first setting it to nothing, then specify the new ExecStart – which, as you can see, is the default ExecStart entry but with

/bin/ip vrf exec Mgmt-VRF

prepended to the start. It’s important to specify the full path to the ip binary: when systemd executes this command, it will more likely than not do so without any PATH variable set, or with a different one from the one your shell environment uses. Being explicit with paths ensures everything works as desired. (This is generally a good habit to get into.)

If you have a blank file, then in our example for sshd all you need to create is the following:

[Service]
ExecStart=
ExecStart=/bin/ip vrf exec Mgmt-VRF /usr/sbin/sshd -D $SSHD_OPTS

If you don’t have a blank file – well, I expect you know enough about what you’re doing here – but if you are not already unsetting and resetting ExecStart (or don’t have a [Service] section at all) then you can simply follow the above. If you’re already overriding ExecStart, then you should prepend your override with the same /bin/ip vrf exec Mgmt-VRF.

Force systemd to reload the unit files, and restart your service:

systemctl daemon-reload
systemctl restart sshd

That should be it – sshd is now running inside your new VRF. If you have a relatively up-to-date systemd build it will natively understand VRFs, and so can show that the service is running inside that VRF (see the CGroup section). You can also see that it is using our override file – non-overridden services will not have a “Drop-In” section:

systemctl status sshd
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/ssh.service.d
           └─override.conf
   Active: active (running) since Wed 2020-08-12 09:38:22 BST; 7h ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 29107 (sshd)
    Tasks: 1 (limit: 4689)
   Memory: 2.8M
   CGroup: /system.slice/ssh.service
           └─vrf
             └─Mgmt-VRF
               └─29107 /usr/sbin/sshd -D

Aug 12 09:38:22 rt3 systemd[1]: Starting OpenBSD Secure Shell server...
Aug 12 09:38:22 rt3 sshd[29107]: Server listening on 10.0.0.2 port 22.
Aug 12 09:38:22 rt3 systemd[1]: Started OpenBSD Secure Shell server.
Aug 12 09:38:50 rt3 sshd[29116]: Accepted password for philb from 192.168.0.2 port 59159 ssh2
Aug 12 09:38:50 rt3 sshd[29116]: pam_unix(sshd:session): session opened for user philb by (uid=0)

Can’t connect?

If you’ve done all this, restarted your service, systemd confirms it’s running in the VRF, and you still can’t connect to it – make sure your service is not trying to bind to an IP that is on an interface in a different VRF to the one in which you started it. Remember that services can only successfully use local IPs that are in the same VRF, even if they start and give the impression of working.

Compiling and using mk_livestatus on Nagios4 on Debian 10/Buster

Prerequisites (other than the nagios4 packages, of course!):

# apt install rrdtool-dev librrd-dev librrd8 libboost-dev libboost-system-dev

Get the latest source from https://checkmk.com/download-source.php – at the time of writing, https://checkmk.com/support/1.5.0p23/mk-livestatus-1.5.0p23.tar.gz – and unpack it:

# wget https://checkmk.com/support/1.5.0p23/mk-livestatus-1.5.0p23.tar.gz
# tar -zxvf mk-livestatus-1.5.0p23.tar.gz
# cd mk-livestatus-1.5.0p23

Configure for nagios4, compile and install:

# ./configure --with-nagios4 --prefix=/usr/local/nagios && make install

Enable the broker module in Nagios4 by adding the following to, e.g., your nagios.cfg. First, make sure this is set so that all events are sent to the broker:

event_broker_options=-1

Then configure the broker_module – here, telling it to create the socket for livestatus at /var/lib/nagios4/rw/livestatus:

broker_module=/usr/local/lib/mk-livestatus/livestatus.o /var/lib/nagios4/rw/livestatus

Now you can restart Nagios4 and test that the livestatus socket is working:

# systemctl restart nagios4

# echo "GET status" | /usr/local/bin/unixcat /var/lib/nagios4/rw/livestatus

And you should get something like this:

accept_passive_host_checks;accept_passive_service_checks;cached_log_messages;check_external_commands;check_host_freshness;check_service_freshness;connections;connections_rate;enable_event_handlers;enable_flap_detection;enable_notifications;execute_host_checks;execute_service_checks;external_command_buffer_max;external_command_buffer_slots;external_command_buffer_usage;external_commands;external_commands_rate;forks;forks_rate;host_checks;host_checks_rate;interval_length;last_command_check;last_log_rotation;livecheck_overflows;livecheck_overflows_rate;livechecks;livechecks_rate;livestatus_active_connections;livestatus_queued_connections;livestatus_threads;livestatus_version;log_messages;log_messages_rate;mk_inventory_last;nagios_pid;neb_callbacks;neb_callbacks_rate;num_hosts;num_services;obsess_over_hosts;obsess_over_services;process_performance_data;program_start;program_version;requests;requests_rate;service_checks;service_checks_rate
1;1;0;1;0;1;1;0;1;1;1;1;1;0;0;0;0;0;0;0;57;0.416507;60;0;0;0;0;0;0;1;0;10;1.5.0p23;45;0.0199066;0;4310;1651;11.9169;83;467;0;0;0;1581514195;4.3.4;1;0;348;3.15754
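That header line followed by a value line is awkward to read; a small awk one-liner can transpose it into one “column: value” pair per line. The printf below just feeds in an abbreviated, made-up sample of the response – in practice you would pipe the real output of `echo "GET status" | /usr/local/bin/unixcat /var/lib/nagios4/rw/livestatus` through the same awk command:

```shell
# Transpose livestatus' two-line "GET status" response: line 1 holds the
# column names, line 2 the values; print them paired up, one per line.
printf 'nagios_pid;num_hosts;num_services\n4310;83;467\n' | \
  awk -F';' 'NR==1 { n = split($0, h, ";") }
             NR==2 { for (i = 1; i <= n; i++) print h[i] ": " $i }'
```

Which prints:

nagios_pid: 4310
num_hosts: 83
num_services: 467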

Installing (and Booting) Linux on/FROM Intel vROC NVMe

Just remember to disable Secure Boot (at least, Supermicro’s guide to vROC says that vROC is not compatible with Secure Boot), and ensure that you boot your O/S installer in (U)EFI mode – and keep booting in (U)EFI mode afterwards.

Otherwise, expect problems like the CentOS 7 installer complaining that something went wrong as the installer GUI starts (this seems mostly to stem from the installer not seeing the vROC RAID device while still seeing the member NVMe devices, and being confused by the mdraid-esque nature of vROC RAID sets).

Once you boot the CentOS installer in EFI mode, you’ll be able to see and install to your “BIOS RAID” device. The same will apply to standalone NVMe drives – which on most boards will only work if everything is done in EFI mode.

Nextcloud “Could not load at least one of your enabled two-factor auth methods” after upgrade

It seems that Nextcloud upgrades have a propensity to break 2FA provider apps – this apparently bit people going to NC15, but in our case it got us after we upgraded to NC16.

As far as I can tell, what happens is that you upgrade to the latest Nextcloud before a compatible version of your 2FA provider “apps” is available (why you would release without 2FA support is beyond me), and so the provider apps get disabled.

When you try to log in, all you’ll see is the “Could not load at least one of your enabled two-factor auth methods” error.

To fix this, run the following in the Nextcloud web root. These sudo commands have to be run as the UID that owns the config file, so if in your environment you aren’t running Nextcloud under www-data, adjust the sudo commands as necessary to specify the correct user.

NB: I’m running these sudo commands as root, so I don’t need any sudo pre-configuration as such to allow me to run them as www-data.

First, identify which 2FA providers your affected user has configured – so, for a user called “adminusername”:

# sudo -u www-data php occ twofactorauth:state adminusername
Two-factor authentication is enabled for user adminusername

Enabled providers:
- totp
- u2f
Disabled providers:
- backup_codes

So, what we see here is that this user had both TOTP and U2F configured (but, tsk, no backup codes – in our experience twofactor_backupcodes was still working, so a user with backup codes would still have been able to log in – though you’d still have to know what to do to fix your install!).

Check to see if your modules are missing:

# sudo -u www-data php occ app:list | grep twofactor
  - twofactor_backupcodes: 1.5.0

Uh-oh, no twofactor_totp OR twofactor_u2f.

Make sure your nextcloud apps are up to date:

# sudo -u www-data php occ app:update --all

Then re-enable your twofactor provider apps, so for “totp” and “u2f”, you want:

# sudo -u www-data php occ app:enable twofactor_totp
twofactor_totp enabled

# sudo -u www-data php occ app:enable twofactor_u2f
twofactor_u2f enabled

Now you should be able to log back in as normal with your 2FA.

Configuring TACACS+ authentication and accounting on IOS 15

Just the bare minimum:

! you probably have this already; if you don't, you should read up on it first
aaa new-model

! use local users, and then all tacacs+ servers, to authenticate logins 
aaa authentication login default local group tacacs+ 

! give enable to tacacs+ users 
aaa authentication enable default group tacacs+ 

! send accounting records for when logins ('exec mode') begin and end 
aaa accounting exec default start-stop group tacacs+
 
! send accounting records for config commands 
aaa accounting commands 15 default stop-only group tacacs+ 

! send accounting records for outgoing connections made to other systems 
aaa accounting connection default start-stop group tacacs+ 

! send system event account records (reloads etc) 
aaa accounting system default start-stop group tacacs+ 

! OPTIONAL: On a router with multiple interfaces that could be chosen to
! reach the TACACS server it is best to specify one; we use Loopback addresses
! for iBGP peering, so it makes sense to use them here too
ip tacacs source-interface Loopback0 

! define at least one tacacs server with some friendly $SERVERNAME 
tacacs server $SERVERNAME
   ! Set the TACACS+ server's ipv4 $ADDRESS (or ipv6, adjust accordingly)
   address ipv4 $ADDRESS
   ! Set the encryption $KEY to match the key configured on the TACACS+ server for this device
   key $KEY
!

Now: BEFORE you log off, open a new session and make sure you can still log in with your original local credentials.

If you can no longer log in after making the above changes, you’ll need to fix that before you disconnect, to prevent locking yourself out.
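You can also exercise the TACACS+ server from the router itself before relying on it. These are standard IOS verification commands, though exact syntax varies a little between releases (the "legacy" keyword, for instance, isn’t present on every build) – substitute a real username and password for the placeholders:

```
! check reachability and statistics for your configured servers
show tacacs

! attempt an authentication against the tacacs+ group with a known user
test aaa group tacacs+ $USERNAME $PASSWORD legacy
```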

vSphere Client 5.1 plugins & search: Could not create SSL\TLS secure channel

If you can’t download vSphere Client 5.1 Plugins (eg vShield), and can’t use the search in the client because of:

An unknown connection error occured. (The request failed due to an SSL Error. (The request was aborted. Could not create SSL\TLS secure channel.)

And https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2114357 is of no help (you already allow all SSL.Versions), and your SSL certs don’t appear to be broken or expired – then you’ve probably been bitten by a recent Windows update that has evidently changed some defaults around the minimum DH key size.

Create the following registry key and everything will start working immediately (you’ll need to re-enable any disabled vSphere Client plugins), as you will once again be permitting 512-bit DH keys.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\Diffie-Hellman]
"ClientMinKeyBitLength"=dword:00000200

You should consider upgrading to newer versions of vSphere, but then if we all sat around doing things as complicated as that, we’d not have time for any actual work, would we.

Replace wildcard SSL certificate on all subdomains on cPanel server

You’ve got a wildcard certificate you’ve been using on your cPanel server, and you’ve created loads of subdomain SSL accounts – only now your wildcard cert is expiring, and there is no trivial way in WHM to replace a wildcard cert and have it replace the cert in use on all the subdomains that share the original certificate.

Fear not: I have a script for that. I have tried to make the comments useful, so I’ll pass no further comment here. The script basically does a two-pair search and replace – on the crt filename, and on the key filename.

Hopefully this is useful for someone.
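As a toy illustration of the two-pair replace the script performs – using a scratch file and made-up filenames rather than real cPanel metadata:

```shell
# Stand-in metadata file with an old crt and key filename:
printf 'crt: wildcard_old.crt\nkey: wildcard_old.key\n' > /tmp/demo_SSL
# The same two-pair search and replace the script applies, one expression
# for the crt filename and one for the key filename:
sed 's/wildcard_old.crt/wildcard_new.crt/g; s/wildcard_old.key/wildcard_new.key/g' /tmp/demo_SSL
```

Which prints the file with both filenames swapped:

crt: wildcard_new.crt
key: wildcard_new.key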

#!/bin/bash
#
# replaceSSL.sh - v0.1 - Phillip Baker, Netcalibre Ltd - phil@lchost.co.uk
#
# WARNING: THIS SCRIPT DIRECTLY MANIPULATES CPANEL METADATA FILES.
# 
# BACK UP /var/cpanel/userdata FIRST IF YOU DECIDE TO USE THIS SCRIPT
# IF SOMETHING GOES WRONG, RESTORE THE /var/cpanel/userdata FILES FROM
# YOUR BACKUP AND CALL /scripts/rebuildinstalledssldb AND YOU SHOULD BE OK
#
# NO WARRANTY, NO GUARANTEES. WORKS FOR ME. MAY NOT WORK FOR YOU.
# DO NOT RUN THIS SCRIPT IF YOU DO NOT UNDERSTAND WHAT IT DOES AND CANNOT
# FIGURE OUT FOR YOURSELF IF IT WILL BREAK SOMETHING.
#
# This script should be run sudo/su as root.
#
# Quickly replace one SSL cert and key file for another already existing on cPanel server
# then rebuild apache config and the ssl.db before restarting apache.
#
# Useful when replacing a wildcard cert on lots of subdomain accounts
#
# Install the new cert on the server using WHM in the usual way on one of the accounts
# (perhaps you install it on the root - example.com - account)
#
# Determine the old cert string you are targeting by looking in a subdomain using
# the old certificate still:
# cat /var/cpanel/userdata/<username>/subdomain.example.com_SSL | grep crt
#
# Will spit out a crt filename that looks a bit like:
# _wildcard_example.com_aad23_12314_124112312_adsfasdfasdfasdfasdf.crt
#
# Determine the new cert string in the same way from the domain you installed the new cert on
# cat /var/cpanel/userdata/<username>/example.com_SSL | grep crt
#
# Do the same for the old / new key (if the key is unchanged, just specify the same string 
# twice)
#
# Then: ./replaceSSL.sh <oldcrt> <newcrt> <oldkey> <newkey>


if [ $# -ne 4 ]; then
    echo "./replaceSSL.sh <oldcrt> <newcrt> <oldkey> <newkey>"
    exit 1
fi

OLDCRT=$1
NEWCRT=$2
OLDKEY=$3
NEWKEY=$4

CRTFOUND=0

# change to the appropriate directory
cd /var/cpanel/userdata/

# For each _SSL file
for i in $(find . -iname '*_SSL')
do
 if grep -q "$OLDCRT" "$i"
 then
  CRTFOUND=1
  echo "$i"
  # copy to an intermediate file, precrt has the original file before the crt is replaced
  cp "$i" "$i.precrt"
  # modify the metadata file to replace the crt filename
  sed "s/$OLDCRT/$NEWCRT/g" "$i.precrt" > "$i"
  # copy to another intermediate file, prekey has the file after the crt was replaced
  # but before the key was replaced
  cp "$i" "$i.prekey"
  # modify the metadata file to replace the key filename
  sed "s/$OLDKEY/$NEWKEY/g" "$i.prekey" > "$i"
 fi
done

if [[ "$CRTFOUND" -eq 1 ]]
then
 # rebuild apache config
 /scripts/rebuildhttpdconf

 # restart apache
 service httpd restart

 # rebuild ssl.db so that the WHM "Manage SSL hosts" section looks accurate
 /scripts/rebuildinstalledssldb

 # Once you're sure it worked
 echo "Pausing so you can check apache is still working and SSL Manager looks sane"
 echo "Press enter if everything is ok, or press ctrl-c to keep *.precrt/*.prekey files"
 echo "for manual inspection to try and figure out what went wrong."
 read

 # clean up
 for i in $(find . -iname '*.precrt')
 do
  rm "$i"
 done
 for i in $(find . -iname '*.prekey')
 do
  rm "$i"
 done
else
 echo "Old cert string not found - no changes made!"
fi

“letsencrypt.sh isn’t renewing my certs!”

There’s been a change at some point to the JSON format that Let’s Encrypt returns challenges in.

If you have an “old” installation of letsencrypt.sh that pre-dates v0.2.0, letsencrypt.sh is probably doing this:

$ ./letsencrypt.sh -c
# INFO: Using main config file /home/letsencrypt/letsencrypt.sh/config.sh
Processing somedomain.netcalibre.net
 + Checking domain name(s) of existing cert... unchanged.
 + Checking expire date of existing cert...
 + Valid till Jun 22 16:24:00 2016 GMT (Less than 30 days). Renewing!
 + Signing domains...
 + Generating signing request...
 + Requesting challenge for somedomain.netcalibre.net...
$

and then it silently exits.

Update it from git and move your config file to the new location:

git pull
mv config.sh config
./letsencrypt.sh -c