Category Archives: Debian

Disable Mouse in Vim System-wide

For reasons unknown the Vim project decided to switch mouse support *on* by default, and the Debian package maintainer decided to just flow that change downstream in Debian 10. It was a… contentious change. I have no idea what other distros are doing. You’ll know you have this if you go to select text in your terminal window and suddenly Vim is in “visual” mode.

Supposedly the Vim project can’t change the default (again) because “people are used to it” (uhhhh).

The issue is that /etc/vim/vimrc does include /etc/vim/vimrc.local, but creating that file apparently either disables every other default option completely (turning off, for example, syntax highlighting), or the defaults load again after it (!?) and override your changes if the user doesn’t have their own vimrc.

Mostly I’m happy with the defaults; I usually just want to change this one thing without having to set up a ~/.vimrc in EVERY HOME DIRECTORY.

Anyway, here, as root:

cat > /etc/vim/vimrc.local <<EOF
" Include the defaults.vim so that the existence of this file doesn't stop all the defaults loading completely
source \$VIMRUNTIME/defaults.vim
" Prevent the defaults from loading again later and overriding our changes below
let skip_defaults_vim = 1
" Here's where to unset/alter stuff you do/don't want
set mouse=
EOF
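
To confirm the change has taken effect, open Vim and run the following – it should report an empty mouse value and that it was last set from /etc/vim/vimrc.local:

:verbose set mouse?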

Creating a VRF and Running Services Inside it on Linux

Edited 2023-05-10 to add: 
Please check the comments at the bottom for a Debian 'interfaces' example and a netplan example provided by generous visitors. 
Netplan surely seems the easiest way to stand up VRFs, which does not particularly come as a surprise to me.
You will still need to configure your services to run inside the VRFs though. I thought it possible that there might be a more elegant way to do this in systemd now, but the existence of this issue against systemd suggests not.

This was remarkably difficult to find a simple explanation for on one page, and whilst it is not all that complex to achieve – if you understand all of the component parts – it is sometimes useful to have a complete explanation in one place. Hopefully someone will find this howto useful.

There are a number of reasons to have one or more VRFs (VRF stands for Virtual Routing and Forwarding) added to a system – researching and discussing the *why* of doing this is not in scope for this article – I’m going to assume you know why you’d want to do this.

If you somehow don’t really know what a VRF is beyond suspecting it’s what you want, in essence each VRF has its own routing table, and this allows you to partition – in networking terms – a system to service two or more entirely different networks, each with its own routing table (eg: each can have its own default route, and its own routes to what would otherwise be overlapping IP ranges).

NB: It’s important to note that the work you’re doing here can break your existing management access, if you’re already relying on the interface you want to move into the VRF to access the server in the first place. Ensure you can access the server over an interface OTHER than the one you want to move into the VRF – be it over a different NIC or using the local console / IPMI / ILO / DRAC etc.

Example environment

Let’s say you have a Linux box with two interfaces, eth0 and eth1 (even if systemd’s “predictable” naming is more common now).

eth0 carries your production traffic. This has a default gateway to reach the Internet, or whatever production network you have, and its configuration is ultimately irrelevant here.

eth1 faces your management network. For demonstration purposes, our IP is 10.0.0.2/24, the default gateway we want to use for management traffic will be 10.0.0.1, and this is the interface you want to be in a separate VRF to completely segment out your management traffic.

All of the below instructions take place as root – prepend commands with sudo if you prefer to sudo.

How do I create a VRF?

In Linux, VRFs have a name and an associated routing table number. Let’s say we want to create a VRF called Mgmt-VRF using table number 2 (the name and number are up to you – I’ve just chosen 2 – the number simply shouldn’t already be in use, and if you don’t currently have any VRFs then 2 will be fine), and set it “up” to actually enable it.

ip link add Mgmt-VRF type vrf table 2
ip link set dev Mgmt-VRF up

Verify your VRF exists

ip vrf show

Which should show you:

Name              Table
-----------------------
Mgmt-VRF             2

Add your interface(s) to the new VRF (This will break your connection if you’re currently using them! Exercise caution!), here we add eth1 to Mgmt-VRF:

ip link set dev eth1 master Mgmt-VRF

You can now add routes to your new VRF like this, here we’re adding the default gateway of 10.0.0.1 to the routing table for our new VRF:

ip route add table 2 0.0.0.0/0 via 10.0.0.1

You can then validate that the default route exists in that table:

ip route show table 2

You should see something like:

default via 10.0.0.1 dev eth1
broadcast 10.0.0.0 dev eth1 proto kernel scope link src 10.0.0.2
10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.2
local 10.0.0.2 dev eth1 proto kernel scope host src 10.0.0.2

At this point you could add any more static routes your new VRF might require, and you’re essentially done with configuring the VRF. The interface eth1 now exists in our new VRF.

Okay, how do I *use* the VRF?

Any tinkering will quickly reveal that your services which were bound to (or accessible over) the IP on eth1 don’t work anymore, at least if they only bind by IP and not by device.

You’ll also notice that when you use ping or traceroute or whatever it’ll run with the default routing table – even if you set the source IP to 10.0.0.2, it won’t work. This is because, like sshd, ping (and bash, and anything else) will run in the context of the default VRF unless you specifically request otherwise. Those processes will use the default routing table and will only have access to listen to IPs that are on interfaces also in that same VRF.
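
As an aside, recent versions of iproute2 can tell you which VRF a given process is associated with, which is handy for sanity checking – a couple of examples:

# which VRF is my current shell in? (no output means the default VRF)
ip vrf identify $$
# which processes are currently running inside Mgmt-VRF?
ip vrf pids Mgmt-VRF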

If the processes or services can be configured to bind to an interface, however, they will operate in the VRF that the interface belongs to. A good example of a command with native support for binding to interfaces rather than IPs is traceroute:

traceroute -i eth1 8.8.8.8

But if you just want a generic way to execute commands inside a particular VRF, doing so is fairly easy using ip vrf exec – here is the same traceroute command without the need to specify an interface:

ip vrf exec Mgmt-VRF traceroute 8.8.8.8

If you’re going to be doing a lot of work in a particular VRF, you will probably find it most convenient to start your preferred shell (eg bash) using ip vrf exec as all child processes you start from that shell will also operate from that VRF, then exit the shell once you want to return to the default routing table:

ip vrf exec Mgmt-VRF /bin/bash
# do your work now, eg
traceroute 8.8.8.8
# time to go back to the default routing table
exit

Great, I can run traceroute. But what about my SERVICES?

For Linux distributions running systemd, shifting services to run inside a VRF is actually relatively straightforward.

systemd calls the processes and services under its purview “units”, and has so-called unit files that describe services, how and when (using dependencies and targets) they should be started, and so on.

If you want to run a single instance of a service across all VRFs for some reason this is possible though beyond the scope of this article (look up net.ipv4.tcp_l3mdev_accept and net.ipv4.udp_l3mdev_accept).
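
For reference, that single-instance-across-all-VRFs behaviour comes down to a pair of sysctls – something along these lines (shown here only as a pointer, not a tested recipe):

sysctl -w net.ipv4.tcp_l3mdev_accept=1
sysctl -w net.ipv4.udp_l3mdev_accept=1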

Alternatively you might choose to have several copies of the service running, each in different VRFs (make sure they use different sockets/pipes/pid files etc!), which is also beyond the scope of this article. It’s up to you to decide what suits your environment best.

However – if you only want to change your one existing copy of your service to run in a VRF, you just have to specify the new command that systemd executes in a so-called override file.

You should use override files rather than modifying the main unit file because, in general, the distribution-provided package for your service will not ship an override file, so package upgrades won’t collide with your modified file and your modifications will be preserved. That said, you will have to keep an eye on whether the packaged ExecStart command changes in a breaking way between releases, and update your override to match (check this first if a service you have overridden starts misbehaving after package updates!).

First you need to look in the unit file to get the current command that is executed to start the service:

systemctl cat sshd

You should see something like this (taken from a Debian 10 x64 system):

# /lib/systemd/system/ssh.service
[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run

[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/usr/sbin/sshd -t
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartPreventExitStatus=255
Type=notify
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
Alias=sshd.service

The key configuration variable here is “ExecStart”. We need to modify ExecStart so that our sshd starts via ip vrf exec. Do so by creating (or opening, if you already have one!) the override file for sshd:

systemctl edit sshd

This will dump you into the default editor – probably nano unless you changed it – with either your existing override file if you have one, or a blank one if you don’t.

Due to the way systemd sanity checks your unit files, you have to deliberately *unset* ExecStart by first setting it to nothing, then specify the new ExecStart – which is simply the default ExecStart entry, but with

/bin/ip vrf exec Mgmt-VRF

prepended to the start. It’s important to specify the full path to the ip binary because when systemd executes this command, it will more likely than not do so without any PATH variable set, or with a different one from the one your shell environment uses. Being explicit with paths ensures everything works as desired. (This is generally a good habit to get into.)
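
If you’re not sure where the ip binary lives on your system, this will tell you:

command -v ip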

If you have a blank file, in our example for sshd all you create is the following:

[Service]
ExecStart=
ExecStart=/bin/ip vrf exec Mgmt-VRF /usr/sbin/sshd -D $SSHD_OPTS

If you don’t have a blank file – well, I expect you know enough about what you’re doing here – but if you aren’t already overriding ExecStart (or don’t have a [Service] section at all) then you can simply add the above. If you are already overriding ExecStart, prepend your existing ExecStart value with the same /bin/ip vrf exec Mgmt-VRF.

Force systemd to reload the unit files, and restart your service:

systemctl daemon-reload
systemctl restart sshd

That should be it – sshd is now running inside your new VRF. If you have a relatively up to date systemd build it should natively understand VRFs and so can show that the service is running inside one (see the CGroup section). You can also see that it is using our override file, as non-overridden services will not have a “Drop-In” section:

systemctl status sshd
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/ssh.service.d
           └─override.conf
   Active: active (running) since Wed 2020-08-12 09:38:22 BST; 7h ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 29107 (sshd)
    Tasks: 1 (limit: 4689)
   Memory: 2.8M
   CGroup: /system.slice/ssh.service
           └─vrf
             └─Mgmt-VRF
               └─29107 /usr/sbin/sshd -D

Aug 12 09:38:22 rt3 systemd[1]: Starting OpenBSD Secure Shell server...
Aug 12 09:38:22 rt3 sshd[29107]: Server listening on 10.0.0.2 port 22.
Aug 12 09:38:22 rt3 systemd[1]: Started OpenBSD Secure Shell server.
Aug 12 09:38:50 rt3 sshd[29116]: Accepted password for philb from 192.168.0.2 port 59159 ssh2
Aug 12 09:38:50 rt3 sshd[29116]: pam_unix(sshd:session): session opened for user philb by (uid=0)

Can’t connect?

If you’ve done all this, restarted your service, systemd confirms it’s running in the VRF, and you still can’t connect to it – make sure your service is not trying to bind to an IP that is on an interface in a different VRF to the one in which you started it. Remember that services can only successfully use local IPs that are in the same VRF, even if they start and give the impression of working.
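
Two quick checks can help here, assuming a reasonably recent iproute2 – listing the addresses that actually exist inside the VRF (and are therefore bindable from it), and the routes its services will use:

# local addresses inside the VRF – the only ones services in this VRF can bind successfully
ip addr show vrf Mgmt-VRF
# the routes those services will use
ip route show vrf Mgmt-VRF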

Edit: Persisting VRFs between reboots

I actually forgot about this minor detail when I originally wrote this post – but you soon notice when you reboot and your VRFs are missing.

While I am aware there are probably half a dozen ways to skin this cat, some of which likely involve learning how to use systemd-networkd, using systemd to simply execute a bash script at the correct time is by far the quickest solution requiring the least amount of explanation.

First, create a bash script that contains the commands needed to create your VRFs; /sbin/vrf.sh will do. Using the above VRF configuration as an example, it would contain:

#!/bin/bash
# Create the VRF and bring it up
ip link add Mgmt-VRF type vrf table 2
ip link set dev Mgmt-VRF up
# Move the management interface into the VRF before adding routes that depend on it
ip link set dev eth1 master Mgmt-VRF
ip route add table 2 0.0.0.0/0 via 10.0.0.1

As this is a script that will get executed as root on system start, make sure this file is owned by, and only writeable by, root! (chmod 700 is fine)
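
Something like this will do the job (the path matches the example above):

chown root:root /sbin/vrf.sh
chmod 700 /sbin/vrf.sh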

Then create a systemd service that runs this script at the correct time – first you need a service file – in my instance, I created /etc/systemd/system/vrf.service – containing:

[Unit]
Description=VRF creation
Before=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/vrf.sh

[Install]
WantedBy=multi-user.target

Then enable the service

systemctl enable vrf

You should see something like:

Created symlink /etc/systemd/system/multi-user.target.wants/vrf.service → /etc/systemd/system/vrf.service.

Your VRF(s) should now exist at the correct time during boot for the network services (eg sshd) that need to attach to them.

Tips for Configuring Nagios3 Efficiently – part 1

Back when I started using Nagios (I think around version 1.2, or earlier) I don’t remember there being many options for being all that efficient in terms of “lines of config written” – certainly, any that did exist were overlooked in the rush to get it up and running, and I’ve largely been using the same configuration files (and style) ever since – though I did start using host and service templates as soon as I became aware of them, some time back in the 2.x branch days.

In the spirit of self-improvement, I’ve been revisiting the Nagios configuration syntax as part of rolling out a fresh monitoring host based on Nagios3, and have significantly reduced the number of lines of config my Nagios installation depends on as a result.


The LCHost Debian package mirror

As part of giving back to the community, LCHost runs a Debian package mirror (including backports, for i386 and amd64) which was recently added to the official mirrors list.

I spent a little time making it automatically detect changes at our source mirror within a couple of minutes and pull them down, so it’ll never be more than about 5 minutes behind the top-level mirrors.

To use our Debian mirror, simply replace your regular mirror definition in /etc/apt/sources.list with:

deb http://mirror.lchost.net/debian/ stable main contrib
deb-src http://mirror.lchost.net/debian/ stable main contrib

(You may choose to use the release name in place of “stable” to ensure you never accidentally go between major releases – so for wheezy, simply replace the word stable with wheezy)

If you use backports, you can get those packages from us too, using something like:

deb http://mirror.lchost.net/debian/ wheezy-backports main
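
As always, refresh your package lists after editing sources.list. Remember too that backports packages aren’t installed by default – you have to ask for them explicitly, something like:

apt-get update
apt-get -t wheezy-backports install <package>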

Installing Nagios3 on Debian Wheezy

It’s pretty straightforward to install Nagios on a Debian system, but if you want to be able to use the web interface to control the nagios process, a little more work is required.

Starting with a blank slate (apt/dpkg will ensure any required prerequisites will be installed):

# apt-get install nagios3 apache2-suexec

You’ll be asked to set a password for the nagiosadmin user for the web interface.

Enable check_external_commands in Nagios to allow muting alarms, making comments, restarting the nagios process etc from the web interface (pretty much invaluable, but be aware of the inherent risks in allowing the process to be influenced from “outside”):

# sed -i -e 's/check_external_commands=0/check_external_commands=1/' /etc/nagios3/nagios.cfg
# /etc/init.d/nagios3 restart

Edit the nagios3 Apache config include so that the web interface scripts run as the nagios user and can therefore write to the nagios command pipe, by inserting the following at the top of /etc/nagios3/apache2.conf:

User nagios
Group nagios

Restart Apache:

# /etc/init.d/apache2 restart

And you’re pretty much done! You can go to http://YOUR_HOST_NAME/nagios3/ and log in as nagiosadmin with the password you set when prompted at the start of this process.

Now, you can get started with creating host and service configuration files in /etc/nagios3/conf.d/ to monitor your servers/network/etc
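
As a taste of what goes in conf.d, a minimal host and service definition might look something like this – the host name and address here are made up for illustration, and the generic-host / generic-service templates are the ones shipped with the Debian nagios3 package:

define host {
        use                     generic-host
        host_name               web01
        alias                   Example web server
        address                 192.0.2.10
}

define service {
        use                     generic-service
        host_name               web01
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
}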

Debian 6 (Debian Squeeze) & Debian 7 (Debian Wheezy) reboot… doesn’t.

Seemingly, someone made kexec-tools handle reboot requests by default. This allows the system to skip BIOS/POST etc and just drop to a minimal runlevel and start a kernel again.

This is great if you only have Debian on your system, and particularly great if you spend a lot of time changing kernels – when you issue reboot, or shutdown -r now (etc), kexec-tools intercepts the command and does a warm restart rather than resetting the machine cold. If you don’t need the cold reset, why wait through all the BIOS checks, boot ROMs, etc, right?

Except some of us reboot because we want to change OS. I’d argue that it should perhaps be the default behaviour to cold-reboot (and the installer could, perhaps, ask!) or that KDE should have a button for “warm restart” and one for “cold reboot” or whatever, but anyway.

If you want to make reboot actually reboot the system you’ll want to:

# dpkg-reconfigure kexec-tools

And tell it to not use kexec-tools to handle reboots. If you’re never going to want kexec-tools, you can probably uninstall it using apt, but I just disabled it. On the odd occasion I do just want to swap kernels, it’s easy enough to enable it, reboot, and disable it again.
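
If you’d rather skip the interactive prompt, the same setting lives (on Debian, at least) in /etc/default/kexec – setting LOAD_KEXEC to false should have the same effect:

# /etc/default/kexec
LOAD_KEXEC=false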

Some SEO, perhaps?

Debian 6 Squeeze won’t reboot
Debian 6 Squeeze reboot doesn’t go to grub
Debian 6 Squeeze reboot dualboot
Debian 7 Wheezy won’t reboot
Debian 7 Wheezy reboot doesn’t go to bios

Debian 6 (Debian Squeeze) KDE4 Override Screen Resolution

Everything you needed to know about manually overriding incorrectly probed screen resolutions but nobody thought to write down, seemingly:

$ xrandr -q
Screen 0: minimum 320 x 200, current 3600 x 1080, maximum 8192 x 8192
DVI-I-1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm
   1920x1080      60.0*+
   1600x1200      60.0  
   1680x1050      60.0  
   1400x1050      60.0  
   1280x1024      75.0     60.0  
   1440x900       59.9  
   1280x960       60.0  
   1152x864       75.0  
   1024x768       75.1     70.1     60.0  
   832x624        74.6  
   800x600        72.2     75.0     60.3     56.2  
   640x480        72.8     75.0     66.7     60.0  
   720x400        70.1  
DVI-I-2 connected 1680x1050+1920+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1024x768       60.0  
   800x600        60.3     56.2  
   848x480        60.0  
   640x480        59.9  
   1680x1050      60.0*

This output shows what xrandr has detected. In my case, DVI-I-2 wasn’t showing the 1680×1050 resolution I needed. It’s there now because this output is from after I made my modifications.

$ xrandr --addmode DVI-I-2 "1680x1050"

Was all it took.
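
Incidentally, --addmode only worked that easily because the 1680x1050 mode already existed (the other output advertises it). If the mode you need doesn’t exist anywhere yet, you would have to create it first with a modeline from cvt, something along these lines (your cvt output may differ):

$ cvt 1680 1050 60
# 1680x1050 59.95 Hz (CVT 1.76MA) hsync: 65.29 kHz; pclk: 146.25 MHz
Modeline "1680x1050_60.00"  146.25  1680 1784 1960 2240  1050 1053 1059 1089 -hsync +vsync
$ xrandr --newmode "1680x1050_60.00" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
$ xrandr --addmode DVI-I-2 "1680x1050_60.00"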

Sadly, of course, this is all lost on reboot, despite making changes in the System Settings/Display panel and saving them as default – because even though my screen alignment settings were saved in $HOME/.kde/share/config/krandrrc, the mode 1680×1050 isn’t remembered as being valid for my screen.

Because krandrrc contains a config element like this:

[Display]
ApplyOnStartup=true
StartupCommands=xrandr --output "DVI-I-1" --pos 0x0 --mode 1920x1080 --refresh 60\nxrandr --output "DVI-I-2" --pos 1920x0 --mode 1680x1050 --refresh 59.9543

I simply elected to try adding:

xrandr --addmode DVI-I-2 "1680x1050"

To the front end of StartupCommands, like so:

[Display]
ApplyOnStartup=true
StartupCommands=xrandr --addmode DVI-I-2 "1680x1050"\nxrandr --output "DVI-I-1" --pos 0x0 --mode 1920x1080 --refresh 60\nxrandr --output "DVI-I-2" --pos 1920x0 --mode 1680x1050 --refresh 59.9543

On reboot, my screen resolution is correctly set, and my dualhead config works as expected. Now I just need to remember never to change my screen settings again, or be prepared to make that change again.

Some SEO, hopefully:

KDE4 Manual Resolution
KDE4 Override Screen Resolution
KDE4 Incorrect Screen Resolution