Please read part 1 first.
Securing the beast
Now I’m getting close to wanting to move the box to a server room. That means it’ll be directly connected to the internet, with no router and no NAT in front of it. It will be attacked. A lot. If there’s any obvious hole in its defences, it will get owned.
I’m not a security expert, but I do know that there’s no such thing as absolute security. You just secure the box as well as possible with the resources you are willing to spend. It’s a compromise between usability, resources and security.
I see security as a triangle: keeping the machine up to date, making sure the installed programs are set up as securely as possible, and keeping an eye on the box for strange behaviour (did somebody get through?).
When you look at the security page at debian.org, all it tells you is to keep the packages up to date. I’ll start by looking at that.
Be up to date
Okay, let’s start by editing /etc/apt/sources.list to ensure that it’s able to get the latest security releases. The following line is recommended by the Debian security page:
deb http://security.debian.org/ sarge/updates main contrib non-free
So let’s start up vi (read part 1 about using vi to edit):
# vi /etc/apt/sources.list
And edit it so that it looks like this (you will probably want to use local mirrors for most of it):
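My actual file isn’t reproduced here, but as a sketch, a sarge-era sources.list with the security line added could look something like this (the mirror hostname is just an example; pick one near you):

```
deb http://ftp.dk.debian.org/debian/ sarge main contrib non-free
deb-src http://ftp.dk.debian.org/debian/ sarge main contrib non-free
deb http://security.debian.org/ sarge/updates main contrib non-free
```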
So let’s update the packages:
# apt-get update
# apt-get upgrade -s
The -s flag makes this a simulation, so nothing is actually installed yet. The important part of the output is the bit that says:
The following packages will be upgraded:
libfreetype6 libmysqlclient14 libmysqlclient15off mysql-client
mysql-client-5.0 mysql-common mysql-server mysql-server-5.0
8 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Note that there are 8 upgraded packages. Now, the mysql client stuff probably isn’t much of a security risk, but the rest of it is (the mysql server may be listening on an external address that it shouldn’t). Please note that the mysql service may be stopped for long enough to make an impact on any program that depends on it, so you probably want to do this when you have the lowest number of visitors to your site. I’ll run:
# apt-get upgrade
Now that hangs with a “Checking for crashed MySQL tables in the background” message as mysql tries to restart the service, so I press Ctrl-C to kill it. It seems to be okay, but I’d better fire up phpMyAdmin just to be sure (see part 1). Yes, it’s working, and the server is now 5.0.22-1.dotdeb.1 as downloaded by the apt-get upgrade command.
If it hadn’t restarted I would have stopped it and then started it with:
# /etc/init.d/mysql stop
# /etc/init.d/mysql start
Now I want the system to keep an eye out for needed upgrades, and for that I need the cron-apt package:
# apt-get install cron-apt
Okay, that was fast. I have no idea what this does, so let’s look at the man page:
# man cron-apt
Okay… options by doing --help, and set up by editing /etc/cron-apt/action.d and /etc/cron-apt/config, with examples in /usr/share/doc/cron-apt/examples/config. Let’s look at the example:
# more /usr/share/doc/cron-apt/examples/config
“No such file or directory”. Damn, yet another imprecise man page? What gives? Anyway, let’s look at what we have:
# ls -l /usr/share/doc/cron-apt/examples/
Gives:
-rw-r--r-- 1 root root   18 Feb  6  2005 0-update
-rw-r--r-- 1 root root   64 Feb  6  2005 3-download
-rw-r--r-- 1 root root   91 Feb  6  2005 9-notify
-rw-r--r-- 1 root root 1707 Apr 20  2005 config.gz
Ah, we have a gzipped file. Let’s unpack it:
# gunzip /usr/share/doc/cron-apt/examples/config.gz
That unpacks into a file named exactly the same as the .gz file, just without the .gz extension, so let’s repeat our more command (press the up arrow a few times to get back to it):
# more /usr/share/doc/cron-apt/examples/config
Damn, three pages of stuff, nearly all of it commented out. I’m not sure what to do… Let’s see what the program can do for us:
# cron-apt --help
“USAGE: cron-apt [-i] [-s] [configfiles]”. That was short, but not exactly informative. Well, what the heck, let’s try it:
# cron-apt
Nothing, but it takes a few seconds, so it’s actually doing something.
# cron-apt -i
Same thing.
# cron-apt -s
Hey, we have a winner:
CRON-APT RUN [/etc/cron-apt/config]: Sun Jun 11 14:18:01 CEST 2006
CRON-APT ACTION: 0-update
CRON-APT LINE: update -o quiet=2
CRON-APT ACTION: 3-download
CRON-APT LINE: autoclean -y
Reading Package Lists…
Building Dependency Tree…
CRON-APT LINE: dist-upgrade -d -y -o APT::Get::Show-Upgraded=true
Reading Package Lists…
Building Dependency Tree…
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
It updated the package list and tried to do an upgrade, but didn’t find anything. But hey, it’s actually using a config file from /etc/cron-apt/, so I’ll take a look at that:
# vi /etc/cron-apt/config
Okay, that’s the same as the example one. Most of it is commented out, but it did seem to work. Let’s see… okay, I’d better get some help… found it. There’s a mailon line, which I’ll change to “mailon=always” for now while I’m testing, and I want the mail to go to another address, so I change the mailto line as well. And that’s it. I’ll run it again to see if I get a mail:
# cron-apt
And after a long wait, a re-edit of the config file where I uncomment the mailon=always part, and another try, I get a mail! Yeah! Now I just need to run apt-get upgrade or something similar whenever I get a mail that contains information on new packages that should be installed.
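For reference, the two settings I ended up changing look something like this (a sketch; the shipped example config spells the variables in uppercase, and the address is a placeholder):

```
# /etc/cron-apt/config (excerpt)
MAILTO="you@example.com"
MAILON="always"
```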
Getting the status
And while I’m on the informational mail thing, I’ll check that I get a general status mail every night that tells me basic stuff about my machine. Let’s see what’s in the cron job list:
# cat /etc/crontab
That gives me:
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don’t have to run the `crontab’
# command to install the new version when you edit this file.
# This file also has a username field, that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || run-parts --report /etc/cron.daily
47 6 * * 7 root test -x /usr/sbin/anacron || run-parts --report /etc/cron.weekly
52 6 1 * * root test -x /usr/sbin/anacron || run-parts --report /etc/cron.monthly
Okay, checking the man page of test tells me that -x tests whether a file exists and is executable. The || is a shell “or”: the command after it only runs if the command before it fails. So if anacron is not installed, cron runs the daily, weekly and monthly jobs directly; if it is installed, anacron takes care of running them instead. Either way I get some kind of reports daily, weekly and monthly, if the jobs output something. Nice. Let’s see:
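The pattern is easy to demonstrate directly in the shell (a minimal sketch; /no/such/file is just a path assumed not to exist):

```shell
#!/bin/sh
# `a || b` runs b only when a exits non-zero (fails), exactly like
# the crontab's `test -x /usr/sbin/anacron || run-parts ...` line.
test -x /bin/sh && echo "test succeeded, so an || part would be skipped"
test -x /no/such/file || echo "test failed, so the || part runs"
```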
# /usr/sbin/anacron
“No such file or directory” – Okay:
# apt-get install anacron
So, that’s installed. Let’s see what the daily gets us:
# run-parts --report /etc/cron.daily
I get a mysql error:
/etc/cron.daily/mysql-server:
ERROR 1373 (HY000) at line 1: Target log not found in binlog index
run-parts: /etc/cron.daily/mysql-server exited with return code 1
Okay… I didn’t even know I had binlogging running. I take the second line (the one with “ERROR…”) and throw it at Google. I don’t really find anything. Let’s look at the cron file (the one mentioned in the error message):
# cat /etc/cron.daily/mysql-server
Hm, okay, that’s mostly about rotating binary logs, which I probably don’t have and certainly don’t need (they are mostly for replication). I’ll simply delete the file! I don’t need it:
# rm /etc/cron.daily/mysql-server
And run the report again:
# run-parts --report /etc/cron.daily
And a few seconds later I have my prompt back… wonder what happened? A mail may be on its way, but while I wait I’ll do:
# man run-parts
Okay, that just runs the programs in a directory (in this case the daily scripts). I’m guessing what’s supposed to happen is simply that all output of cron jobs goes to the root user in a mail (that’s how it worked on FreeBSD). But as it was me, and not the cron daemon, running it, I didn’t get a mail. Fair enough. Anyway, it didn’t answer anything back! I would like some information, like disk usage, traffic and login attempts.
Searching around a bit doesn’t help me. There seems to be a basic checksecurity package, but I’m not really sure that’s what I’m looking for. FreeBSD comes with a whole set of periodic cron jobs that do all kinds of security auditing out of the box; there are eight in the daily cron job alone. I’ll look for something similar for Debian later.
Until then I want a small script that just outputs the current disk space usage, just to get something. In the /etc/cron.daily directory I do:
# cp logrotate diskusage
Just to have a template with the standard shell stuff intact. I then edit the file so it contains:
#!/bin/sh
df -h
Hopefully I’ll get something… [later: and I actually do, so that works]
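The script could easily be extended a bit. A sketch (my own idea, not what the box actually runs; the auth.log path is an assumption about a standard install):

```shell
#!/bin/sh
# Nightly status sketch: disk usage, recent logins, failed ssh attempts.
echo "=== Disk usage ==="
df -h
echo "=== Recent logins ==="
last -n 10 2>/dev/null || true
echo "=== Failed ssh password attempts ==="
if [ -f /var/log/auth.log ]; then
    # grep -c prints the match count, even when it is 0
    grep -c 'Failed password' /var/log/auth.log
else
    echo "no auth.log found"
fi
```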
Removing the non-needed
Removing services that aren’t needed is fairly important, but also hard. As I’ve already selected a minimal install of Debian, it’s kind of hard to remove anything, but let’s see what we have running that’s connected and listening on the network:
[more later]
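When I get to it, one common way to check is something like this (a sketch; netstat comes from the net-tools package, and ss is the modern fallback if net-tools isn’t installed):

```shell
#!/bin/sh
# Show listening TCP sockets: -t TCP, -l listening, -n numeric,
# -p the owning process (the -p column is only filled in as root).
netstat -tlnp 2>/dev/null || ss -tlnp
```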
Locking things down
The next step is to lock down things.
I start with ssh. I want it as locked down as possible. That means making it impossible for anybody but me to even try to log in.
# vi /etc/ssh/sshd_config
I change “PermitRootLogin” to no.
I add a line that says “AllowUsers tc”, so that I’m the only one who can login.
Everything else I leave as is.
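For reference, the two relevant lines in /etc/ssh/sshd_config end up looking like this (tc is my user name; use your own):

```
PermitRootLogin no
AllowUsers tc
```

Remember that sshd only picks the change up after a restart (/etc/init.d/ssh restart), and keep your current session open while testing in case you lock yourself out.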
Okay, to lock down which IP addresses I want to talk to, I have to edit the /etc/hosts.allow and /etc/hosts.deny files. The way the operating system uses them is to run through first the allow file and then the deny file. If the traffic matches a rule in allow, it’s allowed; if there’s no match in allow, the deny file is checked, and if a rule matches there, the traffic is denied. If no rules match in either file, the traffic is allowed.
I edit hosts.allow to look like this:
ALL: 192.168.
ALL: LOCALHOST
ALL: 127.0.0.1
sshd: 83.88.241.xxx
sshd: 83.221.139.yyy
Now, a few comments. The first line is to allow any of my home machines to connect to any service. I’ll remove it when I move the machine to the server room, but right now I wouldn’t be able to connect to the machine without it. The next two are to ensure that all local services can talk to each other (not really sure that’s needed). The last two are to ensure that I can ssh into the machine from both my home and my work machine (in reality I don’t use xxx and yyy, I just don’t want these addresses to be public).
Now I edit the hosts.deny to contain just one line:
ALL:ALL
That denies access to everything that wasn’t explicitly allowed by the rules in the hosts.allow file. Now, this is a tiny bit dangerous: if you haven’t done the hosts.allow right, you will not be able to connect to the machine.
Now it’s important to note that the hosts.allow and hosts.deny files only control the services that use the tcpwrappers. There are a lot of these, but apache is not one of them, so it will keep on answering outside connection attempts (a good thing in this case). To get a list of such services (or daemons, as they are called in the linux world) do:
# apt-cache showpkg libwrap0 | egrep '^[[:space:]]' | sort -u
(That’s not your installed and running services, but a list of all that’s available as packages.)
Now that list includes mysql, so that should be secure, but let’s take a look at the my.cnf config file anyway:
# more /etc/mysql/my.cnf
Look for the line that says “skip-networking”; it shouldn’t be commented out, as it makes mysql skip TCP networking entirely, so it only accepts connections over the local socket.
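A quick way to verify this (a sketch; with skip-networking active, nothing should be bound to mysql’s usual TCP port 3306):

```shell
#!/bin/sh
# If skip-networking is active, this should report nothing on 3306.
netstat -ln 2>/dev/null | grep ':3306' || echo "nothing listening on 3306"
```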
Setting up the websites
First I need to set up a virtual host for phpMyAdmin. My primary domain is tc.dk, so I’ll make a subdomain called phpmyadmin.tc.dk and point it to my server (through DNS, and during setup through my local hosts file, but that’s not a topic for this article).
So let’s make a virtual host for the phpmyadmin site. It’s already installed in /var/www/phpmyadmin, and I want my log files in /var/www/phpmyadmin.log/, so I do:
# mkdir /var/www/phpmyadmin.log
Okay, let’s go to the directory that contains the host setups:
# cd /etc/apache2/sites-enabled
and see what’s there
# ls -l
There’s a link called 000default (it links to the default file in the /etc/apache2/sites-available directory). I’ll take a copy of it and use that as the basis for my new virtual host (the recommended way is to make the file in sites-available and link to it; feel free to do it that way):
# cp 000default tc.dk
# vi tc.dk
I’ll edit it down to:
NameVirtualHost *:80
<VirtualHost *:80>
ServerAdmin webmaster@tc.dk
ServerName phpmyadmin.tc.dk
DocumentRoot /var/www/phpmyadmin
<Directory /var/www/phpmyadmin>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
DirectoryIndex index.php
ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog "|/usr/sbin/rotatelogs /var/www/phpmyadmin.log/access_log 86400" combined
ServerSignature On
</VirtualHost>
Now, before I restart apache and check if it works, I’ll do one more thing. I can still reach the phpMyAdmin script through the default virtual server in apache (the one in the 000default file) by going to http://ip-address/phpmyadmin. Remember the web server root is /var/www/ and phpMyAdmin is in /var/www/phpmyadmin… I’ll simply remove the default site by deleting the link in the sites-enabled directory:
# rm 000default
And let’s restart apache:
# /etc/init.d/apache2 restart
And test that it works in the browser (it does) and that http://ip-address/phpmyadmin doesn’t (it doesn’t).
Okay, now I just need to add the authentication.
Password protecting webpages
Some of my webpages are for my eyes only. Most of them have their own user authentication system, but most web-based systems also have weaknesses. phpMyAdmin is probably not one of the worst programs out there, but as there’s really no reason why I should leave it available to other people, I’m not going to. It’s for my own use, and I’ll add an extra layer of security by using apache authentication. I’ll use digest authentication, as this doesn’t send the password and username in clear text. It’s also harder to get to work.
First I need to create a user to authenticate against. Apache has its own authentication system, which is what I’m going to use. I guess I could get it to authenticate against the local system database or something else (like an AD). I’ll start by creating a password file; the realm is called db (as this is for the database administration via phpMyAdmin) and the user is tc.
# htdigest -c /etc/apache2/passwd/digest db tc
And I get asked for the password twice. That’s it, now we have a password file with my user in it (to add more users, rerun the command with a new user name but without the -c flag, as -c creates the file anew). Note that if the /etc/apache2/passwd directory doesn’t exist yet, you’ll need to create it with mkdir first. Now we just need to demand that apache uses the authentication.
Edit the phpmyadmin virtual server setting:
# vi /etc/apache2/sites-enabled/tc.dk
Make the <Directory …> section of the phpmyadmin.tc.dk part look like this:
<Directory /var/www/phpmyadmin>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
# And here comes the authentication bit…
AuthType Digest
AuthName "db"
AuthDigestFile /etc/apache2/passwd/digest
Require user tc
BrowserMatch "MSIE" AuthDigestEnableQueryStringHack=On
</Directory>
And restart the server:
# /etc/init.d/apache2 restart
Okay, that didn’t work. A quick look at the phpinfo page shows that the digest authentication module isn’t installed. Let’s fix that:
# a2enmod auth_digest
# /etc/init.d/apache2 restart
And it works. Now, I have to note that getting to the above setup took me a couple of hours. The description on the apache page is very hard to follow, and other descriptions I found were more confusing than helpful (like saying “realm” in the htdigest command, but writing /private/ for the realm in the settings). If you have problems, I recommend that you look in the apache error log (in my setup /var/log/apache2/error.log). The BrowserMatch hack is there to make apache work around a “feature” in Microsoft Internet Explorer. Now, I only use Firefox and that works (as does just about any other browser than IE), but I decided to add it anyway, as I might need to admin the machine from a workstation that only has Internet Explorer installed.