Category: Linux

Tenable.SC plugin/feed updates failing & disk full

Today I was called in to help with a Tenable.SC instance that was failing to update its plugins. It turned out that its “/opt” partition was 100% full. A little investigation into where the space had gone showed that the “/opt/sc/data/” folder was full of “feed.XXXX” folders, each 2.4 GB in size (~130+ GB in total).

Looking at the logs (/opt/sc/admin/logs/sc-error.log), I could see that the feed update had been failing since December 6th:

PHP Fatal error: Allowed memory size of 1782579200 bytes exhausted (tried to allocate 20480 bytes) in /opt/sc/src/lib/FeedLib.php on line 2769

So, in order to get SC updating itself normally again, I removed all the unneeded feed folders, keeping only the latest feed update attempt, by running the following command:

find /opt/sc -name "feed.*" -ctime +1 | xargs rm -rf

Next, in order to fix the failing feed update and prevent it from filling up the disk again within a month, I had to increase the PHP memory limit. To do that I edited “/opt/sc/support/etc/php.ini” and raised the memory limit to 1900M (its default value was 1700M). After that I restarted SC by running:

service SecurityCenter stop && service SecurityCenter start
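The php.ini edit can also be scripted. Here is a minimal sketch that demonstrates the change on a throwaway copy of the file; the sed pattern is my own, and it assumes the file contains a standard `memory_limit = …` line (on a real system the target would be /opt/sc/support/etc/php.ini).

```shell
#!/bin/sh
# Demonstrate the edit on a temporary stand-in for php.ini.
cp_ini=$(mktemp)
printf 'memory_limit = 1700M\n' > "$cp_ini"

# Raise the limit from the 1700M default to 1900M in place (GNU sed).
sed -i 's/^memory_limit *=.*/memory_limit = 1900M/' "$cp_ini"

grep '^memory_limit' "$cp_ini"   # prints: memory_limit = 1900M
```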

Additional thoughts on SC disk cleanup can be found in these two posts on Tenable’s website:

Kali ticket_converter.py issue fix

When trying to convert Kerberos tickets I ran into a small issue: the script was unable to import a name. The exact error is:

Traceback (most recent call last):
File "ticket_converter.py", line 30, in <module>
from impacket.krb5.ccache import CCache, Header, Principal, Credential, KeyBlock, Times, CountedOctetString
ImportError: cannot import name 'KeyBlock' from 'impacket.krb5.ccache' (/usr/local/lib/python3.8/dist-packages/impacket-0.10.1.dev1-py3.8.egg/impacket/krb5/ccache.py)

Fortunately the fix was quite easy after looking into the impacket file mentioned in the error. It seems they have renamed the classes and added versioning:

grep -i keyblock /usr/lib/python3/dist-packages/impacket/krb5/ccache.py
class KeyBlockV3(Structure):
class KeyBlockV4(Structure):

Seeing that, I realized the fix is easy: just change all occurrences of the word KeyBlock to KeyBlockV4. Open up ticket_converter.py in vim and type “%s/KeyBlock/KeyBlockV4/g”. That fixed it for me. Happy Hacking! 🙂
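The same rename can be done non-interactively with sed. A sketch, demonstrated here on a dummy file containing the same import line (on a real system the target would be ticket_converter.py):

```shell
#!/bin/sh
# Demonstrate the rename on a stand-in file with the failing import line.
f=$(mktemp)
printf 'from impacket.krb5.ccache import CCache, KeyBlock, Times\n' > "$f"

# \b word boundaries keep an already-correct KeyBlockV4
# from becoming KeyBlockV4V4 (GNU sed).
sed -i 's/\bKeyBlock\b/KeyBlockV4/g' "$f"

cat "$f"   # prints: from impacket.krb5.ccache import CCache, KeyBlockV4, Times
```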

OWASP ZAP/ZED Attack Proxy missing after Kali upgrade

If you find yourself searching the menus after updating/upgrading your Kali instance and OWASP ZAP is nowhere to be found, and even the locate command only returns former paths to zap files that no longer exist, the fix is easy: just run the updatedb command as root (or with sudo). That should fix the issue and ZAP should be visible in the menus again. At least that worked for me.

Don’t add user editable scripts to root cron

On quite a few servers that I’ve gained access to during pen-tests, I have found filesystem permission issues: the type of permission issues that end with me gaining root privileges, aka privilege escalation.

When you gain access to a server, it is always a good idea to check the crontab logs, if you have access to them, and see whether any scripts are run with root user permissions.

If you find any root (or other useful user) entries in the logs, go and check the scripts’ filesystem permissions. Quite often I have stumbled upon a root script that can be modified by the “service users”. I don’t know exactly why, but some people have scripts with “apache/www-data” write permissions run by root.

That is just a bad idea on so many levels. Somehow people don’t realize that having root run whatever scripts a normal user can edit instantly gives that user root privileges.
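This check is easy to script. A hedged sketch that flags group- or world-writable files among a set of script paths; the files here are temporary stand-ins, and in practice you would feed it the script paths found in /etc/crontab and /etc/cron.d/*:

```shell
#!/bin/sh
# Flag scripts that a non-root user could modify even though cron
# runs them as root. Demo uses temp files instead of real cron entries.
dir=$(mktemp -d)
touch "$dir/safe.sh" "$dir/risky.sh"
chmod 700 "$dir/safe.sh"
chmod 777 "$dir/risky.sh"   # writable by anyone: instant privilege escalation

# -perm /022 matches anything that is group- or other-writable (GNU find)
find "$dir" -type f -perm /022
```

Only risky.sh is printed; anything this reports that runs from root’s crontab deserves immediate attention.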

Apache TLS auth restriction based on custom OID value using PeerExtList

A client was concerned that their Apache/OpenSSL combo was not respecting certificate key usage values out of the box. They demonstrated how they could log into a web service using certificates that didn’t have the “TLS Web Client Authentication” extended key usage set. Because of that, they were asking whether they had a config error, and how it would be possible to require those extensions to be set.

They had messed about with different hints they had found on the web, but none worked properly. After a bit of reading manuals, researching the web and tinkering around, I managed to get it working the way they wanted. So I thought I’d share the brief config snippet to help out anyone else running into the same problem.

<Directory /var/www/html/test.site/tls_test>
  SSLVerifyClient require
  SSLVerifyDepth 2
  SSLOptions +StdEnvVars
  Require expr "TLS Web Client Authentication, E-mail Protection" in PeerExtList('extendedKeyUsage')
</Directory>
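To check which extended key usages a client certificate actually carries before testing it against the config, openssl can print the extension directly. A sketch that generates a throwaway certificate with the two EKUs named above (the filenames are my own; `-addext` needs OpenSSL 1.1.1 or later):

```shell
#!/bin/sh
# Create a self-signed test certificate carrying the two EKUs,
# then print the extension the Require expr above matches on.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=tls-test" \
  -addext "extendedKeyUsage=clientAuth,emailProtection" \
  -keyout /tmp/test-key.pem -out /tmp/test-cert.pem 2>/dev/null

# Lists the names used by PeerExtList, e.g.
# "TLS Web Client Authentication, E-mail Protection"
openssl x509 -in /tmp/test-cert.pem -noout -ext extendedKeyUsage
```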

Kali apt-get update fails with signature error

When booting up an older Kali VM I hadn’t used for a while, I wanted to update it and ran into a small issue. Namely, while trying to update I got the following error message:

root@kali:~/Documents# apt-get update
Get:1 http://kali.koyanet.lv/kali kali-rolling InRelease [30.5 kB]
Err:1 http://kali.koyanet.lv/kali kali-rolling InRelease
The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository devel@kali.org
Reading package lists… Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://kali.koyanet.lv/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository devel@kali.org
W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository devel@kali.org
W: Some index files failed to download. They have been ignored, or old ones used instead.

As it seems, the GPG key had been changed. Fortunately the fix is easy, just do the following:

  • Check which is the most current keyring package at https://http.kali.org/kali/pool/main/k/kali-archive-keyring/. For example, today it was “kali-archive-keyring_2020.2_all.deb”.
  • Download it using the wget command or via browser:
    wget https://http.kali.org/kali/pool/main/k/kali-archive-keyring/kali-archive-keyring_2020.2_all.deb
  • Install it by running “dpkg -i kali-archive-keyring_2020.2_all.deb”:
    root@kali:~/Documents# dpkg -i kali-archive-keyring_2020.2_all.deb
    (Reading database … 528672 files and directories currently installed.)
    Preparing to unpack kali-archive-keyring_2020.2_all.deb …
    Unpacking kali-archive-keyring (2020.2) over (2018.2) …
    Setting up kali-archive-keyring (2020.2) …
    Installed kali-archive-keyring as a trusted APT keyring.
  • Now updating should work again.
    root@kali:~/Documents# apt-get update
    Get:1 http://kali.koyanet.lv/kali kali-rolling InRelease [30.5 kB]
    Get:2 http://kali.koyanet.lv/kali kali-rolling/main i386 Packages [17.5 MB]
    Get:3 http://kali.koyanet.lv/kali kali-rolling/non-free i386 Packages [167 kB]
    Get:4 http://kali.koyanet.lv/kali kali-rolling/contrib i386 Packages [97.7 kB]
    Fetched 17.8 MB in 3s (5,676 kB/s)
    Reading package lists… Done

Finding missing free disk space in Linux, the power of lsof

There might be times when you find that your Linux machine’s disk seems to be full and you can’t find the reason for it. You try to find the culprit with the du (disk usage) command, but with no success: the numbers just don’t add up. In that case the problem might be that you have some deleted files that are still held open by some program. This can happen with faulty logrotate configurations, where you don’t tell the program writing the log to release the file, or when you manually delete a file that some program is still writing to.

In such cases the “lsof” command comes to the rescue. Basically, it does what the name says: lists open files, even ones that have been deleted and are still in use. Here is an example of a command that I sometimes use to find deleted files that are still open:

lsof | grep deleted | awk '{$7=$7/1048576 "MB"; print}'

The output of the previous command lists the open deleted files, the process that is still writing to them, and the size of the files. This is some random example output from the last time I had to look for missing space:

java 32511 32646 tom 1w REG 12980.00024128MB 19510390447 6341662 /var/log/tomcat/log/catalina.out (deleted)
java 32511 32646 tom 2w REG 0.00024128MB 19510390447 6341662 /var/log/tomcat/log/catalina.out (deleted)

To reclaim the disk space, you just simply need to kill/restart the program that is writing to the deleted file.
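The situation is easy to reproduce and inspect by hand on any Linux box using /proc, no lsof needed. A small sketch:

```shell
#!/bin/sh
# Open a file on descriptor 3, then delete it while it is still open.
tmp=$(mktemp)
exec 3>"$tmp"
echo "data the kernel still holds" >&3
rm "$tmp"

# The inode stays allocated until the last descriptor closes; /proc
# shows the link target with the same "(deleted)" suffix lsof reports.
target=$(readlink "/proc/$$/fd/3")
echo "$target"

exec 3>&-   # closing the descriptor is what actually frees the space
```

When restarting the offending service is not an option, the space can often be reclaimed in place by truncating the still-open file through /proc (e.g. `: > /proc/<pid>/fd/<fd>`), though that is a last resort, since the writing program may not expect its file offsets to change.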

User permissions issue on migration from MySQL to MariaDB

Today I decided to migrate the website from my old home server that had MySQL installed to a newer web server with MariaDB running on it.

I did it with the regular mysqldump and import procedure, which all went fine up to the point when I actually tried to access the site again. Then I got the following error message: “Error establishing a database connection”. To see what was going on, I tried logging in to the database on the command line using the website’s credentials, and that failed too. After that I logged in as root and saw that the user had been imported, but it had no permissions.

To check what users exist in the database, you can use the following SQL statement:

SELECT User, Host, Password FROM mysql.user;

To see what privileges a user has you can use the following SQL statement:

show grants for 'user_name'@'localhost';

In my case it showed the following output, stating that the user has no privileges:

ERROR 1141 (42000): There is no such grant defined for user 'user_name' on host '%'

After seeing that, I tried re-applying the user’s rights with the regular grant command, re-granting the user its privileges on the database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON database_name.* to 'user_name'@'localhost' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

Then I looked at the user’s permissions again and unfortunately got the same result as before: “no such grant defined for the user..”. After that I tried flushing the privileges, forcing the server to reload them, by issuing the following command:

FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

It didn’t get any better. I finally ended up revoking, flushing and re-granting the permissions by doing the following:

MariaDB [(none)]> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'user_name'@'localhost';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL ON database_name.* TO 'user_name'@'localhost';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> show grants for 'user_name'@'localhost';
+--------------------------------------------------------------------------------------------------------------------+
| Grants for user_name@localhost |
+--------------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'user_name'@'localhost' IDENTIFIED BY PASSWORD '*xxxxxxxxxxxxx' |
| GRANT ALL PRIVILEGES ON `database_name`.* TO 'user_name'@'localhost' |
+--------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

That finally helped and the site is up on the new server.

SSH key based authentication: secure and convenient, or is it?

Is SSH key based authentication secure and convenient? It seems really obvious that it is: no passwords to be guessed, or to be changed all the time, and logging on to servers is much faster. But when done improperly, it isn’t as safe and secure as it would seem.

The issue

When logging on to SSH servers, you can use authentication agent forwarding for convenience, so you can jump between hosts using the same key. See nothing wrong with it? Still seems all good and secure? Well, not quite: as soon as the convenience of agent forwarding comes into play, a little issue arises that a lot of people do not think about. Namely, the agent you used to authenticate to the server is now accessible to others on that server; not in the sense that they could copy your private key, but they can use it to authenticate to other servers where your key is valid and that are reachable from that server. Although it requires escalated privileges to get access to it, it is still a problem. So where does this forwarded agent live? Its socket goes into the /tmp/ folder, as the following example from my test machine shows:

huxx@lnx:~# ls -la /tmp/
total 10
drwxrwxrwt 10 root     root     3072 Feb  1 01:00 .
drwxr-xr-x 23 root     root     4096 Jun  2  2015 ..
drwx------  2 huxx     huxx     1024 Feb  1 00:36 ssh-DhNiAzWTEV
huxx@lnx:~# ls -la /tmp/ssh-DhNiAzWTEV
total 4
drwx------  2 huxx huxx 1024 Feb  1 00:36 .
drwxrwxrwt 10 root root 3072 Feb  1 01:01 ..
srwxr-xr-x  1 huxx huxx    0 Feb  1 00:36 agent.18922

Is there a solution for it?

So, is there a solution to the aforementioned issue? Luckily, yes there is. There are SSH key agents out there that ask for your permission before allowing access to the private key. For Windows, one such solution is the KeeAgent plugin for the KeePass password manager: it allows a password/confirmation prompt to be shown every time someone or something tries to access the private key. The same combination will also work on macOS with a bit of work, by porting the Windows application using Mono for Mac and adding an ssh-askpass script to the system. The exact solutions will be shown in follow-up posts.
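For completeness, OpenSSH’s own agent has a similar guard built in: keys loaded with `ssh-add -c` require confirmation (via ssh-askpass) before every use, which blunts the forwarded-agent abuse described above. A quick sketch with a throwaway key and a private agent (key path is my own):

```shell
#!/bin/sh
# Start a private agent and load a disposable key in "confirm" mode.
eval "$(ssh-agent -s)" >/dev/null
ssh-keygen -t ed25519 -N '' -f /tmp/demo_ed25519 -q

# -c: the agent will ask for confirmation on every signing request
ssh-add -c /tmp/demo_ed25519 2>/dev/null

listed=$(ssh-add -l)   # the key is listed and usable, but guarded
echo "$listed"

ssh-agent -k >/dev/null
```

Combined with `ForwardAgent no` in ~/.ssh/config for hosts you don’t fully trust, this closes most of the gap without giving up the convenience entirely.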

Edit:
Solution for Windows users described here: https://www.huxxit.com/index.php/2018/02/02/safer-ssh-key-usage-windows-just-using-putty-pageant/