F5 Big-IP LTM expired password issue

Although the issue I am writing about no longer exists in version 13.x, it is still relevant to older versions.

Namely, when a user fails to change their password before it expires completely, they can no longer log in to the web interface. They don’t get an error saying that their password has expired, nor do they get a prompt to change it. What they actually get is an error about invalid credentials.

Initially, when investigating the issue, I changed the affected users’ passwords manually. But then I asked one user to try logging in over SSH. What happened was that he was prompted to change his password, and after that he could successfully log in to the web interface again. And no, that user did not have CLI permissions. So if you are not in a hurry to upgrade to version 13.x or later, you still have a workaround.

F5 Big-IP password policy behavior

As it turns out, F5 Big-IP LTM devices apply and check the password policy only when a user changes their password. This means that users who existed before the policy was applied will not have their passwords expire, and so on.

I know that checking the strength of a password after it has already been set is “kind of hard”. But the least you could do is set existing passwords to expire according to the policy. Where no expiry time exists, one should be set for all users, so that the device actually complies with the policy it has configured. So, in my opinion, that is an oversight on F5’s part.

So in order to actually enforce the policy, you must make sure that your users change their passwords after the password policy changes.
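Since the policy is only applied at password-change time, simply setting a new password on an account is enough to bring it under the policy. As a sketch (standard tmsh commands with a placeholder user name — verify the exact syntax against your software version):

```shell
# Review the currently configured policy
tmsh list auth password-policy

# Setting a new password counts as a change, so the policy
# (including expiry) starts applying to this account
tmsh modify auth user someuser prompt-for-password
```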

E-mail spam, a way to sway policy makers’ decisions

Although politicians and law-making are not something I would usually write about, this is something I just found interesting.

I think by now everyone who has an e-mail address has come into contact with spam e-mails. Usually they are sent to sell you something or to do some phishing. But as it turns out, sending spam e-mails can also make politicians vote in certain ways.

A few days ago, I happened to hear an old recording of a radio show that had multiple politicians as guests, and Indrek Tarand, an Estonian representative at the EU, was one of them. When the topic of the new “EU copyright bill” came up, he did something that I wasn’t expecting. He completely baffled me with the reasoning behind his decision.

Namely, he said he voted for the bill because the people who were against it had supposedly used AI to send him spam to try to make him vote against it. And voting for the new law was his way of reacting to the hundreds of e-mails he got.

So as it turns out, you don’t need to spend a lot of money to lobby a politician into voting a certain way. Just try to press the right buttons by sending them spam e-mails. They might vote your way just because you spammed them not to.

No more digital privacy in Australia

As it turns out, the Australian House of Representatives has actually passed the “Telecommunications Assistance and Access Bill”. It is basically an anti-privacy bill that should come into effect as a law early in 2019. In essence, it requires tech companies to provide law enforcement agencies with access to users’ encrypted data. Talk of similar laws has been around for a long time already, but no one had actually passed one.

Although quite a few people are calling it an anti-encryption bill, it doesn’t actually require weakening the end-to-end encryption in applications or services. What it requires is that access to unencrypted data be provided from the end devices, or from some other point where the data is in plain-text form. In that sense it is a bit better than other anti-privacy laws that I have heard of. The lawmakers have acknowledged that weakening the encryption would grant anyone access to the data. But is forcing tech companies to build call-home features or back-doors into everything better? I think it is a bit better than having weak encryption.

But I also think that such anti-privacy features can still be abused by hackers. As soon as you add a back-door, there is a risk that someone could gain access to it and abuse it. There is no guarantee that only the legitimate users would get access. And, as always, it is said that the features would only be used when necessary, so they are trying to say that it wouldn’t be an all-out spying campaign on all the users all the time. But then the good old question comes to mind: how can you be sure that they are not spying on everyone? Simple, you can’t be. As soon as the possibility of eavesdropping exists, there is no guarantee of privacy.

Finding missing free disk space in Linux, the power of lsof

There might be times when you find that your Linux machine’s disk seems to be full and you can’t find the reason for it. You try to find the culprit with the du (disk usage) command, but with no success; the numbers just don’t add up. In that case the problem might actually be that some deleted files are still held open by a program. This can happen with a faulty logrotate configuration where you don’t tell the program writing the log to release the file, or when you manually delete a file that some program is still writing to.

In such cases the “lsof” command comes to the rescue. Basically, it does what the name says: it lists open files, even ones that have been deleted but are still in use. Here is an example of a command that I sometimes use to find whether there are deleted files that are still open:

lsof | grep deleted | awk '{$7=$7/1048576 "MB"; print}'

The output of the previous command lists the open deleted files, the processes still writing to them, and the sizes of the files (the awk part converts the size column from bytes to megabytes). This is some example output from when I last had to look for missing space:

java 32511 32646 tom 1w REG 12980.00024128MB 19510390447 6341662 /var/log/tomcat/log/catalina.out (deleted)
java 32511 32646 tom 2w REG 0.00024128MB 19510390447 6341662 /var/log/tomcat/log/catalina.out (deleted)

To reclaim the disk space, you simply need to kill or restart the program that is writing to the deleted file.
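If lsof happens not to be installed on the machine, the same information can be dug out of /proc directly. Here is a minimal sketch of that approach (Linux-only; the function name scan_deleted is just my own):

```shell
#!/bin/sh
# Walk every open file descriptor under /proc and report files
# that have been deleted but are still held open by a process.
scan_deleted() {
    for fd in /proc/[0-9]*/fd/*; do
        # The fd symlink of a deleted file ends in " (deleted)"
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            *' (deleted)')
                pid=${fd#/proc/}; pid=${pid%%/*}
                # stat -L follows the fd link to the (still open) file
                size=$(stat -Lc %s "$fd" 2>/dev/null) || continue
                printf '%s %sMB %s\n' "$pid" $((size / 1048576)) "$target"
                ;;
        esac
    done
}

scan_deleted
```

The output is one line per deleted-but-open descriptor, with the owning PID, the size in whole megabytes, and the original path, so just like with lsof you know which process to restart.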

What’s up with all the bad passwords out there

A bit over a week ago the list of the worst passwords of the year (2018) was released by SplashData. You can review it yourself at https://www.teamsid.com/100-worst-passwords-top-50/.

After having a look at it, I found myself amazed at people’s choices of password. It just baffles me that people are still using passwords like “password” or “1234”, and when websites require longer passwords they just keep counting up the numbers: instead of “1234” it’s now “12345678”.

Do people still actually think that their passwords don’t matter? That no one will guess their username and password? By now almost everybody must have heard of the constant takeovers of people’s social media accounts through simple password guessing. If not that, then people surely must have come into contact with someone trying to log in to their account at some point, via the warnings at Gmail or similar services. Surely that should make people think.

In order for a password to resist simple brute-force attacks, it doesn’t have to be complicated and hard to remember like “x1Ds$!abFrdc?”. You can just use your favorite quote from somewhere, which is very easy to remember and much more secure than the entries on the list. To be a bit on the safer side, you can add something to the beginning or end of it. That is just a precaution against attackers who actually do some research on you, so that it wouldn’t happen that an attacker sees that The Simpsons is your favorite TV show and guesses that your password is “Eatmyshorts!”.
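To put some rough numbers behind that claim, compare the brute-force search spaces (the character-set sizes below are my own assumptions: 95 printable ASCII characters for the “complex” password versus just 27 symbols, a–z plus space, for the passphrase):

```shell
# A short "complex" password drawn from all 95 printable ASCII characters
# vs. a longer lowercase passphrase drawn from only 27 symbols:
# length wins, because the exponent matters more than the base.
awk 'BEGIN {
    printf "13-char complex password: %.1e combinations\n", 95 ^ 13
    printf "25-char passphrase:       %.1e combinations\n", 27 ^ 25
}'
```

So even though the passphrase uses a far smaller alphabet, at 25 characters it offers roughly ten billion times more combinations than the 13-character jumble.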

F5 Big-IP health checks mark host resource down although it’s up

A couple of times I have happened to run across a strange issue on some F5 Big-IP LTM clusters where one of the nodes marks some resources as down although they are actually up, which can cause quite a lot of confusion and trouble.

At least in the cases that I have seen, TMM seems to start interpreting the output of health checks backwards for some hosts: in the logs you can see that the health check reported the host as up, and yet the host was marked as down. I have had it happen a couple of times with the 11.x series LTM software, and it has also happened with the 12.x versions, even at the latest patch levels. But I have not seen it happen with the 13.x version (yet).

So in order to get around the issue, I have usually just restarted the TMM process on the affected device, and everything has gone back to normal after it.

Basically, to restart the TMM, just log in to the device over SSH and issue the following command:

tmsh restart /sys tmm

Beware that restarting the TMM will cause the device to stop processing traffic. So, if you are seeing the issue on the device that is currently processing traffic and you are running a Big-IP cluster, do a fail-over first if you haven’t already.
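The fail-over can be done from the same SSH session; a minimal sketch with standard tmsh commands (run on the currently active unit, and check the state of the peer first):

```shell
# On the active unit: hand traffic over to the peer...
tmsh run /sys failover standby

# ...and then restart TMM on the now-standby unit
tmsh restart /sys tmm
```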

Like with many other issues, the phrase “have you tried turning it off and on again” comes to mind and saves the day.

Check Point 1400 series SMB device VPN debug log fast rotation work-around

If you have ever had to debug VPNs on a Check Point SMB device, you might have noticed that they rotate their logs every 1 MB, which means that sometimes you might miss the information you were looking for. At least for me it was a problem when trying to get debug-level information on some VPN issues that occurred randomly.

So in order to get the required output, I added a 32 GB SD card to the firewall to extend its small storage, made some symlinks, and wrote a little script to get all the output I required for debugging.

So, on to the details. After you have mounted your SD card, you have access to it at the path:

/mnt/sd

Before you enable debugging, you should make symbolic links for the ikev2.xmll and ike.elg files so that you don’t run out of space on the built-in flash. You can do that by using the following commands:

touch /mnt/sd/ike.elg /mnt/sd/ikev2.xmll
ln -sf /mnt/sd/ike.elg /opt/fw1/log/ike.elg
ln -sf /mnt/sd/ikev2.xmll /opt/fw1/log/ikev2.xmll

Now enable debugging like you usually would (see the relevant SK on the Check Point support site):

vpn debug trunc
vpn debug on TDERROR_ALL_ALL=5

And here is the script I used to copy the logs to the SD-card as they were rotated:

#!/bin/bash
# Poll sfwd.elg.0 once a second and copy it to the SD card
# whenever it has just changed (i.e. right after a rotation)
while true
do
    fmtime=$(stat -c %Y /opt/fw1/log/sfwd.elg.0)
    curtime=$(date +%s)
    diff=$((curtime - fmtime))
    if test "$diff" -le 1
    then
        cp /opt/fw1/log/sfwd.elg.0 "/mnt/sd/sfwd.elg-$fmtime"
    fi
    sleep 1
done

So basically, it checks every second whether the sfwd.elg.0 file has changed and copies the changed file to the SD card. I actually also experimented with using logger to send the log to a central server via syslog, but that just didn’t work: it sent the first file fine, but the changes afterwards were simply dropped, so I opted for the copying.

Fixing Smart Dashboard crashing after receiving “Disconnected_Objects already created by another user” error

Today I happened upon an error in Smart Dashboard after it randomly crashed and refused to start again. After the crash it kept showing me the error “Disconnected_Objects already created by another user” and crashing again. A quick lookup on Check Point’s support site gave me the idea that the SmartMap cache might be corrupted. So here is a quick copy-paste of the commands needed to reset the SmartMap cache in R77.30 on Gaia:

mkdir -p /var/tmp/SmartMap_Backup/
cpstop
cd $FWDIR/conf/SMC_Files/vpe/
mv mdl_version.C /var/tmp/SmartMap_Backup/mdl_version.C
mv objects_graph.mdl /var/tmp/SmartMap_Backup/objects_graph.mdl
cd $FWDIR/conf/
mv applications.C /var/tmp/SmartMap_Backup/applications.C
mv CPMILinksMgr.db /var/tmp/SmartMap_Backup/CPMILinksMgr.db
cpstart

After doing that I was able to start Smart Dashboard again and continue working! 🙂

If you are running your management server on Windows, or are actually using a Multi-Domain Server, you can find the commands needed to do the same on those systems in “sk92142”, which is about “SmartDashboard crashes when loading SmartMap data, after upgrading the Security Management Server”.

Windows Offline files not syncing in Windows 10

Usually I don’t have that many issues with Windows 10, but somehow after the last Windows update I lost control over the contents of the “Documents” folder, which was being synced with a file server. I was able to add files but never delete them, getting the error “Permission Denied”. I talked to the domain admin; he looked over the permissions on the file server, and all seemed fine there. Resetting the offline file sync cache, etc. (the usual hints you get while googling offline file sync issues) got me back permissions on my files, or so I thought. After leaving the office, I noticed in the evening that I had no more Documents at all. It turned out that after the reset, Offline Files was not syncing at all, and I was able to access the files only when I had connectivity to the file server. The issue was that Offline Files was stuck in a “sync pending” state and would not actually start the sync.

I tried the classics: “reboot” the computer (no, the sync would not start again), then resetting the offline files cache once more, with no success. What actually worked for me was running:

gpupdate /force

After the group policy had been re-applied, I clicked on the sync offline files button and voilà, it synced like a charm again.