Tag: F5

F5 Big-IP LTM expired password issue

Although the issue I am writing about no longer exists in versions 13.x and later, it is still relevant to older versions.

Namely, when a user fails to change their password before it expires completely, they can no longer log in to the web interface. They don’t get an error saying that their password has expired, nor do they get a prompt to change it. They just get an error about invalid credentials.

Initially, while investigating the issue, I changed the affected users’ passwords manually. But then I asked one user to try logging in over SSH. He was prompted to change his password, and after that he could successfully log in to the web interface again. And no, that user did not have CLI permissions. So if you are not in a hurry to upgrade to version 13.x or later, you still have a workaround.
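
For reference, a minimal sketch of both approaches, assuming a hypothetical user jdoe; the exact prompts may differ between versions:

# The locked-out user logs in over SSH and follows the password change prompt
ssh jdoe@bigip-management-address

# Or an administrator resets the account from tmsh (it prompts for a new password)
tmsh modify auth password jdoe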

F5 Big-IP password policy behavior

As it turns out, F5 Big-IP LTM devices apply and check the password policy only when a user changes their password. This means that users who existed before the policy was applied will not have their passwords expire, and so on.
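
You can check what the device believes its policy is from tmsh; the exact output fields may vary by version:

# Show the configured password policy (expiry, length, complexity requirements)
tmsh list auth password-policy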

I know that checking password strength after the password has already been set is “kind of hard”. But the least the device could do is make existing passwords expire according to the policy. If no expiry time exists for an account, one should be set for all users, so that the device actually complies with the policy it has configured. So, in my opinion, that is an oversight on F5’s part.

So, in order to actually enforce the policy, you must make sure that your users change their passwords after the password policy changes, so that the new settings actually take effect.
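
As a rough sketch, assuming the attribute names below exist in your version (check the tmsh reference for yours), you would first make sure enforcement and an expiry period are configured, and then have every pre-existing account set a new password so the policy attaches to it:

# Hypothetical policy: enforce complexity checks and expire passwords after 90 days
tmsh modify auth password-policy policy-enforcement enabled max-duration 90

# Each existing user then changes their password (GUI, SSH prompt, or an admin reset as above)
tmsh modify auth password jdoe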

ConfigSync issue upon resource pool member IP address change

Well, as it turns out, changing a node’s IP address in a clustered environment doesn’t go as smoothly as one would expect. Not only that, F5 have made an annoyingly complex procedure out of something as simple as changing one back-end server’s IP address. What I mean by that is that in order to change the IP address of a node you actually have to delete the node and then re-create it. But when a node runs a ton of services on different ports and is a member of a large number of resource pools, you have to remove it from all of those pools before you can delete it. And after re-creating the node, you have to put it back into every pool it belongs to.
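
To make that concrete, here is a rough tmsh sketch of the procedure with hypothetical names: node app01 moving from 192.168.1.1 to 192.168.1.2, a member of pool web_pool on port 80. Repeat the pool steps for every pool the node is a member of:

# 1. Remove the member from every pool that references the node
tmsh modify ltm pool web_pool members delete { app01:80 }

# 2. Delete the node itself (only possible once no pool references it)
tmsh delete ltm node app01

# 3. Re-create the node on the new IP address
tmsh create ltm node app01 address 192.168.1.2

# 4. Put it back into all the pools it belongs to
tmsh modify ltm pool web_pool members add { app01:80 }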

As it turns out, if you follow the aforementioned procedure in a clustered environment and then try to sync the settings to the other cluster member manually, the sync fails with an error message saying something like this:

0107003c:3: Invalid pool member modification. An IP address change from (192.168.1.1) to (192.168.1.2) is not supported.

So, in order to avoid that error, you need to sync the devices after you have finished all the deletion steps for the node. Only once the config sync is done should you proceed with re-creating the node on the new IP address.
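
With the sketch above, that simply means running the config sync between the delete and create steps, for example (sync-failover-group being a hypothetical device group name):

# Run after the node has been removed from its pools and deleted, before re-creating it
tmsh run cm config-sync to-group sync-failover-group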

If you are already in the state where the devices refuse to sync, what helps is deleting the troublesome node on the secondary device as well and then performing the config sync again.
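
A rough sketch of that recovery, again with hypothetical names, run on the secondary unit:

# On the secondary: remove the stale member and node still pointing at the old address
tmsh modify ltm pool web_pool members delete { app01:80 }
tmsh delete ltm node app01

# Then sync again from the unit holding the desired configuration
tmsh run cm config-sync to-group sync-failover-group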