Category: Networking

Windows 10 WiFi ignoring DHCP DNS settings

After a long period of home office, it seemed that my computer no longer wanted to work in any other WiFi network. It showed “no internet connection” on every network except my home one.

When looking into the connection settings, I saw that the adapter was still showing my home DNS server. No matter what network I connected to, be it my phone's hotspot or anything else, it stayed the same.
Example output of the netsh command:

C:\WINDOWS\system32>netsh interface ipv4 show config name="Wi-Fi"

Configuration for interface "Wi-Fi"
DHCP enabled: Yes
IP Address: 10.1.0.38
Subnet Prefix: 10.1.0.0/24 (mask 255.255.255.0)
Default Gateway: 10.1.0.1
Gateway Metric: 0
InterfaceMetric: 70
DNS servers configured through DHCP: 172.31.1.1
Register with which suffix: Primary only
WINS servers configured through DHCP: None

So I tried using the “netsh” command to reset it by setting a static DNS server:
netsh interface ipv4 set dnsservers name="Wi-Fi" source=static address=8.8.8.8

Now I had working name resolution, but having to set a correct DNS server manually for every network I use is not a fix, so I switched it back to DHCP:
netsh interface ipv4 set dnsservers name="Wi-Fi" source=dhcp

Name resolution broke again, as “show config” returned my home DNS server once more. So I turned to the Windows registry to find where that IP address was stored. The search yielded the following result: under Computer\HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces\{interface-uid} there was a value called ProfileNameServer, and its data matched my problematic DNS server entry. After deleting that value and reconnecting to the WiFi, I finally saw that the DHCP-provided DNS server list was being used and the network connection worked normally again.
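
For reference, a minimal sketch of that cleanup from an elevated command prompt; the {interface-uid} part is specific to your adapter, so the exact path below is an example rather than something to copy verbatim:

reg query "HKLM\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces" /s /f ProfileNameServer
reg delete "HKLM\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces\{interface-uid}" /v ProfileNameServer
ipconfig /flushdns

The query finds which interface GUID still holds the leftover value, the delete removes it, and the flush clears any cached lookups before you reconnect.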

Check Point to Cisco ASA IKEv2 VPN with SHA-256 “no proposal chosen” – Timed out

When creating a VPN tunnel between a Cisco ASA 9.x and a Check Point firewall using IKEv2 and integrity algorithms stronger than SHA-1, you might run into a small issue where Phase 1 comes up without problems but Phase 2 shows timeouts in the Check Point logs.

After seeing the timeouts, you enable VPN debugging and find a “No Proposal Chosen” message coming from the ASA side in the ikev2.xmll log. You then compare the crypto configurations on both sides and see that they are identical. If that is the case, there might be a pseudo-random function (“prf”) mismatch. To get around it, try the following command on the Cisco side:

prf sha

It’s only doable on the Cisco side, as Check Point doesn’t let you change this value. That was supposedly the only change the Cisco admin made on the peer gateway, after which the tunnel came up.
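
For context, a rough sketch of what the relevant IKEv2 policy could look like on the ASA side; the policy number, encryption, DH group and lifetime are illustrative, the point being the prf line set to SHA-1:

crypto ikev2 policy 10
 encryption aes-256
 integrity sha256
 group 14
 prf sha
 lifetime seconds 86400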

Check Point R77.30 new sub interface not forwarding traffic

As it seems, on Check Point R77.30 Take_351 it is possible that after adding a new VLAN interface the gateway fails to route traffic for it. When looking at the cluster status, everything seems OK. But when you take a look at the routing table, you notice that the newly added network is actually missing.

Doing the usual “cpstop & cpstart” does not fix the issue. What was actually needed to get it forwarding traffic was the good old “have you tried turning it off and on again”. If it happens on your primary cluster node, just fail over to the secondary node and reboot.
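
For reference, a rough sketch of the verification and the failover, assuming a standard ClusterXL setup (run from expert mode on the affected member; these are generic commands, not specific to this Take):

cphaprob stat
netstat -rn
clusterXL_admin down
reboot

cphaprob stat shows the members' states, netstat -rn confirms whether the newly added network is really missing from the routing table, clusterXL_admin down forces the failover to the other member (it is not persistent across reboots unless you add -p), and the reboot brings the misbehaving node back up, after which it should rejoin as standby.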

Some connections getting reset on CP R77.30

On one system I started hearing complaints that a server could not connect to an outside service normally. The connection opens and then gets reset after a while.

When looking at tcpdump captures from both the client and the server side, we saw that the culprit had to be the Check Point firewall in between: the internal server got a tcp-rst with the source address of the external server, and the external one got a tcp-rst with the source address of the internal server. SmartConsole logs showed only that the connection was being allowed through, and nothing more.

So to find out what was going on, I turned to zdebug, which showed the following line:

[DATE TIME];[cpu_10];[fw4_5];fw_log_drop_ex: Packet proto=6 Internal_Address:47891 -> ExternalHost:443 dropped by fwpslglue_chain Reason: PSL Reject: ASPII_MT;
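
For reference, the line above came from the kernel debug drop log; a typical way to capture it is roughly this, run on the gateway and filtered on the affected host (whose address is anonymized here):

fw ctl zdebug + drop | grep Internal_Address

Keep in mind that zdebug can be noisy and heavy on a busy gateway, so keep the capture short.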

Googling the “dropped by fwpslglue_chain Reason: PSL Reject: ASPII_MT;” message led me to sk119432, which pointed me towards the Application Control blade that someone had previously activated on that gateway. When looking at the Application Control policy, I found that the particular internal host was not included in it. So I added the source host to the Application Control policy, the traffic started flowing normally, and the tcp-rst packets to both hosts stopped.

To me it is interesting that all Application Control rules had logging and accounting defined on them, even the final drop rule, and yet no Application Control intercepts were logged. Besides that, the connection was allowed to open and always stayed up for a few seconds before getting reset.

“According to the policy the packet should not have been decrypted” after management upgrade to R80

It seems that Check Point has changed the way policies are compiled/handled. At one installation, I saw a VPN connection break right after the policy was installed for the first time from the upgraded management server. Right after that, “According to the policy the packet should not have been decrypted” started appearing in the logs and traffic started being dropped.

A bit of looking around on the Check Point support site pointed to multiple objects having similar VPN domains. After going through the object database, I managed to find a “copy” of the peer gateway object: basically, two gateway objects had the same external IP address and similar VPN domains defined. Fortunately that “duplicate” entry wasn’t in use, so I deleted it. After deleting it and pushing the policy again, the traffic started flowing again.

I wonder why the upgrade verification process showed no errors or warnings. To me it seems it should have given some sort of warning about this, as it breaks things.

Nice feature/bug in Check Point NAT

It seems that I have stumbled upon an interesting Check Point firewall NAT behavior. Namely, the firewall does something it is not configured to do: it translates the source IP to a different address than the one in the policy.

I had a static inbound NAT rule where the source address of the client was supposed to be translated to another static address. What happened was that, yes, the policy installed fine, and yes, the client IP address was changed and hidden from the service. But it was changed to an incorrect address: the rule stated that the client IP address should be changed to, for example, 172.16.2.10, but it was actually translated to 172.16.2.100. And that new address did not exist anywhere in the policy database.

And after upgrading that particular management server to R80.20, it refused to compile the policy with that NAT rule in place. Fortunately, recreating the NAT rule removed the issue.

The error for that issue in the FWM debug logs was:
“Invalid Object in Source of Address Translation Rule 57. The range size of Original and Translated columns must be the same.”

The interesting thing is that it was fixed simply by deleting the rule and re-adding it. And yes, it was a one-address-to-one-address translation, not a network to one address.

Upgrading a secondary management server to R80.20 GA issue

The Check Point upgrade guide for R80.20 is quite clear in saying that on a secondary server you should perform a clean install. All is fine and dandy with that. I don’t know if I am the first one to actually run into a problem with it.

When using the CPUSE clean install feature in the WebUI, it seemed to work. That is, after I had resized the partitions to give it enough space to perform the upgrade: it was arguing that the required 30GB of free space was actually only 29.83GB. So, after making the currently active install partition smaller, all seemed well. Until the reboot, that is.
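
For reference, a very rough sketch of the resize step, assuming your Gaia version ships the interactive lvm_manager utility (run from expert mode, and take a backup first, as shrinking partitions is not risk-free):

df -h
lvm_manager

df -h shows the current usage, and lvm_manager walks you through resizing the Gaia logical volumes so that CPUSE sees enough free space.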

After the server rebooted, I could see the new WebUI and was excited that it had actually kept the original IP address. As CPUSE had warned that all settings would be wiped, that was a nice surprise.

Unfortunately, the nice things ended there. I was unable to log in: neither the default “admin/admin” combo nor my previous credentials worked. I found no hint on the Check Point support site either.

After a bit of googling around, I ended up just downloading the clean install ISO and re-installing the machine. If I had used the ISO instead of CPUSE in the first place, I would have saved a lot of time. I guess the CPUSE clean install feature is not that clean yet.

F5 Big-IP LTM expired password issue

Although the issue I am writing about no longer exists in version 13.x, it is still relevant for older versions.

Namely, when a user fails to change their password before it expires completely, they can’t log in to the web interface any more. They don’t get an error saying that their password has expired, nor a prompt to change it. They simply get an error about invalid credentials.

Initially, when investigating the issue, I changed the affected users' passwords manually. But then I asked one user to try logging in over SSH. He was promptly asked to change his password, and after that he could successfully log in to the web interface again. And no, that user did not have CLI permissions. So if you are not in a hurry to upgrade to version 13.x and up, you still have a workaround.
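
For completeness, a hedged sketch of the manual reset from the CLI, in case the SSH workaround is not practical; the user name here is made up for the example:

tmsh modify auth user jdoe prompt-for-password

This prompts interactively for a new password for that account, after which the user can log in to the web interface and change it themselves.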

F5 Big-IP password policy behavior

As it turns out, F5 Big-IP LTM devices apply and check the password policy only when a user changes their password. What this means is that users who existed before the policy was applied will not have their passwords expire, and so on.
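
For context, the policy itself is managed from tmsh roughly like this; the property values below are illustrative, not recommendations:

tmsh list auth password-policy
tmsh modify auth password-policy policy-enforcement enabled max-duration 90 minimum-length 12

Keep in mind that, as described above, these settings only take effect for a given account the next time its password is set.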

I know that checking the strength of a password after it has already been set is “kind of hard”. But the least you could do is make existing passwords expire according to the policy: if no expiry time is set, one should be applied to all users, so the device actually complies with the policy it has configured. So, in my opinion, that is an oversight on F5’s part.

So in order to enforce the policy, you must make sure that your users change their passwords after the password policy changes, so that it actually applies to them.