Windows 10 Media Creation Tool error 0x80004005 fix

When trying to create a Windows 10 USB installation disk you may get an error with code 0x80004005 and end up scratching your head, wondering why it isn’t working. When I ran into that error, what helped me get around it was emptying the Windows Update cache by doing the following:

  • Open Command Prompt with administrator rights
    Click the Start menu button and type cmd. A best match of “Command Prompt” will appear; right-click it and select “Run as administrator”.
  • Using the previously opened Command Prompt, stop the Windows Update service by typing the following:
    net stop wuauserv
  • If your computer is part of a Windows domain, it might not have the Windows Update service running but the “Update Orchestrator Service” instead; in that case stop it by typing the following:
    net stop "Update Orchestrator Service"
  • Next you need to stop the Cryptographic and Background Intelligent Transfer services, by typing the following commands:
    net stop bits
    net stop cryptsvc
  • Now rename some of the folders used by Windows Update so that it will re-create them, by typing:
    ren %systemroot%\System32\Catroot2 Catroot2.old
    
    ren %systemroot%\SoftwareDistribution SoftwareDistribution.old
    
    
  • Now start the previously stopped services back up again by typing:
    net start wuauserv
    
    net start bits
    
    net start cryptsvc
    
    rem and if necessary also the Update Orchestrator Service
    
    net start "Update Orchestrator Service"
  • And that’s it. Close the Command Prompt and retry creating your Windows 10 installation media.

If you are getting “Access is denied” on the “ren %systemroot%\SoftwareDistribution SoftwareDistribution.old” command and you haven’t stopped the “Update Orchestrator Service”, try stopping it first.
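If you end up doing this more than once, the same steps can be collected into a small batch file run from an elevated Command Prompt. This is only a convenience sketch of the commands above; the file name reset-wu-cache.cmd is made up.

rem reset-wu-cache.cmd - clear the Windows Update cache (run as administrator)
rem Stop the Windows Update related services
net stop wuauserv
net stop "Update Orchestrator Service"
net stop bits
net stop cryptsvc

rem Rename the cache folders so Windows Update re-creates them
ren %systemroot%\System32\Catroot2 Catroot2.old
ren %systemroot%\SoftwareDistribution SoftwareDistribution.old

rem Start the services back up
net start wuauserv
net start "Update Orchestrator Service"
net start bits
net start cryptsvc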

 


Check Point R77.30 management interface crypto hardening (WebUI and SSH Cipher change)

By default the management interfaces (WebUI/SSH) of a Check Point firewall use crypto settings that are not that great (MD5, SSLv3, etc. are enabled), but fortunately it is possible to change them.

The SSH daemon is configured just like on a normal Linux distribution, by editing /etc/ssh/sshd_config; Check Point’s support site also recommends modifying the SSH client configuration located in /etc/ssh/ssh_config. To change the encryption algorithms available when connecting to the firewall over SSH, add the following lines to the aforementioned configuration files using vi in Expert mode:

Ciphers aes256-ctr,aes256-cbc,aes128-ctr,aes192-ctr,aes128-cbc,aes192-cbc
MACs hmac-sha1

After modifying the config files, restart the SSH server using the following command:

 service sshd restart

If everything is fine, your connection survives the restart. If for some strange reason your SSH connectivity breaks and you can’t log back in, you can undo the previous changes using the terminal access available in the WebUI.
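Before restarting it is also worth letting sshd validate the modified configuration, and afterwards you can check from a client machine that a removed cipher is really rejected. This is a quick sanity-check sketch: the hostname fw-mgmt-host is a placeholder, the sshd binary path assumes the usual /usr/sbin location, and the exact behaviour of the client commands depends on your OpenSSH version:

# On the firewall, in Expert mode: test the sshd configuration syntax before restarting
/usr/sbin/sshd -t

# From a client: should fail with "no matching cipher found", since 3des-cbc is not in the allowed list
ssh -o Ciphers=3des-cbc admin@fw-mgmt-host

# From a client: should connect (and prompt for login) using one of the allowed ciphers
ssh -o Ciphers=aes256-ctr admin@fw-mgmt-host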

Now that the SSHD settings have been changed, let’s change the cipher suites available for HTTPS used by the WebUI. Connect to the command line over SSH and do the following in Expert mode.

  1. Backup the current file /web/templates/httpd-ssl.conf.templ:
    [Expert@HostName:0]# cp /web/templates/httpd-ssl.conf.templ /web/templates/httpd-ssl.conf.templ_ORIGINAL
  2. Edit the current /web/templates/httpd-ssl.conf.templ file:
    [Expert@HostName:0]# vi /web/templates/httpd-ssl.conf.templ
  3. Find the line containing the SSLCipherSuite parameter and change the value after it, for example to ECDHE-RSA-AES256-SHA384:AES256-SHA256:!ADH:!EXP:RSA:+HIGH:+MEDIUM:!MD5:!LOW:!NULL:!SSLv2:!SSLv3:!eNULL:!aNULL:!RC4
  4. Save and close the editor using :wq! ; the ‘!’ at the end overrides the fact that the file has read-only permissions.
  5. Update the current configuration of HTTPD daemon based on the modified configuration template:
    [Expert@HostName:0]# /bin/template_xlate : /web/templates/httpd-ssl.conf.templ /web/conf/extra/httpd-ssl.conf < /config/active
  6. To activate the configuration changes, restart the HTTPD daemon using the “tellpm” command (the first command stops the httpd2 process, the second starts it again):
    [Expert@HostName:0]# tellpm process:httpd2
    
    [Expert@HostName:0]# tellpm process:httpd2 t

To figure out what you actually want to use as the SSLCipherSuite value, you can use cpopenssl to see which algorithms will be available with a given value. Example:

[Expert@HostName:0]# cpopenssl ciphers -v 'ECDHE-RSA-AES256-SHA384:AES256-SHA256:!ADH:!EXP:RSA:+HIGH:+MEDIUM:!MD5:!LOW:!NULL:!SSLv2:!eNULL:!aNULL:!RC4' | sort -k1

Example output (excerpt):

AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1
AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1
DES-CBC3-SHA SSLv3 Kx=RSA Au=RSA Enc=3DES(168) Mac=SHA1
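Once the new configuration is active, you can also probe the WebUI port from another machine to confirm that the excluded ciphers are really refused. A quick sketch only; replace fw-mgmt-ip with your management address, and note that the available flags depend on the openssl build on your client:

# Should fail during the handshake, since RC4 was excluded with !RC4
openssl s_client -connect fw-mgmt-ip:443 -cipher RC4 < /dev/null

# Should complete the handshake and print the negotiated cipher
openssl s_client -connect fw-mgmt-ip:443 -cipher ECDHE-RSA-AES256-SHA384 < /dev/null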


Renewing F5 BigIP LTM expired device certificates

Every once in a while it is necessary to renew the device certificates on your BigIP devices, which are used for the Web UI (XUI) connection. It’s easy enough to do using the web interface. When the certificate hasn’t expired yet, just log in to the Web UI using any web browser you like; but when the certificate has already expired, Edge/Chrome/Firefox won’t let you in (no, there is no “proceed” button, since the management interface uses strict settings), while Internet Explorer will still work. If you don’t have Internet Explorer available, it can also be done via the command line interface.

To renew the device certificate using the web interface, just log in to the management interface, go to the page System ›› Device Certificates : Device Certificate and click on the Renew button. There you can choose whether you want to create a new self-signed certificate or generate a certificate request for your company’s internal CA, or some external CA if you prefer.

In a clustered environment, after you renew the certificate on one device you need to sync the configuration between the devices before proceeding to update the others. If you don’t do a config sync in between, you may end up having to renew the already renewed certificates again, as the config sync will push the old certificates back into active state on the other devices, since it has no information about the peer’s new certificates.

When renewing device certificates using the command line you will need to use openssl to generate the new rsa private key and certificate request and then use tmsh to activate the newly created key/certificate pair.

OpenSSL command example for generating a new RSA key and creating a certificate request:

openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key

OpenSSL command example for generating a new self signed certificate:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt

The newly created private key should be placed in the /config/httpd/conf/ssl.key/ directory and the newly created certificate in the /config/httpd/conf/ssl.crt/ directory, matching the paths used below. After you have placed them there, the command to activate the new key/certificate pair using tmsh is:

tmsh modify /sys httpd ssl-certkeyfile /config/httpd/conf/ssl.key/new-private.key ssl-certfile /config/httpd/conf/ssl.crt/new-certificate.crt
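To sanity-check the result, you can look at the new certificate’s validity period and confirm which files httpd is now set to use. A small sketch; the file names are the same example names used in the tmsh command above:

# Check the validity period of the newly installed certificate
openssl x509 -in /config/httpd/conf/ssl.crt/new-certificate.crt -noout -dates

# Confirm which certificate/key files httpd is configured with
tmsh list sys httpd ssl-certfile ssl-certkeyfile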

 


Policy Based Routing resulting in no ARP replies from gateway

One might think that applying Policy Based Routing (PBR) would not affect ARP (Address Resolution Protocol), because they work on different layers: PBR should affect only Layer 3 routing decisions, and ARP runs somewhere below Layer 3. There are many discussions on the internet about whether ARP is a Layer 2 or Layer 3 protocol, and some people tend to call it Layer 2.5.

As it turns out, PBR can affect ARP. Say, for example, you wish to re-route every packet originating from the 192.168.1.0/24 network and create a policy route stating that everything with a source of 192.168.1.0/24 be routed to, let’s say, the gateway 172.16.1.1, without specifying any port or protocol. What will happen is that ARP requests sent as broadcast still work, but unicast ARP requests won’t get replies any more, at least from Check Point firewalls. So you would need to either make two rules restricting the policy route to TCP and UDP only, based on your needs, or follow Check Point support’s guidelines: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk84480
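If you suspect you are hitting this, one quick way to see it is to watch ARP on a host in the affected network while the problem is occurring. This is a diagnostic sketch only; the interface name eth1 is a placeholder:

# -e prints the link-level headers, so broadcast requests can be told apart from unicast ones;
# unicast who-has requests towards the gateway that never get an is-at reply back are the symptom
tcpdump -eni eth1 arp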


Why using VMware vMotion on an active F5 BigIP LTM VE cluster member can be a bad idea

Although F5 states that starting from version 11.5 it supports vMotion for moving a BigIP LTM VE instance between physical hosts (K15003222), it can sometimes still cause issues, even on the newer 12.x series software. For those who don’t want to click on the link and read what F5 has to say about it, here are their recommendations for using vMotion:

  • You should perform a live migration of BIG-IP VE virtual machines on idle BIG-IP VE virtual machines. Performing a live migration of the BIG-IP VE system while the virtual machine is processing application traffic may produce unexpected results, such as dropped connections.
  • Using the vMotion feature to migrate one member of a high availability (HA) pair should not cause a failover in most cases. However, F5 recommends that you thoroughly test vMotion migration for HA systems, as you may experience different results, depending on your environment.

Well, having tested it, I have to say that yes, moving an active member is a bad idea, since it can have “nice” side effects in certain cases. I like their “unexpected results” statement: I have seen one BigIP LTM instance drop half of its inbound connections after vMotion, in such a way that even after a reboot and an upgrade to a newer patch level it kept dropping connections from certain IP addresses. Those connections didn’t even show up in tcpdump, and no, they didn’t go over to the standby node either; they just vanished, and as soon as that device was forced to standby and the other node went active they re-appeared. So be very careful about what you migrate during the night, as unexpected things might happen…

But at least in my case, running vMotion on the BIG-IP VE virtual machine again, this time while it was in standby mode, and then making it active again got traffic flowing normally.


Insane amount of IKE SAs on an SMB device caused by DPD, and errors in logs

It seems that Check Point 1400 series SMB devices don’t handle Dead Peer Detection (DPD) that well when an external partner suddenly decides to enable it on a 3rd party firewall. Namely, what happens is that you end up with tens of thousands of IKE SAs on your little Check Point box and “Traffic Selector Unacceptable” errors in your logs.

In my case it didn’t cause any problems besides me being unable to see the output of the “vpn tu” command, since the DPD-created IKE SAs flooded my console and the Embedded Gaia vpn tu utility decided not to show its entire output, and it even crashed a few times. I ended up calling the other side and telling them to disable DPD. Hope they fix DPD support in some newer software release…


CheckPoint to Amazon AWS VPN connection issue

When trying to create a VPN tunnel between a CheckPoint firewall and the Amazon managed VPN service I happened upon an unpleasant surprise.

Namely, when using stronger crypto methods than defined by default in the guides by CheckPoint or Amazon, you will run into an issue where the CheckPoint device starts dropping traffic for a roughly 5-minute period after Phase 2 key exchanges. To be more exact, the traffic from Amazon to the hosts/networks behind the CheckPoint GW will start failing, while connections started from behind the CheckPoint device will continue working as before. The Amazon VPN service refreshes its keys 5 minutes before the lifetime set in the VPN properties runs out, whereas CheckPoint does so roughly 30 seconds before. It actually wouldn’t be a problem if Amazon used the same parameters that were used to initially establish the tunnel, but it doesn’t: it uses DH group 2 to initiate the key exchange, after which the CheckPoint device starts dropping the traffic coming in from the Amazon service with the following error:

encryption failure: Packet was decrypted with methods which are different from the methods according to the security policy - Gateway and Peer use different DH groups

After talking to both CheckPoint and Amazon support, I can say that the only thing you can do to remedy this is to set the DH group for PFS to 2.

Amazon in its documentation (here) states that it supports a bunch of different DH groups, yet it defaults to DH group 2 when initiating the connection itself. To be honest, it seems a bit strange to me that the AWS VPN mirrors the encryption/integrity settings of the previous negotiation but doesn’t remember the PFS settings and falls back to DH group 2. The only thing AWS support suggested was to force the CheckPoint device to exchange keys before the AWS service does. Unfortunately, according to Check Point support, you cannot do that, as there is no such setting available, and that timer fires around 30 seconds (plus or minus some random number of seconds) prior to the end of the lifetime set in the VPN properties.

 

 


Network security policy installation failure fix

Sometimes your network policy installations on Check Point devices might fail. For me it happened after updating a gateway cluster to the “latest and greatest” R77 version. I was unable to push the policy and was getting the “"/opt/CPSFWR77CMP-R77/conf/policy-name.pf", line 912700: ERROR: target <fw-name> is prohibited” error message.

In order to see what is actually causing the error you will need to log in to the management server via SSH. Go into “expert mode” and look at what is on the line that the error message is pointing at.

So basically, to work around the issue in my case, I did the following:

  1. Logged in to the security management server in expert mode
  2. Opened the policy file at the line it was complaining about with the less command (hints on how to go to a specific line can be found in the Stack Overflow topic here; see also the sketch after this list).
  3. In my case, on that specific line I saw a list of DPD (Dead Peer Detection for IPsec VPN) peers, which hinted that I should try disabling DPD.
  4. Logged in to the management server using SmartDashboard, removed the permanent tunnel ticks on the VPNs relating to the GW cluster with the issue and tried installing the policy.
  5. The policy installed successfully.
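For reference, a couple of ways to jump to a specific line of the policy file in expert mode. A minimal sketch; the file name and line number are the ones from the error message above:

# Print just the offending line
sed -n '912700p' /opt/CPSFWR77CMP-R77/conf/policy-name.pf

# Or open the file in less and jump straight to that line
less +912700g /opt/CPSFWR77CMP-R77/conf/policy-name.pf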

After that I reported the bug to Check Point support and they confirmed the issue.


Enabling DPD on VPN instead of tunnel_test on R77 gateway

To keep VPN tunnels alive, Check Point by default uses its proprietary tunnel_test protocol. In order to get keepalives working with 3rd party vendors, it isn’t enough to have the partner device set up as an “Interoperable device” and to set the tunnel keepalive method on your own gateway object to DPD; you also need to set the peer gateway’s tunnel keepalive method to DPD, because by default it is still set to tunnel_test.

To change the keepalive method you need to do the following, as described on Check Point’s website here:

  1. In GuiDBedit, go to Network Objects > network_objects > <gateway> > VPN > tunnel_keepalive_method.
  2. For the Value, select a permanent tunnel mode.
  3. Save.
  4. Install policy on the gateways.


ConfigSync issue upon resource pool member IP address change

Well, as it turns out, changing a node’s IP address in a clustered environment doesn’t go as smoothly as one would expect. Not only that, F5 has made an annoyingly complex procedure out of something as simple as changing one back-end server’s IP address. What I mean by that is that in order to change the IP address of a node you actually have to delete the node and then re-create it. But when that node runs a ton of services on different ports and is part of a large number of resource pools, you have to remove it from all of the resource pools before you can delete it, and after creating the node again you have to put it back into the pools you need it to be in.

As it turns out, in a clustered environment, when you do the aforementioned procedure and then try to sync the cluster member settings manually, it will fail with an error message saying something like this:

0107003c:3: Invalid pool member modification. An IP address change from (192.168.1.1) to (192.168.1.2) is not supported.

So in order to avoid that error, you need to sync the devices once you have finished all the node deletion steps; only after that config sync should you proceed with creating the node with the new IP address.

If you are already in the state where it refuses to sync, what helps is deleting the troublesome node on the secondary device as well and then performing the config sync.
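For reference, here is a rough tmsh outline of the whole procedure run on the active unit. This is only a sketch: my_pool, the member port 80 and the device group name my_device_group are placeholders, the node is assumed to be named after its IP address, and the IP addresses are the ones from the error message above:

# Remove the old member from every pool it belongs to, then delete the node
tmsh modify ltm pool my_pool members delete { 192.168.1.1:80 }
tmsh delete ltm node 192.168.1.1

# Sync the deletion to the peer before re-creating anything
tmsh run cm config-sync to-group my_device_group

# Re-create the node with the new address and add it back to the pools
tmsh create ltm node 192.168.1.2 address 192.168.1.2
tmsh modify ltm pool my_pool members add { 192.168.1.2:80 }

# Sync again so both devices have the new configuration
tmsh run cm config-sync to-group my_device_group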
