APIsecure 2023 day 1 Red Track

“New conference in town”

Today was the first day of the APIsecure API security conference. As it was a free conference (and I somehow missed it last year), I didn't really know what to expect. I was expecting a lot of "product coverage", but I was pleasantly surprised that it was quite the opposite. One presenter who tried to pitch a product was actually cut off for it.

My Favorites of the Day

There were quite a few interesting talks, but my favorites of the day were:

  • Michael Taggart: "Beyond Vuln Management: How Adding Offensive Methodology Made Our APIs More Secure"
  • Antoine Carossio and Tristan Kalos (Escape) workshop: "Discovering GraphQL Vulnerabilities in the Wild"
  • Ted Miracco: "Enhancing API Security with Runtime Secrets & Attestation"

Michael Taggart had the smoothest and most enjoyable presentation to follow. If the videos get uploaded, this is the one I will send to quite a few blue teamers I know. I totally agree with the idea that the blue team must know offensive tactics.

Antoine Carossio and Tristan Kalos had a lot of technical (internet) issues that made the talk a bit hard to follow. Issues aside, I actually liked it a lot and learned something new. I hadn't previously taken the time to go into the details of GraphQL vulnerabilities, and this talk gave me new ideas on what to try when doing an assessment.

Ted Miracco's talk on mobile app API security was quite interesting as well, proposing some interesting ideas on improving the security of apps. To be honest, the leak statistics shown in the talk were worse than I thought they would be.

All the talk videos/slides are supposed to be uploaded to their website some time after the conference. Can't wait to be able to go through the slides and "perfect my notes" on GraphQL. The conference website can be found here.

Does VMware Workstation Pro 15.5 run on Windows 11???

As Microsoft has stopped selling Windows 10 licenses and Windows 11 has been out for quite a while now, I thought I'd give it a try. The first questions that came to mind were: does everything I need for work actually work there, and what do I need to change? Although VMware itself states that Workstation 15.5 on Windows 11 isn't a supported setup, I still thought I'd give it a try before paying for the upgrade.

So here's what you can expect from this setup (or at least how it was for me). It somewhat worked:

  • Some VMs required a "VM hardware upgrade"
  • None of the VMs with more than 1 CPU/core would even start – they threw errors and refused to start until the extra cores were removed
  • 3D acceleration issues inside VMs when using a GUI (Gnome/KDE, etc.) – the GUI worked, but the image was sometimes blurry and there were resolution issues when resizing the VM window
  • The Suspend VM button instantly crashes/shuts the VM down

So if you don't need multi-core VMs, 3D acceleration or the suspend (standby) functionality, then it might work for you. But unless you're just curious and bored, I'd recommend skipping this trial-and-error phase and simply upgrading to 17.
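If you do end up poking at this combination anyway, the settings behind the symptoms above live in each VM's .vmx file, which you can inspect from any shell (or just open in a text editor on the Windows host). A quick sketch; the path is only an example and tweaking these values is no substitute for a supported Workstation version:

# inspect hardware version, vCPU count and 3D acceleration flags of a VM
grep -E "virtualHW.version|numvcpus|mks.enable3d" "/path/to/MyVM/MyVM.vmx"

# drop a VM back to a single vCPU before trying to boot it
sed -i 's/^numvcpus = .*/numvcpus = "1"/' "/path/to/MyVM/MyVM.vmx"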

Tenable.SC plugin/feed updates failing & disk full

Today I was called to help with a Tenable.SC instance that failed to update its plugins. It turned out it had its "/opt" partition 100% full. A little investigation into where the space had gone showed that the "/opt/sc/data/" folder was full of "feed.XXXX" folders, each 2.4GB in size (~130+ GB in total).

Looking at the logs (/opt/sc/admin/logs/sc-error.log), I could see that feed updates had been failing since December 6th:

PHP Fatal error: Allowed memory size of 1782579200 bytes exhausted (tried to allocate 20480 bytes) in /opt/sc/src/lib/FeedLib.php on line 2769

So in order to get SC updating itself normally again, I removed all the unneeded feed folders (everything except the latest feed update attempt) by running the following command:

find /opt/sc -name "feed.*" -ctime +1 | xargs rm -rf
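If you want to double-check before piping anything into rm -rf, the same expression can be run on its own first to list what would be removed (just a sanity check on my part, not something from Tenable's docs):

# list what the delete would match before actually removing anything
find /opt/sc -type d -name "feed.*" -ctime +1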

Next, in order to fix the failing feed update and prevent it from filling up the disk again within a month, I had to increase the PHP memory limit. To do that I edited "/opt/sc/support/etc/php.ini" and turned the memory limit up to 1900M (its default value was 1700M). After that I restarted SC by running:

service SecurityCenter stop && service SecurityCenter start
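For reference, the php.ini change itself is just the standard PHP memory_limit directive (that's what produces the "Allowed memory size ... exhausted" error), so it can also be done in one line before the restart. A sketch, assuming the stock directive format in that file:

# check the current value, then bump it to 1900M
grep -n "memory_limit" /opt/sc/support/etc/php.ini
sed -i 's/^memory_limit\s*=.*/memory_limit = 1900M/' /opt/sc/support/etc/php.ini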

Additional thoughts on SC disk cleanup can be found in a couple of posts on Tenable's website.

Kali ticket_converter.py issue fix

When trying to convert Kerberos tickets with ticket_converter.py, I ran into a little issue with it being unable to import a name. To be more exact, the specific error is:

Traceback (most recent call last):
  File "ticket_converter.py", line 30, in <module>
    from impacket.krb5.ccache import CCache, Header, Principal, Credential, KeyBlock, Times, CountedOctetString
ImportError: cannot import name 'KeyBlock' from 'impacket.krb5.ccache' (/usr/local/lib/python3.8/dist-packages/impacket-0.10.1.dev1-py3.8.egg/impacket/krb5/ccache.py)

Fortunately the fix was quite easy after looking into the impacket file mentioned in the error. It seems they have changed the class names and added versioning:

grep -i keyblock /usr/lib/python3/dist-packages/impacket/krb5/ccache.py
class KeyBlockV3(Structure):
class KeyBlockV4(Structure):

Seeing that, I realized the fix is easy: just change all occurrences of the word KeyBlock to KeyBlockV4. Open up ticket_converter.py in vim and type ":%s/KeyBlock/KeyBlockV4/g". That fixed it for me. Happy Hacking! 🙂
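If you'd rather skip vim, a sed one-liner from the shell does the same rename:

# replace every occurrence of KeyBlock with KeyBlockV4 in place
sed -i 's/KeyBlock/KeyBlockV4/g' ticket_converter.py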

OWASP ZAP/ZED Attack Proxy missing after Kali upgrade

If you find yourself looking through the menus after updating/upgrading your Kali instance and not finding OWASP ZAP any more, and even the locate command only returns former paths to zap files that no longer exist, fortunately the fix is easy: just run the updatedb command as root/with sudo. That should fix the issue and ZAP should be visible in the menus again. At least that worked for me.
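For completeness, these are the commands I mean; the locate call just confirms the file database is fresh again (assuming the zaproxy package itself is still installed):

# rebuild the locate database so the stale paths disappear
sudo updatedb

# confirm the current zaproxy paths are now indexed
locate zaproxy | head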

Couldn’t create partition or locate an existing one in Windows 10

Over the weekend my Windows 10 install decided to completely go nuts. Ok, it was my fault, as it happened after some exploitation attempts. But during the re-install I ran into a small issue: when trying to re-install on my NVMe drive, the setup kept stating "Couldn't create partition or locate an existing one in Windows 10".

In regards to that error, there were some hints out there about using diskpart to clean/"reset the disk", which I didn't want to do, as I had things I wanted to keep on other partitions.

Fortunately I got away with only deleting all the Windows-related partitions on that disk. Namely, I deleted the Windows partition itself and the 2 recovery-related partitions, so all I had left was the data partition. After doing that, the Windows installer stopped throwing the error and went on without any issues.
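For anyone who wants to do the same selectively from the installer's command prompt (Shift+F10) instead of wiping the whole disk with clean, the diskpart steps look roughly like this. The disk and partition numbers below are only placeholders, so read the list output carefully before deleting anything:

diskpart
list disk
select disk 0
list partition
rem delete only the Windows/recovery partitions, leave the data partition alone
select partition 3
delete partition override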

Nessus Essentials activation email issue

As it turns out, Nessus Essentials is having trouble sending out activation e-mails. I ran into it after installing Nessus on a Kali VM: I filled out the form, and although Nessus stated that the e-mail was sent successfully, no message ever arrived. Not even after a few more attempts. Fortunately there is a quick workaround; I just wish I had turned to Tenable's website a bit sooner. To activate Nessus Essentials, use Tenable's own website to request the activation code: go fill out the form at https://www.tenable.com/products/nessus/activation-code and don't wait for the one from your own installer, as it probably will never arrive. Happy Scanning!

Don’t add user editable scripts to root cron

On quite a few servers that I've managed to gain access to during pen-tests, I have found issues in filesystem permissions: the type of permission issues that end up with me gaining root privileges, aka privilege escalation.

When you gain access to a server, it always seems to be a good idea to check the crontab entries and cron logs, if you have access to them, and see whether any of the scripts are running with root user permissions.

If you find any root (or other useful user) entries in the logs, go and check the scripts' filesystem permissions. Quite often I have stumbled upon a root script that can be modified by the "service users". I don't exactly know why, but some people have scripts with "apache/www-data" write permissions run by root.

That is just a bad idea on so many levels. How come people don't realize that having root run whatever script a normal user can edit instantly gives root privileges to that user?
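A rough sketch of the kind of checks I mean when landing on a box (the script path below is just a placeholder for whatever you find in the crontab):

# system-wide cron entries are usually readable without root
cat /etc/crontab
ls -la /etc/cron.d/ /etc/cron.hourly/ /etc/cron.daily/

# cron activity in syslog (if readable) shows what root actually runs
grep CRON /var/log/syslog 2>/dev/null | grep root | tail

# then check whether any of those scripts are writable by your user/group
ls -l /opt/scripts/backup.sh        # placeholder path taken from a crontab entry
find /etc/cron* -writable 2>/dev/null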

Using Cisco WSA to bypass firewall and access networks you wouldn’t have access to

This is a short write-up of an old flaw I reported to Cisco years ago, to which they replied that they see no issue there.

While doing a security audit at a client, I stumbled upon a Cisco-WSA/11.5.2-020 appliance filtering HTTP traffic. As it was my first encounter with such a device, the first thing that came to my mind when seeing that header in the HTTP responses was: how can I abuse this? As it turns out, I actually could.

Setup description

It is a small corporate network with a few different segments separated by a firewall with a really strict access policy. Client computers don't have access to the management network, only to specific internal applications and the internet.

All internet-bound HTTP requests from the client computers' network get redirected by the firewall to the Cisco WSA using policy based routing.

The Issue

The client's firewall was blocking access to their management network from the users' segment, as it should. But I was able to bypass the firewall rules by adding an extra header to HTTP requests and effectively map all the hosts in their management network. As it turned out, they had too much trust in their Cisco appliance and firewall rule set and thought they didn't need to create "deny to internal" rules on the WSA. That gave them a false sense of security.

In this setup, when you add an extra "Host: x.x.x.x" header, the firewall won't know the true connection destination and thus won't be able to actually do its job: it only sees your computer connecting to the IP address of the original query destination. At the same time, the Cisco WSA ignores the destination your firewall thinks you are opening a connection to and actually establishes a connection to the host named in the added header. That effectively bypasses the firewall policy and relies purely on whatever policies are set on the WSA. An example from the notes I have from that time:

C:\>curl -kv http://google.com --header host:"192.168.90.1"
* Rebuilt URL to: http://google.com/
*   Trying 216.58.210.174...
* TCP_NODELAY set
* Connected to google.com (216.58.210.174) port 80 (#0)
> GET / HTTP/1.1
> host:192.168.90.1
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Location: https://192.168.90.1/
< Transfer-Encoding: chunked
< Date: Mon, 23 Aug 2021 12:16:19 GMT
< Via: 1.1 wsa.ent.int:80 (Cisco-WSA/11.5.2-020)
< Connection: close
<
* Closing connection 0

As you can see, I originally queried google.com, but the WSA actually returned the HTTP response from an internal host – in this case from the management network where the WSA management interface resides. Using that "Host" header, I could map their whole network: when the IP address I chose didn't have TCP port 80 listening, the connection was closed; when it did, the hidden server's HTTP page/response came back; and when the host didn't exist, the request timed out.
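Mapping a segment that way is easy to script. A quick sketch of the kind of loop I mean – the 192.168.90.0/24 range is just the example from above, and the timeout/output-format flags are my choice, not from the original notes:

# sweep a /24 through the WSA by only varying the Host header
for i in $(seq 1 254); do
  ip="192.168.90.$i"
  # -m 5 keeps non-existent hosts from hanging; the status code hints at what's behind the IP
  code=$(curl -sk -o /dev/null -m 5 -w '%{http_code}' --header "Host: $ip" http://google.com)
  echo "$ip -> $code"
done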

Although this already looks bad, it gets worse, at least for that client. In their case they actually had a switch with its web management open over HTTP and with default credentials. It turned out to be the same switch I was connected to, so I was able to reconfigure the port I was plugged into so that it sat directly in the management network.

Final thoughts

Most of the things I reported to the client probably could have been avoided by changing their switch admin password and also having a strict "deny all inbound HTTP traffic from that specific user segment" rule on the WSA (not sure if it would have triggered). Still, in my honest opinion, the fact that the WSA actually connects to the added Host header while every other device in the connection chain sees the client going to some innocent place is just wrong. A lot of deployments can probably fall victim to this oversight in policy, as normal policy testing will never find such a loophole: when trying hidden/internal hosts directly you get timeouts, but when you add them to the header, voilà, it works.