- Written by: David Barker
I have two ISPs, Frontier and Spectrum. My Frontier connection is 1 Gbps fiber and my Spectrum connection is 450 Mbps cable. For obvious reasons, I want most traffic to prefer the Frontier connection. A Palo Alto firewall is connected to both ISPs and to my local network; it serves as the firewall and default gateway for all the hosts on my LAN.
Here's where the problem comes into play. My TVs are smart and have the Spectrum TV app, which lets Spectrum subscribers watch Spectrum TV over the internet. However, Spectrum only offers a subset of channels if you're not on the Spectrum network. The obvious fix is to policy route the Spectrum app's destinations over the Spectrum ISP, but those destinations are not static, and the application seems to add new destinations with each update. Try as I might to capture all the destinations in a group and policy route that group, inevitably something is missed and I get the limited list of channels.
The solution is to use a dynamic address group as the source of the policy-based forwarding rule.
The primary URL the app uses is "watch.spectrum.net", which I've defined as an FQDN address object. Next, I built a TagAsTelevision log forwarding profile whose built-in action tags the source as Television, with an expiry of 30 minutes.
Next, I created an access rule with my local LAN as the source, "watch.spectrum.net" as the destination, and action allow, set to log at session start with the TagAsTelevision log forwarding profile attached.
Next, I created an address group object of type dynamic, named TVs, with the match criteria 'Television', the tag applied by the log forwarding profile.
Finally, I created a policy-based forwarding rule with TVs as the source, egressing through the Spectrum ISP by setting the next hop to the Spectrum router's IP.
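For reference, here's a rough PAN-OS CLI sketch of those pieces. The object, rule, zone, and interface names plus the next-hop IP are placeholders, the exact syntax can vary by PAN-OS version, and the tagging action on the log forwarding profile itself is easiest to build in the GUI:
set address watch-spectrum-net fqdn watch.spectrum.net
set address-group TVs dynamic filter "'Television'"
set rulebase security rules Allow-Spectrum-App from LAN to WAN source any destination watch-spectrum-net application any service application-default action allow log-start yes log-setting TagAsTelevision
set rulebase pbf rules Spectrum-TV-Egress from zone LAN source TVs destination any service any action forward egress-interface ethernet1/2 nexthop ip-address 203.0.113.1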
In summary, the logic goes like this: if any device on my local network requests the FQDN "watch.spectrum.net", the allow rule triggers the log forwarding profile, which tags the source as Television. Objects with that tag are matched by the dynamic address group TVs, and TVs are policy-based forwarded out the Spectrum ISP. If no request to "watch.spectrum.net" happens for 30 minutes, the tag expires and the source routes normally out the Frontier ISP.
This allows the TVs to use Netflix, Max, Hulu, etc. through the faster ISP, but if someone wants to watch Spectrum TV, the traffic switches to the Spectrum ISP automatically.
This solution is a creative use of a feature that was designed for tagging malicious traffic. The premise is this: malicious traffic matches a rule with a log forwarding profile that marks the sources with a tag, a dynamic address group matches that tag, and the group can then be used to block the tagged sources.
For example, IP addresses that hit a honeypot destination can be matched by an allow rule with a log forwarding profile that tags the sources. That tag can then be used in a dynamic address group, and a drop rule with that group as the source blocks further access from those addresses.
Similarly, a URL filtering rule for the category "command-and-control" could tag hosts as "infected", and subsequent rules could block all access from hosts carrying that tag.
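A hedged sketch of that pattern, again in PAN-OS CLI terms (the group and rule names here are made up):
set address-group Infected-Hosts dynamic filter "'infected'"
set rulebase security rules Block-Infected from any to any source Infected-Hosts destination any application any service any action deny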
In summary, the solution doesn't care whether the traffic is malicious or benign; the logic is the same: tag traffic, then do something with the tagged traffic.
- Written by: David Barker
Sometimes we want to hide our network IP from the outside world. Sometimes we want to appear as if we are coming from a server in another country. Whatever the reason, we can use a third-party proxy or VPN service to do so.
In my case I chose IPVanish's VPN service, which I use for firewall testing, IP scanning, and so on. This article should apply to any VPN service that uses OpenVPN.
First, you'll need to build an Ubuntu 22.04 server with Squid installed.
Then install OpenVPN.
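Both packages come straight from the default Ubuntu 22.04 repositories (unzip is thrown in for the next step); as root, a minimal sketch:
apt update
apt install -y squid openvpn unzip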
Download the config files from IPVanish, unzip them, and find the server you want to connect to.
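Assuming the downloaded bundle is named configs.zip (the file name is a placeholder), something like this unpacks it and lists the Miami servers used below:
mkdir -p /root/ipvanish
unzip configs.zip -d /root/ipvanish
ls /root/ipvanish | grep -i miami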
Here's the secret sauce to get around the weak CA algorithm restriction:
/usr/sbin/openvpn --config /root/ipvanish/ipvanish-US-Miami-mia-a01.ovpn --tls-cipher DEFAULT:@SECLEVEL=0
Note the --tls-cipher DEFAULT:@SECLEVEL=0 option: without it, the OpenSSL version on Ubuntu 22.04 will not allow the connection.
Since OpenVPN builds a tun0 interface, you'll need to NAT behind it:
/usr/sbin/iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
Now you can route through it or proxy to it, whichever works better for you.
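If you route other hosts through this box rather than pointing them at Squid, the kernel also needs IP forwarding enabled; a quick sketch, run as root:
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-ipforward.conf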
- Written by: David Barker
How to Make a Check Point Firewall Cluster Give You The Cluster Status in The Expert Prompt
Have you ever logged into a Palo Alto firewall cluster, noticed that it tells you the HA status in the prompt and thought: “I wonder why Check Point doesn’t do that?”
We’re here to show you how to make this possible!
What is Check Point?
Around the world, Check Point is known as an industry leader in cybersecurity solutions. It has earned a multitude of awards from a variety of organizations, including NSS Labs, SC Media, Forrester, Gartner, and more.
Check Point has evolved its threat-hunting capabilities to cover more of what is going on in the world and is continuously adapting to changes in the workforce, including cloud integrations, network and mobile adaptations, and robust endpoint configurations. This vast selection of evolving products and solutions has shaped the cybersecurity industry and prepared it for the next generation of threats.
Palo Alto Firewall Cluster Node Example
Here’s an example if you don’t know what I’m talking about. This is what I see when I log into a Palo Alto firewall cluster node:
And similarly, if I ssh into the other node, I will see:
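With placeholder hostnames, the two prompts look roughly like this:
admin@PA-fw01(active)>
admin@PA-fw02(passive)>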
Check Point Firewall Cluster Node Example
But if you SSH into a Check Point cluster, no such hint appears.
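With a placeholder hostname, the default clish and expert prompts are just:
cpgw1>
[Expert@cpgw1:0]#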
Notice in both clish and expert, you don’t see the cluster status.
Now, with some editing of /etc/bashrc, you can put the cluster status right in the prompt.
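As a rough illustration (placeholder hostname again), here's the expert prompt while the member is active, and again right after running clusterXL_admin down:
[Expert@cpgw1:0]# (ACTIVE)>
[Expert@cpgw1:0]# (DOWN)>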
Notice how the prompt changes when we run clusterXL_admin down / up.
This prevents you from having to run cphaprob state every time you log in to a firewall to know which one is active.
Editing The Bashrc
Now, how do you edit the bashrc—and more importantly, what exactly do you edit?
In order to do this part, you’ll need a rudimentary knowledge of vi. You won’t need to be a vi expert, but know enough to edit and save files.
First, let's back up the existing bashrc. From the expert prompt, run:
cp /etc/bashrc /etc/bashrc.orig
Then:
vi /etc/bashrc
Scroll down until you find the block where the prompt is set. Everywhere you find the phrase export PS1, you'll need to add the following lines underneath:
PS1+="(\$(/opt/CPsuite-R80.40/fw1/bin/cphaprob state | grep 'local' | awk '{print \$(NF-1)}'))> "
export PS1
Such that the resulting block looks something like this:
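(Illustrative only: the first export PS1 line below is hypothetical, and the real Gaia /etc/bashrc varies between versions; the two added lines are what matter.)
export PS1="[Expert@\h:0]# "
# added lines: append the local member's ClusterXL state to the prompt
PS1+="(\$(/opt/CPsuite-R80.40/fw1/bin/cphaprob state | grep 'local' | awk '{print \$(NF-1)}'))> "
export PS1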
Note that in my example I'm running R80.40, so the full path to the cluster status command is /opt/CPsuite-R80.40/fw1/bin/cphaprob.
If you're running a different version of Check Point, you can run the command:
which cphaprob
to get the path of the command:
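On R80.40, for example, it should print:
/opt/CPsuite-R80.40/fw1/bin/cphaprob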
Final Steps
After you’ve made the appropriate modifications to both cluster members /etc/bashrc, you’ll need to logout and login again for the changes to be seen.
There you have it… cluster status right in the prompt! This changes automatically when you fail over (note you'll still need to hit Enter or run a command; it doesn't automatically refresh the screen), but the prompt changes without having to log in again.
Need more assistance or guidance as you navigate your Check Point firewall cluster? For over 20 years, Compuquip has partnered with Check Point Software Technologies to provide its community with the next level of security. Being a 5-star Elite partner has allowed us to understand their fleet of solutions and manage the breadth of products for our Check Point community. Reach out to one of our Check Point experts today to learn more!
- Written by: David Barker
Understanding & Tuning The Ring Buffer on a Check Point Firewall
Is your firewall dropping allowed traffic unexpectedly, without logging? Does it seem like your firewall is underperforming but not undersized? It may be that you need to adjust the size of the ring buffers.
Not sure where to begin? Don’t worry—we’ve got you covered! Read on to better understand the size of the ring buffers you need and how to adjust them on a Check Point firewall!
What Are Ring Buffers?
Ring buffers, also known as circular buffers, are buffers shared between the device driver and the Network Interface Card (NIC). These buffers store incoming packets until the device driver can process them. Ring buffers exist on both the receive (rx) and transmit (tx) sides of each interface on the firewall, so there is an rx ring buffer and a tx ring buffer.
How Do Ring Buffers Work?
Have you ever turned on your computer, gone to your bank’s website, and started typing your password to log in—then noticed a major lag in what you’re typing into the password field? This lag happens when the computer processor begins to “wake up” and realizes it needs to start grabbing data from the processor in the keyboard. You may be thinking: “Where is this information stored if there’s only a small amount of space in that keyboard’s memory?”
Well, that’s where the ring buffer comes in! The ring buffer functions like a queue, storing data in a fixed-size array. In our example, that lag is caused by the buffer trying to pull that password data out of the queue.
How To Check Your Ring Buffer Sizes
To check the current and the maximum ring buffer sizes, run the following GAIA clish command:
HostName> show interface NAME_of_PHYSICAL_INTERFACE rx-ringsize
HostName> show interface NAME_of_PHYSICAL_INTERFACE tx-ringsize
Output example:
HostName> show interface eth0 rx-ringsize
Receive buffer ring size: 256
Maximum receive buffer ring size: 4096
In this example, 4096 is the maximum size of RX ring buffers for this particular driver, and the current ring buffer size is 256.
Common Questions When Tuning Ring Buffers
How do I know when I need to tune ring buffers?
Now that you know what a ring buffer is, you need to understand what circumstances would necessitate changing it. The only reason to change this buffer is if you are experiencing, or expect to experience, interface drops.
You can check for drops on the interface from GAIA with the following command:
“show interface x statistics”, where “x” is the interface name, e.g. “show interface eth0 statistics”.
Output example:
cpgw1> show interface eth0 statistics
Statistics:
TX bytes:1277516009 packets:1609950902 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:3274762007 packets:1088786537 errors:0 dropped:17365890 overruns:0 frame:0
As you can see in this example, there are 17365890 dropped packets, indicating that there is a problem with RX drops. Note there are no TX drops, which is fairly common as RX drops occur much more frequently.
The command “show interface x rx-ringsize”, where x is the interface name (e.g. “show interface eth0 rx-ringsize”), will produce output like this:
cpgw1> show interface eth0 rx-ringsize
Receive buffer ring size:256
Maximum receive buffer ring size:4096
In this case, the receive buffer ring size is only 256 out of a maximum of 4096.
Now that I’ve identified a problem with interface drops and a small ring buffer size, what should I change it to?
The rule of thumb is to double the ring size and watch for drops. In this example, this would be 512, and it would be changed with the command:
“set interface eth0 rx-ringsize 512”
Give it some time and then re-check the statistics. If the drops are continuing to increase, you should double the buffer size again. If not, leave the value of rx-ringsize where you just set it.
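For example, from clish (the interface name is a placeholder), including a save config so the change survives a reboot:
cpgw1> set interface eth0 rx-ringsize 512
cpgw1> save config
cpgw1> show interface eth0 rx-ringsize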
Why double the size instead of just setting it to the maximum value to start with?
Increasing the ring buffer size causes the interface to store more data before sending an interrupt to the CPU to process that data. The longer (or bigger) the ring buffer size (or queue), the longer it will take for the packets to be de-queued (or processed). This will cause latency in traffic, especially if small packets (e.g., ICMP packets, some UDP datagrams) make up the majority of the traffic.
The main reason to increase the ring buffer size on an interface is to lower the amount of interrupts sent to the CPU, thus decreasing the number of interface drops. Due to the inevitable latency factor, increasing the ring buffer size must be done gradually, followed by a test period.
What is an acceptable test period or window for determining the buffer size?
Since these drops occur when the firewall is busiest, it would make the most sense to allow the increased buffer size to run across the busiest part of the day when the firewall is at its highest CPU utilization. A few hours should be enough.
What about the tx-ringsize?
While most often the rx-ringsize needs to be adjusted, the tx-ringsize sometimes needs to be adjusted as well. Similar commands can be executed such as “show interface x tx-ringsize.” Here is an example:
cpgw1> show interface eth0 tx-ringsize
Transmit buffer ring size:1024
Maximum transmit buffer ring size:4096
Notice that the default tx-ringsize is 1024, compared to the default rx-ringsize of 256. With this buffer already four times the size, drops on the TX side of the interface are less common.
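If TX drops do start climbing, the same doubling approach applies there; for example:
cpgw1> set interface eth0 tx-ringsize 2048
cpgw1> save config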
What will tuning the ring buffer on my firewall mean for my organization?
While this procedure isn't a substitute for a bigger firewall when one is genuinely needed, increasing the ring buffer sizes can decrease the number of dropped packets on the interfaces. This simple, free procedure may prolong the life of your firewall, meaning better security for your organization without breaking the budget!