Checkpoint SmartConsole Just Sucks (R80.10, R80.20)

I’ve been deploying and managing corporate firewalls for over 27 years. Over the past two years that work has included an assortment of Cisco ASA (Firepower), Sophos and Checkpoint appliances. I can say without hesitation that Checkpoint SmartConsole is the absolute worst firewall management interface I’ve ever experienced. And Checkpoint wants a ransom to expand the number of appliances it can manage and deploy policies to.

I was told by our local NC Checkpoint rep that anyone who questions Checkpoint’s pricing will be shut down by their top brass in Israel, because they’re ultra-arrogant about their perceived value. Apparently anyone who questions their pricing is just stupid and unqualified to judge. I’m qualified. Checkpoint SmartConsole is shit. Complete shit.

The catch is that I actually like the smaller, locally managed Checkpoint units; their interfaces are not bad at all. The Checkpoint 3200 sold to us by RMSource in Raleigh, NC wasn’t up to the job. They didn’t bother to mention the need for Checkpoint’s “Management Console” licensing and put the management directly on the 3200. Later, Checkpoint told me that in order to deploy the licensed “Management Console” and push policy to multiple devices, local management would have to be removed from the 3200, which would then have to be reconfigured or re-imaged from a backup. Never mind that they were told this is our 24/7 core production firewall and that they only got one shot at it. Vendor fail. They were fired. We never bought the Checkpoint Management Console. Not enough units to justify the price.

There are so many problems with “Smart”Console I don’t even know where to start. Let’s begin with the inability to make any change to a security policy or the unit’s configuration without “Installing” the new policy on the 3200, which disconnects every VPN tunnel and interrupts active sessions, every time. That’s just ridiculous bullshit. Perhaps this can be avoided with the fully licensed management console running on a VM? I don’t care. I’m not paying for it, and every other firewall I’ve ever administered can have its local configuration and security policies adjusted on the fly without interrupting active sessions, as long as the ports, VPN settings or other connection parameters haven’t changed. Even the smaller Checkpoint units can do this. Not so with the 3200 and SmartConsole. It mole-whacks every session, every time.

Want to see which specific VPN tunnels are connected and active? You won’t do it easily in SmartConsole, which requires a few steps to launch SmartView and then run a Tunnel View… blah, blah… fuck this. Why can’t I just click “Monitor”, then “VPN Tunnels”, like on every other security appliance on earth, and see a list of gateway and remote access tunnels with their connection status? Aside from intentional complication, which it seems Checkpoint has mastered, I can’t think of a single reason they can’t make this as simple in SmartConsole as it is on their other appliances.
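
For what it’s worth, there is a faster path than the SmartView dance if you can SSH into the gateway itself. This is a sketch from memory, assuming expert mode on a Gaia box and a hypothetical hostname; verify the exact syntax against your release notes:

[Expert@gw-3200:0]# vpn tu tlist     # one-shot list of active IKE/IPsec tunnels (R80.10 and later)
[Expert@gw-3200:0]# vpn tu           # the older interactive tunnel utility menu, for earlier releases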

There’s so much more to hate about SmartConsole. It can’t be upgraded in place; previous versions have to be removed before the latest release can be installed. It’s 2020. Fix your shit. The Gaia OS is as bad as its name and still a resource hog. What the hell is a Gaia anyway? Never mind, I don’t care. Or how about the fact that I still have updates pending on this damn 3200 that neither the Checkpoint vendor, RMSource, nor Checkpoint support could ever get installed without errors? Again, they want to rip it down and start from scratch. What is it these people don’t understand about 24/7 uptime meaning NO MAINTENANCE WINDOW for core key components? We don’t have hours to re-image or reconfigure our primary firewall. We will spend thousands to hot-swap replace this ill-advised 3200 before losing even one hour of the production orders that flow through the thing. And guess what, Checkpoint: we are.

Home Depot B2B EDI “support” is a model of Asian outsourcing failure.

Home Depot outsourced its B2B and EDI (Electronic Data Interchange) support to India, Pakistan or somewhere in Asia long ago.  It’s a model demonstration of the failures that can come from outsourcing.  The long-running jokes about the Indian call center support embraced by US technology and telecommunications companies have spread across almost all areas of I.T.  This particular failure on Home Depot’s part matters because it causes disruption in their vendor supply chain.

Honorable mention goes to Home Depot for their selection of unqualified candidates to work in their B2B support center.  Not only are they generally unhelpful and unknowledgeable regarding things like their own EDI mapping specifications, but Home Depot has found it acceptable to hire staff who ONLY speak Farsi or Urdu, with almost zero ability to speak English.  This is no exaggeration or matter of interpretation.  My guess is the top of the totem pole in Atlanta probably isn’t even aware of how bad the language barrier is.  I challenge anyone in their stateside senior management to call their own B2B support department and hold a conversation.  Our organization has been required to call in our Indian and Pakistani product managers to sit on calls and speak with the HD B2B support staff in their native language, because the support staff genuinely did not have the English vocabulary to communicate high-level technical information to our internal EDI staff or our application vendors.  This is when you know they’ve gone too far in their quest to offset costs.

Predictably, Home Depot could play the “we can’t find U.S., Canadian or European workers with the skill set to fill these roles” card.  Well, you didn’t find them in India or Pakistan either.  Furthermore, the document specifications and translation sets are written in English, with the maps themselves in XML.  If they can’t speak the language, my guess is they couldn’t read a map or the specification sets during training either.

Our organization is at an impasse right now when it comes to turning up a new trading partnership for Home Depot Canadian distribution centers, even though we have a signed supplier agreement, because we literally can’t find anyone in Home Depot B2B who can communicate with us in English.  Furthermore, even when we engage our own translators, the support staff still can’t grasp technical concepts well enough to provide proper document specifications for their domestic and international programs.  This is why Home Depot’s long-running B2B outsourcing initiative deserves a resounding FAIL.

Home Depot has millions of dollars to fix this problem and ensure faster supply chain integration.  Apparently the decision not to fix it comes down to avoiding the wages that U.S., Canadian or European technical specialists with real B2B and EDI expertise command, opting instead for cheap, unqualified, outsourced Asian call center operatives who are at best ineffective in their roles and in many cases detrimental to vendor supply chain integration.

Google Translate poses a security risk.

There are plenty of articles detailing why it’s not safe to translate sensitive internal business documents using Google Translate.  Most of these articles discuss accuracy and confidentiality.  But Google Translate is also dangerous because it acts as a proxy by design, creating a security issue.  That means you can plug in the URL of a site in any language, including English, and Google will fetch and display the site’s contents.  This undermines any corporate security measures put in place to keep employees away from blocked or compromised sites.  The answer is a translation service from Google or a competitor built for business: one with administrative controls and user authentication, logging which sites are translated and monitoring documents uploaded for translation.  It’s also a revenue generator for the first service to come up with such administrative translation controls.
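
To make the proxy behavior concrete, here is a rough sketch.  The URL format reflects how the site-translation feature worked at the time of writing, and the countermeasure assumes you run a Squid filtering proxy; the blocked domain and ACL name are illustrative:

# Any user can reach a filtered site indirectly, because Google fetches it server-side:
#   https://translate.google.com/translate?sl=auto&tl=en&u=http://blocked-site.example/
# The client only ever talks to translate.google.com, so the real destination never hits your URL filter.
# One blunt countermeasure in squid.conf is to deny the translate endpoint outright:
acl gtranslate dstdomain translate.google.com
http_access deny gtranslate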

Details of a ransomware attack and a way to thwart the ransom. Don’t plan to pay. Plan to recover.

Here are the basic steps in a ransomware attack and how vulnerable people and ports are used to accommodate the attacker.  Each of these conditions must be met for the attack to succeed.

  1. The attacker relies on stolen credentials.  The credentials are harvested by viruses delivering malware; in recent attacks, specifically, Emotet has served as the delivery agent for the TrickBot trojan.  All too easy with users susceptible to social engineering.
  2. TrickBot moves laterally across systems, relying on SMB to navigate the network as it steals passwords, mail files, registry keys and more.  It communicates the stolen material back to the bad actor, the black hat.
  3. Next, TrickBot might launch the Empire PowerShell backdoor and download the Ryuk ransomware on the black hat’s command.  Armed with harvested credentials, the black hat is now ready to execute Ryuk and encrypt files at will.
  4. The black hat scans for any vulnerable port of entry on an external interface:

┌─[blackhat@parrot]─[~]
└──╼ $ nmap -Pn -p 8443 xxx.123.xxx.456
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-09 16:47 EDT
Nmap scan report for system.contoso.com (xxx.123.xxx.456)
Host is up (0.029s latency).
PORT     STATE SERVICE
8443/tcp open  https-alt

Once a port of entry is found (in this case a very common and vulnerable port used as a remote access interface), the black hat can use the stolen credentials to log in to the network, relying on protocols such as SMB and RDP to access and exploit systems and launch Ryuk to encrypt files on select systems, typically all of them.  Azure?  Too bad, encrypted.  Active Directory-authenticated AWS?  Ouch.ryk, every file owned.  Once the damage is discovered, you’ll need to recover.
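
To illustrate how little skill this stage takes, here is roughly what it looks like from the attacker’s side.  The host address and credentials below are hypothetical; the tools are stock fare on a Parrot or Kali box:

┌─[blackhat@parrot]─[~]
└──╼ $ smbclient -L //10.0.10.5 -U 'CONTOSO/jsmith%Stolen_P@ss1'      # enumerate file shares with stolen creds
└──╼ $ xfreerdp /v:10.0.10.5 /d:CONTOSO /u:jsmith /p:'Stolen_P@ss1'   # interactive RDP session, same creds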

So how can you protect systems and, most importantly, backups so that rapid recovery, the best response to a live attack, remains possible?

  • The obvious first step in recovery is to neutralize all exploits.  It can also be the most time consuming.  Use Windows firewalls to block all SMB traffic and stop lateral movement across systems, deployed through domain-level group policy (see the sketch after this list).  Open only the ports necessary to deliver anti-malware utilities to clean all machines of any sign of exploits.  Windows 7 systems remain highly vulnerable to SMB attacks without proper patching and configuration.  Update 02/07/20: Windows 7 is end-of-life, insecure and should not be used.  Best to get those machines off your network regardless of how annoyed some end users are by the thought of Windows 10.
  • Always be certain backup files and database backups reside on systems that are not authenticated to the network using domain-level credentials.  Make sure they cannot be accessed via SMB or RDP at all.
  • Of extreme importance: make sure EVERYONE, especially your domain administrators, is forced to change login credentials routinely.  IT staff have a bad habit of exempting themselves from password changes.  Take a stand.  Everyone changes their passwords, and password complexity rules must be adhered to by every single account on the network.  Use two-factor authentication (2FA) everywhere possible, especially on mailboxes and cloud accounts.
  • Make sure you have machine images that are not accessible using domain-level authentication or credentials.  If you run a VMware environment, administer vCenter only through local vSphere credential logins, not AD authentication.  This serves not only to protect your production images; more importantly, it protects your snapshots.  Hyper-V environments, God help you.  When you are solely reliant on Windows authentication to manage your virtual servers, you’re vulnerable.  I’d have to do more research on exactly how to stop propagation to all systems in a Hyper-V environment.  My first inclination would be to spend some money on VMware or a Citrix Xen hypervisor, or Nutanix if you must.
  • Have snapshots.  Have recent snapshots.  If you don’t run virtual servers, at least have Windows bare-metal restore backups for physical machines.  Again, these are to be written to appliances that are not connected to the network with domain-level authentication.  Snapshot and bare-metal backup files should remain recent enough to account for all hardware and operating system changes that have been implemented.
  • Close vulnerable ports on your public interfaces, or at minimum move them to random port numbers.  Obvious ports like 8443 are gonna get hit.
  • If you are a heavy transaction environment, you will also want to incorporate more redundancy at the database and application server level, such as SQL database replication with incremental transaction log offloads to drive space that is, again, not domain-authenticated (see the SQL sketch after this list).
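
As promised in the first bullet, here is a minimal sketch of the SMB lockdown.  These are the standard netsh commands as run on a single machine; in practice you would push equivalent inbound block rules through domain group policy rather than touching each box:

netsh advfirewall firewall add rule name="Block SMB (TCP 445)" dir=in action=block protocol=TCP localport=445
netsh advfirewall firewall add rule name="Block NetBIOS session (TCP 139)" dir=in action=block protocol=TCP localport=139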
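
And here is a sketch of the transaction log offload from the last bullet.  The server, database, share and account names are hypothetical; the point is that the destination appliance accepts only its own local credentials, so a domain-wide compromise can’t touch the copies:

rem Dump the transaction log locally, then push it to the isolated appliance
sqlcmd -S SQLPROD01 -Q "BACKUP LOG [Orders] TO DISK = 'D:\LogDrops\Orders_log.trn' WITH INIT"
net use X: \\backup-appliance\logs /user:backup-appliance\logdrop *
robocopy D:\LogDrops X:\ Orders_log.trn /MOV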

Note: I did not specify anything related to archiving and compliance backups because, while essential for certain industries and disaster situations, they are not specific to rapid recovery from a malicious attack in which physical hardware assets are not compromised.

Once you are able to quickly restore a virtual machine or physical system from a recent snapshot or bare-metal recovery file, copies of data files and database backups can be moved into place for restoration to the most current backup set.  Daily is usually the best most small to medium “enterprises” can achieve.  With added expense in resources and configuration, backups can be run more frequently.  Unfortunately, even hourly database log shipping won’t save a database from an encryption attack.  As my last point emphasized, unless log files are being offloaded in hourly increments to storage appliances that are not connected with domain-level authentication, they aren’t safe.  As always, the question of investment becomes: how much can you afford to lose?

The best defense against ransomware is a good offense in the form of rapid recovery.  Since these exploits rely on social engineering (gullible people), you can never pretend your network is free of vulnerabilities.  Don’t just design your backup and recovery environment in case something happens.  Make sure it’s tested for when it happens.