I don't understand ProShares' SCC.

Why would ProShares' Consumer Services short (SCC) be lower now than it was in late 2018? Was there more concern about the consumer-driven economy over tariffs back then than there is now, with every hotel, bar and restaurant in the country closed? When something like this doesn't make sense I typically jump on it. With a 2018 high near $30, shorting consumer services at $14 in the coming months almost seems like it requires no brains at all. Anyone who thinks the consumer economy is climbing out of Corona fear any time soon has lost touch. There are so many "happy day" investors looking for a place to park and gain in a perpetual bullshit market. Investing is supposed to involve risk, and shorting mitigates gambling in a rigged market designed only for Good Times.

Banking on the Quarantine

Here are a few tech companies doing very well during quarantine because of their services to support remote workers:


1. Citrix Systems, makers of XenApp and other application services supporting remote workers and collaboration (GoToMeeting and join.me belong to LogMeIn these days). Almost all hospitals publish medical applications via Citrix XenApp as well.
2. Microsoft. Thanks to SharePoint, Office 365 (now "Microsoft 365") and, most importantly, Skype.
3. Slack.com. The most prominent corporate collaboration tool, in use across most enterprises.
4. Zoom Video Communications. If you can log in. Their servers were slammed on Monday and Tuesday, but they have increased back-end resources dramatically over the last 48 hours, thanks to a dump truck load of cash put in their driveway this week.
5. Cisco Systems. They probably sold more AnyConnect remote access VPN licenses in the past 2 weeks than they sold in the past two years. Hoping I don’t have to buy more!


There are a lot of non-tech companies profiting from this too: 3M, DuPont, and it was very smart of GM and Ford to start making ventilators while their auto production lines are shuttered. It's not 100% doom and gloom, but you've got to feel for those who decided to start their own business within the last couple of years. Then there are the gig workers and the bars and restaurants they rely on: 60 days or less until desperation and panic mode.


One big lesson US small business operators should learn from this situation: a couple thousand dollars in reserve isn't enough. Start thinking in terms of tens or even hundreds of thousands before hiring full-time employees. Any company that can't absorb one month of business downturn has no business hiring at all.

COVID-19 Panic Check

Time for another panic check. The media is producing comparison charts to make COVID-19 look worse than other historic outbreaks. They NEVER include H1N1, which started right here in the USA.

“The CDC estimated that from April 12, 2009 to April 10, 2010, there were 60.8 million H1N1 cases, with 274,304 hospitalizations and 12,469 deaths in the U.S. alone. They also estimate that worldwide, 151,700 to 575,400 people died from (H1N1)pdm09 during the first year. Unusually, about 80% of the deaths were in people younger than 65 years of age”.
“Although it is not unusual in pandemics, over time, the fatality rate of COVID-19 has steadily decreased. For example, according to the China CDC study, in patients whose symptoms began between January 1, 2020 and January 10, 2020, the fatality rate was an astonishing 15.6%. But in the patients who didn’t report illness until February 1 to February 11, in China, it was 0.8%”.

“It’s worth noting that even after China got the death rate down to 0.7%, or even 0.4%, that’s still about four to seven times greater than the death rate for seasonal flu. (The rate for the flu is about 0.1%—or 1 in 1,000 patients.)”
It's even lower than 0.4% in the US right now. Young, healthy people are not gonna die, despite the number of empty cots in gyms published by the media. And toilet paper still isn't going to save anyone whose preexisting conditions put them at real risk. Learn lessons from old people in Italy who run around kissing each other on the face while living 5-6 deep per apartment. Yeah, don't do that.

https://www.biospace.com/article/2009-h1n1-pandemic-versus-the-2020-coronavirus-pandemic/

Checkpoint SmartConsole Just Sucks (R80.10, R80.20)

I’ve been deploying and managing corporate firewalls for over 27 years. Over the past two years this included an assortment of Cisco ASA (Firepower), Sophos and Checkpoint appliances. I can say without hesitation that Checkpoint SmartConsole is the absolute worst firewall management interface I’ve ever experienced. And Checkpoint wants a ransom to expand the number of appliances it can manage and deploy policies on.

I was told by our local NC Checkpoint rep that anyone who questions Checkpoint's pricing will be shut down by their top brass in Israel, because they're ultra-arrogant about their perceived value. Apparently anyone who questions their pricing is just stupid and unqualified to judge. I'm qualified. Checkpoint SmartConsole is shit. Complete shit.

The catch is that I actually like the smaller, locally managed Checkpoint units; their interfaces are not too bad. But the Checkpoint 3200 sold to us by RMSource in Raleigh, NC wasn't up to the job. They didn't bother to mention the need for Checkpoint's "Management Console" licensing and put the management directly on the 3200. Later I was told by Checkpoint that in order to deploy the licensed "Management Console" to push policy to multiple devices, local management would have to be removed from the 3200, and it would have to be reconfigured or re-imaged from a backup. Never mind that they were told this is our 24-7 core production firewall and they only get one shot at this. Vendor fail. They were fired. We never bought the Checkpoint Management Console. Not enough units to justify the price.

There are so many problems with "Smart"Console I don't even know where to start. Let's begin with the inability to make any change to any security policy or to the unit's configuration without "Installing" the new policy on the 3200. This disconnects every VPN tunnel, every time. It interrupts active sessions. That's just ridiculous bullshit. Perhaps this can be avoided with the fully licensed management console running on a VM? I don't care. I'm not paying for it, and every other firewall I've ever administered lets local configuration and security policies be adjusted on the fly without interrupting active sessions, as long as the ports, VPN or connection settings themselves haven't changed. Even the smaller Checkpoint units can do this. Not so with the 3200 and SmartConsole. It whack-a-moles every session, every time.

Want to see which specific VPN tunnels are connected and active? You're not going to do that easily in SmartConsole, which requires several steps to launch SmartView and then run a Tunnel View… blah, blah… fuck this. Why can't I just click "Monitor", then "VPN tunnels" like on every other security appliance on earth and see a list of gateway and remote access tunnels and their connection status? Aside from intentional complication, which it seems Checkpoint has mastered, I can't think of a single reason they can't make this as simple in SmartConsole as it is on their other appliances.

There's so much more to hate about SmartConsole. It can't be upgraded in place; previous versions have to be removed before the latest release can be installed. It's 2020. Fix your shit. The Gaia OS is as bad as its name and still a resource hog. What the hell is a Gaia anyway? Never mind, I don't care. Or how about the fact that I still have updates pending on this damn 3200 that neither the Checkpoint vendor, RMSource, nor Checkpoint support could ever get installed without errors? Again, they want to tear it down and start from scratch. What is it these people don't understand about 24-7 uptime meaning NO MAINTENANCE WINDOW for key core components? We don't have hours to re-image or reconfigure our primary firewall. We will spend thousands to hot-swap replace this ill-advised 3200 before losing even one hour of production orders that flow through the thing. And guess what, Checkpoint, we are.

Home Depot B2B EDI “support” is a model of Asian outsourcing failure.

Home Depot outsourced its B2B and EDI (Electronic Data Interchange) support to India, Pakistan or somewhere in Asia long ago. It's a model demonstration of the failures that can come from outsourcing. The long-running jokes about Indian call center support embraced by US technology and telecommunications companies have spread across almost all areas of I.T. This particular failure on the part of Home Depot matters because it causes disruption in their vendor supply chain.

Honorable mention goes to Home Depot for their selection of unqualified candidates to work in their B2B support center. Not only are they generally unhelpful and unknowledgeable regarding things like their own EDI mapping specifications, but Home Depot has found it acceptable to hire people who ONLY speak Farsi or Urdu, with almost zero ability to speak English. This is no exaggeration or matter of interpretation. My guess is the top of the totem pole in Atlanta probably isn't even aware how bad the situation is with this language barrier. I challenge anyone in their stateside senior management to call their own B2B support department and hold a conversation. Our organization has been required to call in our Indian and Pakistani product managers to sit on calls and speak with the HD B2B support staff in their native language, because the support staff genuinely did not know the English words to communicate high-level technical information to our internal EDI staff or our application vendors. This is when you know they've gone too far in their quest to offset costs.

Predictably, Home Depot could play the "we can't find U.S., Canadian or European workers with the skill set to fill these roles" card. Well, you didn't find them in India or Pakistan either. Furthermore, the document specifications and translation sets are written in English, specifically in XML. If they can't speak the language, my guess is they couldn't read a map or the specification sets during training either.

We are at an impasse in our organization right now when it comes to turning up a new trading partnership for Home Depot's Canadian distribution centers, even though we have a signed supplier agreement, because we literally can't find anyone in Home Depot B2B who can communicate with us in English. Furthermore, when we engage our language translators, the support staff still can't grasp the technical concepts well enough to even provide us proper document specifications for their domestic and international programs. This is why Home Depot's long-running B2B outsourcing initiative deserves a resounding FAIL.

Home Depot has millions of dollars to fix this problem and ensure faster supply chain integration. Apparently the decision not to fix it is based entirely on trying not to pay U.S., Canadian or European technical specialists the wages such B2B and EDI expertise demands, opting instead for cheap, unqualified, outsourced Asian call center operatives who are at best ineffective in their roles and in many cases detrimental to vendor supply chain integration.

Google Translate poses a security risk.

There are plenty of articles detailing why it's not safe to translate sensitive internal business documents using Google Translate. Most of these articles discuss accuracy and confidentiality. But Google Translate is also dangerous because it acts as a proxy by design, and that creates a security issue. You can plug in a URL for a site in any language, including English, and Google will display the site's contents from its own servers. This undermines any corporate security measures put in place to keep employees away from blocked or compromised sites. The answer is a translation service from Google or a competitor built for business: one with administrative controls and user authentication, logging which sites are translated and monitoring documents uploaded for translation. It's also a revenue generator for the first service to come up with such administrative translation controls.
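To make the proxy problem concrete, here's a minimal sketch of how a filtered site gets read "through" the translator. The wrapper URL format is an assumption based on how the public Translate web front end has historically worked, and Google changes it periodically; the blocked URL is a hypothetical placeholder.

from urllib.parse import quote

def translate_wrapped(url, target_lang="en"):
    # The user's browser only ever talks to translate.google.com, so a web
    # filter that blocks the original domain never sees a request for it.
    return ("https://translate.google.com/translate?sl=auto&tl="
            + target_lang + "&u=" + quote(url, safe=""))

if __name__ == "__main__":
    blocked = "http://blocked-example-site.com/page.html"  # hypothetical blocked URL
    print(translate_wrapped(blocked))

Any outbound filter that allows translate.google.com effectively allows the wrapped site too, which is exactly why an administratively controlled translation service matters.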

Details of a ransomware attack and a way to thwart the ransom. Don’t plan to pay. Plan to recover.

Here are the basic steps in a ransomware attack and how vulnerable people and ports are used to accommodate the attacker. Certain conditions must be met at each stage.

  1. The attacker relies on stolen credentials. The credentials are harvested by viruses delivering malware; specifically, in recent attacks, Emotet has served as the delivery agent for the Trickbot trojan. All too easy with users susceptible to social engineering.
  2. Trickbot moves laterally across systems, relying on SMB to navigate the network as it steals passwords, mail files, registry keys and more.  It communicates the stolen material back to the bad actor, the Black Hat.
  3. Next, Trickbot might launch the Empire PowerShell backdoor and download the Ryuk ransomware on the black hat's command. Armed with harvested credentials, the black hat is now ready to execute Ryuk and encrypt files at will.
  4. The black hat scans for any vulnerable port of entry on an external interface.

┌─[blackhat@parrot]─[~]
└──╼ $nmap -Pn -p 8443 xxx.123.xxx.456
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-09 16:47 EDT
Nmap scan report for system.contoso.com (xxx.123.xxx.456)
Host is up (0.029s latency).
PORT     STATE SERVICE
8443/tcp open  https-alt

Once a port of entry is found (in this case a very common and vulnerable port used as a remote access interface), the black hat can use the stolen credentials to log in to the network and rely on protocols such as SMB and RDP to access and exploit internal systems, launching Ryuk to encrypt files on select machines, typically all of them. Azure? Too bad, encrypted. Active Directory-authenticated AWS? Ouch.ryk, every file owned. Once the damage is found you'll need to recover.

So how can you protect systems, and most importantly backups, so that rapid recovery, the best response to a live attack, remains possible?

  • The obvious first step in recovery is to neutralize all exploits. It can also be the most time-consuming. Use Windows firewalls to block all SMB traffic and stop lateral movement across systems; deploy the rules through domain-level Group Policy. Open only the ports necessary to deliver anti-malware utilities to clean all machines of any sign of exploits. Windows 7 systems remain highly vulnerable to SMB attacks without proper patching and configuration. Update 02/07/20: Windows 7 is deprecated, insecure and should not be used. Best to get those machines off your network regardless of how annoyed some end users are by the thought of Windows 10.
  • Always be certain backup files and database backups reside on systems that are not authenticated to the network using domain-level authentication. Make sure they cannot be accessed using SMB or RDP protocols at all; a quick reachability check is sketched after this list.
  • Of extreme importance: make sure EVERYONE, especially your domain administrators, is forced to change their login credentials routinely. IT staff have a bad habit of being prime offenders, exempting themselves from password changes. Take a stand. Everyone changes their passwords, and password complexity rules must be adhered to by every single account on the network. Use two-factor authentication (2FA) everywhere possible, especially for mailboxes and cloud accounts.
  • Make sure you have machine images that are not accessible using domain-level authentication or credentials. If you run a VMware environment, make sure you administer vCenter only through local vSphere credential logins, not AD authentication. This serves not only to protect your production images; more importantly, it protects your snapshots. Hyper-V environments, God help you. When you are solely reliant on Windows authentication to manage your virtual servers, you're vulnerable. I'd have to do more research on exactly how to stop propagation to all systems in a Hyper-V environment. My first inclination would be to spend some money on VMware or a Citrix Xen hypervisor, Nutanix if you must.
  • Have snapshots. Have recent snapshots. If you don't run virtual servers, at least have Windows bare-metal restore backups for physical machines. Again, these should be written to appliances that are not connected to the network with domain-level authentication. Snapshot and bare-metal backup files should remain recent enough to take into account all hardware and operating system changes that have been implemented.
  • Close vulnerable ports on your public interfaces or at minimum set them to random port numbers.  Obvious ports like 8443 are gonna get hit.
  • If you run a heavy transaction environment, you will also want to incorporate more redundancy at the database server and application server level, such as SQL database replication with incremental transaction log offloads to drive space that is, again, not domain authenticated.
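To make the backup-isolation point above testable rather than aspirational, here's a minimal sketch that tries to open SMB (445) and RDP (3389) connections to your backup appliances from a domain-joined workstation. The hostnames are hypothetical placeholders; if anything connects, your backups are reachable over exactly the protocols Trickbot and the black hat will use.

import socket

# Hypothetical backup appliance names - substitute your own.
BACKUP_HOSTS = ["backup01.example.local", "repo02.example.local"]
PORTS = {445: "SMB", 3389: "RDP"}

def is_open(host, port, timeout=3):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in BACKUP_HOSTS:
    for port, name in PORTS.items():
        if is_open(host, port):
            print(f"WARNING: {host} answers on {name} ({port}); ransomware can reach it the same way")
        else:
            print(f"OK: {host} does not answer on {name} ({port})")

Run it from an ordinary domain-joined machine; the goal is a screen full of OK lines.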

Note: I did not specify anything related to archiving and compliance backups because, while essential for certain industries and disaster situations, they are not specific to rapid recovery from a malicious attack in which physical hardware assets are not compromised.

Once you are able to quickly restore a virtual machine or physical system from a recent snapshot or bare-metal recovery file, copies of data files and database backups can be moved into place to bring it up to the most current backup set. Daily is usually the best most small to medium "enterprises" can achieve. With added expense in resources and configuration, backups can be run more frequently. Unfortunately even hourly database log shipping won't save a database from an encryption attack. As my last point emphasized, unless log files are being offloaded in hourly increments to storage appliances that are not connected with domain-level authentication, they aren't safe. As always, the question of investment becomes: how much can you afford to lose?
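Putting a number on "how much can you afford to lose" is easier if you measure it. Here's a minimal sketch that flags any backup repository whose newest file is older than your target recovery point; the repository paths and the four-hour threshold are assumptions, so substitute your own schedule.

import os
import time

# Hypothetical backup repositories and an assumed recovery point objective.
BACKUP_DIRS = [r"\\backup01\sql_logs", r"\\backup01\vm_snapshots"]
RPO_HOURS = 4

def newest_file_age_hours(path):
    # Walks the repository and returns the age in hours of the newest file, or None if empty.
    newest = 0.0
    for root, _dirs, files in os.walk(path):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, name)))
    if newest == 0.0:
        return None
    return (time.time() - newest) / 3600.0

for d in BACKUP_DIRS:
    age = newest_file_age_hours(d)
    if age is None:
        print(f"ALERT: {d} contains no backup files at all")
    elif age > RPO_HOURS:
        print(f"ALERT: newest file in {d} is {age:.1f} hours old, target is {RPO_HOURS}h")
    else:
        print(f"OK: {d} is only {age:.1f} hours behind")

Whatever number it prints on a bad day is your real answer to the question above.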

The best defense against ransomware is a good offense in the form of rapid recovery. Since these exploits rely on social engineering (gullible people), you can never pretend your network is free of vulnerabilities. Don't just design your backup and recovery environment in case something happens. Make sure it's tested for when it happens.