Cloud - Yes or No?

A couple of weeks ago I was at a FinTech event with numerous other like-minded individuals. The setting was rather formal and we were basically sat in the same room and seats all day. I had a stranger to my right, and like any good neighbour I introduced myself; we exchanged experiences and places we work(ed), and soon we were talking tech. We talked about the controversy of one programming language versus another, and in some respects I tried to bait him (sorry!) with ".NET is far superior", but he didn't bite and just gave me the very pragmatic response of "We use whatever is the right tool for the problem at hand". You can't argue with that. We talked about patterns and architectures, and finally got on to the big fluffy one, #cloud. The banter then took a sharp turn and got slightly academic and philosophical, at which point he baited me (!) with "I can't see any good reason to use cloud". Whoaaaa, stop right there!

Up until this point I thought I knew my response. I admit it, I've been drinking (heavily) the cloud Kool-Aid for the best part of 8 years, and am possibly addicted to the amber nectar. I started getting a bit reflective and challenging my own arguments. Even now I'm not sure there's a compelling argument. So here goes with my considered thoughts.

IaaS vs Bare Metal

Let's admit it, bare metal as a single-use commodity is not good for the planet. At one point, at home, I had 10 bare metal servers. As projects waxed and waned the number of servers would only increase. As you can imagine, by the 10th server, the first server was seeing no use at all. Once you've gone through that capital purchase angst, efficiency seems to go out the window and you convince yourself the hardware has some intrinsic value. I'm not saying that's how corporates behave, but there's little transparency on usage across the compute real-estate, the CapEx budget of a year or two ago took care of the procurement dilemma, and the next challenge is to stave off the technology refresh bill, where you start all over again. The commercials no longer stack up and the tin legacy is not great.
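A back-of-the-envelope way to see why the commercials stop stacking up, sketched in Python with purely made-up prices (neither figure is a real hardware or cloud quote):

```python
HOURS_PER_MONTH = 730

# Hypothetical illustrative prices - not real quotes.
owned_server_monthly = 200.0   # amortised capex + power for one always-on box
on_demand_hourly = 0.40        # pay-as-you-go rate for comparable compute

def monthly_cost_on_demand(busy_hours: float) -> float:
    """Pay only for the hours the workload actually runs."""
    return busy_hours * on_demand_hourly

# A server busy 10% of the time, like my neglected first box:
busy = 0.10 * HOURS_PER_MONTH
print(f"{monthly_cost_on_demand(busy):.2f}")  # 29.20, versus 200.00 for owning it
```

The crossover is utilisation: run the workload most of the month and owning the tin wins again, which is exactly why transparency on usage matters.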

Corporate data centres/halls are being decommissioned, yes to save costs, but also from a green perspective. So, the thought of being able to allocate compute at a whim, on demand, is just wondrous. But we don't have to go to cloud for that to happen. Even in my home lab/datacentre, long gone are my server procurement days (business cases with the CFO (wife) were just getting tougher!), and I've resorted to Type-2 hypervisors such as Hyper-V, VirtualBox, Synology VMM, etc (corporates use Type-1 bare metal hypervisors such as VMware ESXi, XenServer, etc). In an instant, not only can we fabricate compute, but we can expunge it. We can scale up, we can scale down. Now, it's just a case of .. do you want a hyperscaler like AWS managing your IaaS/bare metal estate, or do you as a corporate want the responsibility and oversight of your own assets? The hyperscaler has significant buying power, economies of scale and the automation to manage it all. As of September 2023, Amazon Web Services (AWS) had reduced prices 134 times since 2006.

Security

However, the elephant in the room is security, with reputational risk at stake in the gamble. The bigger the risk, the more cautious an organisation gets. There's the phenomenon of "noisy neighbours": those cloud customers/tenants who happen to be sitting on the same bare metal as you, throwing a loud party and thrashing away at the CPU (is it a thread or a core?), impacting your performance. Or, whilst you're performing your confidential data processing, are you inadvertently leaving stray artefacts in the cache? No one knows... In fact, those that are concerned would likely opt for Dedicated Instances to eliminate the risk.

Then there's all the security work associated with vulnerabilities, not only in the software you've written, but in the software you need to run as part of your ecosystem, including the operating system and even the hardware (two well-known examples being Spectre and Meltdown). Sadly, none of this is a one-off deployment check; it's a continuous (daily) process.

Then there's social engineering, whereby bad actors manipulate us humans to extract useful information, such as usernames, passwords, etc. We've all heard stories of close family members getting hacked, but it's happening at a much larger scale. This strategy is called Big Game Hunting and, as you can imagine, the rewards are tempting for criminals. If your data is sitting in the cloud, relatively speaking, it's far easier to get at than data that's locked in your own private data centre.

Just recently, it looks as though Snowflake customers have been socially engineered into giving away credentials. You can read about the threat actor with the wonderful name of "Whitewarlock" in this article. This kind of social engineering attack/event is sometimes known as the "bump", similar to when a lock is bumped/jiggled to get it to open. Unfortunately, a security bump takes, on average, about 200 days to surface, whereupon your data is up for sale on the dark web. Snowflake beat the statistics with a much shorter time in their case.

Application Architecture

In the beginning there was the single (monolith) code base, then came two-tier client/server, then three-tier client/server/database, N-tier, then the modular monolith, and from there we spun out of control to microservices, containers, etc. All of these are still valid and relevant. I've possibly mixed up my metaphors and patterns, but there are undoubtedly some applications and patterns that fit way better in a cloud environment than deployed on-premise (unless you have a lot of premises!).

One benefit is scale in terms of performance. With microservices and containers, whether you have one client or one million, you can run (and pay for) the same codebase and let the cloud absorb that elasticity. On-premise, you'd have to have a pretty good sense of your potential client base and whether you could accommodate them in terms of compute etc. Even more demanding would be the ability to scale geographically, caching content at the edge. That said, Meta run their Facebook, WhatsApp, Instagram and Oculus businesses out of their own datacentres, with an eye-watering bill of $35bn, including spend on submarine network cables. Pretty impressive stats to support 4bn users and a revenue of $135bn.
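That elasticity is usually automated rather than hand-tuned. The scaling rule used by Kubernetes' Horizontal Pod Autoscaler, for example, can be sketched in a few lines of Python (the CPU figures below are illustrative, not from any real workload):

```python
import math

def desired_replicas(current: int, metric: float, target: float) -> int:
    """Kubernetes HPA rule: desired = ceil(current * metric / target).
    Scales out when the observed metric exceeds the target, scales
    back in when load drops (never below one replica here)."""
    return max(1, math.ceil(current * metric / target))

# 4 pods averaging 150% CPU against a 100% target...
print(desired_replicas(4, 150.0, 100.0))  # 6 -> scale up
# ...and 6 pods averaging 40% once the rush is over.
print(desired_replicas(6, 40.0, 100.0))   # 3 -> scale down
```

Whether the replicas land on a hyperscaler or your own tin, the maths is the same; the difference is whether someone is standing by with spare compute when the answer goes up.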

Conclusion

Still no conclusive, compelling reason. I was going to talk about SaaS, but then again, you don't need to be in cloud to offer SaaS, as Meta have demonstrated. I think it also depends on where you are on your corporate journey: do you have a significant bare metal investment? Are you just starting out with no compute? And there's nothing to say that hybrid (part cloud, part on-premise) isn't a valid strategy.

As my classroom neighbour said, "... the right tool for the problem at hand", and if that's a Raspberry Pi, then so be it.

Please let me know your thoughts, arguments or reasons that individuals should factor in when considering cloud. Any anecdotal stories would be great to hear as well.

In the meantime, I'm off to the Cloud Anonymous Support Group to meet my peers, taking care not to stumble into the room next door, labelled "AI Frenzy/Bubble/Hype Support Group".
