Brutkey

Jess👾👾
@JessTheUnstill@infosec.exchange

With the continued blatant enshittification of GitHub, your periodic reminder:

Cloud services, and SaaS in particular, are simply "someone else's computer". They can and will go "I have altered the deal; pray I don't alter it further" whenever it suits them. So it's always going to be best to keep your stuff as loosely coupled to someone else's computers as possible.

Jess👾👾
@JessTheUnstill@infosec.exchange

"Business Continuity Planning" isn't just about "US-EAST-1 went offline, how do we manage failover and uptime". It also includes things like "this whole platform has been discontinued/gone bankrupt/KilledByGoogle/quintupled in price/been MegaBreached, how does my business/project survive?"


Greg Bell
@ferrix@mastodon.online

@JessTheUnstill@infosec.exchange @TindrasGrove@infosec.exchange my favorite emerging threat model: "the US government noticed us and decided we should not exist; told Microsoft and Google to delete our accounts"

Jason Stuart
@JSCybersec@infosec.exchange

@JessTheUnstill@infosec.exchange Don't forget "how do I get my data back into MY hands?"
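One way to keep that option open is a periodic, automated export of anything a provider holds for you. For git hosting specifically, a minimal sketch (the repo list and backup directory are hypothetical examples; assumes `git` is on PATH):

```python
import subprocess
from pathlib import Path

# Hypothetical list of repositories this org would want local mirrors of.
REPOS = [
    "https://github.com/example-org/app.git",
    "https://github.com/example-org/infra.git",
]

def mirror_dir(base: Path, url: str) -> Path:
    """Derive a local mirror directory name from a clone URL."""
    name = url.rstrip("/").rsplit("/", 1)[-1]
    if not name.endswith(".git"):
        name += ".git"
    return base / name

def sync(base: Path, url: str) -> None:
    """Create or refresh a bare mirror of `url` under `base`."""
    dest = mirror_dir(base, url)
    if dest.exists():
        # Fetch all refs, pruning ones deleted upstream.
        subprocess.run(
            ["git", "-C", str(dest), "remote", "update", "--prune"],
            check=True,
        )
    else:
        # --mirror copies every ref (branches, tags, notes), not just HEAD.
        subprocess.run(["git", "clone", "--mirror", url, str(dest)], check=True)

# Usage (hits the network; run from cron or a scheduled job):
# base = Path("backups/git-mirrors")
# base.mkdir(parents=True, exist_ok=True)
# for repo in REPOS:
#     sync(base, repo)
```

A bare mirror kept on hardware you control means losing the hosting account costs you issues and CI config, not the code and its history.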

firebreathingduck
@firebreathingduck@social.vivaldi.net

@JessTheUnstill@infosec.exchange

We live in an age of Cover-Your-Ass engineering. I would bet almost all of the people in charge of business continuity across the entire industry approach the problem this way: "If both the primary and backup cloud regions we use are offline, the business will hemorrhage cash but I personally won't be blamed."

It's another form of "Nobody ever got fired for buying IBM."

My tiny employer has a sub-$5k/month cloud bill on AWS. If we were bigger, I would be pushing to move to two or three independent smaller providers spread across at least two continents. That would solve the actual problem, not just shield me from blame.

mkj
@mkj@social.mkj.earth

@JessTheUnstill@infosec.exchange "their AI decided one of our employees was in violation of the ToS and therefore terminated our account, and any communication about getting it restored must go through the email address on file, whose domain registrar, DNS, and email hosting are all with them"

"their employee used an internal tool incorrectly, wiping our whole organizational account with zero recourse, and we were only told what happened several weeks later"

(Tell me these things haven't happened...)

Jess👾👾
@JessTheUnstill@infosec.exchange

At the VERY least, run a periodic tabletop exercise of "We can no longer use this core piece of cloud software/platform/infrastructure; how could we recover?"
Partial recovery in 14 days, full recovery in 30 days?
Welp, guess we just close down the company?
Do we keep around a small experimental environment on some other provider/a few VPS/some colo servers/some servers running in the closet that we could use as a foundation to scale up when required?

Jess👾👾
@JessTheUnstill@infosec.exchange

@mkj@social.mkj.earth

Someone once coined the phrase "The underwear problem".

Basically, until relatively recently, it wasn't possible to decouple the amazon.com account that first set up an AWS root account from the store account. So it ended up with the whole company's AWS account being tied to the same account the founder uses to buy underwear.
