Thursday, 17 June 2004

Disastrous Disaster Recovery [brit]

You gotta laugh. Well, if you don't, you'll go postal.
Disaster Recovery - two words that executive management love to throw about their squash courts, and the two words that make finance types withdraw instantly into a near comatose state.
Since September The Eleventh Two Thousand And One (I refuse to call it 'NyneWonWon'), many companies have woken up to the idea that their huge corporate edifices are indeed somewhat transient when, for example, hit head-on by a fully laden airliner.
We are no different. 'DR', as it's known, is on everyone's tongues, and after having a chat with a couple of folk (it shouldn't come as any surprise to you that, as Technical Director, I didn't know top management were talking about this until after I spoke with someone else) I discover we have a DR policy too.
Wow. We do? Excellent.
Our DR policy is amazing. Amazing in a short-sighted, shitty kind of way. Naturally enough, we're doing the whole 'implementation of remote office space' thing, but we'll come to that in a minute.
What's more amazing is that our internal data DR policy appears to be this: we will take X terabytes of 'stuff' and back it up to more servers. In the same building. In the same *room*.
Great. So if we do get blown up, or the place burns down, or God Himself Visits Upon Us With A Furious Vengeance, we can be sure that our DR policy will have ensured total data loss from the off.
Now, the other bit - the 'bums on seats' notion. Sounds good (forget the practicalities of switching mission critical real time systems over to another location for a moment) except it turns out that our secondary location is... no more than 5 miles away.
Fucking mad. In my mind, you put your DR site a significant distance away from your primary place of business so that IF we do find ourselves in serious bother, we can move a LONG WAY AWAY from it.
But it appears to be keeping various people happy; people who know precious little or nothing about DR (except what they've been sold by DR agencies) are happy in the knowledge that an action has been taken. We are *covered*, people!
Bah. I wonder if my CV is still on Monster...


  1. Can you program? We're looking for programmers :-)

  2. There do appear to be two very distinct approaches to DR. The first is the tick in the box: you have to have DR to satisfy your auditors, shareholders, regulators, etc., so you buy or write something that looks like it kind of fits, and when anyone asks the question you point to the contract or document and everyone's happy. Until something goes wrong, of course...
    The second approach is to attempt to do something that actually works should your primary place of business, or parts of it, be unavailable. This, of course, is a lot more challenging, and should encompass BCP as well as DR if it's all going to work out.
    The best way to find out which camp you're in is to invent a disaster and test the plan. A gas leak outside your building is a nice simple one: a leak's detected and your office is unavailable for a week while they fix it.
    I've been through regular DR tests, and it's amazing how many times you have to nip back into the office for a CD, which of course is no good, but that's why you test.
    Good luck!

  3. Indeed. Non-tech people, auditors and finance bods, for example, pose the question to I.T.: 'Do we have a DR policy?' I.T. answers 'YES!', Audit and Finance say 'HUZZAH!'. Requirement ticked.
    The DR policy may even be just 'We should have a DR policy'.
    Anyway, it's not called DR anymore; it's Business Continuity. Presumably invented by DR agencies who are running out of new business. Nasty word, 'disaster'; makes people worry about how the business will continue, y'see :)

  4. Call it what you will, couch it in soft fluffy lingo if you like, but I'm planning for DISASTER - think Towering Inferno! think King Kong! think.... why the fuck is an advertising agency even bothered?

  5. I've always treated DR and BCP as two separate things. DR is the physical task of getting your business going after an event that renders current facilities unavailable. BCP is how the day to day procedures and operations will operate in such a situation.

  6. I was going to say that actually. If there's some huge disaster which means that some other facility five miles away is no good as well, then who gives a flying fucking shit if an advertising agency still functions or not? :)

  7. rofl matt, true..
    Chris, if a 2-billion-megatonne A-bomb falls on Paddington and wipes out the entire city of London, I think several large companies may cut back on advertising spend anyway, plus you may have issues getting staff, since everyone in southern England/northern Europe will be dead :)
    But backing up to servers in the same room, that's quality! Please tell me they're at least on a different network, power source, and UPS?

  8. You'll be amused to know that our network 'went down' at 03:something this morning, due to a power failure.
    This was after we'd put in 2 metric tonnes of batteries and a brand spanking new UPS. Oh, and twin power feeds into the server room.
    It's shite, it really is. The boys responsible for the network infrastructure hardware and associated gubbins work their tits off, but it falls over or has problems regularly.
    There just seems to be an ever-increasing lack of common sense in this whole planning/implementation arena... *shrug*

  9. If your network keeps falling over, it doesn't matter how hard the chaps are working; they need someone with a very strong clue at the top.
    In the web hosting company I worked at in Dublin, the highest-paid bloke (outside the directors) was the weird network expert, who knew everything about every kind of network and network product. We pinched him from a senior position at C+W in the US. Bet he was glad he came when the company closed 9 months later!!