Tag: disaster

  • When disruption is more than a buzzword

    A briefcase-sized device could wreak havoc in today’s networked world, warns William Radasky in the IEEE Journal.

    Fans of nuclear war movies like The War Game or The Day After will remember that the first bomb detonated in those attacks was a high-altitude explosion designed to knock out electronic equipment.

    The resultant electromagnetic pulse (EMP) leaves everything from military radar to civilian communications systems unusable.

    In both The Day After and The War Game, the high-altitude detonations over Rochester and Kansas City destroyed motor cars’ ignitions, leaving a key part of the nation’s infrastructure paralysed.

    Unlike in a zombie TV series, the unlucky survivors of a nuclear strike weren’t going to leap into the nearest abandoned Camaro and speed away from the heaving, hungry masses.

    It’s worth remembering that The War Game was filmed in 1965, when electronics were not ubiquitous. Even then, the scale of the damage from an EMP was substantial.

    In today’s world, a wide-scale EMP would bring down a region’s entire economy.

    I’m writing this post on the 28th floor of San Francisco’s St Francis Hotel, and were such a blast to happen now I’m not sure I’d be able to find the fire escapes, as the emergency lighting would be fried; the lifts aren’t even worth considering.

    What a first world city like San Francisco would look like after all its technology, including electrical and communications systems, was knocked out doesn’t bear thinking about.

    On the bright side, this means a devastating nuclear war killing millions may not be a useful military strategy any more. Bombing a first world nation ‘back to the stone age’ needs just a handful of well-targeted high-altitude nukes.

    The IEEE article is a timely reminder of the fragility of both our systems and the society that depends upon them.

  • How do communications networks stand up to real times of disruption?

    One of the big problems during and after Hurricane Sandy was how the cell phone network fell over.

    As the Wall Street Journal describes, many parts of New York and New Jersey still didn’t have mobile phone services several days after the storm.

    Yang Yeng, a shopkeeper selling batteries, candles, and flashlights on the street in front of his still darkened shop in the East Village, said his T-Mobile phone was useless in the area. The situation, he said, reminded him of the occasional cellphone-service outages where he used to live, on the outskirts of a small city in southern China.

    What’s often overlooked is that mobile networks are different products, from a different era, to the traditional landlines most of us grew up with.

    The older landline phone systems used their own power and the batteries in most telephone exchanges had enough juice to supply the Plain Old Telephone Service (POTS). So in the event of a blackout most services kept running.

    Of course POTS services could still be disrupted – a car could hit a pole on your street, those poles could burn down in a fire, your local exchange could be struck by lightning, or a blackout could last longer than the telephone company’s batteries.

    Most importantly, in times of major emergencies those exchanges would get overwhelmed by frantic callers trying to contact the authorities or their families.

    All of the above would have happened during Hurricane Sandy, so it is somewhat unfair to single out the mobile networks for their ‘unreliability’.

    There are, though, some differences with modern mobile and fibre-based networks that shouldn’t be overlooked when considering the reliability of these systems in times of crisis or disaster.

    A hunger for power

    Modern communications networks need far more power than the POTS network. Fibre repeaters, cell towers and the handsets themselves can’t be sustained in the way low-powered rotary phones and mechanical telephone exchanges were.

    The cost of providing and maintaining reliable batteries for these devices is a serious item for telcos, and it’s no surprise they lobbied against laws mandating their use in cell phone towers.

    Even if those batteries were installed, the fibre connections to the towers face the same problem: they need power to link the towers to the rest of the network.

    Of course the problem of keeping power to your handset then kicks in. Many smartphones or cordless landline handsets struggle to keep a charge for 24 hours, further reducing their effectiveness during any outage that lasts more than a day.

    Bandwidth Blues

    Even if your cellphone does keep its charge and the local tower remains running and connected to the backbone, there’s no guarantee you can get a line out.

    In this respect, the modern systems suffer the same problem as the old phone networks – there’s a limit to the traffic you can stuff down the pipe.

    This isn’t news if you’ve tried to make a call on your mobile at half time at a sporting event or at the end of a big concert. If there’s too much traffic, then the system starts rationing bandwidth; some people get a line out while others don’t.
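    To put a rough number on that rationing, here’s a purely illustrative sketch (not from the article, and with made-up channel counts and loads) using the classic Erlang B formula, which estimates the share of call attempts a tower with a fixed number of voice channels has to reject as the offered traffic climbs.

    ```python
    # Purely illustrative sketch of why "some people get a line out while
    # others don't": the classic Erlang B formula gives the probability a
    # new call is blocked when a tower has a fixed number of voice channels.
    # The channel count and traffic levels below are made up.

    def erlang_b(offered_erlangs: float, channels: int) -> float:
        """Blocking probability, computed with the standard Erlang B recursion."""
        blocking = 1.0
        for m in range(1, channels + 1):
            blocking = (offered_erlangs * blocking) / (m + offered_erlangs * blocking)
        return blocking

    # Hypothetical tower with 60 voice channels and rising offered load.
    for traffic in (40, 60, 120, 300):  # offered traffic in erlangs
        print(f"{traffic:>3} erlangs -> {erlang_b(traffic, 60):.1%} of call attempts blocked")
    ```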

    Prioritising traffic

    Another way of managing demand during high-traffic times is to ‘prioritise’ what passes over the network – voice comes first, SMS second and data a distant last.

    This is why on New Year’s Eve you might be able to call your mum, but you can’t post a Facebook update from your smartphone and all your text messages come through at 5am the following morning.
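    As a purely illustrative toy (no real carrier schedules traffic this simply), a priority queue captures that pecking order: when only some of the traffic fits, voice goes first, SMS next and data waits.

    ```python
    # Toy sketch (illustrative only, not a real carrier scheduler) of the
    # prioritisation described above: voice first, SMS second, data last.
    import heapq

    PRIORITY = {"voice": 0, "sms": 1, "data": 2}  # lower number = served first

    def drain(requests, capacity):
        """Serve up to `capacity` requests in priority order; return (served, waiting)."""
        queue = [(PRIORITY[kind], order, kind) for order, kind in enumerate(requests)]
        heapq.heapify(queue)
        served = [heapq.heappop(queue)[2] for _ in range(min(capacity, len(queue)))]
        waiting = [kind for _, _, kind in sorted(queue)]
        return served, waiting

    served, waiting = drain(["data", "voice", "sms", "data", "voice"], capacity=3)
    print("served:", served)    # both voice calls and the SMS get through
    print("waiting:", waiting)  # the data requests sit in the queue
    ```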

    During emergencies it’s fair to assume that, even if the mobile network stays up, social networks won’t be the operators’ priority – something not understood by those advocating reliance on social networks during disasters.

    No best efforts

    Probably the most important thing to understand is the difference between the utility culture of the POTS operators and the ‘best effort’ services offered by ISPs and many mobile phone companies.

    Under the ‘utility model’, the telco was run the same way as the power company and the water board – largely by engineers, with a focus on ensuring the network stayed up 99.99% of the time.

    That ‘four nines’ or ‘five nines’ reliability is expensive, and each extra nine cuts the allowable downtime tenfold, with costs and spare capacity rising steeply to match.

    Over the last three decades the utilities themselves have seen reliability decline, as the cost of maintaining a network that has a 24-hour outage once every three years (99.9%)* rather than more than three times a year (99%) interferes with a company’s ability to pay management bonuses.
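    As a back-of-the-envelope check on those figures, here is a small illustrative calculation of how much downtime each availability level allows in a year:

    ```python
    # Back-of-the-envelope arithmetic for the uptime figures above
    # (an illustrative addition, not from the original post).
    HOURS_PER_YEAR = 365 * 24

    for label, availability in [("two nines", 0.99),
                                ("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime_hours = HOURS_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} ({label}): about {downtime_hours:.1f} hours of outage a year")

    # 99% allows roughly 87.6 hours a year (more than three 24-hour outages),
    # while 99.9% allows about 8.8 hours a year, i.e. roughly one 24-hour
    # outage every three years under the definition in the footnote below.
    ```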

    ISPs and most cell phone networks never really had this problem as their services are based upon ‘best effort’. If you read your contract, user agreement or conditions of sale, you’ll find the provider doesn’t really guarantee anything except to do their best in getting you a service – if they fail, tough luck.

    As we become more connected, we have to understand the limitations of our communications networks. The assumption that those systems will be around when we need them could bring us unstuck.

    *The definition of uptime and what constitutes an outage varies; the definition I’ve used here is a 24-hour blackout or suspension of supply in any given area.

  • A blind faith in technology

    “How could this happen with all the technology these ships have?” is the first question many of us had when we saw pictures of the Costa Concordia lying on its side with a ripped hull.

    In an era where we have Global Positioning Systems, sonar, radar and sophisticated mapping technology it seems almost impossible that a ship could find itself in such a terrible situation.

    Every generation has its own blind faith in the technology of the day, and almost a hundred years ago one of the greatest shipping disasters of all – the sinking of the RMS Titanic – happened because of the same belief in that era’s technology.

    While the Titanic’s builders claimed they never said the ship was unsinkable, popular belief held the vessel was the safest of all ocean liners, with sophisticated steam engines, modern safety designs and better communications tools like radio and Morse code.

    Those technologies were part of the Titanic’s undoing; the improved performance of steam ships saw the shipping companies competing for the Blue Riband prize of the fastest crossing of the Atlantic, meaning captains took risks they wouldn’t have with less technically advanced vessels. This is why the Titanic found itself in an ice field.

    Once the ship had struck the iceberg, another problem with our blind faith in technology arose – we never foresee all the consequences.

    In the Titanic’s case there weren’t enough lifeboats – the safety rules of the day had fallen behind the capacity of the ships and, while the Titanic exceeded the minimum number required, there were barely enough lifeboats to take half of those on board.

    The Titanic’s sinking has some parallels with today: cruise ship companies are in an ‘arms race’ to build bigger and more luxurious liners, marketing them as floating resorts, raising concerns among maritime experts that the capacity of these ships is too great for them to be evacuated quickly.

    Of course we have to be careful of drawing too many parallels between the Titanic and the Costa Concordia; the Titanic’s loss of life was vastly greater than the Concordia’s, and the Titanic sank towards the end of a period when technology looked like it would solve all the world’s problems.

    The sinking of the Titanic also marked the peak of the Edwardian standards of “women and children first” and “for King and country.” Only one in six of the third-class male passengers survived, and only half that proportion in second class.

    A few years later, the clash of Edwardian culture and modern technologies was starkly shown when millions died in the trenches of France, Belgium and Gallipoli as generals applied 18th Century cavalry tactics against 20th Century weapons. Another example of not understanding the effects of new technologies.

    Whenever we adopt a new technology there’s a risk we’ll get it wrong and blind faith in tools we don’t understand can lead us to a disaster.

    Even in a business we can’t just accept that because a computer says “yes”, the answer is yes. Sometimes we have to think.
