How this small business lost 2 months of work due to bad backups
It started out like any other day. It was a busy Wednesday morning for Smith Accounting (name has been changed), just three weeks away from tax day. The team had been “all hands on deck” for weeks, making sure their clients were prepared for April 15.
Then it happened – around 10 a.m., the company’s internal file server shut down unexpectedly, and no one could access their files. A junior accountant tried restarting the server, then unplugging it and plugging it back in – no dice. The box was dead.
“No worries,” said Dan Smith (“the boss”), “we’ll just restore from our offsite backup.”
But what Dan didn’t know, because he had never tested restoring from a backup before, was that the team would be locked out of their files for the rest of the day while their freelance IT tech tried to restore them. After hours of trying and encountering corrupted file after corrupted file, the tech finally managed to restore a working backup – one that was two months old.
What is the real cost of bad backups?
So what exactly did this cost Smith Accounting?
Lost Productivity
With nearly a full day of downtime for the entire team, the outage cost dozens of hours of productivity across the board – hours that could have been spent serving clients and moving their work forward.
Lost Files
Losing two months of data meant the team now had to put in extra hours redoing all of that work just to be ready by tax day. Having to rush through it a second time also raised real concerns about introducing mistakes.
Lost Trust
In order to redo their work, the team had to go back to almost all of their clients and re-request important documents and data. That eroded their clients’ confidence in the firm – trust that would take time to rebuild.
Lost Money
All of that downtime and extra work meant the team was on overtime, and Dan had to compensate them for it. He hadn’t factored any of this into his budget for the year, and he was caught off guard by just how much extra work it would take to get the team back to where they should have been.
So what can you do to prevent this?
Always test your backups!
A backup is only as good as your ability to restore from it. Many people, like Dan, fall into the trap of assuming they’re “safe” as long as their files are being backed up somewhere – but as this story shows, that often isn’t the case. Test your backups on a regular schedule and confirm you can actually restore your files when you need to.
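If your backups are simple archive files, even a small script can catch a bad backup long before you need it. Here’s a minimal sketch in Python, assuming nightly backups land as .tar.gz archives in a /backups folder – the paths, the file-count threshold, and the verify_backup helper are all illustrative, not part of Smith Accounting’s actual setup. It does a trial restore of the newest archive into a temporary folder and checks that a sensible number of files came out.

```python
import glob
import os
import tarfile
import tempfile

BACKUP_DIR = "/backups"      # hypothetical location of nightly backup archives
MIN_EXPECTED_FILES = 100     # sanity threshold: tune this to your environment


def newest_backup(backup_dir: str) -> str:
    """Return the path of the most recently modified backup archive."""
    archives = glob.glob(os.path.join(backup_dir, "*.tar.gz"))
    if not archives:
        raise RuntimeError("No backup archives found – that alone is a red flag.")
    return max(archives, key=os.path.getmtime)


def verify_backup(archive_path: str) -> None:
    """Do a trial restore into a temp directory and run basic sanity checks."""
    with tempfile.TemporaryDirectory() as restore_dir:
        # A corrupted or truncated archive will raise an error here.
        with tarfile.open(archive_path, "r:gz") as tar:
            tar.extractall(restore_dir)

        restored = [
            os.path.join(root, name)
            for root, _, names in os.walk(restore_dir)
            for name in names
        ]
        if len(restored) < MIN_EXPECTED_FILES:
            raise RuntimeError(
                f"Only {len(restored)} files restored from {archive_path}; "
                f"expected at least {MIN_EXPECTED_FILES}."
            )
        print(f"OK: {archive_path} restored {len(restored)} files successfully.")


if __name__ == "__main__":
    verify_backup(newest_backup(BACKUP_DIR))
```

Run something like this on a schedule (a weekly cron job, for example) and have it alert you when it fails – the point is simply that a restore gets exercised regularly, not only on the day the server dies.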
Have a plan in place.
No one knows when disaster will strike, so it pays to plan for it before it does. Your team should know exactly who to contact and what to do in the event of downtime, and that designated person or team should know exactly how to restore your systems and roughly how long it will take. Downtime does happen – but anything you can do to minimize it can and should be done!