I was one of the first Sidekick customers, long before the MS buyout. It was an amazing email/texting device, far easier and more comfortable to use than a Blackberry. Plus the web browsing was decent, and the UI was simple and elegant. All in all, a really cool phone.
Unfortunately, the first generation of hardware was super flaky. I had five of them die on me in a year, often failing within weeks. Since there wasn't any PC sync tool, the only thing that saved my bacon was the server-side data model. When a phone died, I'd get a replacement, drop in the SIM card and all my stuff would be there same as always.
Back in the day, there was no data sync with your PC. I think they eventually released Outlook synchronization, but most consumers don't use Exchange/Outlook. I would guarantee that a huge number of Sidekick owners (most? all?) have never had any of their phone data anywhere but the Danger servers.
To make matters worse, the Sidekick becomes kind of a brick when the servers are down. I can see a lot of people resetting their device because it's suddenly become unresponsive, just expecting their data to magically come back from the server.
I was a loyal Sidekick user right up until the day the first-gen iPhone launched. It really was a fantastic device, with the always-on AIM, server-side re-rendering of websites to optimize them for the device, and one of the best physical keyboards I have used to this date.
If I remember correctly, Mark Cuban was a Sidekick fanatic. I wonder what his reaction to this debacle is.
Server-side re-rendering of websites?? Sounds like widespread support for that would be lacking... I'm sure as the Sidekick gained traction, more websites adopted its special rendering requirements as you mention, but in the beginning, things must have been rough!
Disclosure - I've never owned a Sidekick (and know little about them, other than that they are a part of Microsoft's "three screens and a cloud" strategy, yet they don't run Windows Mobile (??)).
I'm pretty sure he's talking about servers run by Danger that re-render other websites. Those other websites don't know or care that they are being presented in such a fashion.
The timing of this is a bit suspicious. I just read an article late last week from an "insider" detailing what a major clusterfuck the Danger acquisition has been for Microsoft and how they've completely neglected the Sidekick platform in favor of their own failed attempts to create a competing device. The speculation from this insider is that Microsoft had zero interest in the NetBSD-based Sidekick and only bought Danger to kill the Sidekick and launch their own product to replace it. Most of the original developers are long gone, so that makes me wonder who was taking care of the Sidekick backend servers. Are they perhaps UNIX-based servers? Did Microsoft perhaps throw a bunch of Windows Server guys at them? (That never ends well.)
I don't know, why did they buy them in the first place? Just to kill them?
Did they fire the whole systems staff they used to have? I just can't believe they simply stopped running backups and let it die. Sounds odd, and likely to get them sued by T-Mobile quickly. (Maybe Microsoft is going after T-Mobile for Android?)
"The final operator who is going to be pissed is T-Mobile, who has been just as loyal of a partner to Danger as Sharp has been. I don't know exactly what Microsoft has been telling them, but they have no doubt realized that they've been cut out of this deal in favor of their largest competitor. What's worse is that apparently Microsoft has been lying to them this whole time about the amount of resources that they've been putting behind Sidekick development and support (in reality, it was cut down to a handful of people in Palo Alto managing some contractors in Romania, Ukraine, etc.).
"The reason for the deceit wasn't purely to cover up the development of Pink but also because Microsoft could get more money from T-Mobile for their support contract if T-Mobile thought that there were still hundreds of engineers working on the Sidekick platform. As we saw from their recent embarrassment with Sidekick data outages, that has clearly not been the case for some time."
This would be fraud. Microsoft would never do that!
Now, seriously, Microsoft is not likely to commit fraud in order to get more money from T-Mobile. But they are perfectly capable of engaging in any unethical behaviour that would prevent T-Mobile from investing more in Android, and, less specifically, of anything that could prevent Android from gaining a foothold.
They should really worry because many phones run Windows Mobile only as a kernel and the users seldom see its underpinnings. It would be rather easy to port those front-ends to other platforms. Actually, I would fire the developers who didn't isolate them from the underlying OS as much as possible.
A plausible rumor is that the data loss stems from a botched SAN upgrade by Hitachi:
Currently the rumor with the most weight is as follows:
Microsoft was upgrading their SAN (Storage Area Network, aka the thing that stores all your data) and had hired Hitachi to come in and do it for them. Typically in an upgrade like this, you are expected to make backups of your SAN before the upgrade happens. Microsoft failed to make these backups for some reason. We're not sure if it was because of the amount of data that would be required, if they didn't have time to do it, or if they simply forgot. Regardless of why, Microsoft should know better. So Hitachi worked on upgrading the SAN and something went wrong, resulting in its destruction. Currently the plan is to try to get the devices that still have personal data on them to sync back to the servers and at least keep the data that users have on their devices saved.
I've seen people "back up" NetApp SANs before to prepare for upgrades; I don't think it's generally accepted practice to just say "fuck it, we'll do it live!"
They don't use "big tapes". They plan a disaster recovery architecture and (in the cases I've been involved with) use block streaming protocols to mirror data. Granted, I didn't do the backups, but I'm pretty sure they didn't pay us to set up those streaming protocols just so they could not use them.
This is a good reason to think about the quickest ways one of your own sites could be brought down, and to try to prevent them.
In 2002 I lost the data of a small (~200 users) hobby site I had built, and to this date it still haunts me as my biggest failure. So now I'm taking precautions, such as offsite backups, forbidding DELETE for all SQL users, etc., to limit the chance of such a thing happening.
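For anyone curious, "forbidding DELETE" can be done at the database level by revoking the privilege, so even a buggy query can't remove rows. A minimal sketch, assuming PostgreSQL and psycopg2; the role and table names are made up for illustration:

    import psycopg2

    conn = psycopg2.connect("dbname=hobbysite user=admin")
    with conn, conn.cursor() as cur:
        # The application role keeps read/write access but can never
        # issue DELETE (or TRUNCATE); a soft-delete flag column is the
        # usual companion so the app can still "remove" records.
        cur.execute("REVOKE DELETE, TRUNCATE ON users, posts FROM app_role")
        cur.execute("GRANT SELECT, INSERT, UPDATE ON users, posts TO app_role")
    conn.close()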
I am admittedly not a good sysadmin, but I keep an eye on my own "pirate" server (that is, outside the realm of the corporate network), and I have already ordered additional storage because Munin shows that, at present consumption rates, we will run out of storage in about 4 months, crossing the 70% mark in three. I know it will take about two months for corporate IT to spend the US$100 it costs for a pair of disk drives, and that gives me a month to set up a software RAID before I cross the 70% mark.
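The projection Munin gives you is just a linear fit; a back-of-the-envelope sketch of the same idea, with invented disk-usage samples:

    # Fit a linear growth rate to disk-usage samples (e.g. from 'df'
    # logged weekly) and estimate when a threshold is crossed.
    # The sample numbers here are invented for illustration.
    samples = [(0, 0.52), (7, 0.53), (14, 0.545), (21, 0.56)]  # (day, fraction used)

    days = [d for d, _ in samples]
    used = [u for _, u in samples]
    mean_d = sum(days) / float(len(days))
    mean_u = sum(used) / float(len(used))
    # least-squares slope: growth in "fraction used" per day
    slope = (sum((d - mean_d) * (u - mean_u) for d, u in samples)
             / sum((d - mean_d) ** 2 for d in days))

    for threshold in (0.70, 1.00):
        days_left = (threshold - used[-1]) / slope  # counted from the last sample
        print("%.0f%% mark in roughly %.0f days" % (threshold * 100, days_left))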
It's completely unacceptable not to have the storage mirrored and load-balanced on a second SAN. And mind you, many SAN vendors even bundle software that makes it easy to do. Even if you don't have the budget to buy another SAN the same size, you can mirror the data rather easily with a bunch of servers, each with a bunch of SATA cards and a bunch of low-budget drives. Redundancy is the key here - if you have a drive that's 10x cheaper than the same storage in the SAN, you can replicate in a RAID1 with 10 drives. Even if each drive is only 30% as reliable as the high-grade disks that go into the SAN, you are well on the winning side (rough numbers in the sketch below).
And I seriously doubt there is that much data. How much? A couple dozen terabytes? That fits easily under my desk. And quite probably in my discretionary budget.
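To put rough numbers on the redundancy argument above (failure probabilities invented for illustration):

    # Back-of-the-envelope check of the 10-way RAID1 argument.
    # Say an enterprise SAN drive fails in a given year with
    # probability 0.03, and a cheap SATA drive is "30% as reliable",
    # read here as roughly a 0.10 annual failure probability.
    p_enterprise = 0.03
    p_cheap = 0.10

    # The 10-way mirror loses data only if all 10 drives die before
    # any gets replaced (ignoring rebuild windows, which matter in
    # practice but don't change the order of magnitude).
    p_mirror_loss = p_cheap ** 10

    print("single enterprise drive: %.0e" % p_enterprise)   # ~3e-02
    print("10-way cheap mirror:     %.0e" % p_mirror_loss)  # ~1e-10

Even with generous assumptions against the cheap drives, the mirror wins by about eight orders of magnitude.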
"We continue to advise customers to NOT reset their device by removing the battery or letting their battery drain completely, as any personal content that currently resides on your device will be lost."
This is the most shocking part of the article. Is the system really set up so that a failure on the server side causes data on the devices to be deleted? That strikes me as bad design. Yes, any sync system comes with a built-in danger that one endpoint will unexpectedly delete data on another, but if the servers go down, the phone should hold onto its data until told otherwise.
I smell a migration of their infrastructure to Windows and a nice "Get the Facts" page about this disaster, how it could be blamed on non-Windows technology and how everything was saved (or, at least, made safe) by the deployment of Windows Server 2008.
Word on the street is that it was a botched SAN upgrade by a Hitachi tech. Of course, the fact that the Danger admins hadn't backed up the SAN before the upgrade made that fatal. Not even an offsite backup... crazy.
And that would be one of the bullet points in the page: how hard it is to properly back up stuff in *BSD as compared to how easy and foolproof it is under Windows Server 2008.
In fact, I can almost smell a migration directly to Windows 7 Server, or whatever name they decide to use for that.
While this is true in principle, this is a bit of a Google talking point that has more PR cachet than practical implications.
Giving users the ability to make backups of their data certainly does not mean they will actually do so. How many Gmail users actually keep full backups of their email (using POP or IMAP or whatever - see the sketch below)? And if Gmail had a case of massive data loss tomorrow, do you really think saying "well, you could've made a backup just fine" would have somehow made everything okay?
I don't deny that this sort of data loss is pretty awful, but "data liberation" is not the solution.
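For what it's worth, the IMAP backup mentioned above is only a handful of lines, which arguably supports the point: ability isn't the bottleneck, habit is. A sketch using Python's stdlib imaplib against Gmail's IMAP endpoint; the credentials and output file are placeholders:

    import imaplib

    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login("user@gmail.com", "password")          # placeholder credentials
    conn.select('"[Gmail]/All Mail"', readonly=True)  # mailbox names with spaces need quotes

    # Dump every message as raw RFC 822 text into one file
    # (not a strict mbox, but good enough as a raw backup).
    typ, data = conn.search(None, "ALL")
    with open("gmail-backup.eml", "ab") as out:
        for num in data[0].split():
            typ, msg = conn.fetch(num, "(RFC822)")
            out.write(msg[0][1] + b"\n")
    conn.logout()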
The reluctance of some people to back up their online data is a testament to the general dependability of online services. More screwups like this one, or like Magnolia recently, can only serve to scare folks from "the cloud", or teach them to back it up.
"Data Liberation", as in educated users, is the solution.
You back up your data because no one cares as much about your data as you do (repeat...). It's like basic hygiene. In the future, few people will be actual computer experts, but everyone will know a few things. Backing up data will be one of them. A large enough density of people who understand this will mean that those who don't will lack excuses - see main story!
I back up my Gmail by forwarding it to other providers (Yahoo, Comcast). If Gmail fails and loses all my data, it's in my Comcast and Yahoo accounts. If they both fail, then OK, I'm screwed, but I'd hope the whole Internet would have to melt down for all three of these systems to fail simultaneously.
I'm not sure how much a tech feature or an un-walled policy would solve the problem. I think it would only fractionally minimize it. The current psychological barrier -- people not thinking of a phone as a computer, with a computer's imperfections -- is profound.
Before, if a phone didn't work, it was one of a few things, usually the device or the carrier. Now, it's one of a dozen dozen or more products and services. People couldn't export before. They might be able to now.
A friend of mine once noted a subtle difference: technology can protect you from failure, but a backup can help protect you from yourself.
It feels like we're moving towards a singularity of expectations, responsibility, and technology.
One nice thing about having your data on the server instead of on-device is that any syncing can happen between the server and your desktop, without even the need for USB cables.
Yes, and these companies who don't know what the fuck they are doing make up the clouds we use every day, so the cloud ends up being a hazy fog of data uncertainty instead.
At one of my haunts, I oft talk to a seasoned and grizzled developer. He's convinced everyone, and I mean everyone, should know how the internet works. My mom should know the OSI model, he argues.
I suggest it's unreasonable.
As a cell phone user, my relationship and contract start and end with my provider. Beyond that, it gets complex. And yet, I expect it to work without fail. The infrastructure is credit-default-swap complex to the average consumer. This is a bad thing. I don't expect people to understand the OSI model. I fully expect we'll find a balance between people trusting others with their information (features/convenience) and having it mirrored/synced locally without disaster (nice to be home).
They used the word "magnitude", which seems like the closest they could probably come to "clusterfuck" without the wording itself being legally actionable.
If it's something that cannot be fixed (these customers are never going to get their text messages back, for example), it is no longer an inconvenience.
It doesn't have to be a case of no backups. Many different things can happen... They could have a faulty backup design. They could have lost the backups. They could have a fault in their backup verification / checking procedure. They could miss some part of the data that wasn't crucial before, but after a couple of updates, they cannot restore contacts without it. Etc.
There are many ways this could have happened - not that they are excused because of that, but it's not fair to say they "had no backups" until we know the whole story.
Backups are only as good as your ability to recover from them. How many of us think we are doing a good job backing up data, AND have actually run a disaster recovery drill and verified that we can recover our systems from our backups? You don't want to find the gaps in your backup strategy during a real system failure, when you're under the pressure of the clock to get the system back up.
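A drill doesn't have to be elaborate to be useful. A minimal sketch: restore the latest backup into a scratch directory and diff checksums against the source tree. The restore-backup command here is hypothetical, and on a live system you'd compare against a frozen snapshot rather than changing data:

    import hashlib, os, subprocess

    def tree_checksums(root):
        # Map relative path -> SHA-256 of contents for every file under root.
        sums = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
        return sums

    # Hypothetical restore command; substitute whatever your backup tool uses.
    subprocess.check_call(["restore-backup", "--latest", "--to", "/tmp/drill"])

    live = tree_checksums("/srv/data")
    restored = tree_checksums("/tmp/drill")
    missing = set(live) - set(restored)
    changed = [p for p in live if p in restored and live[p] != restored[p]]
    print("missing from restore: %d, mismatched: %d" % (len(missing), len(changed)))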
But you cannot be sure you can recover your data (you can be sure in theory only).
In practice, you don't know if you can recover your data unless you migrate the whole company to the backup copy and ask them to check every functionality they use -- every time you make a copy (and rely on them doing 100% coverage). The best you can have during the normal operation is a high probability that what you get is a proper backup, because that would be the case the last time you did a full test (if you ever did). Datasets change, ways of interfacing with data change, etc. Testing is not perfect either, because your tests can be broken or can be reporting false positives.
So yeah... no one can be sure they can recover the data. Does that mean that no one really has a backup?
Not to mention the planet could explode suddenly, so yeah, no one can be literally 100% sure that they can recover their data.
> Does that mean that no one really has a backup?
No, it means in some abstract sense no one can be 100% sure they have a proper backup. But there are some fairly obvious ways in which you can get that number pretty close to 100%.
The best way to prevent data loss is to put your data on a public server and have google index it. Google's cache is awesome and yes I have used this before :)
Based on the wording of the statement, it seems likely they had no usable backup solution at all, offsite or otherwise. Even a daily backup should have provided some data. Sounds like a classic case of neglect to me. They probably had a solution in place but didn't verify it was actually working.
This probably (definitely?) makes me a jerk but I have to say it: They deserved it.
I'm sorry but in this day and age anyone trusting their data to a device that stores it exclusively on the server deserves what they get.
Some will say "the type of person who uses these devices isn't the type of person to understand where their data is located", but I say that's BS. We're in a data-centric world now, and I don't think it's asking that much for people to be aware of where their data is kept and to make sure they have some kind of "backup" guarantee.
To Anyone Downvoting This: Let me ask you one question. If someone without a seatbelt gets hit by a drunk driver, does the fact that the accident is the drunk driver's fault mean it was OK for them not to wear a seatbelt? Or can someone be responsible for their own irresponsibility even though the damage was caused by a party that was more to blame?
If you use Gmail, Facebook, or any kind of SaaS, you are very much from "this day and age", using the latest and greatest, and trusting your data exclusively to a server!
Heck, you trust your money to a bank! You trust your bus driver not to crash the bus, your doctor to properly heal you, and your lawyer to keep you out of trouble; you also trust your cook to make the meal you like.
Every day, you give trust to many individuals and entities to whom you delegate tasks that you may or may not know how to do.
Sidekick screwed up, but I would never blame the ones who trusted them; what they (the clients) did is completely normal!
I'm sorry, but if you are content to leave the only copy of priceless things such as contacts, treasured photos, etc. in the hands of companies with no contractual obligation to you, then yes, you're a fool.
The bank legally owes me my money. The bus driver is guilty of a crime if he crashes the bus. The doctor is guilty of malpractice. If Gmail deletes your account, you have absolutely no recourse whatsoever. None. Zip. Zilch. Zero.
Sidekick did screw up but it doesn't make the people who lost their data innocent either. You have to take responsibility for that which you value.
Every day there are posts here about how great the Cloud is, and how we should just dump our data there and not have to worry about data centers ourselves. Isn't that exactly what was done here - individual handheld users put their personal data into "the Cloud" (in this case, Microsoft's cloud) and then got hosed when the Cloud busted? Should we now say that everyone who loses data when EC2 crashes deserves what they get as well?
As for your second point about users knowing where their data is located, I call BS on that. The majority of people out there over the age of 40 have no concept of the abstract notion of "data", or that it has a location. For them, data is located where they physically access it. Technology is (should be?) about making life simpler and abstracting away bits that people don't need to know about or have the knowledge to make decisions about - for 95% of people, this includes where the contacts on their mobile phone are stored.
If you think people should know where their individual bits of data are, try explaining to my mom how her contacts could live on her SIM card, in her device's memory, and on her desktop computer, and then ask her what the best place is for them.
1. I'm sorry, but any admin who trusts their data solely to the cloud is foolish. It's just that simple. Do you really think an admin in a company is going to keep their job if they trust all their data to the cloud and then the cloud company screws up and loses it? No, because they were the one responsible for the data, and they should have made backups.
Same here. Sidekick screwed up. No question. But that doesn't make it ok for the users not to have this data backed up.
2. How about you read my post before commenting on it. I didn't say they did know where their data is; I said they had a responsibility to, and if they didn't, they were being irresponsible.
> To Anyone Downvoting This: Let me ask you one question. If someone without a seatbelt gets hit by a drunk driver, does the fact that the accident is the drunk driver's fault mean it was OK for them not to wear a seatbelt?
Is a lack of a seatbelt the cause of the accident? Would having a seatbelt on prevent the accident? Is the drunk driver not the 'most guilty' party here?
No offense, but this is the type of situation where the company (Microsoft/Danger) needs to really get smacked back into the Stone Age. It's a case of huge neglect if they just lost all of their customers' data. What will probably happen is just an apology with a "sorry, this won't happen again, folks", which is wholly inadequate for what they caused.
Reaching into your other comment: while the bus driver and the doctor have laws stating that they have legal responsibilities, there seriously needs to be a law like that here. Regardless of consumers' ability to back up their own data, this sort of incompetence is unacceptable from any company operating at this sort of scale.
It really enrages me that the likely outcome of this whole thing will probably be less than a slap on the wrist, if that. While I'm not really a proponent of 'cloud services', the fact that companies feel they can offer these services and try to weasel out of any damages they might cause through their incompetence by adding things like "we are not liable for any damages" to some contract pisses me off. This is the same sort of legal nonsense as the 'arbitration' clauses in EULAs, where the company is the one that gets to choose the person who will decide who is right and wrong in disputes... and someone somewhere in the company actually believes that this is fair.
Totally agree with you here. However, most of the large companies have TOS that explicitly say they are not responsible for your data. That is where I think there needs to be a Bill of Rights for data. If you pay for a service, there should be some level of guarantee that they will at least try... and remove those clauses in their TOS.
I'd agree with you if you said that they deserved an outage of a few hours; that's the risk you run when someone else controls your data. Once in a while, things fail; when it happens to you and you control the data, you spend those few hours fixing whatever's wrong. When your data are in someone else's hands, you sit around writing angry tweets about it until it's fixed.
In the worst case, maybe you'll lose up to 24 hours of changes in case a backup needs to be restored.
For the data to be lost entirely, with no backup whatsoever, simply beggars belief.
This is an unmitigated disaster.