This is a great letter from Microsoft Nouveau and I applaud them for taking a stand. It's on us as citizens to exercise our rights and bring about change and reform.
I think we should also remember those who took a stand early and paid the price, such as former Qwest CEO Joseph Nacchio, who was thrown under the bus for refusing to be an accomplice to the NSA [1].
The story of Joseph Nacchio was also mentioned in a blog post by Bruce Schneier from 2013 [1]. He cites several sources, including Business Insider [2] and the earlier USA TODAY article from 2006 [3].
Yeah, a link to one of Putin's propaganda sites. Stuff like that happens in Russia, but the following doesn't often happen here:
- Co-opt the SEC into bringing charges of financial fraud [1]
- Convince a federal judge to go along with that, and convict him
- Convince a majority of a US appeals court to go along with that
Or, alternatively, and probably closer to the truth: Mr. Nacchio was convicted in 2007 on 19 counts of insider trading for illegally selling $52 million worth of stock six years earlier, after insiders warned him that Qwest could not meet its targets. [2]
Edit: even better than that was Nacchio's excuse. It's one for the ages: Nacchio claimed that he was not in a rightful state of mind when he sold his shares because of problems with his son, and the imminent announcement of a number of government contracts. [1]
What people have been saying is that if most CEOs engage in insider trading, and only a few have charges brought against them, then that might indicate an abuse of power.
Since the other CEOs (those whose companies wilfully broke the law) weren't investigated, it's a little difficult to be sure so long after the fact. As far as I know, none of them have been indicted for helping the US government break the law - arguably a more serious crime than insider trading.
The argument is that it's an instance of: "You bring me the man, I'll find you the crime."[1]
Now, it could be that in the telecom industry at the time, in the US, there was little or no corruption and insider trading, and Nacchio was an exception. I certainly don't think he was framed. But there are at least some indications that it could have been a "tit for tat" that landed him in prison.
[1] (Reportedly stated by Stalin's chief of secret police, Lavrentiy Beria -- but I've been unable to find a source.)
All news sites carry a bias - all of them. Often the mainstream media in the home country wants to, or has to, toe the line and omit details or skip stories that might upset the local authorities.
So, to say a story is bunk because it appears in RT as a source demonstrates a wilful ignorance, or incredible naiveté.
I trust/distrust all media sources in equal measure, but the fun thing is that by reading a lot of them at the same time across the globe, you can more easily see the biases, and get to the root of the stories.
> to say a story is bunk because it appears in RT as a
> source demonstrates a wilful ignorance, or incredible
> naiveté
I didn't. I said an argument that relies upon RT because the story isn't carried elsewhere is "bunk".
> All news sites carry a bias
But few are sanctioned by Ofcom for regularly broadcasting "materially misleading" content, called out by all manner of former employees for being propaganda machines, or widely derided by virtually every other serious media outlet. Attempting to cover that with a general statement of all media bias - having substantially misrepresented (or just not properly read) the comment you're replying to - "demonstrates a wilful ignorance, or incredible naiveté".
A little off-topic, is there a name for this kind of rhetoric? I've seen it used a lot, for example:
- When pointed to government corruption they would reply "all the other governments have corruption too", implying there's no difference.
- When asked about a low standard of living, they would reply something like "15% of your own people are living below the poverty line", implying nothing out of the ordinary is going on.
It's like one guy has his boot covered in shit while a second guy is covered in it from head to toe; when the first says "you have a problem with being in shit", the second responds "you have that problem too (pointing at the boot), so it's OK".
This argument basically implies "we must not discriminate on the grounds of this problem because it is normal for everyone to have it", avoiding the possibly disadvantageous discussion of what the level of the problem is and whether that level is normal.
Just because something is printed in RT doesn't make it untrue. But when something critical of Russia's enemies is reported only in RT and not in any other major news source, that is a strong contender for being untrue.
For something to be an example of the ad-hominem fallacy, it has to be discarded purely on the basis of the claimant. If something is being rejected on the basis that the claimant is unreliable and contradicts generally reliable sources, that is something quite different.
Should I start discounting RT's weather report for NYC based on their political reporting, feel free to call me out on fallacious thinking.
You may be right, but your logic is fallacious. If there's only one news source and that news source is biased, then it doesn't follow that anything they say is untrue.
You are calling something false based on the source. This is simply illogical: a broken clock is right twice a day. Following your reasoning, it would be wrong even when it's right.
Either the news is true or it isn't. Judge it based on its own merits, not from its source.
If the government can require Microsoft to break the contractual and fiduciary commitments to customers to protect data and report on what happens to it, can the government also require individual employees to break their commitments to their employers? Agents show up at data warehouse on 1000 Main Street, tell the employees they are prohibited from contacting their bosses, ever. What is the limiting principle, where does it end?
The right way to deal with this is to set up your system such that you can't turn over customer data, because it's encrypted with a key to which only the customer has access.
Well, maybe, but you deal with the legal environment in which you find yourself. Right now, there's no such requirement.
You could probably get around one, too, by selling pieces of the system and having the customer use OSS for the rest. That is, sell storage and key dongles while pointing your customer to front end software. The same way they used to sell crushed grapes during prohibition.
This is precisely what the Burr-Feinstein bill that was just introduced would require - providing access to the government no matter what, even if you have deployed end-to-end encryption. rodgerd is correct, this is a political problem.
If you and your friend have previously exchanged public/private key pairs and used gmail to transfer encrypted messages, (which is what parent is referring to about providing service to only one part of the process) the government cannot impose anything on gmail to hand over your actual message.
Sure, but that doesn't stop the government from charging me (the sender/receiver) with illegally encrypting communications, which it looks like the bill could allow. It's a political problem because NO ONE should be charged for using math. When you consider how badly all computer crime laws are enforced, I think that fighting the bill via political methods is the only safe way forward.
Sure, embed your encrypted messages in plain text or cat pictures using steganography. Assuming the encrypted form of the messages is uniformly distributed, how are they going to find out you're doing it unless you confess or they find where you hide your keys? If a judge can charge you because you and a friend exchanged cat memes, you've got bigger problems.
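The LSB trick alluded to above can be sketched in a few lines. This is a toy illustration (the `embed`/`extract` helpers operating on raw pixel bytes are hypothetical), not a robust scheme; real steganography also has to survive re-encoding and statistical analysis:

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least-significant bits of `pixels` (toy LSB steganography)."""
    # One message bit per cover byte, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the LSB
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes previously embedded with embed()."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 4        # stand-in for raw image pixel data
msg = b"meet at dawn"                    # in practice this would be ciphertext
stego = embed(cover, msg)
assert extract(stego, len(msg)) == msg
```

If the embedded payload is ciphertext, the LSB changes look like noise to a casual observer, though dedicated steganalysis tools can still flag naive LSB embedding.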
Surely you agree that this puts the feds in a much tougher spot, though?
I think we should try to get from a 4 to 5, even if we can't get all the way to 10 right away. We shouldn't just punt on the problem because it's not possible to have an immediately perfect solution. 4 to 5 might mean using keys that only the customer has access to. 7 through 10 might all be political, sure, but let's keep moving.
Had the feds not been so abusive, many of the people who are completely lawful wouldn't be at the point of being okay with the feds having a hard time. I think going tit for tat would be fine, but I totally understand why some people are saying "okay, screw it, we'll just make something no one can get into".
Where are you going to host, then? None of the Five Eyes countries, so that rules out the UK, Canada, New Zealand, and Australia. (Indeed, New Zealand is ahead of the totalitarian curve here, with proposals to give the security services blanket exemptions for any law they break, as long as they claim a good-faith belief that they needed to break it.)
France? Much worse history on encryption than the US.
Germany? Officially better, but the intelligence services have been revealed to be essentially ignoring German law when it gets in the way of working with their US counterparts.
I wonder how true that is. There seems to be a whole lot of people in recent years who've been ordered, by a judge, not to talk about something a three letter agency is doing.
In theory that judge is supposed to make sure the law is being followed. My impression, though, is that some judges have a much more expansive view of government power than I think is warranted.
I don't think it's fair to classify your parent poster as "someone that can't see past their bigotry". There are serious issues, like Microsoft opening up a large attack surface for state fishing expeditions.
The list is long, and much of it is likely not generally known. Beyond that, Microsoft's actions were not done for the greater good, but to unjustifiably enrich and entrench itself.
No corporation acts for the greater good. That would be at odds with its legal requirement to act for its shareholders first.
Microsoft has simply discovered that a previous business strategy - monopolistic embrace, extend, and extinguish - is no longer a good one in today's open, networked world. Hence they have moved to new strategies that are more palatable to the tech community. Slightly late, but still in time to remain relevant.
Feel free to trust whomever you want, though if you're claiming that Microsoft hasn't done anything that would result in some not trusting it ever again, then yes, I take issue with that.
Lavabit was not actually architected in such a way that only the customer could read the emails, even though it made that claim: https://moxie.org/blog/lavabit-critique/
Yeah, technical solutions alone are not the answer. There's still the $5 wrench, or in the government's case, the threat of jail time for contempt of court should a real encryption bill pass.
To really protect the right to free speech we need both technical tools that are easy for everyone to verify, and a society who believes laws banning encryption are worthless.
Anything short of that is a risk that we flop into a state that is afraid of words.
However, it's hard to see how Microsoft could architect things that way -- they supply an operating system with constant security updates...
This should not have come as a surprise to anyone relying on encryption to maintain privacy, e.g. Snowden. Email simply does not support this without PGP.
Microsoft shouldn't collect that much consumer data in the first place. The tracking and spying that cannot be turned off in their most recent product version is not fine; it's a show stopper.
I wonder if the end result is just going to be mass production and adoption of memory, drives, etc that physically self-destruct. A combination of Apple's Touch ID as a fail-deadly switch, and physical destruction would just end this.
One kind-of-nice thing about firmware-level-encrypted SSDs, is that you don't need to overwrite every block on them to "secure wipe" them. When you send the drive an "erase yourself fully" command, it just securely overwrites the sector containing the encryption key with a new one. In terms of getting access to the rest of the data at that point, you've effectively just blown up the drive.
People think there's no point to hardware drive encryption, because they look at it wrong; they think it's meant to protect your data in the same way that OS-level user-passphrase-derived-master-key drive encryption is. But it's not about "unlocking" the drive at all; it's really about this exact command, where you can change the key and thus, in an instant, permanently garble all the data on the drive.
(It's also extremely necessary if you want computer refurbishers to be able to reuse SSDs. Trying to "securely overwrite" all the blocks on an SSD is both impossible at the OS level—some blocks are just unaddressable overprovisioned blocks that only come online when other blocks fail—and devastating at a physical level: the write amplification of running e.g. DBAN on an SSD would completely burn out the disk. Overwriting the key, meanwhile, is a single write.)
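The crypto-erase idea described above can be sketched as a toy model. This is an illustration only, not real drive firmware or real cryptography (the hash-based XOR keystream is a stand-in for the drive's actual cipher): every block is stored encrypted under a media key held inside the drive, and "secure erase" is a single key replacement that leaves the data blocks physically untouched but unrecoverable.

```python
import hashlib
import secrets

class ToyCryptoErasureDisk:
    """Toy model of firmware-level crypto-erase. NOT real cryptography."""

    def __init__(self, n_blocks=8, block_size=32):
        self.block_size = block_size
        self.media_key = secrets.token_bytes(32)   # lives only inside the "drive"
        self.blocks = [bytes(block_size) for _ in range(n_blocks)]

    def _keystream(self, lba):
        # Per-block keystream derived from the media key (toy construction).
        return hashlib.sha256(self.media_key + lba.to_bytes(8, "big")).digest()

    def write(self, lba, data):
        assert len(data) == self.block_size
        ks = self._keystream(lba)
        self.blocks[lba] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, lba):
        ks = self._keystream(lba)
        return bytes(a ^ b for a, b in zip(self.blocks[lba], ks))

    def secure_erase(self):
        # The whole "wipe" is one small key replacement, not a pass over the data.
        self.media_key = secrets.token_bytes(32)

disk = ToyCryptoErasureDisk()
disk.write(0, b"top secret contents, 32 bytes!!!")
assert disk.read(0).startswith(b"top secret")
disk.secure_erase()
assert not disk.read(0).startswith(b"top secret")  # old data now decrypts to garbage
```

Note that `secure_erase` never touches `self.blocks`; the stored ciphertext is identical before and after, which is exactly why the operation is a single write rather than a full-disk pass.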
How do we know that the firmware is actually erasing the block containing the encryption key?
Maybe it just leaves the block/key intact and alters the NVRAM key-address to point to another block or key-location, resulting in a history of every key used on the storage device.
In which case no matter how many times the device was secure-erased a well-equipped adversary could have ways to extract previous keys, either whilst the device is 'online' via undocumented commands, or whilst 'offline' with dedicated equipment.
The same 'destroy-the-key-not-the-data' procedure can be used at the user-controlled software level - Linux with LUKS/dm-crypt takes the same approach via the LUKS header, which can also be 'detached', i.e. stored only on a different (possibly removable) device entirely.
As a bonus it is secure even against malicious (snooping) drive firmware since only encrypted data is seen by the I/O controller and storage device.
One might turn that argument around: with OS-level encryption, you're trusting your OS and your CPU (even if it has a TPM) to not include NSA backdoors that will make it divulge its key, or use a different key for encrypting the drive than the one it says it's using.
With drive-level encryption, the drive effectively acts as its own TPM: the drive's encryption keys never leave the drive (and there's no API to ask for them), so—excepting drives that allow their firmware to be upgraded—there's no possibility of some other untrusted component of your system pulling off a record of the drive keys for later offline recovery.
The best procedure is to combine the two, of course. There's no added cost to disk-firmware-level encryption, since each disk-block is going through a rather complex transformation anyway to protect it from the vagaries of flash array storage. And CPU time for doing OS-level encryption is cheap. Turning both on—and telling the OS to secure-erase its key before telling the disk to secure-wipe—protects you from both attackers.
---
If, for some reason, you only have one or the other available to you, though, I'd honestly prefer the disk-firmware-level encryption, with a foreign-made drive. (This is going to be a bit of a tangent.)
Both CPUs and disks can do encryption. Both CPUs and disks can keep the key locked inside themselves, basically acting as TPMs. And both CPUs and disks can have been suborned at the design or manufacturing stage by a state actor.
My main choices of CPU (Intel, AMD, ARM), regardless of where they end up being manufactured, were all designed by either US or British firms—that is, firms within the jurisdiction of the Five Eyes SIGINT-sharing agreement. I have many choices of disks, though, and many of those options are both designed and manufactured outside of that jurisdiction, in e.g. China.
Now, while I might not trust foreign state-level SIGINT any more than domestic, the incentives are different. Even if Chinese drives have suborned firmware, China has no reason to care what's on my drives—because I'm not a Chinese citizen—or any way to force me to hand the drive over—because I'm not physically in China—and no way to make me be in China, because Canada (where I live) has no extradition treaty with China.
Meanwhile, if I relied on a US-designed/manufactured drive, the NSA would care what's on my drives (because Canada's NSA-equivalent, the CSE, asks them to)—and, because Canada and the US do have an extradition treaty, I could get extradited to the US for a US law the NSA says I broke—and I then would be forced to hand the drive over, where the backdoored keys could be extracted and used to recover the drive's contents.
All these arguments are reversed, of course, if you deal in state secrets, or industrial trade-secrets, that foreign state-actors would actively target you for. The military has every reason to only want domestic CPUs and domestic drives. But I'm not a spy, or a diplomat, or a military officer, or an electrical-utility tycoon; I'm just a private citizen. The only country that cares about me, for better or worse, is my own (and, y'know, those other ones that they attend SIGINT club with on Friday nights.)
> it just securely overwrites the sector containing the encryption key with a new one
This technique isn't just for SSDs. Something similar can be used with normal hard disks. Just store the true key as metadata on a hard disk. Erase the few sectors of metadata and the disk is no longer readable.
I think that ultimately would lead to a pretty sad situation for future historians, despite how great it may seem for the present. At least we have the trash and various discarded bits of previous civilizations to study and learn from. In the future with everything encrypted and secured as it is today, there might not be much left.
While it may be sad if history goes back to where it was before modern technology, where most people disappear without a trace, I think it is more important to ensure the world continues to be a viable place in which historians can operate uncensored, and privacy is a very important part of that.
And lose most of your customers because your products are too complicated to use. (Not saying there isn't a market for it, but it's not the mass market Microsoft is probably after.)
Easy web access, etc., doesn't work without the service provider being able to get at the data.
I have a growing impression that the big tech players have developed a cooperative strategy and are coordinating their moves to protect users' data in the cloud. And that movement has no altruistic or political roots, but strong economic ones. They simply HAVE to ward off any needle threatening to prick the cloud bubble they have made huge bets on.
I haven't seen any significant trend of people stopping using cloud services because they are worried about the NSA. I doubt they have to do this stuff. Maybe fighting the government might move usage a few percent. Can't see it being a game changer, really.
As a contractor working in Germany, I have encountered many small and medium businesses that are leery of American cloud providers for data security reasons, sometimes unreasonably so. They would like to use the clouds but can't bring themselves to do so. It's a little breathtaking to watch Amazon, Google, or Microsoft pass up significant revenue in real time because of US policy.
Perhaps they do not have to, but they should if they want to retain and grow customers in the future.
Anyone who's been bitten by data theft will tune into this. I imagine that includes any major business. MS is jumping on the bandwagon before it falls behind in the PR campaign. This could be the next campaign similar to environmental friendliness or human rights. It's not enough for tech companies to be "green" or have good factory conditions. Now that they hold so much user data, they must also demonstrate their commitment to security and transparency.
Nice PR spin, but I do not believe their lies. They are just riding the Apple PR train, and people are eating it up wholesale. The whole of Win10 is open to machinations and spying on you. You even sign your privacy away in their EULA.
Do not believe their lies. Microsoft is a harmful entity.
I would tend to think this is more to defend their cloud service, which they see now as a core business. If you are even a small supplier of airbus, after the Snowden revelations, you would be very brave to save any file in a Microsoft/Google/Amazon controlled server. These intrusions are an existential threat for these companies.
Could you elaborate, with some examples? I am aware of the risks involved in using US cloud services, as all data in the cloud is exposed to hostile access to some degree. Basically, keep your mission-critical files off the internet.
I just would like to have some sources / explanations to your statement.
As someone far removed from practicing law, what are the ramifications if Microsoft fails? Would nothing change? Would others be barred from ever suing the U.S. Government for the same reason?
There's no lawsuits yet. This is the continuation of a PR battle.
If something does go to court the government will probably just stop doing it and try something else (the government loses in this sense but only very narrowly).
A lot of these "tools" won't stand up to scrutiny by the Supreme Court because they are so broadly applied, and then the game would stop. It's easier to lose small battles and keep the game going.
Are you saying the first paragraph is a lie? Or is there something else I'm missing?
"This morning we filed a new lawsuit in federal court against the United States government to stand up for what we believe are our customers’ constitutional and fundamental rights – rights that help protect privacy and promote free expression." (my emphasis)
Microsoft did in fact file a lawsuit. That's precisely why it made the news pretty much everywhere.
"The lawsuit, filed on Thursday in federal court in Seattle, argues that the government is violating the U.S. Constitution by preventing Microsoft from notifying thousands of customers about government requests for their emails and other documents."
It just occurred to me, reading: "To be clear, we appreciate that there are times when secrecy around a government warrant is needed. This is the case, for example, when disclosure of the government’s warrant would (...) allow people to destroy evidence and thwart an investigation.", that we shouldn't be too broad in denying the ability to destroy evidence or thwart investigations.
Consider charges of conspiracy, or of access to classified material. If the suspect destroys the evidence, and neither commits nor has already committed any other crime -- should we really spend the resources to investigate and prosecute such thought crimes?
We risk losing sight of the fact that punishment is not a goal; it's a means to an end. Hopefully that end is a free and safe society.
A common situation, as I understand it from movies and TV, is in cases of domestic violence.
A woman breaks up with her abusive husband and he threatens to kill her. There's some record of him saying this. Should society do nothing to protect her? Men are known to be physically stronger. What kind of society would we be if we did not provide her some protection by putting space between her and the husband? Presumably courts would decide if it's necessary to jail him or just use a restraining order. And ultimately his punishment wouldn't be the same as if he actually committed the murder.
This becomes a real problem in the case of a repeat offender. The offender realizes he can deliver some "light" abuse and threats. Officials try to lock him up, but he only stays behind bars for so long, and unless the wife agrees to press charges and testify, there is little law enforcement can do. The wife is often terrified and won't testify.
It is situations like these that authoritarian regimes like China and North Korea will point to when suggesting that democracy is nuts. Under their authority, they could easily jail or kill such an individual.
In real life, there is a balance struck between laws and rights in the interest of furthering a trustful society. Different cultures draw the line in different places. I'm not familiar with what countries do not have laws against threatening someone's life but you could try to find one and see if you might like to live there. My guess is there aren't many and the bigger a country gets, the more likely they have this kind of law.
Yes, it can be objectively defined (to a good approximation). And really, the conflicting cases are rare enough that preventing false positives doesn't justify undermining the entire concept of crime prevention.
It is one thing for me to say "let's kill Bill". It is an entirely different thing for me and you to surveil his movements, his schedule, to develop code names, to discuss scenarios for action, to procure means to assassinate him. You can show evidence of serious intent in a conspiracy.
So, a company that has added telemetry services - impossible to turn off completely (unless you block their entire IP range) - to 90% of desktop devices is suing someone for not respecting users' privacy? What a joke.
Microsoft's ability to convince its customers to trust Microsoft with their data is enhanced by protecting that data from third parties (such as the US government).
Can the government force a private business entity to lie to its customers? If not, then why can't Microsoft just set up some sort of service/status report that outputs "no" if and only if the government did not access the data, and "unknown" otherwise?
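The "no"/"unknown" status report described above is essentially a warrant canary, and the core logic can be sketched in a few lines. This is a toy illustration (the function name and the 30-day window are assumptions, and real canaries carry legally untested risk and usually take the form of signed, dated statements rather than an API flag): the operator periodically re-signs an affirmative all-clear, and if gagged they simply stop, letting the status decay to "unknown" without anyone having to lie.

```python
from datetime import date, timedelta

def canary_status(last_signed_allclear: date, today: date,
                  max_age: timedelta = timedelta(days=30)) -> str:
    """Return "no" only while a fresh, affirmative all-clear statement exists.

    The operator re-signs the all-clear on a schedule. If a gag order
    arrives, they stop re-signing; the status then decays to "unknown"
    on its own, so no affirmative lie is ever published.
    """
    if today - last_signed_allclear <= max_age:
        return "no"        # "no government access as of the last statement"
    return "unknown"       # statement is stale: either forgotten, or gagged

today = date(2016, 4, 14)
assert canary_status(date(2016, 4, 1), today) == "no"
assert canary_status(date(2016, 1, 1), today) == "unknown"
```

The design choice that matters here is that silence, not speech, carries the signal; whether courts would treat deliberately letting a canary lapse as prohibited disclosure is exactly the open legal question.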
I still think MS should pursue their case but I wonder why they have not set up the system you're describing, given that another commenter's link suggests it is legal [1]
Let's Encrypt has issued 1.7 million free certificates for more than 3.8 million websites over the past six months. Let's Encrypt also enables companies like WordPress and DreamHost to offer free, easy HTTPS to their customers.
I wish they would offer wildcard subdomain support (I forget the official term right now) for services that offer subdomains to their users. I run one such service for free and would love to offer HTTPS to my customers, but I don't want to add yet another running cost for something I don't profit from.
That isn't what it's for. You can assume the government (really any government) has control over at least one CA.
What Let's Encrypt does against state-level actors is let people easily use HTTPS instead of HTTP without subjecting their users to self-signed certificate warnings.
A government could still MITM the connection but that requires an active attack rather than passive surveillance. And active attacks are subject to detection. So it protects against undetectable mass surveillance.
That would allow the government to issue new certificates, not to decrypt traffic to sites using Let's Encrypt certificates. Active man-in-the-middle attacks are rather noisy, and things like HPKP exist to prevent such attacks.
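The pinning idea can be sketched as a minimal check. This is an illustration of HPKP-style pin matching, not a full implementation (the byte strings below are stand-ins; real HPKP pins are the SHA-256 of the DER-encoded SubjectPublicKeyInfo, delivered to browsers via the Public-Key-Pins response header):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_matches(spki_der: bytes, expected_pins: set) -> bool:
    # A pinning client rejects any chain whose key hash isn't on file,
    # even if a (coerced or compromised) CA signed the certificate.
    return spki_pin(spki_der) in expected_pins

# Stand-in byte strings; a real client would extract the SPKI from the
# server's certificate during the TLS handshake.
legit_key = b"fake-spki-der-of-the-real-server-key"
mitm_key = b"fake-spki-der-of-a-government-mitm-key"
pins = {spki_pin(legit_key)}
assert pin_matches(legit_key, pins)
assert not pin_matches(mitm_key, pins)
```

This is why a coerced CA helps with active MITM but not with passive decryption: the attacker gets a validly signed certificate for a key the client has never pinned, and a pinning client refuses the connection.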
I've been working on teaching developers to not just use encryption, but to use it properly, for the past year or so. I'd like to think it's making a difference, but only time will tell.
For our blog posts: We don't have an official rubric, but we try to emphasize topics that are immediately useful and provide simple steps towards better long-term security for anyone who reads it.
For example, developers know not to roll their own cryptography, but how can someone who has never rolled their own crypto evaluate existing libraries? There are a lot of bad ones out there, and "I didn't roll my own crypto" doesn't equate to "I used a well-studied library thought to be secure by industry experts".
Consequently, we delved into specific recommendations and explained why we include them in our list.
(Historical context: this was published after I discovered CVE-2015-7503 in Zend Framework 2. A lot of my peers said the code was good, but I don't recommend code until I've audited it. Lo and behold, I found a problem with their RSA implementation.)
For our reading list: We focus on application security, not physical security (e.g. data center security, full disk encryption), social engineering (e.g. phishing, scamming), or system security (e.g. malware and OS-level exploit mitigation).
Application security can encompass cryptography, information theory, etc. but the target audience is programmers.
Material for any programming language will be considered for inclusion, but some absurd ones (e.g. Brainfuck) will probably be declined.
They force-push Windows 10 on all fronts, with all its built-in telemetry leakage and, I suppose, some NSA backdoors, and after that they write some PR piece like this, put on a cardboard Ubuntu mask, and throw around some open-source confetti.
This -- or this topic -- should be the top-level thread. I refuse to use Windows 10, and am not sure what to do going forward for software that I use which only runs on Windows.
Glad to see someone being positive, even if it's 95% of the way down the comments. I agree with Microsoft that saying you can never reveal what the government is up to is a violation of free speech, and only justified in extreme cases.
> Over the past 18 months, the U.S. government has required that we maintain secrecy regarding 2,576 legal demands, effectively silencing Microsoft from speaking to customers about warrants or other legal process seeking their data.
So should Outlook.com be considered insecure if the US government can access it at any time without you knowing? Microsoft should be able to inform you whether or not your information has been leaked. I hope Microsoft wins.
> To be clear, we appreciate that there are times when secrecy around a government warrant is needed. This is the case, for example, when disclosure of the government’s warrant would create a real risk of harm to another individual or when disclosure would allow people to destroy evidence and thwart an investigation.
So, like, every criminal investigation of a person who uses email?
Secrecy is great for Microsoft when it's in the form of extorting Android OEMs with bogus patents for billions of dollars a year. Or copying Google's search results. But not when complying with government data requests. Got it.
Because it's a web browser. What kind of proprietary trade secrets are they afraid of when every other browser is open, especially when they are touting the importance of encryption and user security? More likely they don't want users to know how much information they are actually collecting.
1. https://www.rt.com/usa/qwest-ceo-nsa-jail-604/