Tuesday, April 28, 2015

Police Were Harassing Students Before Monday's Outrage

An anonymous source has obtained documents that suggest a pattern of harassment over the two weeks since the death of Gray. Police officers have been arresting mostly young African American students after school let out, when the students refused police orders to get on the bus and go home. According to the Baltimore Police arrest reports acquired by THE REAL NEWS, officers arrested teenagers as young as 14 and 15 years old. According to the documents, police were asking students which bus they were waiting for. When the bus arrived and students didn’t board, presumably in an act of protest, the officers placed them under arrest for trespassing and loitering on MTA property. Many students yelled back. For example, one 16-year-old girl, whom we won’t name, defended her cousin, calling the police “fake […] y’all don’t have to lock her up,” she said. While it’s unclear whether those sorts of arrests repeated themselves on Monday, and perhaps even triggered the uproar, it’s very likely that they contributed to the deep sense of anger and frustration that has rocked Baltimore since the death of Freddie Gray.

Free online study for everyone!

Choose from hundreds of free online courses: from Language & Culture to Business & Management; Science & Technology to Health & Psychology.
Discuss and debate your ideas with other learners from around the world. Learning is as easy and natural as chatting with a group of friends.
Meet educators from top universities and cultural institutions, who'll share their experience through videos, articles, quizzes and discussions.

https://www.open2study.com


https://www.futurelearn.com/

Freddie Gray’s funeral.

Scene at Mondawmin Mall in North Baltimore: students had circulated a meme to organize a march after Freddie Gray’s funeral. They were met at the starting point by a horde of riot police intent on dispersing them. Police had found out about the event via Twitter. The situation is still escalating, just hours after Freddie Gray’s funeral.

 

Baltimore police have been unable to offer any explanation

The 27-year-old African-American man died Sunday from spinal injuries, one week after Baltimore police arrested him. His family and attorney say his voice box was crushed and his spine was “80 percent severed at his neck.” A preliminary autopsy report showed Gray died of a spinal injury.

Video shot by a bystander shows Gray screaming in apparent agony as police drag him to a van. Another witness said the police bent Gray like a pretzel. Baltimore police have been unable to offer any explanation for the death of Freddie Gray. See our reports from Saturday’s and Monday’s demonstrations.

Detroit man shot by federal agent died from multiple gunshots, autopsy finds

An autopsy has revealed that a 20-year-old man who was shot to death in a Detroit home by a federal agent serving a warrant died from multiple gunshot wounds.
Wayne County medical examiner's office spokesman Ryan Bridges said Tuesday that Terrance Kellom's death has been ruled a homicide. Bridges said he couldn't confirm the exact number of wounds Kellom suffered.
According to court records, Kellom was wanted on armed robbery and weapons charges.

3 people shot on Detroit's east side; police believe all are connected

Detroit Police had a very busy night. The work continues this morning as they try to find answers to three separate shootings last night on Detroit’s east side.
Police say they believe all three are connected.
First, a 16-year-old girl was shot and is in critical condition. This happened during a vigil on the 17300 block of Dresden Street, which is near Hoover Street and East McNichols Road.
And then, less than five blocks away, another woman was shot multiple times. This was on the 19200 block of Westphalia, which is between Hoover and Schoenherr Streets. We don’t know her age or her condition at this time.
And finally, police say a man showed up at the hospital with a gunshot wound, and said he was shot in the area of Harper and Chalmers Avenues.
Stay with Action News for the very latest on this story; we're working to get more information.

Cold Boot Attacks on Encryption Keys

If you completely shut down, yes, Truecrypt protects you. But if you have an abrupt power interruption (as in the case of a forced reset), or if you go into sleep or hibernation mode, then Truecrypt is just as vulnerable as any other encryption method.
Cold boot attacks

How to defeat end-to-end encryption: attacking key distribution



Many end-to-end encrypted messaging systems, including WhatsApp and iMessage, generate a long-term public and secret keypair for every device you own. The public portion of this keypair is distributed to anyone who might want to send you messages. The secret key never leaves the device.

Before you can initiate a connection with your intended recipient, you first have to obtain a copy of the recipient's public key. This is commonly handled using a key server that's operated by the provider. The key server may hand back one, or multiple public keys (depending on how many devices you've registered). As long as those keys all legitimately belong to your intended recipient, everything works fine.

Intercepting messages is possible, however, if the provider is willing to substitute its own public keys -- keys for which it (or the government) actually knows the secret half. In theory this is relatively simple -- in practice it can be something of a bear, due to the high complexity of protocols such as iMessage.
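The substitution attack is easy to see in a toy model. The sketch below uses an illustrative ElGamal-style scheme over a deliberately undersized prime; none of the names, parameters, or the scheme itself reflect how iMessage or WhatsApp actually implement their cryptography. It only shows what happens when the key server hands back a key the eavesdropper controls.

```python
import hashlib
import secrets

# Toy ElGamal-style scheme. The prime is far too small for real use and
# the keystream construction is NOT a vetted cipher -- illustration only.
P = 2**127 - 1  # a Mersenne prime
G = 3

def keygen():
    x = secrets.randbelow(P - 2) + 1          # secret half, never leaves the device
    return x, pow(G, x, P)                    # (secret key, public key)

def encrypt(pub, msg):
    r = secrets.randbelow(P - 2) + 1          # fresh ephemeral value
    shared = pow(pub, r, P)
    pad = hashlib.sha256(shared.to_bytes(16, 'big')).digest()
    return pow(G, r, P), bytes(m ^ p for m, p in zip(msg, pad))

def decrypt(sec, eph, ct):
    shared = pow(eph, sec, P)
    pad = hashlib.sha256(shared.to_bytes(16, 'big')).digest()
    return bytes(c ^ p for c, p in zip(ct, pad))

bob_sec, bob_pub = keygen()
mallory_sec, mallory_pub = keygen()

# Honest key server: maps user -> registered public key.
key_server = {'bob': bob_pub}

# The attack: the provider silently substitutes a key it (or the
# government) controls. Alice has no way to notice from the protocol alone.
key_server['bob'] = mallory_pub

eph, ct = encrypt(key_server['bob'], b'meet at noon')
print(decrypt(mallory_sec, eph, ct))          # Mallory reads the plaintext
```

Alice's client did everything "right": it fetched Bob's key from the server and encrypted to it. The man-in-the-middle lives entirely in the key distribution step, which is why the next section's fingerprint checks matter.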

Key fingerprints.
The main problem with key distribution attacks is that -- unlike a traditional wiretap -- substitute keys are, at least in theory, detectable by the target. Some communication systems, like Signal, allow users to compare key fingerprints in order to verify that each received the right public key. Others, like iMessage and WhatsApp, don't offer this technology -- but could easily be modified to do so (even using third party clients). Systems like CONIKS may even automate this process in the future -- allowing applications to monitor changes to their own keys in real time as they're distributed by a server.
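Fingerprint comparison itself is simple: hash the public key material and render a short digest that two humans can compare out-of-band. The sketch below is a toy with made-up key values (real apps like Signal derive a "safety number" from both parties' identity keys, not a bare hash), but it shows why a substituted key is detectable in principle.

```python
import hashlib

def fingerprint(pub):
    """Short human-comparable digest of a public key (toy scheme)."""
    digest = hashlib.sha256(pub.to_bytes(32, 'big')).hexdigest()
    return ' '.join(digest[i:i + 4] for i in range(0, 16, 4))

genuine_key = 0x1234ABCD       # hypothetical key Bob actually generated
substituted_key = 0x9999FFFF   # hypothetical key the server handed Alice

# If Alice and Bob read their fingerprints aloud over a phone call,
# a substitution shows up immediately:
print(fingerprint(genuine_key))
print(fingerprint(substituted_key))
```

The catch, of course, is that almost nobody performs this comparison in practice, which is what makes key distribution attacks attractive.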

A final and salient feature of the key distribution approach is that it allows only prospective eavesdropping -- that is, law enforcement must first target a particular user, and only then can they eavesdrop on her connections. There's no way to look backwards in time. I see this as a generally good thing. Others may disagree.

Key Escrow
Structure of the Clipper 'LEAF'.
The techniques above don't help much for systems without public key servers. Moreover, they do nothing for systems that don't use public keys at all, the prime example being device encryption. In this case, the only real alternative is to mandate some sort of key escrow.

Abstractly, the purpose of an escrow system is to place decryption keys on file ('escrow' them) with some trusted authority, who can break them out when the need arises. In practice it's usually a bit more complex.

The first wrinkle is that modern encryption systems often feature many decryption keys, some of which can be derived on-the-fly while the system operates. (Systems such as TextSecure/WhatsApp actually derive new encryption keys for virtually every message you send.) Users with encrypted devices may change their password from time to time.

To deal with this issue, a preferred approach is to wrap these session keys up (encrypt them) under some master public key generated by the escrow authority -- and to store/send the resulting ciphertexts along with the rest of the encrypted data. In the 1990s Clipper specification these ciphertexts were referred to as Law Enforcement Access Fields, or LEAFs.***

With added LEAFs in your protocol, wiretapping becomes relatively straightforward. Law enforcement simply intercepts the encrypted data -- or obtains it from your confiscated device -- extracts the LEAFs, and requests that the escrow authority decrypt them. You can find variants of this design dating back to the PGP era. In fact, the whole concept is deceptively simple -- provided you don't go further than the whiteboard.
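The whiteboard version of a LEAF really is only a few lines. The sketch below wraps a fresh session key under an escrow master key and ships the result alongside the ciphertext; everything here is a toy (the SHA-256 counter-mode "cipher" is an illustration, and a real design would use a proper key-wrap mode, since reusing this keystream across LEAFs would be insecure -- Clipper's actual LEAF also carried device IDs and a checksum).

```python
import hashlib
import secrets

def keystream_xor(key, data):
    """Toy stream cipher: XOR with a SHA-256 counter-mode keystream.
    Symmetric, so the same call encrypts and decrypts. Illustration only."""
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, 'big')).digest()
        out.extend(c ^ p for c, p in zip(data[block:block + 32], pad))
    return bytes(out)

escrow_master_key = secrets.token_bytes(32)   # held by the escrow authority

# Sender: encrypt the message under a fresh session key, then wrap that
# session key under the authority's master key -- the LEAF.
session_key = secrets.token_bytes(32)
ciphertext = keystream_xor(session_key, b'attack at dawn')
leaf = keystream_xor(escrow_master_key, session_key)

# Law enforcement: intercept (ciphertext, leaf), ask the authority to
# unwrap the LEAF, then decrypt the traffic with the recovered key.
recovered_session_key = keystream_xor(escrow_master_key, leaf)
print(keystream_xor(recovered_session_key, ciphertext))
```

Note how every bit of the system's security now also depends on `escrow_master_key` staying secret -- which is exactly the "who holds the keys" problem discussed below.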

Conceptual view of some encrypted data (left) and a LEAF (right).
It's only when you get into the details of actually implementing key escrow that things get hairy. These schemes require you to alter every protocol in your encryption system, at a pretty fundamental level -- in the process creating the mother of all security vulnerabilities -- but, more significantly, they force you to think very seriously about who you trust to hold those escrow keys.

Who does hold the keys?

This is the million dollar question for any escrow platform. The Post story devotes much energy to exploring various proposals for doing this.

Escrow key management is make-or-break, since the key server represents a universal vulnerability in any escrowed communication system. In the present debate there appear to be two solutions on the table. The first is to simply dump the problem onto individual providers, who will be responsible for managing their escrow keys -- using whatever technological means they deem appropriate. A few companies may get this right. Unfortunately, most companies suck at cryptography, so it seems reasonable to believe that the resulting systems will be quite fragile.

The second approach is for the government to hold the keys themselves. Since the escrow key is too valuable to entrust to one organization, one or more trustworthy U.S. departments would hold 'shares' of the master key, and would cooperate to provide decryption on a case-by-case basis. This was, in fact, the approach proposed for the Clipper chip.

The main problem with this proposal is that it's non-trivial to implement. If you're going to split keys across multiple agencies, you have to consider how you're going to store those keys, and how you're going to recover them when you need to access someone's data. The obvious approach -- bring the key shares back together at some centralized location -- seems quite risky, since the combined master key would be vulnerable in that moment.

A second approach is to use a threshold cryptosystem. Threshold crypto refers to a set of techniques for storing secret keys across multiple locations so that decryption can be done in place without recombining the key shares. This seems like an ideal solution, with only one problem: nobody has deployed threshold cryptosystems at this kind of scale before. In fact, many of the protocols we know of in this area have never even been implemented outside of the research literature. Moreover, it will require governments to precisely specify a set of protocols for tech companies to implement -- this seems incompatible with the original goal of letting technologists design their own systems.
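The "shares" idea can be illustrated with Shamir secret sharing, with one important caveat: plain Shamir sharing reconstructs the key in a single place, which is exactly the recombination risk described above. Genuine threshold decryption avoids that step. Still, the sharing half of the story fits in a short, stdlib-only sketch (requires Python 3.8+ for the modular inverse via `pow`):

```python
import secrets

PRIME = 2**127 - 1  # arithmetic is done in the field of integers mod this prime

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):  # evaluate the degree-(k-1) polynomial at x (Horner's rule)
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

master_key = secrets.randbelow(PRIME)
shares = split(master_key, n=5, k=3)   # e.g., five agencies, any three suffice
print(reconstruct(shares[:3]) == master_key)
```

Any two shares reveal nothing about the key; the hard deployment problems (distributed key generation, decrypting without ever recombining) are precisely where the research-grade threshold protocols come in.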

Software implementations

A final issue to keep in mind is the complexity of the software we'll need to make all of this happen. Our encryption software is already so complex that it's literally at the breaking point. (If you don't believe me, take a look at OpenSSL's security advisories for the last year.) While adding escrow mechanisms seems relatively straightforward, it will actually require quite a bit of careful coding, something we're just not good at.

Even if we do go forward with this plan, there are many unanswered questions. How widely can these software implementations be deployed? Will every application maker be forced to use escrow? Will we be required to offer a new set of system APIs in iOS, Windows and Android that we can use to get this right? Answering each of these questions will result in dramatic changes throughout the OS software stack. I don't envy the poor developers who will have to answer them.

How do we force people to use key escrow?

Leaving aside the technical questions, the real question is: how do you force anyone to do this stuff? Escrow requires breaking changes to most encryption protocols; it's costly as hell; and it introduces many new security concerns. Moreover, laws outlawing end-to-end encryption software seem destined to run afoul of the First Amendment.

I'm not a lawyer, so don't take my speculation too seriously -- but it seems intuitive to me that any potential legislation will be targeted at service providers, not software vendors or OSS developers. Thus the real leverage for mandating key escrow will apply to the Facebooks and Apples of the world. Your third-party PGP and OTR clients would be left alone, for the tiny percentage of the population who uses these tools.

Unfortunately, even small app developers are increasingly running their own back-end servers these days (e.g., Whisper Systems and Silent Circle) so this is less reassuring than it sounds. Probably the big takeaway for encryption app developers is that it might be good to think about how you'll function in a world where it's no longer possible to run your own back-end data transport service -- and where other commercial services may not be too friendly to moving your data for you.

How do we build encryption backdoors?



End-to-end encryption 101

Modern encryption schemes break down into several categories. For the purposes of this discussion we'll consider two: those systems for which the provider holds the key, and the set of systems where the provider doesn't.

We're not terribly interested in the first type of encryption, which includes protocols like SSL/TLS and Google Hangouts, since those only protect data at the link layer, i.e., until it reaches your provider's servers. I think it's fairly well established that if Facebook, Apple, Google or Yahoo can access your data, then the government can access it as well -- simply by subpoenaing or compelling those companies. We've even seen how this can work.

The encryption systems we're interested in all belong to the second class -- protocols where even the provider can't decrypt your information. This includes:

    Apple and Android device encryption (based on user passwords and/or a key that never leaves the device).
    End-to-end messaging applications such as WhatsApp, iMessage and Telegram*.
    Encrypted phone/videochat applications such as Facetime and Signal.
    Encrypted email systems like PGP, or Google/Yahoo's end-to-end.

This seems like a relatively short list, but in practice we're talking about an awful lot of data. The iMessage and WhatsApp systems alone process billions of instant messages every day, and Apple's device encryption is on by default for everyone with a recent(ly updated) iPhone.

How to defeat end-to-end encryption

If you've decided to go after end-to-end encryption through legal means, there are a relatively small number of ways to proceed.

By far the simplest is to simply ban end-to-end crypto altogether, or to mandate weak encryption. There's some precedent for this: throughout the 1990s, the NSA forced U.S. companies to ship 'export' grade encryption that was billed as being good enough for commercial use, but weak enough for governments to attack. The problem with this strategy is that attacks only get better -- but legacy crypto never dies.

Fortunately for this discussion, we have some parameters to work with. One of these is that Washington seems to genuinely want to avoid dictating technological designs to Silicon Valley. More importantly, President Obama himself has stated that "there’s no scenario in which we don’t want really strong encryption". Taking these statements at face value should mean that we can exclude outright crypto bans, mandated designs, and any modification that has the effect of fundamentally weakening encryption against outside attackers.

If we mix this all together, we're left with only two real options:

    Attacks on key distribution. In systems that depend on centralized, provider-operated key servers, such as WhatsApp, Facetime, Signal and iMessage,** governments can force providers to distribute illegitimate public keys, or register shadow devices to a user's account. This is essentially a man-in-the-middle attack on encrypted communication systems.
    Key escrow. Just about any encryption scheme can be modified to encrypt a copy of a decryption (or session) key such that a 'master keyholder' (e.g., Apple, or the U.S. government) can still decrypt. A major advantage is that this works even for device encryption systems, which have no key servers to suborn.

Each approach requires some modifications to clients, servers or other components of the system.

Cryptographic backdoors

Truecrypt report
A few weeks back I wrote an update on the Truecrypt audit promising that we'd have some concrete results to show you soon. Thanks to some hard work by the NCC Crypto Services group, soon is now. We're grateful to Alex, Sean and Tom, and to Kenn White at OCAP for making this all happen.

You can find the full report over at the Open Crypto Audit Project website. Those who want to read it themselves should do so. This post will only give a brief summary.

The TL;DR is that based on this audit, Truecrypt appears to be a relatively well-designed piece of crypto software. The NCC audit found no evidence of deliberate backdoors, or any severe design flaws that would make the software insecure in most instances.

That doesn't mean Truecrypt is perfect. The auditors did find a few glitches and some incautious programming -- leading to a couple of issues that could, in the right circumstances, cause Truecrypt to give less assurance than we'd like it to.

For example: the most significant issue in the Truecrypt report is a finding related to the Windows version of Truecrypt's random number generator (RNG), which is responsible for generating the keys that encrypt Truecrypt volumes. This is an important piece of code, since a predictable RNG can spell disaster for the security of everything else in the system.

The Truecrypt developers implemented their RNG based on a 1998 design by Peter Gutmann that uses an entropy pool to collect 'unpredictable' values from various sources in the system, including the Windows Crypto API itself. The problem in Truecrypt is that in some extremely rare circumstances, the Crypto API can fail to properly initialize. When this happens, Truecrypt should barf and catch fire. Instead it silently accepts this failure and continues to generate keys.
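The failure mode is easy to sketch. The class below is a hypothetical stand-in for a Gutmann-style pool (the names and structure are invented, not Truecrypt's actual code); the point is the difference between failing closed when an entropy source breaks and silently carrying on, which is the bug the auditors flagged.

```python
import hashlib
import os

class EntropyPool:
    """Toy entropy pool. `sources` stand in for the Windows Crypto API,
    system pointers, mouse movements, etc."""
    def __init__(self, sources, fail_closed=True):
        self.sources = sources
        self.fail_closed = fail_closed
        self.pool = b''

    def collect(self):
        for source in self.sources:
            try:
                self.pool += source()
            except OSError:
                if self.fail_closed:
                    # The behavior the auditors wanted: refuse to go on.
                    raise RuntimeError('entropy source failed; refusing to generate keys')
                # The reported bug, in miniature: swallow the failure silently.

    def key(self):
        return hashlib.sha256(self.pool).digest()

def os_rng():
    return os.urandom(32)

def broken_crypto_api():
    raise OSError('CryptAcquireContext failed')   # simulated rare failure

pool = EntropyPool([os_rng, broken_crypto_api], fail_closed=True)
try:
    pool.collect()
except RuntimeError as e:
    print(e)   # fail-closed: no key is generated from a degraded pool
```

With `fail_closed=False` the pool still produces a key, drawing only on the sources that happened to work -- which is why Truecrypt's silent fallback degrades assurance rather than destroying it outright.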


This is not the end of the world, since the likelihood of such a failure is extremely low. Moreover, even if the Windows Crypto API does fail on your system, Truecrypt still collects entropy from sources such as system pointers and mouse movements. These alternatives are probably good enough to protect you. But it's a bad design and should certainly be fixed in any Truecrypt forks.

In addition to the RNG issues, the NCC auditors also noted some concerns about the resilience of Truecrypt's AES code to cache timing attacks. This is probably not a concern unless you're performing encryption and decryption on a shared machine, or in an environment where the attacker can run code on your system (e.g., in a sandbox, or potentially in the browser). Still, this points the way to future hardening of any projects that use Truecrypt as a base.

Truecrypt is a really unique piece of software. The loss of Truecrypt's developers is keenly felt by a number of people who rely on full disk encryption to protect their data. With luck, the code will be carried on by others. We're hopeful that this review will provide some additional confidence in the code they're starting with.