Here is a great article being buried by BS articles, proving the government knew about encryption issues before it had any scapegoats (Snowden). As quoted in the article, these acts are pretty shameless.
Even if we assume that secure chats are really secure (which, as you point out, has been debated by many experts), the biggest problem with Telegram is that nobody I've come across uses them, because they don't sync between devices. This is very crippling in this day and age. Wire handles that smoothly, and Signal requires a strange Chrome-plugin solution.
Matrix should be the obvious secure federated replacement for all of this software.
This may answer your question:
http://serverfault.com/questions/306345/certification-authority-root-certificate-expiry-and-renewal
Technically it can be done. I am not aware of any authority that would mandate otherwise.
I would get a copy of Tails and burn the ISO to a USB drive and boot your PC with that. It will force all of the communication over Tor for anonymity and includes other privacy tools.
In practice, applications (like TLS servers and web browsers) read random bits from /dev/urandom, or RtlGenRandom on Windows. The OS CSPRNG collects noise/randomness/entropy from multiple sources and securely mixes it, with reseeding to maintain secrecy against an attacker who had limited access to the OS CSPRNG's buffers.
All sane cryptographic functions internally use a randomness API that just provides the required number of random bytes (like reading from /dev/urandom or getrandom). Some things like HMAC and AES-GCM can use those bytes as-is; other things like RSA key generation use the randomness to deterministically generate candidate primes and primality-test them until two primes of the right size (half the RSA key size each) are found.
A CSPRNG can just be something that takes 256 bits generated by an unpredictable physical process (fair dice, electronic noise, whatever) and uses that as the key for AES-CTR (or a ChaCha20 keystream), generating up to 2^68 random bytes. This is probably enough, but if you need more, rekey it and run the same code again. This runs at gigabytes per second per core and is secure. The method of collecting the initial seed is up to you - some don't trust Intel RDRAND and prefer to use the least significant bits of I/O-interrupt timings or whatever.
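A toy version of that construction in Python, using SHA-256 in counter mode as a stand-in keystream (the stdlib has no AES-CTR or ChaCha20; the class and method names are made up for illustration):

```python
import hashlib

class HashCSPRNG:
    """Toy CSPRNG: a 256-bit seed expanded in counter mode.
    SHA-256 stands in for the AES-CTR / ChaCha20 keystream described above."""
    def __init__(self, seed: bytes):
        assert len(seed) == 32          # 256 bits from some physical process
        self.seed = seed
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(16, "big")).digest()
            out += block
            self.counter += 1
        return out[:n]

rng = HashCSPRNG(b"\x01" * 32)
a = rng.read(100)
b = rng.read(100)
assert a != b                                     # the stream moves forward
assert HashCSPRNG(b"\x01" * 32).read(100) == a    # deterministic given the seed
```

The same seed always yields the same stream, which is exactly why the seed must come from a process the attacker can't predict.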
See for example this for more: https://buttondown.email/cryptography-dispatches/archive/cryptography-dispatches-the-linux-csprng-is-now/
Trial division to start with: https://primes.utm.edu/prove/prove2_1.html
After that it does a Fermat test and more rigorous tests.
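A toy sketch of that pipeline in Python - trial division first, then a base-2 Fermat test, then Miller-Rabin as the "more rigorous" test (the small-prime list and round count are illustrative, not what libgcrypt uses):

```python
import random

def is_probable_prime(n, rounds=20):
    small_primes = (2, 3, 5, 7, 11, 13)
    if n < 2:
        return False
    for p in small_primes:            # trial division: cheap early rejection
        if n % p == 0:
            return n == p
    if pow(2, n - 1, n) != 1:         # Fermat test, base 2
        return False
    # Miller-Rabin: write n-1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: n is composite
    return True
```

Real key generators repeat this on random candidates of the right bit length until one passes.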
It is actually documented :-) https://www.gnupg.org/documentation/manuals/gcrypt/Prime_002dNumber_002dGenerator-Subsystem-Architecture.html
Aside from the books others mentioned, I wanted to also suggest one that recently came out: "Serious Cryptography" by Jean-Philippe Aumasson. What I like about this book is that it focuses on teaching crypto via programming practical implementations. So if you like coding and learning by example I highly recommend it.
Link: https://www.amazon.com/dp/1593278268/ref=cm_sw_r_cp_apa_RSWKAbQZMK4FV
They may not be boneheads. Even if we are to assume incompetence, we should not rule out malice, and we should definitely strive to fight potentially malicious actors.
For example, by phasing out 1024-bit keys - which appears not to be happening, based on reasoning I consider harmful. Compatibility does not beat security. The costs of breaches can be much greater than the costs of lost compatibility, and you may never even find out about the former.
I encourage you to read those slides, but tl;dr: the LibreSSL code, and especially their new libtls API, offers stronger implementation-security guarantees, is easier to work with, and is probably more 'future proof' against bus factors.
No. This is about "Device Encryption" not letting the user know their recovery key is stored in their "cloud profile", or even notifying them that it was stored there.
> This policy is also in contrast to Microsoft’s premium disk encryption product called BitLocker, which isn’t the same thing as what Microsoft refers to as device encryption. When you turn on BitLocker you’re forced to make a backup of your recovery key, but you get three options: Save it in your Microsoft account, save it to a USB stick, or print it.
At least BitLocker and OS X give options as to where to store the recovery key.
You can check whether any of your devices' recovery keys are stored with Microsoft here: https://onedrive.live.com/recoverykey . (This was also in the article.)
They posted a nice blog post recently:
https://blog.whiteout.io/2015/02/06/making-pgp-key-management-invisible-so-johnny-can-encrypt/
The key exchange problem is hard. For example, I have several lost PGP keys bouncing around keyservers that I can't revoke. After the first few, I learned that I need to set reasonable expiration dates on the keys.
I think the work the Keybase.io folks are doing can help here, since it lets you verify your identity via different social media platforms. It would be great if they worked with the existing key infrastructure to update keys once they've been signed by other people.
The HackerNews comments are quite good: https://news.ycombinator.com/item?id=9010894
ProtonMail's claim is that they have developed a web site which can encrypt secrets in such a way that the site cannot access those secrets:
https://protonmail.com/blog/what-is-end-to-end-encryption/
> When you use E2EE to send an email or a message to someone, no one monitoring the network can see the content of your message — not hackers, not the government, and not even the company (e.g. ProtonMail) that facilitates your communication.
> It keeps your data safe from hacks. E2EE means fewer parties have access to your unencrypted data. Even if hackers compromise the servers where your data is stored (e.g. Yahoo mail hack), they cannot decrypt your data because they do not possess the decryption keys.
A web application cannot provide these properties, because these properties run contrary to the fundamental threat model of a browser.
This is not how web browsers work. Any scripts running on a particular origin have the full authority granted to them by the CSP of that origin.
ProtonMail always has the option of exfiltrating your plaintexts and/or cryptographic keys any time you load any page on the same origin where you read encrypted mail, using an attack which is extremely difficult to detect, can happen at a particular point in time leaving virtually no trace, and makes it very easy to target individual users without any potential for detection by anyone else.
The result is the very thing we don't want out of a secure messaging application: a system which appears secure on the surface, but with vast attack surface that allows for targeted, stealthy attacks on individual users which leave behind little evidence the attack ever occurred.
Seems like a decent time to mention that if an attacker were able to compromise Telegram's servers, they could, undetected, alter a group's membership to add themselves, read all of the group's past messages, and read all of its future messages - and no one in the group would receive a notification.
>Q: So how do you encrypt data?
>We support two layers of secure encryption. Server-client encryption is used in Cloud Chats (private and group chats)
From their website. https://telegram.org/faq#q-so-how-do-you-encrypt-data
A large portion of their user base is, I'm quite sure, under the impression that group chats are secure, and imho that is due in no small part to Telegram's marketing and muddying of the waters.
No, not breaking AES - that would be major. I bet he's talking about brute-forcing weak passwords on encrypted disks. I've heard that TrueCrypt doesn't stretch users' keys much, so it's easier to go through all passwords of a certain length (not sure how long). They will also use known password and passphrase attacks.
https://code.google.com/p/truecrack/
That's why it's very important to use a long nonsensical passphrase to protect your encrypted disks.
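To put numbers on "long nonsensical passphrase", entropy is just words times log2(wordlist size), assuming each word or character is chosen uniformly at random (the Diceware-style 7776-word list here is one common example, not a requirement):

```python
import math

# Six random words from a 7776-word (Diceware-style) list
diceware_bits = 6 * math.log2(7776)
# Eight random lowercase letters, for comparison
password_bits = 8 * math.log2(26)

assert round(diceware_bits, 1) == 77.5
assert round(password_bits, 1) == 37.6
```

Every extra bit doubles the brute-force work, so the ~40-bit gap here is a factor of about a trillion - which matters most precisely when key stretching is weak.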
CrypTool is a program that allows you to play with a bunch of different encryption methods, from basic to modern. It also has analysis functions that allow you to break old ciphers. I've only used version 1, so I'm not sure about the other versions.
The how-it-works page is shallow. Explain how the device knows that it is getting the right public encryption key for the person you are communicating with.
Seems like they had some organizational problems:
>At July 11, 2015, Tox developers officially announced their disassociation with Tox Foundation, due to actions of Sean Qureshi (also known as Stqism, AlexStraunoff and NikolaiToryzin), head and sole board member of Tox Foundation, who "took a loan against Tox Foundation, and used the entirety of the foundation’s funds on personal expenses completely unrelated to the project".[15] Most of the foundations' funds consisted of prize fees after TF participation in Google Summer of Code 2014, with small part of users' donations.
>Exact amount of money Qureshi took is unknown—according to developer's statement in project's blog, it was in "low-thousands"; prior to that, on Reddit's thread explaining the situation, total sum of stolen money was reported as 3000 USD.[16] These events were also the cause of yet another project's move to the new domain, https://tox.chat, since Qureshi also provided hosting services for Tox Foundation needs and controlled all domains.
>Despite these events, developers stated that they will continue with Tox, and that source code was not compromised in any matter, as it was stored on a GitHub repo, controlled by one of the head developers, known as irungentoo; though it was noted that users should immediately switch to new software repositories. As of August 2015, no statement on situation from Qureshi is available.
Flairing it "unverified" because it recommends the seemingly proprietary and unaudited solution from NordVPN. They misuse "zero-knowledge" terminology, use shady marketing, and do other things like that, which makes them seem quite questionable.
Interesting article. These are clever designs that use cryptography to optimally use technology.
These designs have drawbacks, though. There are some technical drawbacks (as mentioned in the article's conclusion), but perhaps more important is how these designs affect the code maintainers. I know from experience that developers will get this stuff confused. There are checksums (called "hash functions"), cryptographic hash functions (called "hash functions"), and the HMAC construction (called "Isn't crc32 good enough?").
I can hear the voices now.
These confusions can be cleared up. But we have a choice between a simple design and a complex one. If this were a personal project, sure, I might use the complex design. Yay crypto! But if other people will touch my code, and the cost of an extra DB query is minimal (as Knuth said, premature optimization is the root of all evil), then simplicity becomes more important.
What is this "post-database world", anyway?
Git still uses SHA-1. I tried to look up why and found this - i.e., it is not for security reasons. Kinda weird, though; I don't understand why it'd be hard to switch checksums at all. Seems like if I'd written something like that, it'd just be port-the-function and plug it in.
Maybe git has some super-efficiency trick or something.
Yes.
Post-quantum RSA: under optimistic assumptions (i.e., assuming some limits on quantum computers' peak performance), you use the standard RSA algorithm with key sizes of gigabytes. Pessimistic assumptions (no engineering limits on quantum computers, hitting theoretical max performance) would require terabyte-sized keys. Keep in mind that the PQ-RSA proposal isn't entirely serious - decryption times can be a full day per message on beefy hardware; its purpose was to investigate the limits.
Tahoe-LAFS has this: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OneHundredYearCryptography
There are also "security by obesity" protocols with symmetric keys, where the key material is gigabytes or more (related to proof-of-storage schemes, used for things like authentication).
Psst: it's not! Check out their home page.
And their justification for not open-sourcing:
>Why Utopia is not open source?
>We may disclose certain parts of code, specifically related to communication and encryption. However, the decentralized protocol will not be released. Utopia is very knowledge-intensive software. A lot of time, effort and resources went into this product, and we do not want to share all of our know-how as it will result in forks which in turn may result in instability of our main network. Fork will lead to the division of the community, while our intention is the unification of the community of like-minded individuals. The bottom line here is that a lot of software is closed source, and this does not hurt them a bit. In addition, we will audit our code.
It's just lost all value in my mind if it can't even be casually audited by amateurs, let alone a professional, via source code availability.
I'll admit the homepage does look incredibly cool, though.
The only relevant example I can think of right now would be Signal's contact discovery feature;
https://signal.org/blog/private-contact-discovery/
They use SGX for protection. Other than that, isn't it mostly used for a few cases of DRM, and by a small number of businesses?
Nope
https://telegram.org/faq#q-how-secure-is-telegram
It's just mtproto for secret chats, plain client to server encryption for everything else. No usage of other protocols, that I can see.
The app Signal supports multiple devices. One device acts as a master; messages are encrypted to each of your devices' own unique keys. Your master device signs the public keys of your other devices to prove they're yours.
You can search locally in your own copy of the logs.
Matrix.org lets the user handle keys however they wish. It acts much like XMPP.
Hey there, I'm a software developer at Amazon working on s2n, our TLS implementation that encrypts a lot of the web traffic for Amazon. In the past we've hired interns who worked on s2n over the summer as full-time employees.
s2n is written in C, and intended to be very small, very simple, easy to review, and easy to test.
If you would like to help out we have a lot of good beginner tasks that you can work on at your own pace here: https://github.com/awslabs/s2n/labels/good-first-patch
If you are looking for a book recommendation you should look at Bulletproof TLS: https://www.amazon.com/dp/1907117040
As 0x90-NOP pointed out in his/her 3rd point, the cascading xor of adjacent characters can be stripped out, leaving a simple xor of the message with the repeated key. This is effectively a Vigenère cipher ( https://en.wikipedia.org/wiki/Vignere_cipher ), except it uses bitwise addition mod 2 (i.e. xor) instead of adding chars mod 26.
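To make the reduction concrete, here's repeating-key XOR in Python, along with the classic known-plaintext key recovery it allows (the plaintext and key are invented for the demo):

```python
def xor_repeating(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: a Vigenère cipher over bits instead of letters."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

pt = b"ATTACK AT DAWN"
key = b"KEY"
ct = xor_repeating(pt, key)
assert xor_repeating(ct, key) == pt       # XOR is its own inverse

# Known-plaintext attack: XORing ciphertext with guessed plaintext leaks the key
leaked = bytes(c ^ p for c, p in zip(ct, pt))
assert leaked == b"KEYKEYKEYKEYKE"
```

Even without known plaintext, the repeating structure is what classical Vigenère attacks (Kasiski examination, index of coincidence) exploit.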
If you find this interesting, you could do the Cryptography I course for free on Coursera ( https://www.coursera.org/course/crypto ). This stuff was covered in the first week. 8-)
This type of executable obfuscation is called a packer. These techniques are common in a lot of programs, such as Spotify. Malware also uses them to evade antivirus software and reverse engineers.
Overall, this approach is security through obscurity and is generally considered unsustainable in the long run.
Point by point:
>* Obfuscate all variable names, function names, class names, etc...
This depends on the language. In VM- or JIT-based languages you may need to do this; in languages compiled to machine code, like C, these names don't exist at the binary level.
>* Remove all formatting and white-space.
Same as the last one.
>* Encrypt all the raw data
This is a common packer technique, but can be trivially reversed by looking at an executable running in memory.
>* Generate a hash value for each file's encrypted data
Have a look at this project
>* Generate a hash of final file name of all the installed files for project
I can't really see a reason why this would be useful.
>* Store the raw file name, the file names hash value, and the encrypted data hash value in a plain raw-text-document
Again, check this out
Another useful technique is anti-debugging: it slows reverse engineers down, making it very expensive to make sense of the machine code.
However, all of these things can be overcome, patched, or otherwise subverted. There is NO technique that will stop a dedicated reverse engineer from tampering with your software. If the processor can make sense of it, so can a human.
tl;dr There is no real way to stop a reverse engineer from understanding your software, but there are ways to slow them down, or make an attack too expensive.
>Readable for humans
But the example canary tells nothing about what is being canaried. (Edit: the example proof is flat-out useless as a canary. If real canaries would carry other information, then an example containing that information would be preferable.)
What I think is needed is some easy way to select which types of orders and warrants you want to say you haven't received or complied with, much like the list in cock.li's canary.
Also, there should be more than just the date up to which the canary is valid: there should also be a date by which you should, at the latest, expect a new canary. Again, the cock.li canary specifies when it should be updated.
Also, I personally feel that shorter-term canaries would be more useful. Instead of saying you haven't received anything this month, make it per day, or at most per few days. This would make it easier to make educated guesses about what, and how many, orders the service has received.
And we don't just need a standardized canary format, but also some service where you can see the statuses of all the standardized canaries, and possibly get an RSS feed of them. It's a lot of wasted effort for everyone to check every service's canary individually. Since the messages are signed, they can be served by anyone.
There used to be a website called Canary Watch, but when I try to connect to it now, I only get TLS handshake errors.
How can it possibly upload the recovery key to your OneDrive account... if you don't have a OneDrive account? I believe that what you will find is that BitLocker will refuse to enable unless you link OneDrive or upgrade to Pro. It makes perfect sense to me that Microsoft doesn't want to make it easy for people to accidentally lock all their data away--unless they know what they are doing (which is the whole Home/Pro barrier).
Edit: BitLocker can still be used in the Pro and higher flavors of Windows without using OneDrive (this was also the case in Windows 8, I can't remember whether BitLocker was even available without at least Windows 7 Pro--I do know that I have had to specifically purchase Windows 7 Pro and Windows 8 Pro upgrades for BitLocker FDE support; I can't remember about Vista).
More here:
https://news.ycombinator.com/item?id=8546524
Honestly, this seems like a good improvement for people that would otherwise not encrypt their computers and who aren't capable of key management. I think it's a quite reasonable defense against loss/petty theft.
> What else can you do to your TOR browser to make it more secure and less vanilla?
Use Tails and not just Tor Browser on your native OS. If you cannot use Tails, at least use a Free, Open Source Operating system like GNU/Linux or *BSD - and keep proprietary add-ons like Adobe Flash and Sun Java off of it.
As the first thing you do in any Tor Browser session (in Tails or otherwise), click the Tor Button and select "Privacy and Security Settings..." and slide the slider all the way up to the "High" position. If that is unsuitable for any reason, at least click the NoScript icon in Tor Browser and select "Disable Scripts Globally." This is really not optional, at this point - I still think it's disgusting that the Tor devs enable scripts by default. Their concern is that it breaks a lot of modern websites, which it does, but JavaScript is also the single biggest vector for anonymity compromises both theoretically and historically.
For the ultimate security, use an Isolating Tor Proxy, with secure operating systems on all machines on the network. This requires some knowledge of Linux/UNIX, networking, and anonymity threat modeling in general. Ideally, run your workstation machine from some kind of Live medium (like Tails, though you'll probably want to hack it a bit so that it's not connecting to Tor through Tor). Whonix is an OS that's pre-configured for this kind of setup.
Yes. They offer two services - one, you (the hosted company) share your private key with them; two, you provide separately authenticated access to your private key via a key server. In both cases they can still see all of your traffic.
The web site is prohibiting keys from "openpgp.js", a javascript implementation of OpenPGP. They are not prohibiting all OpenPGP implementations.
It looks like they're prohibiting PGP keys that were likely to have been generated or stored on a server rather than by you on your personal computer. Most local applications should create usable keys.
Kleopatra uses GnuPG, so it should be fine. https://www.openpgp.org/software/kleopatra/
LUKS does something similar in that you need to store something in addition to your ciphertext:
On encryption, LUKS derives the key via PBKDF2 from the password. The key is kicked through PBKDF2 one more time and saved as the "key checksum". On decryption, a password is only accepted if PBKDF2(PBKDF2(password)) == key checksum. See Page 11 https://gitlab.com/cryptsetup/cryptsetup/-/wikis/LUKS-standard/on-disk-format.pdf
This construction reuses the assumption that PBKDF2 can't be inverted. I chose this because I felt that mixing in cipher primitives would add unnecessary parts to the whole construction.
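A minimal sketch of that double-PBKDF2 check in Python (the salts, password, and iteration count are illustrative; real LUKS parameters differ):

```python
import hashlib
import os

ITERS = 10_000  # illustrative; use far more in practice

def kdf(data: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", data, salt, ITERS)

# Setup: derive the key, store only the double-derived checksum
salt1, salt2 = os.urandom(16), os.urandom(16)
password = b"correct horse"
key = kdf(password, salt1)
key_checksum = kdf(key, salt2)          # stored next to the ciphertext

def unlock(candidate_password: bytes):
    candidate_key = kdf(candidate_password, salt1)
    if kdf(candidate_key, salt2) == key_checksum:
        return candidate_key            # checksum matches: accept the key
    return None                         # wrong password

assert unlock(b"correct horse") == key
assert unlock(b"wrong") is None
```

The checksum never reveals the key itself, because recovering the key from it would require inverting PBKDF2.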
People have been working on crypto communication networks for years. See whisper systems, bitmessage, et al.
Simply changing the keys on every connect does not eliminate man-in-the-middle attacks. Without even delving into implementation issues, one threat strikes me straight away: what if you were intercepted on first contact? What if the attacker has the ability to selectively drop packets?
"Decentralize" is not a magic key word that just makes threats go away. If anything it creates a billion more of them. I would rather trust a couple servers and end to end encrypt my content. Decentralization would be awesome...but it is damn difficult to get it right.
A one-time pad is not secure....it has the property of perfect secrecy. THESE ARE NOT THE SAME THING.
Perfect secrecy is the concept that a cryptanalyst can gain no information from the ciphertext, e.g. via frequency analysis. It does not mean that the communication between the two parties is secure.
A naive one-time pad is vulnerable to all other sorts of attacks, e.g. ciphertext malleability - if an attacker knows the structure of the underlying message, they can reorder messages and swap bytes with little consequence.
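A quick Python illustration of that malleability (the message, amount, and byte position are invented for the demo; the attacker only needs the format, never the pad):

```python
import os

# Toy one-time pad: perfectly secret, yet trivially malleable
msg = b"PAY ALICE $100"
pad = os.urandom(len(msg))
ct = bytes(m ^ p for m, p in zip(msg, pad))

# Attacker flips ciphertext bits knowing only the message structure
tampered = bytearray(ct)
pos = msg.index(b"$") + 1             # attacker guesses where the amount sits
tampered[pos] ^= ord("1") ^ ord("9")  # flip '1' -> '9' under the pad
decrypted = bytes(c ^ p for c, p in zip(tampered, pad))
assert decrypted == b"PAY ALICE $900"
```

This is why real protocols pair encryption with authentication (a MAC), even when the cipher itself is information-theoretically secret.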
While we are on the subject of flaws...subsequent key exchanges and forward secrecy are subject to all kind of threats. See https://otr.cypherpunks.ca/Protocol-v3-4.0.0.html and https://whispersystems.org/blog/advanced-ratcheting/
Basically what I am trying to say is....yes...we all want secure end-to-end messaging....lots of people are working on it. Email works because it can interoperate at a number of levels - any widespread replacement needs to have a similar level of flexibility - achieving all that and being transparent to the user is hard.
I share your desire for standardization, but OTR has massive problems, especially when used with mobile devices. The textsecure blog has a good summary of these issues. This is the entire reason the textsecure protocol was created.
From what I understand, this protects low-ish-entropy user passwords from being brute-forced, whether online or by someone with access to Facebook's internal private network.
It reminds me a bit of Signal's SVR, in that part of the protection is server-side tamper-proof hardware (SGX, in the case of SVR).
As far as I can see, the Bitwarden issue is that they don't mention the MAC when they say "AES-CBC", leading people to assume they don't use one. If you look at their interactive crypto page:
https://bitwarden.com/help/crypto
You can see that the master password derives a MAC key. If you view the page source, you can see that it uses that key in an encrypt-then-MAC fashion, using HMAC-SHA256. The final encrypted data is in the format encType + '.' + iv.b64 + '|' + ct.b64 + '|' + mac.b64.
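That layout can be sketched in stdlib Python. Note the stream cipher below is a SHA-256-counter stand-in, not Bitwarden's actual AES-CBC, and the "2" encType is illustrative - the point is the encrypt-then-MAC ordering and the encType.iv|ct|mac wire format:

```python
import base64
import hashlib
import hmac
import os

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # Stand-in stream cipher (NOT AES-CBC): SHA-256 in counter mode
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> str:
    iv = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, iv, len(plaintext))))
    mac = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()  # MAC over IV || CT
    b64 = lambda b: base64.b64encode(b).decode()
    return "2." + b64(iv) + "|" + b64(ct) + "|" + b64(mac)

def open_(enc_key: bytes, mac_key: bytes, blob: str) -> bytes:
    enc_type, rest = blob.split(".", 1)
    iv, ct, mac = (base64.b64decode(p) for p in rest.split("|"))
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):   # verify BEFORE decrypting
        raise ValueError("MAC check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, iv, len(ct))))

blob = seal(b"enc-key", b"mac-key", b"hello vault")
assert open_(b"enc-key", b"mac-key", blob) == b"hello vault"
```

Verifying the MAC before touching the ciphertext is what closes the padding-oracle class of attacks that plain AES-CBC invites.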
Painfully unhip viral marketing.
Didn't they forget to mention their crypto is "Virtually Unbreakable"!? I hope they didn't use their "Simple, Efficient and Virtually Unbreakable" encryption on this challenge. Surely ye must abandon all hope!
More bad news: Using its own proprietary cryptography algorithms and a chip that will soon arrive in consumer electronics devices, DeTron Inc. wants to become the first company to “meet the evolving global demands for cloud-based applications and services.”
The company hopes that users will soon be able to use its system for encrypting healthcare and financial records, but also for making email more secure.
Here's an example of this I just tried. I took two frames from my webcam, took the pixel-wise difference, and auto-levelled to make the differences more visually noticeable. You're right that there's a good amount of noise in there. Of course you can make out my face and outlines of a few things, but there's a good chunk of noise.
What's interesting is the pure black in the upper-right part of the image. Those are my white cabinets (they came out overexposed in the original images), which is why there's no noise there, I suppose. If you're going to do this yourself, make sure you don't end up with anything over- or underexposed.
The upper-left part of the image is where you see the most noise. That's my dark red wall. I guess that medium range of brightness works well for noise. If I were really interested I might poke at this more to see in what part of the signal noise is showing up the most (e.g., hue vs luminance) but I'll just say that it does, at first blush, seem like there's more than enough noise in there to get 500 bits of entropy from a webcam frame.
... to be continued
It sounds like you want to learn gpg. You don't even need two computers. You can email encrypted messages to yourself over the internet.
If you want to learn about the mechanics of encryption, use gpg to encrypt a file to yourself, and then decrypt it again.
Here's the manual: https://www.gnupg.org/gph/en/manual.html
> adds the salt
Careful: do not simply concatenate the password and the salt, as in H(password + salt) or H(salt + password), since that is insecure for certain choices of H. But yes, that is the correct idea.
Furthermore, a single iteration is undesirable. A password hash is required to be slow for security, hence why multiple iterations (a la PBKDF2) are normally used.
Please use scrypt or even bcrypt instead of attempting to implement this yourself.
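For example, Python ships scrypt in the standard library; a minimal sketch (the cost parameters n, r, p here are illustrative - tune them to your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("hunter2", salt, digest)
```

Unlike a bare H(salt + password), scrypt is deliberately memory-hard, so each guess costs an attacker real time and RAM.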
There are certain differences between Librevault and Syncthing now:
Also, it has a proper (!) desktop UI inspired by BitTorrent Sync 1.3.
Although RSA is getting old and clunky, it is fairly common to use it for signing JWTs. At a previous company, we used it. I tried to suggest ECDSA instead, but it was not supported by enough libraries, so RSA was the only option for us.
Having said that, JWTs carry risks more serious than the question of whether you should use RSA. The most common flaw is people switching a JWT's algorithm to 'none' to bypass signature verification altogether. There's an additional risk when public-key algorithms are used for signing: an attacker changes the algorithm to HMAC, then forges an HMAC signature using the public key as the secret. These vulnerabilities are well known in security and described in many places, such as here.
Bottom line: it is okay to use RSA provided your modulus is at least 2048 bits (many people recommend more), but regardless of what you use, make sure you test that your JWT implementation is not vulnerable to common attacks.
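A stdlib-only sketch of the defense against both attacks above: pin the expected algorithm server-side and never trust the token's own alg header (the key and claims here are invented for illustration):

```python
import base64
import hashlib
import hmac
import json

def b64url(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_jwt(token: str, key: bytes, expected_alg: str = "HS256") -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Pin the algorithm server-side; never honor the token's header
    if header.get("alg") != expected_alg:
        raise ValueError("unexpected alg: %r" % header.get("alg"))
    signing_input = (header_b64 + "." + payload_b64).encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))

# A legitimately signed token verifies...
key = b"server-secret"
payload = b64url(json.dumps({"sub": "alice"}).encode())
good_header = b64url(json.dumps({"alg": "HS256"}).encode())
sig = b64url(hmac.new(key, (good_header + "." + payload).encode(),
                      hashlib.sha256).digest())
assert verify_jwt(good_header + "." + payload + "." + sig, key)["sub"] == "alice"

# ...but an alg-"none" forgery with an empty signature is rejected
evil_header = b64url(json.dumps({"alg": "none"}).encode())
try:
    verify_jwt(evil_header + "." + payload + ".", key)
    assert False, "forgery accepted"
except ValueError:
    pass
```

The same alg check also blocks the RS256-to-HS256 confusion, since a token claiming any algorithm other than the pinned one is refused before verification is even attempted.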
> Tldr just use OpenKeychain:
I wouldn't recommend putting your long-term private OpenPGP key on your phone though. I would recommend a throw-away key instead. Maybe I'm being overly paranoid, but I barely trust my phone, after taking all the necessary precautions.
I think instead, I would rather recommend Keybase.
I don't have:
Star.Trek.Voyager.Complete.NTSC.DVD.DD5.1.x264-JCH.mkv
hunter2
plains notice enter woman
I don't have that.
And the solution is DNSSEC? Aside from being a horrible solution, DNSSEC has some bad flaws that blow it out of the game.
As much as I like the need, the solutions so far are pretty bad. DNS wasn't meant for what we're doing to it.
This is very difficult to get right, and it's only something to do as a last resort.
Interesting choice of title.
Hacker News links to the same article with "Vulnerability in Microsoft TLS library could allow remote code execution". (Same as the title of the linked bulletin post).
https://news.ycombinator.com/item?id=8591756
Glad to have GCM - shame it wasn't available before. Definitely not as big news as the vuln.
Suggested reading:
tl;dr: similar key sizes have similar security guarantees. Key generation, signing and verification have different speeds.
http://evs.sx/passphrase/js/pgp-word-list.js :
randomPGPWord: function(odd_position) {
    return PGPPassPhrase.pgp_word_list[Math.floor(Math.random() * 256)][odd_position ? 1 : 0];
}
FWIW, bitwarden computes a portion of prehashing client-side. https://bitwarden.com/help/article/what-encryption-is-used/
>The default iteration count used with PBKDF2 is 100,001 iterations on the client (this client-side iteration count is configurable from your account settings), and then an additional 100,000 iterations when stored on our servers (for a total of 200,001 iterations by default).
Heh... mt_rand is a favorite
> Caution: The distribution of mt_rand() return values is biased towards even numbers on 64-bit builds of PHP when max is beyond 2^32.
I posted that to /r/lolphp very recently.
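The PHP bug above is a range-scaling bias, but the broader family of pitfalls is easy to demonstrate - for instance, naive modulo reduction is biased whenever the generator's range isn't a multiple of the target range:

```python
from collections import Counter

# An 8-bit generator reduced mod 3: 256 isn't a multiple of 3,
# so 0 comes up slightly more often than 1 or 2
counts = Counter(x % 3 for x in range(256))
assert counts[0] == 86 and counts[1] == 85 and counts[2] == 85
```

Tiny here, but the same class of skew becomes exploitable when the outputs feed key or nonce generation.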
The intent in NaCl was to use Ed25519 as a signature system, although the original pre-TweetNaCl versions shipped an incomplete and incompatible implementation.
Signal's use of X25519 identity keys is largely due to legacy, and to make that work, Trevor Perrin had to develop a number of algorithms, namely XEdDSA and VXEdDSA. These algorithms have not gained adoption outside of Signal (not that there's anything wrong with them).
X25519 keys are Montgomery x-coordinates only and lose one bit of information versus Ed25519's compressed Edwards y-coordinates: the sign. This means if you convert between the two curve forms using the birational map between them, it's better to go Ed25519 -> X25519 as the other direction loses a bit of information (there were some proposals to include a sign bit with X25519 keys as well but they never materialized and most implementations set the bit to 0 unilaterally).
With Ed25519 you can also avoid converting between curve forms (which people seem to leap to overly eagerly, IMO) by using the Ed25519 key to sign an X25519 ephemeral key. This is supported by the Noise signature extension (e.g. with the XXsig pattern).
Or as others on the thread have mentioned: use Ristretto, which eliminates some of the sharp edges (cofactors / small subgroup attacks) of Curve25519.
I know about CopperheadOS, but sadly it basically works only on Nexus and Pixel devices. If you don't have one of those, a good option would be an Android distro without GApps - like LineageOS without the GApps package installed - so at least you'll have an up-to-date Android. Hope this helped.
One problem is a codebook attack. If the block size is only 8 bits, there are only 256 possible plaintext/ciphertext pairs. An enemy who is monitoring the communication and knows or guesses some plaintexts can collect them all and break the cipher. http://en.citizendium.org/wiki/Code_book_attack
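A toy demonstration in Python - the "cipher" here is just a secret keyed byte permutation, invented for illustration, since any 8-bit block cipher under a fixed key is exactly such a permutation:

```python
import random

# Toy 8-bit block cipher: the key is a secret permutation of 0..255
rng = random.Random(1234)     # stand-in for a real key schedule
perm = list(range(256))
rng.shuffle(perm)

def encrypt(data: bytes) -> bytes:
    return bytes(perm[b] for b in data)

# An eavesdropper who has seen (or guessed) every plaintext/ciphertext
# byte pair can tabulate the whole cipher and decrypt anything:
codebook = {perm[p]: p for p in range(256)}

ct = encrypt(b"attack at dawn")
assert bytes(codebook[c] for c in ct) == b"attack at dawn"
```

With 128-bit blocks the same table would need 2^128 entries, which is why real block ciphers use large blocks (and modes that randomize repeated plaintext).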
There are two promising projects building on TrueCrypt code, though I wouldn't trust them without looking at the diffs manually. That said I did quickly glance over VeraCrypt's changes and it seems that the majority of the patches are related to fixing the problems uncovered in the TC security audit.
DiskCryptor and TrueCrypt 7.1a will both do full disk encryption. I have heard that DiskCryptor allows key files, or can otherwise be made to require a USB stick to boot an encrypted disk, but I have never used it, so I don't know for sure or how it works. TrueCrypt can NOT use anything except a password for whole-disk encryption, so I know for sure it doesn't do what you want. I would look into the DiskCryptor documentation.
So I came into this article thinking it couldn't possibly be that bad and play devil's advocate.
I didn't get past the second paragraph without changing camp.
> GNS employs the curve parameters of the twisted edwards representation of Curve25519 [RFC7748] (a.k.a. edwards25519) with the E*C*DSA scheme
WHAT THE FUCK
WHAT THE FUCK
WHAT THE FUCK
> The inclusion of a hash function under the hood of the signature algorithm is a moot point, especially since RFC 6979 also uses HMAC-SHA2 to generate deterministic nonces, thereby rendering their choice of RFC 6979.
Did you accidentally the end of that sentence?
> GnuTLS is an SSL/TLS library created by the same people who created (and then abandoned) libmcrypt
I never knew that they shared origins. This explains a lot, actually. Thanks for pointing that out.
> [libgcrypt sucks from a security standpoint]
It does, but have you ever tried to use the asymmetric primitives? It also sucks from an API standpoint! It's S-expression management all the way down. It's horrendous.
I rather like the rest of the libgcrypt API design, though (at least as opposed to whatever OpenSSL's libcrypto is subjecting us all to), but that's the only part of libgcrypt I can say I genuinely like.
> If you see the letters GNU anywhere in a project that intersects with cryptography–except for its public license–it’s almost certainly an error-prone cryptographic design.
Do not look under the hood of any GNU project. I mean it. You do not want to see it. There are exceptions, but in general, you'll sleep much better if you don't know that GNU true(1) has an obscure failure mode that makes it possible for it to return false (true --help > /dev/full; note you may need to use env true to bypass your shell's builtin).
You probably want to use GPG, not OpenSSL.
With GPG, you each generate a public and private key. You share each other's public keys, and use them to sign and optionally encrypt messages between yourselves. This lets you know that a message came from your friend, and, when encrypted, that only your friend can read it: not even you can decrypt it after sending.
Thanks for your suggestion. Everyone can participate: https://www.privacytools.io/#participate
I've just added DD-WRT to the router firmware category under "worth mentioning".
> it's the merging of two products already used to encrypt
No. CryptSync is its own program. It uses your installations of 7-Zip and Gpg4win to save you from having to use the 7-Zip GUI (or the Command Prompt) or Gpg4win via the Command Prompt all the time.
> plus some cloud services added in?
No. It doesn't connect to the internet (except for the update check).
I wouldn't trust either of them. Microsoft has been avoiding questions like "are you able to decrypt a Bitlocker drive?" from the media, and HDD vendors could be just as malicious and/or incompetent when it comes to this stuff, so I don't expect them to get it right. It's like trusting a router vendor to give you a highly secure router firmware. It just isn't going to happen.
Use VeraCrypt if you want full drive encryption. As an alternative (if you don't have tens of gigs that need to be encrypted), I would suggest using Cryptomator to encrypt the files locally before you sync them to a cloud storage provider like Drive, Dropbox, or OneDrive.
This was written because Zend\Crypt was still unavoidably using PKCS1v1.5 padding and phpseclib wasn't simple enough by default.
Obviously libsodium is much better than RSA via OpenSSL, but libsodium still isn't a core PHP extension and must be installed via PECL (which is a no-go for most PHP devs).
I think this video is just an oversimplification of how the system really works. Here's a whitepaper that describes it in more detail:
https://www.wickr.com/uploads/files/700869603163179165-wickr-whitepaper-final.pdf
Basically, when the app is run for the first time, it generates a bunch of keypairs and uses a known public key to encrypt those keys and send them to the server. Then, when somebody wants to send you a message, their app uses that same known key to download one of your public keys that they then use as shown in the video. There's some more stuff that they do to get forward secrecy, message integrity, etc., but the embedded RSA and AES keys are essentially how they get around the MITM problem.
I have signed at key-signing parties before, but I'm currently inclined now not to sign keys unless I've met the person multiple times and recognize them, or unless I'm at an event with a bunch of other people I know who all recognize the person (i.e., they seem to be the person known by that name in that community). This is a lower standard than knowing the person super well, but higher than just seeing ID once. I will sometimes sign keys if I get the chance to examine a US passport or an in-state license from my own state, where I know what the document looks like. I really have no way to make sense of pre-electronic passports from Elbonia.
Anyone sending you files ought to be verifying your full key ID in some other way. If you're sending unencrypted, unsigned email, the key ID could be modified in transit anyway.
I've come around to being a fan of the Keybase approach: even though the actual names are a centralized directory run by Keybase, the protocol is all publicly-verifiable, and hashes of their DB are stored in the Bitcoin blockchain periodically, so you don't need to trust Keybase at all to verify that the information on their site is accurate. You could just link your Keybase profile.
thank you for posting this... has helped me understand the protocol a lot more than I was able to with only the WhatsApp white paper (PDF) and the Marlinspike blog post
Have you seen OpenKeychain and Keybase.io?
https://play.google.com/store/apps/details?id=org.sufficientlysecure.keychain
https://play.google.com/store/apps/details?id=io.keybase.ossifrage
Their use of a custom keyboard is a nice idea, though.
They are taking steps in the right direction, but it's still pretty far from TextSecure/Signal.
There isn't anything wrong with people practicing with crypto. The problem is that a lot of people will look at crypto algorithms and go "wow, this is simple. why should I use someone else's crypto?" and then they will proceed to believe that their implementation is secure. It's harsh to tell people not to do their own crypto, but like most simplifications, it is intended to get across a difficult message, not to be the full explanation.
Very few people outside of the crypto community have an appreciation for why it is difficult to make secure crypto. If someone wants to learn more about crypto, the very first place I would recommend they start is a free, online crypto course like this one. Then that person will understand what they're dealing with and be able to safely enjoy playing with crypto algorithms, and they will understand the algorithms much better to begin with.
That's just my two cents on the topic. I'm not a crypto expert by any means, but I know enough to know how little I do know on the topic, and I've gone through a crypto course or two that introduced me to how several of the popular algorithms work and security considerations behind them and just securing information in general.
The Tor project has been struggling with this for a long time. It's a hard problem to solve when you actually have governments trying to block your connections. Tor bridges attempt to solve this by making the traffic look like innocuous encrypted (XMPP) traffic, and of course, relying on a whole series of unpublished IPs.
> If you are skeptical, we can show that the security of PGP reduces to that of HTTPS anyway: the PGP key has no signatures and everyone will trust it because they fetched it from https://golang.org/security.
“We don't use PGP as it's intended to be used, and that proves that PGP provides no security.”
Also calling people who are quite rightly unimpressed by the whole CA situation “decentralization nutters”. The language alone just makes me want to take a shower.
>"trusted" just means that it's allowed as the sole source of entropy.
This is still incorrect. CONFIG_RANDOM_TRUST_CPU=y
in /boot/config-$(uname -r)
or random.trust_cpu=on
in /etc/sysctl.conf
only seeds the kernel RNG. Never at any point does RDRAND replace the kernel RNG. From the documentation:
random.trust_cpu={on,off} [KNL] Enable or disable trusting the use of the CPU's random number generator (if available) to fully seed the kernel's CRNG. Default is controlled by CONFIG_RANDOM_TRUST_CPU.
This configuration parameter only ensures that the kernel is sufficiently seeded in early boot, before userspace daemons need cryptographic bytes. Userspace processes still get data through the CSPRNG, which is several steps removed from the seeding.
Wasn't the Silk Road deanonymized through their use of captcha?
If you're targeted by the government, they'll hunt for any misconfiguration, anything at all that leaks any data whatsoever, like Silk Road using captcha through the normal internet, if that is truly how they were caught. I'd suspect the others were deanonymized in a similar, obscure way, or simply hacked.
I believe the attack described in Locating Hidden Servers was fixed, as it is mentioned here: https://www.torproject.org/docs/hidden-services.html.en
>At this point it is of special importance that the hidden service sticks to the same set of entry guards when creating new circuits. Otherwise an attacker could run his own relay and force a hidden service to create an arbitrary number of circuits in the hope that the corrupt relay is picked as entry node and he learns the hidden server's IP address via timing analysis. This attack was described by Øverlier and Syverson in their paper titled Locating Hidden Servers.
If your attacker is law enforcement / government, an entity with effectively unlimited time and resources, you're pretty much screwed. I doubt they exploited Tor on its own. I don't reject Tor security based on their success.
If, however, a researcher were able to find a problem specific to Tor and deanonymize a number of hidden services, not through a misconfiguration but through traffic analysis or something like that, I'd say Tor is insecure. The closest thing I've heard to that lately was the paper by the students that was pulled, I believe because it involved a DoS attack that would have been illegal to pull off. That's the only one I've heard of lately that concerned me.
Their explanation of why the OTP is uncrackable is absolute gold. https://www.kickstarter.com/projects/857935876/privus-fully-encrypted-email-chat-and-texting-made#project_faq_83852
The problem here is the use of v3 keys, right? Such keys are no longer supported by GnuPG as of version 2.1, which was released in November 2014. (The originally-linked message is from October 2014.)
If one is using GPG 2.1.x, is there any reason to worry about the issue presented here?
Perhaps it's late, I've had too much to drink, and I misunderstood the question, but you definitely can capture directly from within Wireshark from a card in promiscuous mode.
https://www.wireshark.org/docs/wsug_html_chunked/ChapterCapture.html
Whether or not that gives you useful time resolution, I guess I'm not informed enough to say.
Why not use KeePassXC? Use a key file (never share the key file via online methods, transfer directly to any devices you need to share it on), memorize a 10-word Diceware Passphrase, and if you have them also enable Yubikey Challenge-Response.
"Assuming password managers encrypt your login ids and passwords to the various services you use using a secure, one way hash."
Incorrect assumption: a one-way hash would make the passwords unrecoverable, so managers use reversible encryption. Also, almost all password managers explain how they do this.
https://support.1password.com/1password-security/ https://keepass.info/help/base/security.html etc
Libsodium has had such an audit. Maybe you could ask Private Internet Access or /u/jedisct1 how much it cost?
One of the larger issues with JWT a while back was in how applications would just accept the JWT and read out the data, validating it per the algorithm specified.
If the application issued a JWT signed with its key, it would also accept a token handed back with the algorithm changed so that the signature was no longer checked. I'd have to dig out the slide-deck link, but, functionally speaking, it was not well considered for the average use case.
EDIT: https://www.slideshare.net/snyff/jwt-insecurity - really good read on some of the attacks against JWT implementations at the time. Not sure how things have matured since then.
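A minimal sketch of that alg-confusion bug in pure-stdlib Python. The function name `naive_verify` and the overall flow are illustrative, not taken from any real JWT library; the point is that the verifier trusts the attacker-controlled "alg" header:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_dec(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def naive_verify(token: str, key: bytes) -> dict:
    # BUG: trusts the attacker-controlled "alg" field from the token header.
    header_b64, body_b64, sig = token.split(".")
    header = json.loads(b64url_dec(header_b64))
    if header["alg"] == "none":
        return json.loads(b64url_dec(body_b64))  # signature never checked!
    expected = b64url(hmac.new(key, f"{header_b64}.{body_b64}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(b64url_dec(body_b64))

# Attacker strips the signature and sets alg to "none":
forged = (b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode()) + "." +
          b64url(json.dumps({"user": "admin"}).encode()) + ".")
claims = naive_verify(forged, b"server-secret")
assert claims["user"] == "admin"  # accepted without knowing the key
```

The fix is for the verifier to pin the expected algorithm itself rather than read it out of the token.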
As /u/moxiemarlinspike commented in hn:
> I get a lot of credit for the stuff that Open Whisper Systems does, but it's not all me by a long shot. Trevor Perrin, Frederic Jacobs, Christine Corbett, Tyler Reinhard, Lilia Kai, Jake McGinty, and Rhodey Orbits are the crew that really made all this work happen.
In moxie and his team we trust (our privacy).
Also, there's the new open-source TLS library by Amazon, s2n. Reportedly just ~12.6 kLoC, according to the top comment in the HN discussion about the news. There's external code on top of that, but the total is still quite small.
> It is highly likely that this is the work of a troll. The RSA subkey that was factored has an invalid self-signature in hpa's public key, which means that it wasn't really hpa who added the subkey. Since the sks-keyserver pool doesn't verify signatures, anyone could have inserted that subkey. So anyone could have purposefully picked an exploitable RSA subkey, added a fake signature to it, and uploaded it to the sks-keyserver pool. Luckily, GPG will drop the subkey when retrieving hpa's public key since it doesn't have a valid self-signature. But for anyone scanning all the public keys without verifying signatures (for research, etc.), this key might get recognized and cause a shitstorm. Which is exactly what has happened. So far, there's no evidence that there is a conspiracy to weaken RSA keys. There is only evidence that someone inserted a bogus subkey into hpa's public key. There will be evidence of a conspiracy if we find a weak RSA key in the strongset that has a valid self-signature.
https://www.coursera.org/course/cryptography
The crypto course by Jonathan Katz on Coursera is also very good - covers a lot of the same stuff as Dan's course but goes more heavily into some areas and a bit lighter in others.
https://www.schneier.com/books/cryptography_engineering/
This book is a great but fairly heavy intro as well
For x448, I would say that my implementation (https://sourceforge.net/p/ed448goldilocks/code/ci/x448/tree/) is at least kind of similar to Donna. While the main branch (https://sourceforge.net/p/ed448goldilocks/code/ci/curve25519-work/tree/, since I haven't merged to master yet) implements Ed448, it's a big complicated code base so a simpler implementation would be a good idea.
From table 1:
Upper case:
TSAMCINB...
Lower case:
ETAONISR...
I would take a look at encfs. It's readily available in Linux/Ubuntu and there are ports to Windows.
Unlike TrueCrypt, encfs keeps each file encrypted separately. This means the sync task can handle individual files far more easily than a single gigabyte-sized chunk of TrueCrypt data.
You can mount a private folder to access your files and sync the encrypted version to the cloud.
I have been using it with Dropbox.com and Copy.com for a while now. No problems at all.
"End-to-end encryption" is one of the most abused terms in the industry. I have always been sceptical of protonmail's security claims. When a friend of mine asked me why I don't use protonmail a few months ago, my email reply was:
"Just reading more about protonmail security: https://protonmail.com/security-details Sorry, I'm calling bullshit. There is no way they can stop key switcheroos without an installed application. See Matthew Green's article about Apple's iMessage -- same problem. It's pseudo end-to-end, not real end-to-end."
A PW manager does everything for you. It creates strong passwords for all sites, it always stores them with strong encryption, and it syncs the passwords so you can access them from different devices or browsers. It is a lot more secure than anything else, as long as your master passphrase is strong enough. And you can further increase the security by using 2FA.
Remember that your passwords are always stored and synced encrypted, so nobody besides you can access them. They are decrypted only when you actually need to use them.
And because all passwords are unique, you don't need to be too worried if one of your passwords is compromised as no other site uses the same. The manager also generates password breach/strength reports for you. You can use as many "dumb websites" as you want. The manager will keep up.
At least check this out: https://bitwarden.com/
For computer security I use Windows Defender (the stock antivir). On my work laptop running macOS I think I have F-Secure.
Since the Game of Life is already Turing-complete and well known, yes, it has been done. There are also various methods designed directly for cellular automata, rather than existing algorithms converted to run on them; take a look at cellular automata ciphers:
https://www.academia.edu/10693460/Introduction_of_Cellular_Automata_in_designing_Stream_Cipher
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6524946&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6524946
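As a flavor of what those papers describe, here's a toy Rule 30 keystream generator (Wolfram's classic CA-based PRNG idea). This is my own illustrative sketch, not code from the linked papers, and it is NOT a secure cipher:

```python
def rule30_keystream(seed_bits, nbytes):
    """Generate nbytes of keystream by tapping the centre cell of a
    Rule 30 cellular automaton (new = left XOR (centre OR right))."""
    state = list(seed_bits)
    n = len(state)
    out, bit_acc, bits = [], 0, 0
    while len(out) < nbytes:
        # Advance one CA generation with wraparound neighbours.
        state = [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
                 for i in range(n)]
        bit_acc = (bit_acc << 1) | state[n // 2]
        bits += 1
        if bits == 8:
            out.append(bit_acc)
            bit_acc, bits = 0, 0
    return bytes(out)

seed = [0] * 64
seed[32] = 1  # single-cell seed; a real design would use a secret seed
ks = rule30_keystream(seed, 16)
plaintext = b"hello cellular!!"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
# XOR with the same keystream decrypts.
assert bytes(c ^ k for c, k in zip(ciphertext, ks)) == plaintext
```

Real CA cipher proposals add key-dependent rules and much larger states, but the stream-cipher structure is the same.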
I'd be looking hard at QubesOS, a system that lets you partition things you do into separate VMs, possibly each with its own encrypted drive. So general browsing & word processing on one VM, banking on another, one for anonymous browsing, ... https://www.qubes-os.org/
Just to complicate matters, you can use either the product, phi(N) = (p-1)(q-1), or the least common multiple, lcm(p-1, q-1), and get a working algorithm either way. However, if you need to interoperate with another implementation, both must use the same definition. http://en.citizendium.org/wiki/RSA_algorithm#Implementation_differences
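A quick check with the textbook p=61, q=53 example shows both definitions yield working (but different) private exponents:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q, e = 61, 53, 17
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120, the "product" definition
lam = lcm(p - 1, q - 1)        # 780, the lcm (Carmichael) definition

d_phi = pow(e, -1, phi)        # modular inverse of e mod phi
d_lam = pow(e, -1, lam)        # modular inverse of e mod lcm

m = 65
c = pow(m, e, n)
# Both private exponents decrypt correctly, even though they differ:
assert d_phi != d_lam
assert pow(c, d_phi, n) == m
assert pow(c, d_lam, n) == m
```

(The three-argument `pow` with a negative exponent needs Python 3.8+.) The interop issue shows up when one side derives d from phi and the other expects the lcm-derived value, e.g. when exporting raw key components.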
One reference, which I like since I wrote most of it, is at: http://en.citizendium.org/wiki/One-time_pad There are lots of other references, including a good FAQ linked in the above.
Note that while one-time pads are provably secure against all passive attacks, they are extremely weak against some active attacks; see the last section of that article.
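The classic active attack is bit-flipping: an attacker who knows (or guesses) the plaintext can XOR the ciphertext into any other message of the same length without touching the key. A minimal sketch:

```python
import os

# One-time pad: c = m XOR k. Perfectly secret against eavesdroppers.
message = b"PAY $100 TO ALICE"
key = os.urandom(len(message))
ciphertext = bytes(m ^ k for m, k in zip(message, key))

# Attacker knows the plaintext and wants to change the amount:
# XORing the ciphertext with (known XOR desired) edits the message.
delta = bytes(a ^ b for a, b in zip(b"PAY $100 TO ALICE",
                                    b"PAY $900 TO ALICE"))
tampered = bytes(c ^ d for c, d in zip(ciphertext, delta))

recovered = bytes(c ^ k for c, k in zip(tampered, key))
assert recovered == b"PAY $900 TO ALICE"  # decrypts to the forged message
```

This is why OTPs (and stream ciphers generally) need a separate authenticator such as a MAC.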
From something I wrote at: http://en.citizendium.org/wiki/Cryptography#Cryptography_is_difficult
As with databases and real-time programming, cryptography looks deceptively simple. The basic ideas are indeed simple, and almost any programmer can fairly easily implement something that handles straightforward cases. However, as in those other fields, there are also some quite tricky aspects to the problems, and anyone who tackles the hard cases without both some study of the relevant theory and considerable practical experience is almost certain to get it wrong. This is demonstrated far too often.
For example, companies that implement their own cryptography ...
I was trying to solve a very similar problem a while back and didn't really get anywhere. I wanted to use a proof-of-work system to avoid spam and CAPTCHAs (this part all worked) but then also use that proof-of-work to solve "real problems" (think BOINC). My problem was in verifying the solution. If I used a non-real world problem (e.g., find what number when salted with <given salt> using <given hash algorithm> produces <given hash value>) it was easy to accomplish but when I switched to useful problems I couldn't get it to work. Anyway, that didn't help you at all but if you make headway here I'd love to hear about it. Good luck!
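For the non-real-world variant that did work, the structure is hashcash-style: expensive to solve, one hash to verify. A minimal sketch (the difficulty level and function names are mine, purely illustrative):

```python
import hashlib, itertools

DIFFICULTY = 12  # leading zero bits required; tune for desired cost

def solve(challenge: bytes) -> int:
    # Prover brute-forces a nonce until sha256(challenge || nonce)
    # starts with DIFFICULTY zero bits (~2^DIFFICULTY hashes on average).
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    # Verifier recomputes a single hash: cheap to check, costly to produce.
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = solve(b"session-salt-123")
assert verify(b"session-salt-123", nonce)
```

The asymmetry between solve and verify is exactly what's hard to reproduce with "useful" BOINC-style problems: for most real computations, checking an answer costs nearly as much as producing it.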
We provide a bullet pointed list of something we have never done in our transparency report. The first point is: "CloudFlare has never turned over our SSL keys or our customers' SSL keys to anyone." https://www.cloudflare.com/transparency/
If you are indeed remembering this fact from a few years ago, it was quite possibly when people hadn't yet figured out that GPUs can be awesome for doing massively parallel tasks with simple calculations... like computing a hash.
In terms of the amount of effort involved compared to practical workability, buying a relatively high-end GPU (I think AMD are currently better at general compute) and running an off-the-internet program (example and example) capable of using it to run staggering amounts of hashes compared to any CPU, is far more likely to come up better than figuring out if a small chip can actually do anything.
Also, looking at it simply, you're facing off ~30nm custom designed silicon with likely 200nm, so the number of HDMI ports you can get for the price of an AMD 7970 better be pretty damn huge.
Disclosure: I am the maintainer of https://github.com/t-d-k/LibreCrypt
What Natanael_L says is correct, it's an encoding issue. In fact, you can use non-ascii characters in the password, and the container will open fine on a PC with the same default encoding.
At some point it will change to use pure UTF-8 internally, but that will be non-backwards compatible, so for now it's safer to only use ASCII.