A simple method of key verification for multi-device key exchange

There are tons of devices around with practically no user-facing interface at all which need to communicate securely with other devices. This includes devices such as a wireless thermometer communicating with an HVAC unit, or a wireless lock on your door communicating with your phone when you tell it which keys to accept. The risks include violation of privacy, physical damage and economic loss.

With the current Internet of Things trend there will only be more devices of this type in the future. To be able to use these devices securely you need to ensure there is no room for anybody to attempt to MITM these connections (to intercept a connection so that they sit in the middle and can see and manipulate all data), but practically ensuring that can be incredibly hard if the devices don’t even have a screen.

My idea for how to achieve this securely, with minimal interaction required from the user who links the devices together, is to show a visual pattern derived from a shared key.

Since most devices don’t have any interface beyond a single LED light, that could typically be hard to achieve. Fortunately that’s not a dead end: the simple solution is to let the two devices you’re linking together both show the exact same on/off blinking pattern, perfectly synchronized, while you hold them edge to edge. If the patterns are identical, they have the same key (details below on how this can be guaranteed). If you see that they don’t blink in perfect synchronization, then you know the devices you are trying to link do NOT have a secure direct connection to each other.

So how do you link them together in the first place? There are lots of methods, including using NFC and holding them together, temporarily using a wired connection (this likely won’t be common for consumer grade devices), using a radio-based method similar to WiFi WPS (press a button on both devices), and more. The two options likely to become the most common are the simultaneous button press method for wireless devices and NFC. While NFC has reasonable MITM resistance as a result of its design (simultaneously interfering with both parties of a connection is nearly impossible), that doesn’t guarantee that the user will notice an attack (attacking by connecting to the devices one at a time would still work).

So by confirming that two devices have a secure communication link by comparing blink patterns, it becomes easy to ensure configuration can be done securely for a wide range of devices. But how can we be sure of this? What can two devices communicate to allow security through comparing a blink pattern? Thanks to cryptographic key exchange this is easy: all the devices have to do is generate a large secret number each and perform an algorithm like Diffie-Hellman together. When two devices perform DH together, they generate a shared large secret number that no other device can know. This allows the devices to communicate securely by using this large number as an encryption key. It also allows us to verify that it is these two devices that are talking to each other, by running that number through a one-way transformation like a cryptographic hash function and using the result to generate the pattern to show. Only the two devices that were part of the same DH key exchange will show the same pattern.
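As a minimal sketch of that derivation (assuming Python with the “cryptography” package, with X25519 standing in for the Diffie-Hellman exchange and SHA-256 as the one-way transformation):

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each device generates its own large secret...
device_a = X25519PrivateKey.generate()
device_b = X25519PrivateKey.generate()

# ...and derives the same shared secret from the other's public key.
secret_a = device_a.exchange(device_b.public_key())
secret_b = device_b.exchange(device_a.public_key())
assert secret_a == secret_b  # no third device can compute this value

# One-way transform of the shared secret; these bits drive the blink pattern.
pattern_bits = hashlib.sha256(secret_a).digest()
```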

If anybody tries to attack the connection and perform DH key exchange with the devices separately, they will end up having DIFFERENT secret numbers and will therefore NOT show the same blink pattern.

Note that due to human visual bias, there’s a certain risk with showing a pattern with very few components (one that barely has more bits than an attacker can bruteforce). You can’t just display the binary version of the hashed key this way, since the risk is too large that many different blink patterns would be confused with each other. This can however be solved easily: you can use a form of key expansion with a hash function to give you more unique bits to compare. One way to do this is an iterated HMAC. With HMAC-SHA256 you get 256 bits to compare per iteration, so computing HMAC(Diffie-Hellman shared secret key, iteration number) for 10 iterations gives you 2560 bits to compare. This means that if the user looks for long enough, he WILL be able to identify mismatches.
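A minimal sketch of that expansion (Python standard library only; the exact encoding of the iteration number is my assumption, any fixed convention works):

```python
import hashlib, hmac

def expand_pattern(shared_secret: bytes, iterations: int = 10) -> bytes:
    # Each iteration yields 256 more bits: HMAC(shared secret, iteration no.).
    out = b""
    for i in range(iterations):
        out += hmac.new(shared_secret, i.to_bytes(4, "big"),
                        hashlib.sha256).digest()
    return out  # 10 iterations -> 320 bytes = 2560 bits to blink

bits = expand_pattern(b"example shared secret")
```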

To achieve strong security, you only need approximately 100+ pairs of bits to be identical to ensure bruteforce is unachievable, and in this setup that means the user only needs to verify that 4% of the full pattern is identical. So with a blink pattern where the blink rate is 5 bits per second, continuously comparing the pattern for any 20 seconds out of the 512 seconds it would take for the pattern to start repeating corresponds to verifying that 100 bits are identical. Of course the blinking would need to be kept synchronized, which would require the devices to synchronize their clocks before starting, and could also require them to keep doing so while the blink pattern is showing to prevent “drift”.

There are of course other possible methods than just on/off blinking. You could use an RGB LED to represent multiple bits per blink. You could also have geometric patterns shown on a screen when holding the screens of two devices up against each other. You could even do the same thing with mechanical/haptic outputs like Braille displays so that blind people can do it too.

What if you can’t hold the two devices physically close to each other? You could use another device as a “courier”. As one example, by letting your smartphone perform key exchange through this method with both devices one by one, it could then tell the two devices how to connect to each other and what encryption key to use. This way your smartphone would act as a trusted proxy for key exchange. It would also be possible to have a dedicated device for this, such as a small NFC tag with an RGB LED and a smartcard-like chip to perform the key exchange with both devices. Using a tag like that would make configuration of new devices as simple as holding it against the devices and comparing the patterns, and then the connection is secure, with minimal user interaction.

Then there’s the question of how to tell the devices whether the key exchange was a success or not. Most devices will typically have at least ONE button somewhere. It could be as easy as one press = success, two presses = start over. If there’s no button, and they are the type of devices that just run one task as soon as they get power, then you could use multiple NFC taps in place of button presses. The device could respond with a long solid flash to confirm a successful key exchange, or repeated on/off blinking to show that it has reset itself.

Originally posted here: http://www.reddit.com/r/Bitcoin/comments/2uah2b/weve_launched_the_coolwallet_on_indigogo/co6rru6

Relevant prior art (found via Google, there may be more):
http://citeseerx.ist.psu.edu/viewdoc/summary;jsessionid=7E99A2B9922A0AE79CF6CAC65634FD8C?doi=10.1.1.41.1574
http://citeseerx.ist.psu.edu/viewdoc/summary;jsessionid=7E99A2B9922A0AE79CF6CAC65634FD8C?doi=10.1.1.126.4242

Why it is possible for cryptocurrencies to gain and sustain value

This text is in large part based on the arguments from the NPR article on why gold historically became the standard currency, “A Chemist Explains Why Gold Beat Out Lithium, Osmium, Einsteinium”, and on my own comparison between the valuable properties of gold and the equivalent properties of Bitcoin and other cryptocurrencies.

Link: http://www.npr.org/blogs/money/2011/02/15/131430755/a-chemist-explains-why-gold-beat-out-lithium-osmium-einsteinium

So why DID gold win thousands of years ago over other forms of money and stay popular until now?

There are a few basic properties necessary for something to be useful as money: it is easy to store, easy to move, easy to accurately divide into parts, it doesn’t corrode or otherwise deteriorate over time, it isn’t fragile, and it isn’t dangerous to handle. Those are the basic physical properties, and without them nobody will want to use it.

And for the economic properties: it is scarce (unlike sand and practically all relevant metal alloys), it is hard to forge (or else you’ll get counterfeits everywhere), and its supply is reasonably predictable and doesn’t increase too fast (something which is scarce on a global scale but doubles every month isn’t useful as money, and something you don’t know the supply of is too uncertain). Another important property is fungibility: that the majority of samples are similar enough to be interchangeable. Gold fulfills this since it is an element, which lets you purify a sample of the metal by melting it and clearing out the unwanted elements, leaving you with pure gold which will always be the same (without fungibility every sample needs to be valued independently, which is a major PITA).

And since gold has fulfilled all those requirements better than the alternatives (as an example, it is more scarce and corrodes much less than silver), it has become highly valuable. You can with relative ease melt it into whatever shape and size you want, divide it into chunks of arbitrary size, and store it safely for centuries without it going bad. And you can fairly easily verify that the gold indeed is real gold. So when people wanted to make trades with each other for valuable items, gold was one of the simplest options, because there’s always somebody willing to accept it. All the other options were lacking in one or more of these properties compared to gold.

So how do cryptocurrencies like Bitcoin compare?

The comparison is quite straightforward: scarcity is guaranteed by the blockchain (the ledger of transactions) and the accompanying rules which all miners and Bitcoin wallets obey (anybody breaking the rules will be detected and ignored!), and the rules of Bitcoin guarantee a maximum of just below 21 million coins with no way around it. You can trivially confirm whether the “coins” somebody claims to have are real by looking at the blockchain to see if the referenced transaction is there or not, and whether it has been moved away or not. Fungibility is provided as well, since on the blockchain all “coins” are essentially equivalent: they are all a form of “statement” in the ledger/database which the blockchain is (“X coins belong to address Y”). The divisibility goes down to 8 decimals, making for a total of 2,099,999,997,690,000 subunits (that’s two thousand trillion), and more decimals can be added if necessary.

To pay with gold you need to make sure it is already divided into parts of equal value to what you’re buying. No such need with Bitcoin; the software takes care of it automatically. Verifying that gold is real is much harder than verifying bitcoins. Bitcoins are far more lightweight: you just need to store the private keys that your addresses are connected to (using public key cryptography), and that can be done on paper, which means storage is easier by a huge margin once you reach larger values. Like gold, bitcoins which you hold don’t deteriorate over time. The supply of Bitcoin is highly predictable and its scarcity is certain, similar to gold (it is actually far less certain for gold, with the potential for asteroid mining in the future).

Using a Bitcoin wallet is simple. Some of the most common ones are Electrum and Bitcoin Core on computers, Mycelium and Schildbach’s Bitcoin Wallet on Android, and Breadwallet on iOS. None of them need any registration of any kind to use, and they can all verify that the “coins” sent to you are real with no extra work required on your part. To send a transaction all you need is an internet connection. Making transactions takes merely seconds, and you can send money globally without a problem. Receiving coins is equally simple: just install one of those wallets, start it, and give the sender the address which your wallet automatically generated. You don’t even need to be online when receiving! That’s all you have to do, and the wallet tells you when the “coins” are yours to spend. The “coins” will stay there forever if you don’t touch them, and with the high divisibility of Bitcoin you can easily send exactly the sum you want (one thousandth of a dollar? no problem!). No third party needs to be involved, and neither party needs to trust the other any more than they normally would if it were a cash payment or if gold was used to pay.

So we have established that Bitcoin can match the properties which enabled gold to gain and sustain value, but why would it gain value in the first place? Why would people start to use it, and where is the demand coming from?

I have already mentioned some of the first reasons above: it can be used globally without any need for shipping anything around, it is easier to verify and it is easier to store. But that’s not all, far from it. Thanks to the combination of the blockchain and proof-of-work mining, Bitcoin introduced a bunch of unparalleled new features. Bitcoin has a scripting language, making it programmable money! It is the first truly decentralized cryptocurrency; all its predecessors relied on central servers and were under the control of a third party.

Can you imagine being able to program a piece of gold to teleport back into your vault if the seller didn’t fulfill the terms you agreed to? With Bitcoin you can do something with just that effect using 2-of-2 multisignature escrow. Can you imagine being able to securely ensure that 3 of 5, or 7 of 10 (or any other combination of numbers you like), of the people on the board of a company MUST sign every transaction that spends money from the company’s reserves, as if a bar of gold would refuse to move unless enough board members agreed? With Bitcoin you can achieve just that using m-of-n multisignature transactions. Can you imagine being able to prevent a sum of money from being spent before a certain date, as if you could make a bar of gold refuse to move until a given day? With Bitcoin you can do that using timelock transactions. And that’s just the beginning!
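To make this concrete, here is roughly what such rules look like as Bitcoin locking scripts (shown as Python lists of opcodes for readability; the keys, hashes and dates are placeholders, and the timelock form assumes the OP_CHECKLOCKTIMEVERIFY opcode):

```python
# A sketch of two standard locking scripts, as lists of script opcodes.
multisig_2_of_3 = [  # any 2 of the 3 listed keys must sign to spend
    "OP_2", "<pubkey1>", "<pubkey2>", "<pubkey3>", "OP_3",
    "OP_CHECKMULTISIG",
]

timelocked = [  # the coins refuse to move before <expiry_time>
    "<expiry_time>", "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
    "OP_DUP", "OP_HASH160", "<pubkey_hash>", "OP_EQUALVERIFY", "OP_CHECKSIG",
]
```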

So not only does Bitcoin match the properties of gold which enabled it to gain and sustain value, it also provides entirely new and unmatched incentives to use it. If you are involved in just about anything where you want to enforce a certain set of rules on how the money can be spent, Bitcoin can make your life much simpler. If Bitcoin is the best option available to achieve a goal, then there also exists demand for it. And when there’s both demand and a limited supply, it gains value and will have a market price.

What about altcoins (“alternative coins”, other blockchain based cryptocurrencies), why wouldn’t one of them replace Bitcoin? That answer could fill an entire book, but the short answer is that, because of the network effect, most people will want to use the most popular cryptocurrency, a spot that Bitcoin holds and has held since shortly after its release.

Cryptocurrencies become exponentially more useful the more people accept them. It’s the same reason why there are usually just a few social networks that are big at a time, considered the place to go for discussions, organizing events, and so on. It is the same reason why the phone networks of most countries are compatible and interconnected. Bitcoin was both first out and good enough that any competitor needs to be substantially better to beat it. Any competitor would need features that Bitcoin is unable to replicate, but since Bitcoin fundamentally is a computer protocol implemented in software, it can be updated to replicate any features of a competitor before that competitor gains momentum. So the probability that an altcoin will overtake Bitcoin is very slim, and any software developer capable of creating a better altcoin would likely gain more from working on improving Bitcoin itself instead.

Then there’s the question of how valuable it will become. Since demand on global markets is inherently unpredictable (you can never be certain that current trends will continue), nobody can possibly know for certain. There’s no guarantee it will ever go up from here, because for all we know it might already have found its niche in the market. My personal opinion is that what it offers is so much better than the current options (mainly fiat currencies, also known as state issued paper money) and payment mechanisms (such as credit cards and PayPal) that demand should grow in the future as other people take a closer look and decide that its features are desirable.

One thing we can know for certain is that it will be interesting to follow its progress in the future, no matter where it goes.

If you have any questions, feel free to ask below. I’ll try my best to answer most questions, anything from questions about the technology to the economic incentives and how to use it.

Web-of-Trust DNS

Originally published here, copied below (edits and references coming later): http://www.reddit.com/r/Meshnet/comments/o3wex/wotdns_web_of_trust_based_domain_name_system

Previously mentioned on my blog here: https://roamingaroundatrandom.wordpress.com/2010/12/06/my-ideas-for-dns-p2p/

WoT-DNS – Description

Link: https://en.wikipedia.org/wiki/Web_of_trust

TL;DR: A system for deciding where domain names should go based on who you trust.

WoT-DNS is my proposal for a new P2P based DNS system.

This system decides where a domain name like reddit.wot should go based on your trust, as an individual; it does not care about the opinion of random strangers. You are the one who chooses who’s trusted and who’s not, since it’s using WoT (web of trust). Also, domain names are intentionally NOT globally unique, since the only way to achieve that is with a centralized service or a first-come, first-served system like Namecoin, and I dislike both of those solutions. This means that if you ask for a site name like reddit.wot, you could get many results instead of going straight to one site. But whenever one site is trusted (for you) much more than the rest (like reddit’s official site would be), that’s where you’ll go.

Basic idea: Gather site registrations for a domain name from the network and from friends -> calculate your WoT metrics for each of the results -> pick the top site if one stands out at the top as most trusted -> let the application go to that site.

Basics

Every participant runs a WoT-DNS client. There are several ways to enable browsers, IM clients, etc., to use this system. One is to run a local proxy where only .wot domains are intercepted and normal traffic is untouched. When connecting, the client would start by asking the WoT-DNS network about who has registered a site with that domain name.

Every client has a unique asymmetric keypair; both regular users and servers have them. Servers additionally generate one unique keypair per registered domain. Registered .wot domains are identified by their key. Each registered domain has at least two addresses: the readable one, such as example-domain.wot, and one that contains its public key hash (like I2P: [the 52 base32 characters of the SHA256-hashed public key].key.wot, so “key.wot” is one of those domains you can’t register). That means you can always go directly to a particular site by entering its key hash.
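A minimal sketch of how such a key-hash address could be computed (the exact format is my assumption, modeled on I2P’s base32 addresses):

```python
# SHA-256 the public key, base32-encode, strip padding: 52 characters.
import base64, hashlib

def key_hash_domain(public_key: bytes) -> str:
    digest = hashlib.sha256(public_key).digest()         # 32 bytes
    b32 = base64.b32encode(digest).decode().rstrip("=")  # 52 characters
    return b32.lower() + ".key.wot"

print(key_hash_domain(b"example public key"))
```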

A domain registration has to contain at least this: the domain name, the server’s public key, and addresses (yes, more than one if you like; useful for load balancing and for additionally specifying I2P/Tor addresses along with regular-internet IP addresses). Additionally, you can add all the data that ordinary DNS servers can hold for a domain. It can also hold a site name and a description of the site, which is useful for telling sites with the same domain name apart. All registrations are also timestamped. I would also like to see a trusted timestamping system built in, to ensure that nobody claims their domain registrations are older than they are; the point is to prevent phishing by faking a site’s age.
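As an illustration only (the field names are my assumption, not a finished wire format), a registration record could look like this; it would be signed with the domain keypair:

```python
registration = {
    "domain": "example-domain.wot",
    "public_key": "<domain public key>",
    "addresses": ["203.0.113.7", "<I2P address>", "<Tor address>"],
    "dns_records": {"MX": "mail.example-domain.wot"},  # ordinary DNS data
    "site_name": "Example Domain",
    "description": "Helps tell same-named sites apart",
    "timestamp": 1325376000,
    "signature": "<signature by the domain keypair>",
}
```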

Domain registrations are stored in a distributed database. This means that every node keeps local copies of plenty of registrations. Updates will be continuously added to the distributed database (such as when IP addresses change), and the old registrations are then replaced (but only if the keys and signature match). I suggest that we use some DHT (“distributed hash table”) system like Kademlia for the database, or something similar that provides the features we need.

The Web of Trust part:

The keypairs make this possible. Since everybody has a unique keypair that consists of a public key and a secret one (using asymmetric cryptography, public key encryption), PGP makes it possible to create signatures of data that likely can’t be forged in our lifetimes. 2048 and 4096 bit RSA keys are highly secure (while I prefer the larger and safer 4096 bit keys, they’re unfortunately also about 5-6 times slower). Keypairs are used both by site owners for signing their domain registrations, and by users who additionally sign them as a means to show that they trust that site. You can also sign a site as untrusted.

WoT details: You have a list of trusted people and organizations, including their public keys. Organizations like Verisign (an SSL certificate authority) could be predefined for the sake of newcomers; this will make it behave like SSL out of the box. If a site has been signed by a friend or by a trusted organization, your client will detect that and calculate what level of trust (trust metric) the site gets based on it. Since there can be several sites for a domain name, the site with the highest trust metric is the one your client chooses to go to. If both Microsoft and a spammer registered microsoft.wot and only MS has a signature from Verisign, then Microsoft’s site will be more trusted, so your client will prefer to go to it if your client is set to trust Verisign.

If the site at the top doesn’t have a trust metric that’s high enough (not enough trusted signatures, or less than around 30% higher trust than the runner-up), or if something triggers an alert (some spam/scam detection should also be built in), then you won’t be sent to the top site right away. Instead you get a list of the matching sites, ranked by their trust metrics.

So, how are trust metrics calculated? There are PLENTY of ways. One is to assign various levels of trust to your friends, and then simply look at how trusted a site is by the people in your web of trust, such as your friends’ friends. If it’s fully trusted by somebody you fully trust, then you fully trust the site. If it’s a bit trusted by somebody you trust a bit, it’s just a little bit trusted by you. And that’s just the short version!
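A toy version of that short version (my own simplification: trust multiplies along a path, and the best path wins):

```python
def site_trust(my_trust: dict, signatures: dict, site: str) -> float:
    """my_trust: friend -> trust in [0, 1].
    signatures: friend -> {site -> their trust in the site}."""
    scores = [trust * signatures.get(friend, {}).get(site, 0.0)
              for friend, trust in my_trust.items()]
    return max(scores, default=0.0)

my_trust = {"alice": 1.0, "bob": 0.3}
signatures = {"alice": {"reddit.wot": 1.0}, "bob": {"reddit.wot": 0.5}}
print(site_trust(my_trust, signatures, "reddit.wot"))  # 1.0, via alice
```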

Note that a signature on a domain from a user or an organization like Verisign isn’t intended as a method to indicate how trustworthy the site owner is; it’s primarily a means of voting in this case (choosing who gets what domain name). The trust part is secondary, but necessary to make sure that scammers and spammers won’t be able to take over popular domain names to trick people.

So how do you get started? If you want to clear Verisign and the others out of the predefined list because you don’t trust them, how do you add people you trust? Well, one way is to “bootstrap” using social networks. Let your client announce on Facebook, Twitter or Google+ that you are now using WoT-DNS, with a message that contains your key. When your friends start using WoT-DNS, their clients will automatically find your key and connect to you (if they choose to connect to the same social network). Then you’ll have a list of your friends in your client, and can set the trust levels there. And we don’t need to limit it to social networks.

For site admins: While each site will have one keypair, it’s not the only one. Your client also holds your personal (or corporate) keypair, which your site’s key will be signed with. This “master keypair” for the site can be kept away from your servers, so you can keep it encrypted on a drive in a safe (obviously you can have multiple separate keypairs, so you don’t need that level of security for the rest). If the server is hacked and somebody gets your site key, you can issue a revocation signature with your master keypair, which will tell everybody that the site’s old keys are now revoked.

Then you can restore the servers and generate a new site key, and all the old trust signatures can be “moved over”. This won’t be automatic, but everybody who has signed the site key will be notified about the replacement keypair so that they can sign it.

Problems

  • Vulnerable to targeted social engineering. A scammer could try to trick several close friends of some CEO into signing his site, in order to convince the CEO that the site is legitimate.
  • Trust metrics. How do we calculate them? How do we make them hard to “game”/mess with?
  • Evaluating trust. How do you know if your friend can judge if a site is legitimate? How do you yourself know if a site is legitimate?

NON-issues

  • Botnets/spammers that mass-sign phishing sites’ keys. This is only a problem if a significant part of YOUR web of trust (your friends) signs the site’s public key and it hasn’t yet been flagged by somebody like Microsoft or Google (they already keep their own blacklists of spam domains for use in Chrome and IE).
  • A bunch of strangers, or Group X or Group Y, signing the key for a site that conflicts with the one you want to go to from Group Z. This will NOT prevent you from getting to the site you want. Just don’t set your client to trust X or Y. But yes, this means that followers of different groups can end up on different sites for the same domain name. This is by design, as the only alternatives I can come up with are first come, first served systems, which is what it would take to make domain names globally unique. So I’m allowing domain name conflicts and letting different people get to different sites for them. I do not see this as an issue.
  • Non-static URLs. We can have static URLs too, but you need to use the key hash domain names for them. A static URL could look like this: abcdef0123456789abcdef0123456789abcdef0123456789abcd.key.wot/news/global/reddit-is-awesome.php
  • Single points of failure/hacked Certificate Authorities. Remember that we are computing a site’s trust based on what ALL of the nodes that WE trust think of it. A single flag from somebody you trust could alert you about a malicious site. If Verisign were to be hacked, it could be a flag from StartSSL. Or from somebody else. It doesn’t matter; all it takes is one warning. But a scammer has to trick almost everybody you trust into trusting him.

Feedback and questions, please! Please contribute by giving me feature suggestions, or by pointing out possible problems, or by just telling me about any useful idea you might have. All feedback is welcome! If you don’t like my idea, tell me why!

[This is not finished yet, it’s a work in progress…]

Tamper resistant full-disk encryption

There are various problems with many of the common methods of applying full disk encryption (FDE) that aren’t always obvious right away. The common FDE programs also typically have a number of limitations or drawbacks that make them less than ideal.

One class of attacks one wouldn’t necessarily consider is evil maid attacks (tampering with the ciphertext, altering the bootloader); another is comparing different versions of the ciphertext over time. One particular type of tampering attack is a ciphertext modification attack against certain implementations of the CBC cipher mode, which allows the attacker to essentially replace parts of the plaintext by altering the ciphertext in a certain way (although this randomly scrambles the first block in the series you tamper with). For most FDE variants you can see exactly which parts of the ciphertext have changed and which changes have been undone (a previously seen ciphertext block returns), and much more. There is also the risk of an attacker simply reverting selected parts of the ciphertext to a previous version, which in some cases could reintroduce vulnerabilities in software on the encrypted volume. Some methods of getting around these problems are highly complex, and don’t always solve all of them.

I’m suggesting one way of implementing full disk encryption that should be secure against a wide range of attacks, even against an attacker in full control of the storage (such as a compromised cloud storage host), both preserving secrecy/privacy and ensuring that no tampering with the data can go undetected.

First of all we need to be able to encrypt blocks of arbitrary size, because one of the limitations when trying to implement efficient full disk encryption is that the smallest writable block can have varying sizes. XTS mode handles this, has good performance and is widely used.

While XTS doesn’t allow tampering in a way that controls the plaintext (unlike CBC), one can see from the ciphertext when the plaintext has been reverted, and used alone it doesn’t stop an attacker from reverting the ciphertext to a previous version or scrambling it (which could allow an attacker to reintroduce security holes in software, or to scramble plaintext undetected). So we need to add authentication to the encryption so that modified ciphertexts will be detected, and further add a method to make sure that no individual blocks can be reverted to previous states.

Exactly how it should be implemented isn’t my expertise, but the simplest (if inefficient) method would be to generate authentication tags by running HMAC over all XTS blocks, then HMAC that list of HMACs so that the tags can’t be individually reverted, and store it all encrypted. The method I’m suggesting later has some similarities to that one. Ideally I would want a type of authentication tag generation integrated into the XTS cipher mode, or some other efficient method of generating authentication tags. Another approach would be to generate something like a Merkle hash tree of the ciphertexts and HMAC that as the authentication method, which saves space as you don’t need to store all the authentication tags (generating it might not be very efficient, however). Yet another option (in case it would perform better) would be to combine the two: use an authenticated version of XTS and generate a Merkle hash tree of the tags for storage rather than storing them directly. The ideal solution is some form of authenticated block cipher which can handle arbitrary block sizes.
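A toy sketch of the Merkle-tree variant (standard library only; the pairing rule and the final HMAC are my assumptions, not a finished on-disk format):

```python
import hashlib, hmac

def block_tags(key: bytes, blocks: list) -> list:
    # Per-block authentication tags over the XTS ciphertext blocks.
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def merkle_root(nodes: list) -> bytes:
    # Pairwise-hash the tags upward until a single root remains.
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])  # duplicate the last node if odd count
        nodes = [hashlib.sha256(a + b).digest()
                 for a, b in zip(nodes[::2], nodes[1::2])]
    return nodes[0]

# HMAC the root so the whole tree (and thus every block) is authenticated,
# without having to store every individual tag.
key = b"\x00" * 32
tags = block_tags(key, [b"ciphertext block 0", b"ciphertext block 1"])
root_tag = hmac.new(key, merkle_root(tags), hashlib.sha256).digest()
```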

Then we need to make sure that an attacker can’t easily see what you have changed in the encrypted volume. To do this, I’m suggesting that each time you mount the disk as writable you generate a new encryption IV (or key) for that session, which is used to encrypt all blocks you edit. This IV is also used to generate the key for the encryption authentication for the session. All generated IVs are encrypted with the master key, and there’s a means of tracking which block is encrypted with which IV (some form of database). The IV list is also authenticated together with the list of authentication tags, such that modifying any single block in the volume, even replacing it with a previous version, would make the authentication of the ciphertext fail (as it would only validate if the stored authentication tags for the latest version of that block verify for that ciphertext).

To improve the security of this approach, one could use a form of ratcheting where each new IV or key is derived from the last key, a counter and freshly collected entropy. Using a counter together with the above approach of authenticating everything also lets you ensure the entire volume is intact and that nothing has been replaced with a previous version: all you need to see is that your software decrypts the volume successfully without warnings and that the counter is one higher than last time, because an attacker can’t get your software to show a higher counter value without a tampering warning unless he knows your encryption password.
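A sketch of one possible ratchet step (HMAC-SHA256 as the derivation function is my choice here; any keyed KDF would do):

```python
import hashlib, hmac, os

def next_session_key(last_key: bytes, counter: int):
    # New session key from the last key, an incremented counter and
    # fresh entropy, as described above.
    entropy = os.urandom(32)
    counter += 1
    key = hmac.new(last_key, counter.to_bytes(8, "big") + entropy,
                   hashlib.sha256).digest()
    return key, counter

key, counter = next_session_key(b"\x00" * 32, 0)
```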

On top of that one can also add features like read/write access controls by adding public key cryptography. Use a public key from a master keypair stored in the header, together with a signed ACL and the signed public keys of everybody with editing rights. These keys would be required to sign the IV/key list as well, such that somebody who only has the decryption password but not a keypair with editing rights can’t make any edits without the cryptographic authentication failing. Detailed ACLs would require some form of support in the OS to not cause errors; potentially this could be done by treating sections with different access rules as separate volumes, some of which would be read-only.

One way to speed up detection of attempts to modify the ciphertext is random pre-boot verification of a subset of the authentication tags. Checking a few hundred to a few thousand tags can take less than a second, and has a high probability of detecting modifications if an attacker (or a disk error!) has modified any run longer than a few thousand consecutive blocks. After boot, on-access verification is performed, and blocks for which verification fails are reported to the OS as corrupted.

Further, we need to make sure that an attacker can’t easily tell exactly which parts actually have been modified each time. The easy way to do this is to randomly select additional ranges of blocks, which you didn’t touch yourself, to re-encrypt with the new session IV. Once a previous session IV is no longer used for any blocks (all blocks that were encrypted with it have been overwritten) it can be deleted from the IV list. Another way could be to randomly reorder sections of blocks, like reverse defragmentation.

My take on the ideal password manager

There are a few variants of password managers. The simplest are a list of password entries in an encrypted file. Some are fancier and can automatically fill in your password for you when logging in on websites. Some support authentication types that aren’t passwords (HOTP/TOTP secrets, etc). But I’m thinking more of the backend here, since all those other features can be added on top.

I want a password manager where you can add entries without unlocking it first. This isn’t difficult to achieve: just use public key encryption with a keypair associated with the password database. But the details can be finicky. What if you have several devices synced via online file storage services, which are online and offline at varying times, and where you sometimes make offline edits on several devices independently before syncing? My idea here is how to make syncing easy to achieve silently, while being able to add password entries from anywhere, anytime (and yes, this turns the password database into an append-only database during normal usage, but you can clear out old entries manually to save space).

First of all we need an encrypted database, and SQLCipher should do just fine. Password entries are database entries with all the relevant data: entry name, service name and address, username, authentication details (passwords would be the standard, but not the only option), comments. To add entries while it is locked we need a keypair for asymmetric encryption, so the private key is stored in the database, with the public key stored unencrypted alongside it.

But how exactly should entries be added? The simplest method is to create a fresh encrypted SQLCipher database, with its encryption key itself encrypted with the public key of the main password database, stored in a separate file. The encrypted key is stored appended to the encrypted new entries, with a flag that identifies the password database they should be merged into. When you unlock the main database, the private key is used to decrypt the key for the new entries, and they are then added to the main database. This allows adding passwords from several devices in parallel and merging them in. Once merged into the main database, those temporary database files can be deleted.
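A minimal sketch of the key wrapping (assuming the Python “cryptography” package; RSA-OAEP is my choice of wrapping scheme, not a requirement):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # stored unencrypted with the database

# While locked: wrap the temporary SQLCipher database's key to the public key.
temp_db_key = os.urandom(32)
wrapped = public_key.encrypt(temp_db_key, oaep)

# On unlock: the private key (kept inside the main database) recovers it.
assert private_key.decrypt(wrapped, oaep) == temp_db_key
```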

And how do we handle conflicts? What if you do a password reset from a library computer you don’t trust much to access some service, then do it again elsewhere, create entries on both occasions and don’t sync them until later? The simplest way is to keep all versions and store a version history for every entry, so you don’t lose what might be the real current password because you thought it got changed, or thought it happened in a different order. But what about devices that have been offline for a while? How would your old laptop know how to sync a new version of the database into its old version when it hasn’t seen every entry added up until the latest version (considering the new version might lack entries the laptop has, but have others)? The simplest method would be to let each device use a working copy separate from the one on the file syncing service, so it can compare the versions entry by entry. The history of entries should be identified by hashes of their details, so that a direct comparison is simple (add all entries with unknown hashes). But when the histories differ, what do you do? You could sort the entry versions by timestamp and assume the latest timestamp is the current correct password, letting the user change this later. Each device would also keep a history of deleted entries by their hashes, to simplify syncing with devices that have been offline for a while (so they don’t add back old deleted entries).
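A toy sketch of that merge rule (the field layout and hashing convention are assumptions):

```python
import hashlib, json

def entry_hash(entry: dict) -> str:
    # Identify each entry version by a hash of its canonicalized details.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def merge(local: dict, remote: dict, deleted: set) -> dict:
    # local/remote map hash -> entry version; take the union of histories,
    # minus versions known to be deleted (so they aren't added back).
    merged = {**local, **remote}
    for h in deleted:
        merged.pop(h, None)
    return merged

def current_version(history: dict) -> dict:
    # Latest timestamp is assumed correct; the user can override later.
    return max(history.values(), key=lambda e: e["timestamp"])
```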

(More details and a simplified summary coming later)

An MPC based privacy-preserving flexible cryptographic voting scheme

Some of the big problems with cryptographic voting schemes are ensuring anonymity for the voters, ensuring that votes can’t be manipulated or otherwise tampered with, that you can be certain your vote has been included and counted correctly, that the full vote count is performed correctly, that the implementation is secure, that votes can’t be selectively excluded, that fake votes won’t be counted, etc…

My suggested voting scheme below attempts to account for all these problems, as well as some more. Originally I posted about this idea on Reddit here: http://www.reddit.com/r/crypto/comments/r003r/are_others_interested_in_cryptographybased_voting/c42lo83

This voting scheme relies on the usage of public key encryption (asymmetric cryptography), Secure Multiparty Computation (MPC), Shamir’s Secret Sharing Scheme (SSSS), Zero-knowledge proofs (ZKP) and personal smartcards to implement signing and encryption of the votes.

Every voter has a personal keypair, using asymmetric cryptography, on a smartcard that may be embedded in, for example, state issued ID cards. As a simple way of improving the security margin (to reduce the risk of the private key having been logged, or of the key being extracted in transit through TEMPEST-class attacks), a new keypair is generated on the card once the owner has received it and digitally signs a notification to replace the old keypair. The card issuing entity verifies the identity of the voters, and thus of the card owners, and tracks which public key is linked to each card.

Secure Multiparty Computation (MPC) can be described as a way of letting several entities create a shared “virtual machine” that nobody can manipulate or see the inside of, in order to simulate a secure trusted third party server. Thanks to advanced cryptography, we can use distrust to our advantage, since strong implementations of MPC can’t be exploited unless a majority of the participants collude maliciously against the rest. A number of different organizations with conflicting interests participate in the MPC based voting process, such as the EFF, the ACLU, the NSA, the FBI, the White House, those running the election and more. Because they all run one node each following the MPC protocols, they know nothing other than what they put in and what they are supposed to get as output, and because they DO NOT want to work together to spy on or alter the result, it’s safe!

As a part of the initial setup process, they all create one random “seed” each (a large random number) that they provide as input to the MPC. First of all, when the MPC system has the random seeds, it XORs them all together to ensure the result is random (XOR anything with a random string and the output is random; this means that only one participant needs to be honest and use a true random number). That output is then used as the seed for generating secure keys and random numbers, including the MPC voting system’s main keypair. The MPC participants also provide a list of the eligible voters and their respective public keys. All participants must provide IDENTICAL lists, or the MPC algorithm’s logic will detect it and just stop with an error. This means that all MPC participants have an equal chance to verify the list of voters in advance, because the list can’t be altered after they all have decided together which one to use. Something like a “vote manifest” is also included to identify the particular vote and declare the rules in use.

The MPC system will then use its main keypair to sign the voter list and the manifest, and then it will use Shamir’s Secret Sharing Scheme (SSSS) to split its private key into one part for each MPC participant, and provide each MPC participant with the public key, the signed manifest, the voter list and an individual share of the main keypair’s private key. SSSS is a method of splitting up data so that it can only be recovered if you have enough shares, which in the case of the vote system would be all the shares of all the MPC participants (other thresholds are possible, such as 2 of 3 or 45 of 57 or anything else you need). If you have fewer shares than the threshold, you are no better off than if you had none when trying to restore the data.
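A toy implementation of SSSS over a prime field, to show the mechanics (a real deployment would use a vetted library):

```python
import random

P = 2**127 - 1  # a Mersenne prime large enough for this demo

def split(secret: int, n: int, k: int) -> list:
    # Random polynomial of degree k-1 with the secret as constant term;
    # share i is the polynomial evaluated at x = i.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```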

Time for voting. The public MPC key is now distributed EVERYWHERE. On every advertisement about the vote, the key is there (maybe in QR code form). This ensures that everybody knows what it is, and thus we prevent man-in-the-middle (MITM) attacks against voters (which would be somebody swapping out the MPC key to find out what people voted for).

Now the voter makes his vote. He generates a nonce (a unique number used once), makes his vote, signs it with his keypair, and encrypts all of this with the public MPC key (the signing and encryption are both done on the personal smartcard in one go). The vote is now sent to the voting management organization (maybe on the spot, if the voter is at a voting booth). Since the vote wasn’t encrypted with the voter’s own keypair, he CAN NOT decrypt it, which means that nobody can prove what he voted for using just the encrypted message. To know what a person voted for, you need to physically watch him vote.

To add a level of transparency to the vote submission process, all votes are registered on a blockchain (a series of data blocks, each linked to the previous block in the chain using cryptographic hashes, so that you can’t replace or modify a block without changing all the hashes in the chain after it), such as Bitcoin’s or Namecoin’s, and they are digitally signed by the voting management organization to prove they have seen them. This means that you can nearly instantly verify that your vote is going to be included in the count and won’t be excluded. Attempts at excluding votes from certain areas or certain users would be obvious and provable within hours. Encrypted votes can’t be modified without detection, and they also can NOT be modified in a way which would change what they count towards and remain valid: any modified votes WILL be detected by the MPC system and rejected. Fake votes will also be detected and rejected. To make sure your encrypted vote will be counted, you just need to make sure it is included unmodified. When the time to vote ends, new submissions are no longer accepted or signed by the vote management organization.

For efficiency in the MPC counting and for transparency, the voting management organization gathers all the encrypted votes that were signed and registered in the blockchain, takes the hash of the last block, and generates a zero-knowledge proof that all votes submitted before that last block with the given hash are included in the vote list. It digitally signs this vote list and publishes it with the zero-knowledge proof.

Then it is time for the vote counting. The MPC participants hand the MPC their individual SSSS shares of the master keypair, the signed vote list with the blockchain hash and the zero-knowledge proof, the manifest and the list of voters, the counting rules, random seeds, and all other data it needs. The MPC keypair is reassembled inside the MPC system using SSSS. The system verifies the zero-knowledge proof that the vote list is complete, decrypts the votes, verifies all votes (checks signatures, syntax and that they follow the rules from the manifest), checks that no voter’s key is used more than once (duplicates are discarded; also, a vote of yours registered later in the blockchain could replace previous ones), and counts them according to the chosen method of vote counting. When it is done it generates the voting statistics as output, where each vote is listed with its nonce next to it; it specifies which blockchain hash it was given (to show it has processed all votes registered in the blockchain), references the manifest, and then the MPC signs this output. Besides the vote result itself, the statistics could also include things like the number of possible voters (how many there were in the voting list), the number of votes, how many parties there were, how many votes each party got, etc…

So now you search for your nonce in the output and check that the vote is correct. The nonce CAN NOT be tied to you; it’s just a random number. You can lie and say that yours belongs to somebody else, or pretend to have another one. The number of votes can be verified. However, done this way we’re vulnerable to a so-called “birthday attack”: if there have been 20 000 votes for political party X and their followers threaten 5 000 people, chances are that more than one threatened voter will claim the same party X nonce as theirs (roughly a 22% risk per voter). So how do we solve this? Simple: let the voter make one real vote and several fake votes (“decoy votes”). Then the voter has several false nonces that he can give out, including one that says he voted for party X. Only the voter himself can know which nonce belongs to the real vote! To prevent an adversary who threatens him from figuring out if and how many false votes the voter made, the size of the encrypted voting messages should be static, with enough margin for a number of decoy votes (in case there are several possible adversaries that could threaten you based on your vote). Now these guys could threaten 30 000 people, but even if there are just 20 000 voters for their party, they can’t say which 10 000 voted for somebody else or prove anybody wrong.
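A quick sanity check of that 22% figure (assuming each of the 5 000 threatened voters picks a claimed nonce uniformly at random from the 20 000 party X votes):

```python
# Probability that at least one of the other 4 999 claims collides with yours.
p = 1 - (1 - 1 / 20_000) ** 4_999
print(round(p, 3))  # ~0.221
```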

The best part? We can use ANY type of voting, such as preferential, approval, weighted, ranked, etc! A vote is just a piece of text with arbitrary syntax, so you can “encode” ANY kind of vote in it! You can use a simple most-votes-wins, or scores from 1-10, etc…

In the end, you know that your vote has been counted correctly, everybody knows that no fake votes have been added and that none have been removed, it’s anonymous, and the only way to force individual voters to vote as you wish is to physically watch them vote.

If you trust that these maybe 10+ organizations won’t all conspire together against the voters (including the EFF & ACLU?), you can be pretty sure the voting has been anonymous AND secure. Altering the counting or other computational parts on the side of the voting management requires nearly full cooperation between people in ALL participating organizations that have full access to the machines running the Secure Multiparty Computation protocol, and they MUST avoid ALL suspicion while at it!

Advantages

  • If you can distribute personal keypairs securely to the voters, nobody can alter/fake votes outside the Secure Multiparty Computation system.

  • A majority of the Secure Multiparty Computation participants have to collude and be in (near) full agreement to break the security of the system. If their interests are conflicting, it just won’t happen.
  • The security of the system relies on the cryptographic security + the low risk of collusion among enough MPC participants. If you accept both of these points as strong, this system is strong enough for you.
  • It’s anonymous
  • You can verify your vote
  • You can’t be blackmailed/forced to reveal your vote, because you can fake *any* vote

Potential weaknesses

  • The public won’t fully understand it
  • The ID smartcards with the personal keypairs must be protected, the new personal keys must be generated securely
  • We need to ensure that the MPC and Zero-knowledge proof algorithms really are as secure as we assume they are

I’ve changed the scheme a bit from the original version. It should be entirely secure against all “plausible” attacks, except for hacking all the MPC participants at once or an attacker who can physically watch you while you vote. The latter should not be an issue in most places and probably can’t be defended against with any cryptographic scheme, while the former is all about infrastructure security rather than cryptographic security.

Feedback is welcome. Am I missing anything? Do you have any suggestions for useful additions or modifications? Comment below.

Basic blueprint for a link encryption protocol with modular authentication

The last few years we have seen more and more criticism build up against one of the most commonly used link encryption protocols on the internet, SSL (Secure Sockets Layer; or more precisely its current successor TLS, Transport Layer Security), for various reasons. A big part of it is the Certificate Authority model of authenticating websites, where national security agencies can easily get fake certificates issued. Another big part is the complexity, which has led to numerous implementation bugs such as OpenSSL’s Heartbleed, Apple’s Goto Fail and many more, due to the sheer mass of code where you end up not being able to ensure all of it is secure simply because the effort required would be far too great. Another (although relatively minor) problem is that SSL is quite focused on the server-client model, despite there being a whole lot of peer-to-peer software using it where that model doesn’t make sense, and more.

There have been requests for something simpler which can be verified as secure: something with opportunistic encryption enabled by default (to thwart passive mass surveillance and increase the cost of spying on connections), something with a better authentication model, and with more modern authenticated encryption algorithms. I’m going to give a high-level description here of a blueprint for a link encryption protocol with modular authentication, inspired by the low-level opportunistic encryption protocol TCPcrypt and the PGP Web of Trust based connection authentication software Monkeysphere (which currently only hooks into SSH). In essence it is about separating and simplifying encryption and authentication. The basic idea is quite simple, but what it enables is a huge amount of flexibility and features.

The link encryption layer is quite simple. While the protocol doesn’t really have separately defined server/client roles, I’m going to describe how the connections work with that terminology for simplicity. This will be a very high-level description; applying it to P2P models won’t be difficult. So here it goes (and to any professional cryptographers who might read this: please don’t hit me if something is wrong or flawed, tell me how and why it is bad and suggest corrections so I can try to fix it):

The short summary: A key exchange is made, an encrypted link is established and a unique session authentication token is derived from the session key.

A little longer summary: The client initiates the connection by sending a connection request to the server, initiating a key exchange (assuming a 3-step key exchange will be used). The server responds by continuing the key exchange and replying with its list of supported ciphers and cipher modes (prioritization supported). Then the client finishes the key exchange, generates a session key and selects a cipher from the list (if there is an acceptable option on it), and tells the server what it chose (this choice can be hidden from the network, since the client can send an HMAC or an encrypted message or similar of its choice to the server). The server then confirms the cipher choice, and the rest of the connection is encrypted with the session key using the chosen cipher. A session authentication token is derived from the session key, such as by hashing the session key with a predefined constant; it is the same for both the client and the server, and the token is exposed to the authentication system to be used to authenticate the connection (for this reason it is important that it is globally unique, untamperable and unpredictable). Note that to prevent cipher downgrade attacks the cipher lists must also be authenticated, which could be done by verifying the hashes of the lists together with the session auth token; if the hashes are incorrect, somebody has tampered with the cipher lists and the connection is shut down.
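A sketch of the token derivation and the downgrade check (the constant and the transcript layout are assumptions, not a specified wire format):

```python
import hashlib, hmac

def session_auth_token(session_key: bytes) -> bytes:
    # Hash the session key with a predefined constant; same on both ends.
    return hashlib.sha256(b"session-auth-token-v1" + session_key).digest()

def transcript_check(session_key: bytes, cipher_lists: bytes) -> bytes:
    # Binds the offered cipher lists to this session to detect downgrades.
    return hmac.new(session_auth_token(session_key), cipher_lists,
                    hashlib.sha256).digest()
```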

And for the modular authentication mechanism:

The short summary: Authentication is done by cryptographically verifying that the other end is who it claims to be, and by verifying that both ends have the same session auth token (it must not be possible to manipulate the key exchange to control the value of the session key, and thus the session auth token). It is important that the proof of knowing the session auth token and the authentication are combined and inseparable, and can’t be replayed in other sessions, so the token should be used as a verifiable input to the authentication mechanism.

A little longer summary: What type of authentication is required varies between types of applications. Since the authentication is modular, both ends have to tell the other what types of authentication they support. A public server would often only care about authenticating itself to visitors, and not about authenticating the visitors themselves. A browser would usually only care about identifying the servers it connects to. Not all supported methods must be declared (for privacy/anonymity, and because listing them all is rarely needed); some can be secondary and manually activated. The particular list of authentication methods used can also be selected by the application based on several rules, including what server the user is connecting to.

There could be authentication modules hooking into DNSSEC + DANE, Namecoin, Monkeysphere, good old SSL certificates, custom corporate authentication modules, Kerberos, PAKE/SRP and other password based auth, or purely unauthenticated opportunistic encryption, and much more. A browser could use only the custom corporate authentication module (remotely managed by the corporate IT department) against intranet servers while using certificate based authentication against servers on the internet, or maybe a Google-specific authentication module against Google servers, and so on. The potential is endless, and applications are free to choose what modules to use and how. It would also be possible to use multiple authentication modules in both directions, which sometimes could be useful for multifactor authentication systems, like using a TOTP token & smartcards & PAKE towards the server with DNSSEC + DANE & custom certificates towards the client. It could also be possible for the authentication modules on both ends to require the continuous presence of a smartcard or HSM on both ends to keep the connection active, which could be useful for high-security applications where simply pulling the smartcard out of the reader would instantly kill the connection. When multiple authentication modules are used, one should be the “primary” module which in turn invokes the others (such as a dedicated multifactor auth module, in turn invoking the smartcard and TOTP token modules) to simplify the base protocol.

Practically, the authentication could be done like in these examples: For SRP/PAKE, HMAC and other algorithms based on a pre-shared key (PSK), both sides generate a hash of the shared password/key, the session auth token, the cipher lists and potentially additional nonces (one from each party) as a form of additional challenge and replay resistance. If both sides have the same data, then the mutual authentication will work. For OpenPGP based authentication like with Monkeysphere, a signature would be generated over the session auth token, both parties’ public keys and nonces from both parties, and that signature would then be sent stand-alone to the other party (because the other party already has the input data if it is the intended recipient), potentially encrypted with the public key of the other party. For unauthenticated opportunistic encryption, you would just compare the cipher lists together with the session auth token (maybe using a simple HMAC together with challenge nonces) to make a downgrade attack expensive (it might be cheaper to manipulate the initial data packet with the cipher list for many connections, so that the ciphertext later can be decrypted if one of the algorithms is weak, than to outright pull off a full active MITM on all connections).
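A hedged sketch of the PSK case above: both sides MAC the session auth token, a hash of the cipher lists and one fresh nonce from each party with the shared key. The transcript layout is illustrative.

import hashlib
import hmac
import os

def psk_proof(psk: bytes, auth_token: bytes, cipher_lists_hash: bytes,
              nonce_a: bytes, nonce_b: bytes) -> bytes:
    transcript = auth_token + cipher_lists_hash + nonce_a + nonce_b
    return hmac.new(psk, transcript, hashlib.sha256).digest()

Each party generates its own nonce (e.g. os.urandom(16)), exchanges it, computes the proof locally and compares it against the peer’s proof with hmac.compare_digest() to avoid timing leaks. A match means both ends hold the same PSK *and* sit on the same session (same auth token), so the proof can’t be replayed in another session.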

I have also thought about how to authenticate semi-anonymously, i.e. such that neither party reveals who they are unless both parties know each other. The only way I think this is possible is through the use of Secure Multiparty Computation (MPC) and similar algorithms (SRP/PAKE is capable of something similar, but would need on average a total of x*y/2 comparisons of shared passwords if party A has x passwords and B has y passwords). Algorithms like MPC can be said to cryptographically mimic a trusted third party server. It could be used in this way: Both parties have a list of public keys of entities they would be willing to identify themselves to, and a list of corresponding keypairs they would use to identify themselves with. Using MPC, both parties would compare those lists without revealing their contents to the other party – and if both are found to have keypairs that the other recognizes and is willing to authenticate towards, the MPC algorithm tells both parties which keypairs match. If there’s no match, it just tells them that instead. If you use this over an anonymizing network like Tor or I2P, you can suddenly connect to arbitrary services and prove who you are to those who already know you, while remaining anonymous towards everybody else.
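Since MPC mimics a trusted third party, this sketch shows only the *ideal functionality* being computed – which keys match – as a plain function. In the real system this exact computation would be run jointly under an MPC or private set intersection protocol, so that neither party ever sees the other’s lists; the function below is illustrative, not a protocol.

def mutual_key_match(my_keys_a, accepted_by_a, my_keys_b, accepted_by_b):
    # Keys A is willing to identify itself with that B recognizes, and vice versa.
    a_matches = [k for k in my_keys_a if k in accepted_by_b]
    b_matches = [k for k in my_keys_b if k in accepted_by_a]
    if a_matches and b_matches:
        return a_matches, b_matches  # both learn only the matching keys
    return None  # no match: neither side learns anything about the other's lists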

It would even be possible for an application to recognize a server it is connecting to as a front-end for several services, and tell the authentication manager to authenticate towards those services separately over encrypted connections (possibly relayed by the front-end server) – in particular this allows for secure authentication towards a site that uses both outsourced cache services (like Akamai) and encryption accelerator hardware (which you no longer have to trust with sensitive private keys), making it cheaper to securely implement services like private video hosting. In this case the device performing the server-side authentication could even be a separate HSM, performing authentication towards clients on behalf of the server.

The protocol is also aware of who initiated the connection, but otherwise has no defined server/client roles, although the authentication modules are free to introduce their own roles if they want to, for example based on knowledge of who initiated the connection and/or who the two parties of the connection are. It is also aware of the choice of cipher, and can therefore choose to provide limited access to clients who connect using ciphers considered to have low security, but still secure enough to be granted access to certain services (this would mainly matter for reasons such as backwards compatibility and/or performance on embedded devices).

The authentication module could also request rekeying on the link encryption layer, which could be done either with a new key exchange, through ratcheting like in the Axolotl protocol, or simply by hashing the current session key to generate a new one and deleting the old one from RAM (to limit the room for cryptanalysis, and to limit how much of the encrypted session data can be recovered if the server is breached and the current session keys are extracted).
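A minimal sketch of the simplest rekeying variant, hashing the current session key to get the next one; the label is illustrative and just domain-separates the derivation. The caller must overwrite and discard the old key so past traffic can’t be decrypted if the new key later leaks.

import hashlib

def ratchet_session_key(current_key: bytes) -> bytes:
    # One-way derivation: knowing the new key reveals nothing about the old one.
    return hashlib.sha256(b"rekey-v1" + current_key).digest()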

But what if you already have a link encryption layer with opportunistic encryption or some other mechanism that allows you to generate a secure session auth token? You shouldn’t have to stack another layer of encryption on top of it just to be compatible, if the one you already are using is secure enough. There’s a reason the link encryption and authentication are separate here – rather than hardcoding them together, they would be combined using a standardized API. Basically, if you didn’t use the “default” link encryption protocol, you would be using custom “wrapper software” that makes the link encryption you are using look like the default one to the authentication manager and provides the same set of basic features. The authentication manager is meant to rely only on the session auth token being globally unique and secure (unpredictable) to be able to authenticate the connection, so if your link encryption can achieve that, you’re good to go.
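A sketch of what that standardized API could look like: the authentication manager only needs the session auth token plus a way to send and receive data, so any sufficiently secure link encryption layer can be wrapped to match it. The interface below is hypothetical.

from abc import ABC, abstractmethod

class LinkEncryptionLayer(ABC):
    @abstractmethod
    def session_auth_token(self) -> bytes:
        """Globally unique, unpredictable, identical on both ends."""

    @abstractmethod
    def send(self, data: bytes) -> None: ...

    @abstractmethod
    def recv(self) -> bytes: ...

A wrapper for, say, tcpcrypt could map its session ID onto session_auth_token() and pass traffic through unchanged, with no second layer of encryption stacked on top.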

(More updates coming later)

References:

http://web.monkeysphere.info/

http://tcpcrypt.org/

https://gist.github.com/Natanael90/556350

https://mailman.stanford.edu/pipermail/tcpcrypt-dev/2010-August/000007.html

https://whispersystems.org/blog/advanced-ratcheting/

http://www.metzdowd.com/pipermail/cryptography/2014-May/021475.html

A decentralized hash-chained discussion system

I’ve been thinking for a while about how I want a discussion system to work. I’ve seen numerous forums get abandoned, old archived discussions get lost when servers crash, discussions jump between various forums and chat rooms and blogs, and so on, so I came to the conclusion that I want a commenting system that is directly focused on the posts themselves, that is decentralized and can easily reference external discussions, and where you don’t simply lose the history of old discussions because one server went down.

So with inspiration from Git and the Bitcoin blockchain, I’ve got an idea for a discussion system based on posts encoded in JSON (or a similar format, maybe XML), where each comment is signed by its author(s), references its author’s profile/ID (ideally in a verifiable manner, such as referencing a cryptographic public key), has topic tagging, references the comments it replies to by their hashes (so that anybody reading it can verify exactly what the author was responding to), and more.
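One possible shape for such a signed comment, built in Python for illustration; all field names are made up. The signature covers the canonical serialization of everything except the signature field itself, and the hash of those same bytes is how other posts reference this one.

import hashlib
import json

post = {
    "author": {"pubkey": "<base64-encoded ECDSA public key>",
               "profile": "namecoin:id/alice"},
    "topics": ["electronics"],
    "in_reply_to": ["<sha256 hash of the parent comment>"],
    "body": "Comment text goes here.",
}
canonical = json.dumps(post, sort_keys=True, separators=(",", ":")).encode()
post_hash = hashlib.sha256(canonical).hexdigest()  # how replies reference this post
post["signature"] = "<ECDSA signature over the canonical bytes>"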

The default use case I’m considering is one that would work like mailing lists (email servers that relay messages sent to them to their subscribers, so you can discuss topics with a group of people by email). In this system the equivalent would be what I call a channel. A channel would be defined by an initial “channel definition post” that declares that a specific method of delivery (or several!) of the messages is to be used, what topics are allowed, moderation rules, where you can find an archive of messages, and other relevant details. You could then register with the server from your client (if registration is required on the channel), send in your messages through the defined method (uploading to the server by default – serverless distribution methods would also be possible) and it would then relay them to all subscribers. On open lists where anybody can post directly, moderation posts could be used to declare that certain messages that went through are to be considered spam or break the rules, so that the subscribers’ client software can delete or hide them automatically and you don’t have to see them when you open your client to read the discussions on the channel. In a similar manner, moderation posts could also flag rule updates and more in the channel.
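A sketch of what a channel definition post could contain, following the description above; every field name and URL here is illustrative.

channel_definition = {
    "type": "channel-definition",
    "delivery": ["https://lists.example.org/rc-cars/submit"],  # hypothetical endpoint
    "topics_allowed": ["rc-cars", "electronics"],
    "moderation": {"moderators": ["<moderator public key>"],
                   "open_posting": True},
    "archive": "https://lists.example.org/rc-cars/archive",  # hypothetical archive
}
# Moderation posts would be ordinary signed posts, e.g. of type "moderation",
# referencing the flagged messages by their hashes.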

Since we would be defining a standard format for how to encode the comments from scratch, we could also enable full semantic tagging from the start. Dates and events can be marked as such, just like addresses, phone numbers, even nicknames, and more. Disambiguation would be made far easier when you don’t have to wonder whether you should write a long explanation, put details in parentheses, or omit them entirely and hope nobody misunderstands you. Whenever you think a phrase, word or expression is unclear, you can just add a tag that shows what it means, which would be hidden by default and which readers can choose to display or not (and it would be possible to clarify that, for example, a word is used as a verb, or even link back to a previous or later sentence in your post).
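One way those inline semantic tags could look, with the body as a list of spans (field names are illustrative): hidden by default, displayed on demand by the reader’s client.

body = [
    {"text": "Let's meet "},
    {"text": "Friday at 5",
     "tag": {"type": "datetime", "value": "2014-06-13T17:00"}},
    {"text": " at "},
    {"text": "the usual place",
     "tag": {"type": "clarification", "note": "the makerspace on Main St"}},
]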

And since whole discussions are defined simply by signed messages of a defined format that reference each other by hashes, it suddenly becomes easy to let a discussion jump as a whole to other forums when the commenters agree that they want to continue it elsewhere. No longer do you have to cut-and-paste raw text if you want to import the discussion history; instead the posts can be reuploaded to the new place together, and the whole history can be fetched by the subscribers’ client software when they want to see which posts are referenced or quoted, in a verifiable manner (the digital signatures allow you to verify the comments haven’t been modified).

This even enables the subscribers of two or more separate channels to crosstalk easily, since you can directly quote messages from other channels/forums and at the same time include a reference to the “channel definition post” so that your client can see how to fetch the rest of the history of the discussion. So for example, in a channel about RC cars a quote could be made of a post in an electronics channel, allowing the RC folks to look up the rest of the discussion with just a few clicks, and even join that channel to post questions which in turn reference the initial crosspost, further allowing the commenters on both channels to follow the discussions on each side. There’s even the possibility of having a shared discussion among several channels on multiple servers, where all commenters only need to reply to the discussion on their own channel, having it automatically synchronized among all the channels on all servers.

Author identities could be verified in several ways. Posts would, as I mentioned, be digitally signed (using public key cryptography such as ECDSA), and the public key would be included in the message. This public key could then be tied to, for example, a Twitter or Facebook account, or a GPG key, or a Namecoin profile (see onename.io), or whatever else you like. Your client would then (by default) verify that the public key used to sign the message can be found on the profile in question or is signed by its keypair. Combined with the previously mentioned address book software here on my blog, your client could automatically show which posts have been verified to be made by people in your address book, and the client could automatically verify Namecoin-registered profiles through the signatures, etc. This way you can verify which posts have been made by the same person, and not just by different people with the same nickname. And since your profile could also have an index of all your previous public comments, your client could trivially let you look up all public posts from a person across all channels on all servers where they’ve participated in discussions.
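A minimal sketch of the client-side signature check, assuming ECDSA over P-256 via the Python “cryptography” library; matching the key against a Twitter/Namecoin/GPG profile would happen in a separate lookup step.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def post_signature_valid(pubkey: ec.EllipticCurvePublicKey,
                         canonical_bytes: bytes, signature: bytes) -> bool:
    # Returns True only if the signature covers exactly these canonical bytes.
    try:
        pubkey.verify(signature, canonical_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False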

(More updates coming later)
