Posts Tagged ‘ Security ’

A simple method of key verification for multi-device key exchange

There are tons of devices around with practically no user-facing interface at all which need to communicate securely with other devices. This includes devices such as a wireless thermometer communicating with an HVAC unit, or a wireless door lock communicating with your phone when you tell it which keys to accept. The risks include violation of privacy, physical damage and economic loss.

With the current Internet of Things trend there will only be more devices of this type in the future. To use these devices securely you need to ensure there is no room for anybody to MITM these connections (to intercept a connection so that they sit in the middle and can see and manipulate all data), but practically ensuring that can be incredibly hard if the devices don’t even have a screen.

My idea for how to achieve it securely, with minimal interaction required from the user that links the devices together, is to show a visual pattern derived from a shared key.

Since most devices don’t have any interface beyond a single LED light, that could typically be hard to achieve. Fortunately that’s not a dead end, because the simple solution is to let the two devices you’re linking together both show the exact same on/off blink pattern, perfectly synchronized, while you hold them edge to edge. If the patterns are identical, they have the same key (details below on how this can be guaranteed). If you see that they don’t blink in perfect synchronization, then you know the devices you are trying to link do NOT have a secure direct connection to each other.

So how do you link them together in the first place? There are lots of methods, including using NFC while holding them together, temporarily using a wired connection (this likely won’t be common for consumer grade devices), using a radio-based method similar to WiFi WPS (press a button on both devices), and more. The two options likely to become the most common of those are the simultaneous button press method for wireless devices and NFC. While NFC has reasonable MITM resistance as a result of its design (simultaneously interfering with both parties of a connection is nearly impossible), that doesn’t guarantee that the user will notice an attack (connecting to the devices one at a time would still work).

So by confirming that two devices have a secure communication link by comparing blink patterns, it becomes easy to ensure configuration can be done securely for a wide range of devices. But how can we be sure of this? What can two devices communicate to allow security through comparing a blink pattern? Thanks to cryptographic key exchange this is easy: all the devices have to do is each generate a large secret number and perform an algorithm like Diffie-Hellman together. When two devices perform DH together, they end up with a shared large secret number that no other device can know. This allows the devices to communicate securely by using this large number as an encryption key. It also allows us to verify that it is these two devices that are talking to each other, by running that number through a one-way transformation like a cryptographic hash function and using the result to generate the pattern to show – and only the two devices that took part in the same DH key exchange will show the same pattern.
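The key exchange step above can be sketched in a few lines. This is a toy finite-field Diffie-Hellman with a deliberately tiny prime – real devices would use a standardized group or elliptic-curve DH – shown only to illustrate how both devices arrive at the same secret without ever transmitting it:

```python
import secrets

# Toy Diffie-Hellman parameters. The 32-bit prime is far too small for
# real security; actual devices would use a standardized group or ECDH.
p = 4294967291  # largest 32-bit prime (illustration only)
g = 2

a = secrets.randbelow(p - 2) + 1  # device A's private secret
b = secrets.randbelow(p - 2) + 1  # device B's private secret

A = pow(g, a, p)  # device A transmits this public value
B = pow(g, b, p)  # device B transmits this public value

# Each device combines the other's public value with its own secret.
shared_a = pow(B, a, p)  # computed by device A
shared_b = pow(A, b, p)  # computed by device B
assert shared_a == shared_b  # identical shared secret on both devices
```

An attacker who MITMs the exchange and runs DH with each device separately necessarily ends up with two different shared secrets, which is exactly what the blink comparison detects.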

If anybody tries to attack the connection and perform DH key exchange with the devices separately, they will end up having DIFFERENT secret numbers and will therefore NOT show the same blink pattern.

Note that due to human visual bias, there’s a certain risk in showing a pattern with very few components (barely more bits than what an attacker can bruteforce): you can’t just display the binary version of the hashed key directly, since the risk is too large that many different blink patterns would be confused with each other. This can however be solved easily: use a form of key expansion with a hash function to get more unique bits to compare. One way to do this is an iterated HMAC. With HMAC-SHA256 you get 256 bits to compare per HMAC invocation, so computing HMAC(Diffie-Hellman shared secret key, iteration number) for 10 iterations gives you 2560 bits to compare. There’s actually a better way to expand the shared key into enough bits that is reliable and fairly independent of which key exchange algorithm you deploy: SHA3’s SHAKE256 algorithm. It’s something in between a hash and a stream cipher, called an extendable-output function (XOF). You select how many bits of output you want, and it processes the input data and gives you precisely that many bits out. You want exactly 2500 bits? That’s what it will give you. This means that if the user looks at the pattern for long enough, he WILL be able to identify mismatches.
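SHAKE256 is available in Python’s standard library, so the pattern derivation can be sketched directly. The 2560-bit default matches the iterated-HMAC example above; the function name is my own:

```python
import hashlib

def blink_pattern(shared_secret: bytes, num_bits: int = 2560) -> str:
    # Expand the DH shared secret into exactly num_bits of blink pattern
    # using SHAKE256, an extendable-output function (XOF). Both devices
    # run this on the same shared secret and get identical patterns.
    num_bytes = (num_bits + 7) // 8
    digest = hashlib.shake_256(shared_secret).digest(num_bytes)
    bits = ''.join(f'{byte:08b}' for byte in digest)
    return bits[:num_bits]  # '1' = LED on, '0' = LED off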

To achieve strong security, you only need approximately 100+ pairs of bits to be identical to make bruteforce unachievable – and in this setup, it means the user only needs to be able to verify that 4% of the full pattern is identical. So with a blink pattern where the blink rate is 5 bits per second, continuously comparing the pattern for any 20 seconds out of the 512 seconds it would take for the pattern to start repeating would correspond to verifying that 100 bits are identical. Of course the blinking would need to be kept synchronized, which would require the devices to synchronize their clocks before starting, and could also require them to keep doing so while the blink pattern is showing to prevent “drift”.

There are of course other possible methods than just on/off blink. You could have an RGB LED to represent multiple bits for every blink. You could also have geometric patterns shown on a screen when holding the screens of two devices up against each other. You could even do the same thing for mechanical/haptic outputs like Braille screens so that blind people can do it too.

What if you can’t hold the two devices physically close to each other? You could use another device as a “courier”. As one example, by letting your smartphone perform key exchange through this method with both devices one by one, it could then tell the two devices how to connect to each other and what encryption key to use. This way your smartphone would act as a trusted proxy for key exchange. It would also be possible to have a dedicated device for this, such as a small NFC tag with an RGB LED and a smartcard-like chip to perform the key exchange with both devices. Using a tag like that would make configuring new devices as simple as holding it against each device and comparing the pattern – and then the connection is secure, with minimal user interaction.

Then there’s the question of how to tell the devices whether the key exchange was a success or not. Typically most devices will have at least ONE button somewhere. It could be as easy as one press = success, two presses = start over. If there’s no button, and they are the type of devices that just run one task as soon as they get power, then you could use multiple NFC taps in place of button presses. The device could respond with a long solid flash to confirm a successful key exchange, or repeated on/off blinking to show that it has reset itself.

Originally posted here:

Relevant prior art (found via Google, there may be more):

My take on the ideal password manager

There are a few variants of password managers. The simplest are a list of password entries in an encrypted file. Some are fancier and can automatically fill in your password for you when logging in on websites. Some support authentication types that aren’t passwords (HOTP/TOTP secrets, etc). But I’m thinking more of the backend here, since all those other features can be added on top.

I want a password manager where you can add entries without unlocking it first. This isn’t difficult to achieve – just use public key encryption with a keypair associated with the password database. But the details can be finicky. What if you have several devices synced via an online file storage service, which are online and offline at varying times, and where you sometimes make offline edits on several devices independently before syncing? My idea here is how to make syncing easy to achieve silently, while being able to add password entries from anywhere, anytime (and yes, this turns the password database into an append-only database during normal usage, but you can clear out old entries manually to save space).

First of all we need an encrypted database, and SQLCipher should do just fine. Password entries are database entries with all the relevant data: entry name, service name and address, username, authentication details (passwords would be the standard kind, but not the only one), comments. To add entries while it is locked we need a keypair for asymmetric encryption, so the private key is stored inside the encrypted database while the public key is stored unencrypted alongside it.

But how exactly should entries be added? The simplest method is to create a fresh encrypted SQLCipher database whose encryption key is itself encrypted with the public key of the main password database. This is stored in a separate file: the encrypted key is appended to the encrypted new entries, with a flag that identifies the password database they should be merged into. When you unlock the main database, the private key is used to decrypt the key for the new entries, which are then added to the main database. This allows adding passwords from several devices in parallel and merging them in. Once merged into the main database, those temporary database files can be deleted.

And how do we handle conflicts? What if you end up doing a password reset for some service from a library computer you don’t trust much, then do it again elsewhere, create entries on both occasions and don’t sync them until later? The simplest way is to keep all versions and store a version history for every entry, so you don’t lose what might be the real current password because you thought it got changed or thought it happened in a different order. But what about devices that have been offline for a while? How would your old laptop know how to sync a new version of the database with its old version when it hasn’t seen every added entry up until the latest version (considering the new version might lack entries the laptop has, but have others)? The simplest method would be to let each device use a working copy separate from the one on the file syncing service, so it can compare the two versions entry by entry. Entry versions should be identified by hashes of their contents, so that a direct comparison is simple (add all entries with unknown hashes). But when the histories differ, what do you do? You could sort the entry versions by timestamp and assume the latest timestamp is the current correct password, allowing the user to change that later. The database would also keep a history of deleted entries by their hashes, to simplify syncing with devices that have been offline for a while (so they don’t add back old deleted entries).
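The hash-identified merge described above can be sketched in a few lines. This assumes entry versions are plain dictionaries; the helper names are my own, not from any real password manager:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Identify each entry version by a hash of its contents, so two
    # devices can merge histories by simply adding unknown hashes.
    blob = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def merge_histories(local: dict, remote: dict) -> dict:
    # Union of both histories, keyed by content hash; no version is lost.
    merged = dict(local)
    for h, entry in remote.items():
        merged.setdefault(h, entry)
    return merged

def current_version(history: dict) -> dict:
    # Assume the latest timestamp is the current password, and let the
    # user override that guess later if it turns out to be wrong.
    return max(history.values(), key=lambda e: e["timestamp"])
```

Because the merge is a pure union keyed by content hash, it is order-independent: any two devices that exchange histories in any order converge on the same set of versions.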

(More details and a simplified summary coming later)

An MPC based privacy-preserving flexible cryptographic voting scheme

There are various reasons why electronic voting isn’t widely used, and some of the biggest problems are ensuring anonymity for the voters, ensuring that votes can’t be manipulated or otherwise tampered with, that you can be certain your vote has been included and counted correctly, that the full vote count is performed correctly, that the implementation is secure, that votes can’t be selectively excluded, that fake votes won’t be counted, etc…

That’s a pretty long list of dangers!

My own idea for a cryptographic voting scheme below attempts to account for all these problems, as well as some more. Originally I posted about this idea on Reddit here.

This voting scheme relies on the usage of a variety of cryptographic primitives, including symmetric cryptography like key derivation functions (KDF, like HKDF) and encryption (such as XChaCha20+Poly1305), public key encryption / asymmetric encryption (ECC / ECIES), Secure Multiparty Computation (MPC, like SCALE-MAMBA), Shamir’s Secret Sharing Scheme (SSSS), Zero-knowledge proofs (ZKP, like ZK-STARKs) and personal smartcards to implement signing and encryption of the votes.

As a fundamental requirement every voter must have their own personal cryptographic asymmetric keypair on a smartcard. This card could for example be integrated in a state issued ID card, like they do in Estonia. As a simple way of improving the security margin for these keys (to avoid risks like insecure key generation), a new keypair is generated on the card when the owner has received it, and they digitally sign a notification to the issuer to replace the old keypair and register the new one. The card issuing entity verifies the identity of the voters and thus of the card owners, and tracks which public key is linked to each card.

Secure Multiparty Computation (MPC) can be described as a way of letting several entities create a shared “virtual machine” that nobody can manipulate or see the inside of, in order to simulate a secure trusted third party server. Thanks to advanced cryptography, we can use distrust to our advantage: strong implementations of MPC can’t be exploited unless a majority of the participants collude maliciously against the rest.

The MPC participants would include a number of different organizations involved in the voting process which have conflicting interests (to prevent them from willingly collaborating), such as the major parties (as an assurance for them), civil organizations like the EFF and ACLU (as an assurance for the people), federal agencies like the NSA, FBI and the White House (as an assurance for the government), the department running the election, and more.

Because each of them runs only one node following the MPC protocols, they know nothing more than what they put in and what they are supposed to get as output – and because they DO NOT want to work together (due to conflicting goals) to spy on or alter the result, it’s safe*!

  • For various probabilities of safe. You also have to assume nobody’s able to hack a majority of the participants, or blackmail enough participants, or break the cryptography.

As part of the initial setup process, they each create a random “seed” (a large random number) that they provide as input to the MPC. First of all, when the MPC system has the random input seeds, it combines them with an HKDF to ensure the output is properly random – this means that only one participant needs to be honest and use a true random number for the result to be both unpredictably random and secret from all the participants. This result is the MPC seed.
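The seed combination step can be sketched like this, approximating the HKDF extract step with a single HMAC-SHA256, assuming fixed 32-byte seeds so concatenation is unambiguous. The label is a made-up domain-separation constant, not part of any real protocol:

```python
import hashlib
import hmac

SEED_LEN = 32  # assume each participant submits a fixed-size 32-byte seed

def combine_seeds(seeds: list) -> bytes:
    # HKDF-extract-style combination of all participant seeds. As long
    # as at least one seed is truly random and kept secret, the combined
    # MPC seed is unpredictable to every other participant.
    assert all(len(s) == SEED_LEN for s in seeds)
    label = b"mpc-vote-seed-v1"  # hypothetical domain-separation label
    return hmac.new(label, b"".join(seeds), hashlib.sha256).digest()
```

A full HKDF would follow this extract step with an expand step to derive the individual keys and random numbers the MPC needs.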

Then that output is used as the seed for generating secure keys and random numbers, including the MPC voting system’s main keypair. The MPC participants also provide a list of the registered eligible voters and their respective public keys. All participants must provide IDENTICAL lists, or the MPC algorithm’s logic will detect it and just stop with an error. This means that all MPC participants have an equal chance to audit the list of voters in advance, because the list can’t be altered after they have all decided together on which version to use. Something like a “vote manifest” is also included, to identify the particular vote and declare the rules in use.

The MPC system will then use its main keypair to sign the voter list and the manifest, and then it will use Shamir’s Secret Sharing Scheme (SSSS) and encryption to split its private key into one part for each MPC participant (more on this below), and provide each MPC participant with the MPC public key, the signed manifest, the voter list and an individual share of the main keypair’s private key.

SSSS is a method of splitting up data so that it can only be recovered if you have enough shares (reaching a defined threshold), which in the case of the vote system would be all the shares of all the MPC participants (if you don’t have enough shares to reach the threshold, the key can’t be recovered). Setting other thresholds is possible, such as 2 of 3 or 45 of 57 or anything else you need.
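For illustration, here is a minimal Shamir split/recover over a prime field, shown with a 3-of-5 threshold rather than the all-shares threshold described above. This is a teaching sketch, not a hardened implementation:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def split(secret: int, n: int, k: int):
    # Build a random polynomial of degree k-1 with the secret as the
    # constant term; each share is one point on that polynomial.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term,
    # i.e. the secret, from any k valid shares.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

With fewer than k shares, every possible secret remains equally likely, which is what makes the threshold property information-theoretic rather than merely computational.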

Time for voting. The public MPC key is now distributed EVERYWHERE. On every advertisement about the vote, the key is there (maybe in QR code form). This ensures that everybody knows what it is, and thus we prevent man-in-the-middle (MITM) attacks against voters (which would be somebody swapping out the MPC key to find out what people voted for).

Now, the voter makes his vote. He generates a nonce (a unique number used once), makes his vote, signs it with his keypair, and encrypts this with the public MPC key (the signing and encryption are both done on the personal smartcard in one go). This vote is now sent to the voting management organization (maybe on the spot, if the voter is at a voting booth).

Since the vote wasn’t encrypted to his own keypair, he CAN NOT decrypt it, which means that nobody can prove what he voted for using just the encrypted message (for as long as the MPC remains secure!). To know what a person voted for, you need to physically watch him vote!

To add a level of transparency to the vote submission process, all votes are registered on a blockchain or a similar timestamping mechanism such as Bitcoin, and they are digitally signed by the voting management organization to prove that they too have seen them. This means that you can nearly instantly verify that your vote is going to be included unmodified in the count. Attempts at excluding votes from certain areas or certain users would be obvious and provable as soon as the voting result is published.

Encrypted votes can’t be modified without detection, and once timestamped they can also NOT be modified in a way that would change what they count towards and yet remain valid – any modified votes WILL be detected by the MPC system and rejected. Fake votes will also be detected and rejected. To make sure your encrypted vote will be counted, you just need to make sure it is included unmodified. When the time to vote ends, new submissions are no longer accepted or signed by the vote management organization. After the deadline, a final list of encrypted votes is signed and published.

For efficiency in the MPC counting and for transparency, the voting management organization gathers all the encrypted votes that were signed and registered in the blockchain, takes the hash of the last block and generates a Zero-knowledge proof that all votes submitted before that last block, with the given hash, are included in the vote list. The signed vote list is published together with the Zero-knowledge proof.

Then it is time for the vote counting. The MPC participants hand the MPC their individual SSSS shares of the master keypair, the signed vote list with the blockchain hash and the Zero-knowledge proof, the manifest and list of voters, the counting rules, random seeds, and all other data it needs.

The MPC keypair is reassembled and decrypted inside the MPC system. It verifies the Zero-knowledge proof that the vote list is complete, decrypts the votes, verifies all votes (checks signatures, syntax and that they follow the rules from the manifest), checks that no voter’s key is used more than once (duplicates are discarded; alternatively, a more recent vote in the blockchain could replace previous ones), and counts them according to the chosen method of vote counting.

When it is done it generates the voting statistics as output, where each vote option is listed together with all the vote nonces next to it; it specifies which blockchain hash it was given (to show it has processed all votes registered in the blockchain), references the manifest, and the MPC then signs this output. Besides the vote result itself, the statistics could also include things like the number of possible voters (how many there were in the voting list), the number of votes, how many parties there were, how many votes each party got, etc…

So now you search for your nonce in the output and check that the vote is correct. The nonce CAN NOT be tied to you – it’s just a random number. You can lie and claim that another nonce is yours, or pretend yours is a different one. The number of votes can be verified.

However, done this way we’re vulnerable to a so-called “birthday attack”. The thing is that if there have been 20 000 votes for political party X and their followers threaten 5 000 people, chances are that more than one voter will claim that the same nonce voting for party X is theirs (roughly a 22% risk per voter). So how do we solve this? Simple: let the voter make one real vote and several fake votes (“decoy votes”). Then the voter has several false nonces he can give, including one that says he voted for party X. Only the voter himself can know which nonce belongs to the real vote! To prevent the adversary threatening him from figuring out if and how many false votes the voter made, the size of the encrypted voting messages should be fixed, with enough margin for a number of “decoy votes” (in case there are several possible adversaries that could threaten you based on your vote). Now these guys could threaten 30 000 people, but even if there are just 20 000 voters for their party, they still can’t say which 10 000 (or more) voted for somebody else, or prove anybody wrong. (The MPC would then also report the total number of decoy nonces vs real ones.)
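The fixed-size padding could be sketched like this, assuming a hypothetical 512-byte slot and a JSON encoding of the vote entries. Only the MPC ever sees this plaintext, so the real/decoy flag is safe to include:

```python
import json
import secrets

VOTE_SLOT = 512  # hypothetical fixed plaintext size, padded before encryption

def build_vote_payload(real_vote: str, decoys: list) -> bytes:
    # One real vote plus any number of decoy votes, each with its own
    # random nonce. Padding to a fixed size means the ciphertext length
    # reveals nothing about how many decoys the voter made.
    entries = [{"vote": real_vote, "nonce": secrets.token_hex(16), "real": True}]
    for d in decoys:
        entries.append({"vote": d, "nonce": secrets.token_hex(16), "real": False})
    blob = json.dumps(entries).encode()
    if len(blob) > VOTE_SLOT:
        raise ValueError("too many decoys for the fixed slot size")
    return blob + b"\0" * (VOTE_SLOT - len(blob))  # MPC strips padding before parsing
```

The voter keeps the real nonce private and can hand any decoy nonce to an adversary; all nonces appear identically in the published statistics.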

The best part? We can use ANY type of voting, such as preferential, approval, weighted, ranked, etc! A vote is just a piece of text that allows for arbitrary syntax, and you can “encode” ANY kind of vote in it! You can use simple most-number-of-votes, or scores from 1-10, etc…

In the end, you know that your vote has been counted correctly, everybody knows no fake votes have been added and that none have been removed, it’s anonymous, and the only way to force individual voters to vote as you wish is to physically watch them vote.

If you trust that these maybe 10+ organizations won’t all conspire together against the voters, you can be pretty sure the voting has been anonymous AND secure. Altering the counting or other computational parts on the voting management side requires nearly full cooperation between people in ALL participating organizations with full access to the machines running the Secure Multiparty Computation protocol – and they MUST avoid ALL suspicion while at it!


If you can distribute personal keypairs securely to the voters, nobody can alter/fake votes outside the Secure Multiparty Computation system.

  • A majority of the Secure Multiparty Computation participants have to collude and be in (near) full agreement to break the security of the system. If their interests are conflicting, it just won’t happen.
  • The security of the system relies on the cryptographic security + the low risk of collusion among enough MPC participants. If you accept both of these points as strong, this system is strong enough for you.
  • It’s anonymous
  • You can verify your vote
  • You can’t be blackmailed/forced to reveal your vote, because you can fake *any* vote

Potential weaknesses

  • The public won’t fully understand it
  • The ID smartcards with the personal keypairs must be protected, the new personal keys must be generated securely
  • We need to ensure that the MPC and Zero-knowledge proof algorithms really are as secure as we assume they are

I’ve changed the scheme a bit from the original version. It should be entirely secure against all “plausible” attacks, except for hacking all the MPC participants at once or an attacker who can physically watch you while you vote. The latter should not be an issue in most places and probably can’t be defended against with any cryptographic scheme, while the former is a matter of infrastructure security rather than cryptographic security.

Feedback is welcome. Am I missing anything? Do you have any suggestions for useful additions or modifications? Comment below.

Basic blueprint for a link encryption protocol with modular authentication

Over the last few years we have seen more and more criticism build up, for various reasons, against one of the most commonly used link encryption protocols on the internet: SSL (Secure Socket Layer – or more precisely its current successor TLS, Transport Layer Security). A big part of it is the Certificate Authority model of authenticating websites, where national security agencies can easily get fake certificates issued. Another big part is the complexity, which has led to numerous implementation bugs such as OpenSSL’s Heartbleed, Apple’s Goto Fail and many more, due to the sheer mass of code where you end up not being able to ensure all of it is secure simply because the effort required would be far too great. Another (although relatively minor) problem is that SSL is quite focused on the server-client model, despite there being a whole lot of peer-to-peer software using it where that model doesn’t make sense, and more.

There have been requests for something simpler that can be verified as secure, something with opportunistic encryption enabled by default (to thwart passive mass surveillance and increase the cost of spying on connections), something with a better authentication model, and with more modern authenticated encryption algorithms. I’m going to give a high-level description here of a link encryption protocol blueprint with modular authentication, inspired by the low-level opportunistic encryption protocol TCPcrypt and the PGP Web of Trust based connection authentication software Monkeysphere (which currently only hooks into SSH). In essence it is about separating and simplifying the encryption and the authentication. The basic idea is quite simple, but what it enables is a huge amount of flexibility and features.

The link encryption layer is quite simple. While the protocol doesn’t really have separate defined server/client roles, I’m going to describe how connections work with that terminology for simplicity. This will be a very high-level description; applying it to P2P models won’t be difficult. So here it goes (and to any professional cryptographers reading this: please don’t hit me if something is wrong or flawed – tell me how and why it is bad and suggest corrections so I can try to fix it);

The short summary: A key exchange is made, an encrypted link is established and a unique session authentication token is derived from the session key.

A little longer summary: The client initiates the connection by sending a connection request to the server, where it initiates a key exchange (assuming a 3-step key exchange will be used). The server responds by continuing the key exchange and replying with its list of supported ciphers and cipher modes (prioritization supported). The client then finishes the key exchange, generates a session key and selects a cipher from the list (if there is an acceptable option on it), and tells the server what it chose (this choice can be hidden from the network, since the client can send an HMAC or an encrypted message or similar of its choice to the server). The server then confirms the cipher choice, and the rest of the connection is encrypted with the session key using the chosen cipher. A session authentication token is derived from the session key, for example by hashing the session key with a predefined constant. It is the same for both the client and the server, and the token is exposed to the authentication system to be used to authenticate the connection (for this reason it is important that it is globally unique, untamperable and unpredictable). Note that to prevent cipher downgrade attacks the cipher lists must also be authenticated, which could be done by verifying the hashes of the lists together with the session auth token – if the hashes are incorrect, somebody has tampered with the cipher lists and the connection is shut down.
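The token derivation and cipher-list binding could be sketched like this. The label constant and function name are placeholders of my own, not part of any real protocol:

```python
import hashlib
import hmac

def session_auth_token(session_key: bytes,
                       client_ciphers: bytes,
                       server_ciphers: bytes) -> bytes:
    # Derive the session auth token from the session key via a one-way
    # function, binding in both cipher lists: a downgrade attack that
    # tampers with either list yields mismatching tokens on the two ends.
    label = b"session-auth-v1"  # hypothetical domain-separation constant
    transcript = hashlib.sha256(client_ciphers + b"\x00" + server_ciphers).digest()
    return hmac.new(session_key, label + transcript, hashlib.sha256).digest()
```

Because the token is an HMAC under the session key, an observer who never learns the session key cannot predict or forge it, satisfying the "globally unique, untamperable and unpredictable" requirement above.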

And for the modular authentication mechanism:

The short summary: Authentication is made through both cryptographically verifying that the other end is who he claims to be and verifying that both ends have the same session auth token (it must not be possible to manipulate the key exchange to control the value of the session key and thus the session auth token). It is important that the proof of knowing the session auth token and the authentication is combined and inseparable and can’t be replayed in other sessions, so the token should be used as a verifiable input in the authentication mechanism.

A little longer summary: What type of authentication is required varies between types of applications. Since the authentication is modular, both ends have to tell the other what types of authentication they support. A public server would often only care about authenticating itself to visitors, not about authenticating the visitors themselves. A browser would usually only care about identifying the servers it connects to. Not all supported methods must be declared (for privacy/anonymity, and because listing them all is rarely needed); some can be secondary and manually activated. The particular list of authentication methods used can also be selected by the application based on several rules, including what server the user is connecting to.

There could be authentication modules hooking into DNSSEC + DANE, Namecoin, Monkeysphere, good old SSL certificates, custom corporate authentication modules, Kerberos, PAKE/SRP and other password based auth, purely unauthenticated opportunistic encryption, and much more. A browser could use only the custom corporate authentication module (remotely managed by the corporate IT department) against intranet servers while using certificate based authentication against servers on the internet, or maybe a Google specific authentication module against Google servers, and so on. The potential is endless, and applications are free to choose which modules to use and how. It would also be possible to use multiple authentication modules in both directions, which sometimes could be useful for multifactor authentication systems – like using a TOTP token & smartcards & PAKE towards the server, with DNSSEC + DANE & custom certificates towards the client. The authentication modules on both ends could even require the continuous presence of a smartcard or HSM on both ends to keep the connection active, which could be useful for high-security applications where simply pulling the smartcard out of the reader would instantly kill the connection. When multiple authentication modules are used, one should be the “primary” module which in turn invokes the others (such as a dedicated multifactor auth module invoking the smartcard and TOTP token modules) to simplify the base protocol.

Practically, the authentication could be done as in these examples: For SRP/PAKE, HMAC and other algorithms based on a pre-shared key (PSK), both sides generate a hash of the shared password/key, the session auth token, the cipher lists and potentially additional nonces (one from each party) as a form of additional challenge and replay-resistance. If both sides have the same data, the mutual authentication will succeed. For OpenPGP based authentication like Monkeysphere, a signature would be generated over the session auth token, both parties’ public keys and nonces from both parties, and that signature would then be sent stand-alone to the other party (because the other party already has the input data if he is the intended recipient), potentially encrypted with the public key of the other party. For unauthenticated opportunistic encryption, you would just compare the cipher lists together with the session auth token (maybe using a simple HMAC together with challenge nonces) to make a downgrade attack expensive (it might be cheaper to manipulate the initial data packet with the cipher list for many connections, so that the ciphertext can be decrypted later if one of the algorithms is weak, than to outright pull off a full active MITM on all connections).
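The PSK case could look like this sketch, where the HMAC binds the pre-shared key to this particular session (via the session auth token and both nonces) so the resulting tag can’t be replayed in another session. The function name and argument layout are illustrative only:

```python
import hashlib
import hmac

def psk_auth_tag(psk: bytes, session_token: bytes,
                 cipher_list_hash: bytes,
                 nonce_a: bytes, nonce_b: bytes) -> bytes:
    # Both sides compute this over identical inputs. Matching tags prove
    # the peer holds the same PSK *and* sits on the same session, since
    # the session auth token and nonces are mixed into the MAC.
    msg = session_token + cipher_list_hash + nonce_a + nonce_b
    return hmac.new(psk, msg, hashlib.sha256).digest()
```

In a real exchange each side would send its tag and verify the peer’s with a constant-time comparison (e.g. `hmac.compare_digest`) to avoid timing leaks.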

I have also thought about how to authenticate semi-anonymously, i.e. such that neither party reveals who they are unless both parties already know each other. The only way I think this is possible is through the use of Secure Multiparty Computation (MPC) and similar algorithms (SRP/PAKE is capable of something similar, but would on average need a total of x*y/2 comparisons of shared passwords if party A has x passwords and B has y). Algorithms like MPC can be said to cryptographically mimic a trusted third party server. It could be used in this way: Both parties have a list of public keys of entities they would be willing to identify themselves to, and a list of corresponding keypairs they would use to identify themselves with. Using MPC, both parties would compare those lists without revealing their contents to the other party – and if they both turn out to have a matching set of keypairs the other recognizes and is willing to authenticate towards, the MPC algorithm tells both parties which keypairs match. If there’s no match, it just tells them that instead. If you use this over an anonymizing network like Tor or I2P, you can suddenly connect to arbitrary services and prove who you are to those who already know you, while remaining anonymous towards everybody else.
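
Real MPC protocols are involved, but the comparison step can be illustrated with a toy Diffie-Hellman-style private set intersection: each side blinds hashed key fingerprints with a secret exponent, exchanges the blinded values, and blinds the other side's values again – shared items collide in the double-blinded sets without either side learning the non-shared ones. This is a simplified sketch with toy parameters and honest parties assumed, not a real MPC protocol:

```python
import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime as a toy modulus; use a vetted group in practice

def fp(item: bytes) -> int:
    """Hash a key fingerprint into the group."""
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

# Each party's list of identities it recognizes / would use (made-up labels)
a_keys = {b"alice<->bob", b"alice<->carol"}
b_keys = {b"alice<->bob", b"bob<->dave"}

a_sec = secrets.randbelow(P - 2) + 1  # A's secret blinding exponent
b_sec = secrets.randbelow(P - 2) + 1  # B's secret blinding exponent

# Round 1: each side sends its single-blinded fingerprints
a_blind = {pow(fp(k), a_sec, P) for k in a_keys}
b_blind = {pow(fp(k), b_sec, P) for k in b_keys}

# Round 2: each side double-blinds what it received; since
# (h^a)^b == (h^b)^a, shared items collide in the double-blinded sets
a_double = {pow(v, a_sec, P) for v in b_blind}
b_double = {pow(v, b_sec, P) for v in a_blind}

shared = a_double & b_double  # the common identities, still blinded
```

With the single shared entry above, both parties learn there is exactly one match (and, with a little bookkeeping of which blinded value came from which keypair, which one it is) while learning nothing about the other entries.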

It would even be possible for an application to recognize a server it is connecting to as a front-end for several services, and tell the authentication manager to authenticate towards those services separately over encrypted connections (possibly relayed by the front-end server) – in particular, this allows for secure authentication towards a site that uses both outsourced cache services (like Akamai) and encryption accelerator hardware (which you no longer have to trust with sensitive private keys), making it cheaper to securely implement services like private video hosting. In this case the device performing the server-side authentication could even be a separate HSM, performing authentication towards clients on behalf of the server.

The protocol is also aware of who initiated the connection, but otherwise has no defined server/client roles – although the authentication modules are free to introduce their own roles if they want to, for example based on who initiated the connection and/or who the two parties are. It is also aware of the choice of cipher, and can therefore choose to provide limited access to clients that connect using ciphers considered to have low security but still secure enough for certain services (this would mainly matter for backwards compatibility and/or performance on embedded devices).

The authentication module could also request rekeying on the link encryption layer, which could be done either with a new key exchange, through ratcheting as in the Axolotl protocol, or simply by hashing the current session key to derive a new one and deleting the old one from RAM (to limit the room for cryptanalysis, and to limit how much of the encrypted session data can be recovered if the server is breached and the current session keys are extracted).
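
The hash-based variant is the simplest of the three. A minimal sketch (the label constant is an arbitrary choice of mine, not from any spec):

```python
import hashlib

def ratchet(session_key: bytes) -> bytes:
    """Derive the next session key from the current one.
    One-way: an attacker holding the current key cannot
    walk backwards to keys used for earlier traffic."""
    return hashlib.sha256(b"rekey-v1" + session_key).digest()

# Both ends start from the key-exchange output and ratchet
# at the same agreed-upon point in the stream
key = hashlib.sha256(b"initial key exchange output").digest()
old_key = key
key = ratchet(key)  # old_key would now be wiped from RAM
```

Because SHA-256 is one-way, extracting `key` from a breached server reveals nothing about traffic that was encrypted under `old_key` – the forward-secrecy property the paragraph above is after. Unlike a fresh key exchange, though, it adds no protection for *future* traffic against an attacker who already holds the current key.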

But what if you already have a link encryption layer with opportunistic encryption or some other mechanism that lets you generate a secure session auth token? You shouldn’t have to stack another layer of encryption on top of it just to be compatible, if the one you are already using is secure enough. There’s a reason the link encryption and authentication are separate here – rather than hardcoding them together, they would be combined using a standardized API. Basically, if you didn’t use the “default” link encryption protocol, you would use custom “wrapper software” that makes the link encryption you are using look like the default one to the authentication manager and provides the same set of basic features. The authentication manager is meant to rely only on the session auth token being globally unique and secure (unpredictable) to authenticate the connection, so if your link encryption can provide that, you’re good to go.
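
The wrapper idea could look something like this interface sketch – the class and method names are hypothetical, and the TLS wrapper assumes the underlying connection object exposes an RFC 5705-style keying material exporter:

```python
from abc import ABC, abstractmethod

class LinkLayer(ABC):
    """What the authentication manager expects from any link
    encryption layer, whether the default one or a wrapper."""

    @abstractmethod
    def session_auth_token(self) -> bytes:
        """A globally unique, unpredictable token bound to this session."""

    @abstractmethod
    def send(self, data: bytes) -> None: ...

    @abstractmethod
    def recv(self) -> bytes: ...

class TLSWrapper(LinkLayer):
    """Hypothetical wrapper making an existing TLS connection look
    like the default link layer to the authentication manager."""

    def __init__(self, tls_conn):
        self.conn = tls_conn

    def session_auth_token(self) -> bytes:
        # Derive a token bound to this TLS session's master secret
        return self.conn.export_keying_material(b"auth-token", 32)

    def send(self, data: bytes) -> None:
        self.conn.write(data)

    def recv(self) -> bytes:
        return self.conn.read()
```

Any authentication module that only touches the `LinkLayer` interface then works unchanged on top of TLS, the default protocol, or anything else with a secure exporter.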

(More updates coming later)


What I want in a smartwatch

With the smartwatch trend just starting, there are a lot of interesting new things showing up. The new devices include the Pebble, Qualcomm Toq, Agent, Galaxy Gear, Metawatch, i’m Watch, Sony Smartwatch and their previous LiveView, and many more. I find these devices fascinating, and I hope it really isn’t just a temporary fad that will pass, since there are quite a few things they could do that would be useful. So I’m going to write down what I’m looking for in a smartwatch, and explain what I think about the devices that are on the market now or coming soon. What I really hope is that somebody will release a smartwatch that simply gets it right and won’t just die out and be forgotten in a month, and I think the things I list here will be some of what decides whether a smartwatch succeeds or fails.

One thing that is more important than ever for these devices is the interface. You can’t have too many buttons, you can’t have a touchscreen-only interface with a bunch of on-screen buttons (too hard to aim right, and you’ll cover the interface with your fingers), you can’t have a screen too small, you can’t have a screen too big, the buttons must be easy to press at all times (I’ve experienced watches with buttons you can’t access from certain common angles), the screen can’t really be curved (it can be too hard to read), which can make it harder to design something that fits well, the watch can’t be too thick (so the electronics are very limited), etc…

As for the hardware, some of the things I want are a battery that lasts at least a full week (in combination with wireless charging such as Qi), replaceable straps (with the option of putting a clip on it instead of straps, if you want it somewhere other than your arm), waterproofing and minimal bezel. Maybe even a flashlight LED, but that’s probably a bit too much.

The basic look and interface I want is something that looks like Sony’s Smartwatch and LiveView, with something around a 2″ touchscreen, decent screen resolution, a capacitive “slider” below the screen like the volume control that Sony’s Bluetooth headset SE MW600 has which lets you feel approximately where along it your finger is and how far you’re moving it, and I’d want 3 buttons just below that “slider”. Most interactions would be composed of scrolling through the options with the slider, selecting options with the buttons and using swipes on the screen to bring up additional options. Basic text input could be performed with the slider by selecting groups of characters through sliding your finger left/right over it and picking a character from the visible group with the buttons and/or the screen (if somebody else can come up with a better suggestion, please explain it in the comments below).

I would absolutely NOT want a LED notification light – my phone’s LED bothers me enough the few times I have it on my desk with unread notifications. What I’d rather have is custom vibration patterns for different types of notifications, so I know what kind of messages I haven’t read yet, and a screen that can be always on (like e-paper screens and the refractive Mirasol screen that Qualcomm developed) so that I can glance at it quickly. An optional feature could be to let the notification-dependent vibration patterns repeat if you shake the watch, so you don’t have to look at the screen to know whether it’s a call or an email that you missed (like how phone LEDs can show different colors for different notifications).

One of the major things I want from a smartwatch is the ability to work as a remote control. This means both controlling my phone over Bluetooth and controlling my TV via IR, and really anything else via my phone’s network connection. I want music controls, I want to be able to mute my TV with it when there’s an incoming call, I want to be able to lock my laptop with it when I’m walking away from it, I want to be able to unlock my door with it, and much more. Via my smartphone I could trigger tasks in the app Tasker to do just about anything, including turning my lights and stereo on/off if I’ve got a home automation device, or changing the profile on my phone to turn off the sound and switch wallpaper, or just about anything else.

Connectivity is also one of the most important parts of a device like this. Since battery life is so important (and since a separate cell phone contract for it would be too expensive in many places), it will really have to depend on another device for internet connectivity and more. But it should still be able to work stand-alone. I don’t really see a need for it to have WiFi, since you’re going to want to use it wherever you are, and most places don’t actually have a WiFi network you can use. Chances are that Bluetooth tethering will be the best choice in most cases. You’re likely going to have your phone directly connected to the internet more often than the watch (and the watch would likely not have as good antennas as the phone anyway). But what other kinds of connectivity would be useful in a smartwatch? I’ve already mentioned IR (and I’m hoping for full IrDA support, which would be fast enough for small data exchanges over short distances without having to touch anything), but two things that really would be great are NFC and something that’s mostly unknown, often called electric field modulation or body area network (BAN) or personal area network (PAN). This type of wireless communication technology modulates the electric field of the body (that field is what makes half of all FM radios go crazy when you touch them) in order to send signals to objects you touch. And one of the more awesome things it can do is to act as a key: unlocking the door when you touch its handle, unlocking your phone when you pick it up, unlocking your laptop when you put your hands on it – and it can also let you exchange contact details when you shake hands, and much more. NFC would mostly be used to link the smartwatch to all the devices such as smartphones already equipped with NFC, which makes pairing easy (just let the devices touch and press a button on both to accept).

The kind of information I’d want to see on the screen is summaries of notifications (who called, how many unread emails there are, whether there are important news headlines, etc), who’s currently calling, unread text messages, music controls, and information from various phone apps. That info could be just about anything, including sports scores, game stats, comments on my blog, Reddit replies, and more. It would also be awesome in combination with navigation apps: with a compass + gyro + accelerometer combo you could put your phone back in your pocket and let the smartwatch show you arrows for where to go, how many meters there are to the next turn, and other simple instructions – this would be amazing for everybody on a bike who sometimes has to rely on a map and is forced to stop to check it, or wobble ahead with a paper map or phone in hand. Being able to use it for shopping checklists (and not have to unlock the phone every 3 minutes) would also be incredibly convenient, as well as using it for voice memos.

This thing would also be useful for two-factor authentication. If you’ve used the Google Authenticator app or one of those security tokens / one-time code dongles, you know what I’m talking about – let the device generate a one-time code based on a secret key, and enter that code on your computer (or smartphone app) to log in. In combination with a PIN on the smartwatch to unlock the two-factor app, this would raise the security far above what most people currently use to log in. Today most people use passwords that are easy to guess, and even the hardest passwords are often snatched by spyware on the computer. With a PIN protected smartwatch, it’s far harder for an attacker to take control of your accounts.
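
The codes Google Authenticator and most dongles generate are standard TOTP (RFC 6238): an HMAC-SHA1 over a 30-second time counter, truncated to a short decimal code. A smartwatch app would only need something like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits=6, step=30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short decimal code."""
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

The server shares the same secret and computes the same code for the current (and usually adjacent) time steps, so a captured code is useless half a minute later. The RFC’s own test vector – the ASCII secret `12345678901234567890` at time 59 – yields the 8-digit code `94287082`.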

The possibilities are endless, you just need to be a little creative to see them.

I have a lot more ideas and will update this post with them later on, but for now this will do.

Does Google Glass spell the end for mechanical lock security?

Here’s an interesting thought – the most widely used security system on earth might go down in flames this year because of a certain category of devices becoming much more common – personal cameras directly connected to computers, that can record things 24/7.

So what exactly is the big problem?

Most keys can be replicated from a single photo. So if your keyring ever ends up in the field of view of somebody using Google Glass (or any other camera device that can capture images constantly), then most of your keys can be replicated in a matter of minutes.

To really show you what the problem is, here are some animations of how mechanical locks work, which can help you understand why this is hard to avoid.

Here is one video for one lock type;

And an animation of the most common type:

As you can see very clearly, it is the outside of the keys that interacts with the inside of the locks. And this outside can be captured in a photo by all these cameras. So, how well can keys be reconstructed from photos? If it’s hard, this isn’t much of a problem, right?

“Child’s play”. That easy. Source:

Edit: Original source:

A pair of Google Glass, linked to a CNC machine that can cut keys from a metal block, or even just a 3D printer that uses plastic, is enough to quickly copy keys with minimal effort. Add image recognition directly in the glasses that even instructs you how to get a good view of the keys, and the user doesn’t even have to push any buttons anymore – he just has to turn his head the right way.

So what can we do?

There are still a few types of locks that in theory could be mostly safe. Round keys that look like hollow cylinders (with the key pattern on the inside) could be safe from cameras, simply because it is hard to get a photo from the right angle and with the right light to replicate them. But due to how these locks are designed mechanically, they are often easy to pick, so you can unlock them without any key.

Then we have the digital locks: RFID locks (too many of these are insecure – Mifare RFID cards and others can be replicated – but some are secure), biometrics (incredibly insecure in most cases; most types require trained armed guards to be secure against “spoofing”), smartcards (generally pretty secure) and some more. The problem with many of them is that they are hard to use, run on batteries or stop working during a blackout.

Whatever we switch to must be easy to use and work reliably and securely. Personally I am biased towards digital locks, but I know there are plenty of obstacles before we can use them everywhere.
