Saturday, July 18, 2020

My collection is complete(ish)!

I drive an electric car, a Tesla Model S. Generally, I never worry about how or where to charge it. It has a large battery and a long range, ~400 miles, and I have a charging setup in the garage where it gets charged nearly all of the time. I generally charge only to about 60% of capacity, because keeping the battery closer to 50% most of the time extends its life, and for daily driving I really only need about 20% of the range, so it tends to oscillate between 40% and 60% full. On long trips I also use Tesla's Supercharger network.

But, just in case, I'd like to ensure that I have the ability to plug it into whatever other sort of connector may be available. The whole collection needed for North American plugs is surprisingly large. And it's technically incomplete because there are a lot more plugs around, but these cover all of the common cases, I think.

The Collection
I'll go through them one by one.  There's one I have that isn't shown, because it's the one built into the car: the Tesla-style plug.  It's available at Tesla Superchargers and destination chargers. But this post is about the set of adapters needed.

NEMA 5-15

Also called the North American Type B plug, this is the standard wall outlet plug. It's definitely not what you want to use to charge an electric car, but it can work. I drove a Nissan LEAF for two years with nothing other than a standard wall outlet for charging at home. Everyone has seen this, but here's a picture anyway:

NEMA 5-15, or Type B
The reason this isn't a great plug to charge with is that it's limited to 15A at 120V, which means it provides at most 1.8 kW.  To charge a 100 kWh battery from empty to full (which you never do, and could never actually do for complicated reasons I won't go into, but it's still a useful comparison point) on a NEMA 5-15 outlet would take about 56 hours. To put that another way, assuming 250 Wh/mile is the energy used when driving, each hour of charging nets you about 7 miles of range. In practice a little less, because there are some losses.
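To make the arithmetic behind all of these numbers explicit, here's a quick Python sketch of the calculation I'm doing throughout this post. It assumes the nominal 100 kWh pack and 250 Wh/mile, and ignores charging losses and the onboard charger's limits.

```python
# Back-of-the-envelope charging math used throughout this post.
# Assumptions: 100 kWh pack, 250 Wh/mile driving efficiency, losses ignored.

def charge_stats(volts, amps, battery_kwh=100, wh_per_mile=250):
    """Return (power in kW, hours for a full charge, miles of range per hour)."""
    kw = volts * amps / 1000
    hours_full = battery_kwh / kw
    miles_per_hour = kw * 1000 / wh_per_mile
    return kw, hours_full, miles_per_hour

for name, volts, amps in [
    ("NEMA 5-15", 120, 15),
    ("NEMA TT-30", 120, 30),
    ("NEMA 14-50", 240, 50),
    ("Model S onboard limit", 240, 48),
]:
    kw, hours, mph = charge_stats(volts, amps)
    print(f"{name}: {kw:.1f} kW, ~{hours:.1f} h for 100 kWh, ~{mph:.0f} mi/h")
```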

NEMA TT-30

The TT-30 is a common RV plug ("TT" stands for "Travel Trailer") that delivers 30A, double the NEMA 5-15, but still only at 120V.  That works out to 3.6 kW, twice as much as the NEMA 5-15, so a full 100 kWh charge would take about 28 hours, and you get about 14 miles of range for every hour of charging.

NEMA TT-30


NEMA L14-30

The L14 is a plug commonly found on generators.  I actually have an 8kW generator with this plug, so having it is theoretically important for me. If my car were to run out of power reasonably close to home, I should be able to throw my generator in the back of the pickup, run it over to the car and charge the car up enough to get it home. Of course, that should never actually happen, but...

Like the TT-30, the L14-30 delivers up to 30A, but at up to 240V, meaning it can deliver up to 7.2 kW, charging the car from empty in around 14 hours, and delivering about 29 miles of range per hour of charging.


NEMA 6-30 and 6-50

The NEMA 6-series plugs are common in older homes for dryers, ranges and welders. They carry 240V on two hot pins plus a ground, with no neutral, so unlike the newer 14-series they can't also supply 120V. (The legacy plugs that are no longer allowed by the electrical codes for new installations are the ungrounded NEMA 10-series, which make the neutral pin do double duty as a ground.) Still, 6-series outlets are pretty common. We used a 6-30 when we plugged into a dryer outlet in McCall, Idaho, and the 6-50 when we plug into my brother-in-law's welder outlet.

The 6-30 delivers the same amount of power as the L14-30.  The 6-50 delivers up to 50A at 240V, which is about as good as it gets outside of dedicated EV charging plugs.  50A @ 240V is 12 kW, which could theoretically charge a 100 kWh battery in just over 8 hours and provides 48 miles of range per hour of charging. In practice, the onboard charger in my Model S can only accept 48A @ 240V, or 11.5 kW, so unless I'm at a Supercharger the fastest I could charge from empty to full (if that were possible) is about 8.7 hours, gaining 46 miles of range per hour of charging.

NEMA 6-30

NEMA 6-50


NEMA 14-30 and 14-50

The 14-30 and 14-50 are the modern four-prong plugs, with two hots, a neutral and a ground.  They charge at the same rates as the 6-30 and 6-50: 30A @ 240V and 50A @ 240V.  Most RV parks have three outlets at each site: one or more NEMA 5-15s (standard household outlets), a TT-30 and a 14-50. Charging out in the boonies (say, northern Canada) where the Supercharger network isn't available would probably mostly be done at RV parks, so these three are the most useful.
NEMA 14-30

NEMA 14-50


SAE J1772

The J1772 is an EV charging outlet. It can deliver up to 80A @ 240V, which is 19.2 kW.  Were my car's internal charger able to accept that much, the 100kWh charge time would be 5.2 hours, and it would provide 76 miles of range per hour of charging. Of course, as I said above, the car's charger can only accept 48A.  This is the common plug found in most EV charging networks in North America. It's also what I have in my garage, since I originally set my charger up for a Nissan LEAF. In practice my garage charger doesn't seem to like charging faster than 40A; if I set it higher it gets too warm inside and turns itself down to 25A after an hour or so. So I always leave it set at 40A (~9.6 kW).

SAE J1772


Supercharger

The Tesla Superchargers use Tesla's proprietary connector, and they work fundamentally differently from the plugs above. The Superchargers provide DC, not AC, electricity, and they contain all of the battery charge control circuitry, rather than delegating that work to the car's onboard charger. The v3 Superchargers provide up to 520A @ 480V, which is about 250 kW.  That's a theoretical charge time of 24 minutes for a 100 kWh battery, adding 1,000 miles of range for each hour of charging.

The v2 Superchargers (which is what most Superchargers are) are slower, providing up to 312A @ 480V, or 150 kW.

Of course, you don't actually get those charge rates consistently. The Superchargers manage their charge rates based on the temperature and state of charge of your battery, and not all Teslas are capable of taking the maximum rates even when their batteries are near empty and at optimal charging temperature. The most I've seen on my car is 140 kW, which adds 560 miles of range for each hour of charging. But I've never used a version 3 Supercharger.  One is being installed about 20 miles from my home. I'll have to check it out.

Wednesday, May 13, 2020

Microsoft's "Immutable" Laws of Security vs Android

In 2011 Microsoft posted an updated copy of their "Ten Immutable Laws of Security".  It's interesting to look at these laws in the context of today's mobile operating systems, in particular the one I know best: Android.  I think many of the "laws" have been at least partially invalidated. Also, I think most of my comments would apply to iOS as well, though maybe not all, and I'll refrain from commenting since I don't know iOS security well.

Here are the laws, and my comments on each:

1. If a bad guy can persuade you to run his program on your computer, it’s not solely your computer anymore.

Android largely invalidates this law.  Every program (app) you run on an Android device is walled off from every other app.  It can't access any storage but its own, the Android permission system lets the user block it from accessing many system services, and any app can be completely removed, so even if the app does manage to do something bad, you can end its access.

Of course it's always possible that an attacker can find ways to exploit system vulnerabilities from his app and bypass these protections, but that's pretty rare.  Vulnerabilities of those sorts are hard to find and hard to execute on up-to-date Android devices.  It's also possible for apps to abuse legitimate system features, but only in narrow ways, and there are efforts to reduce those.

2. If a bad guy can alter the operating system on your computer, it's not your computer any more.

This is true, and it's why Android implements Verified Boot, using a few different mechanisms to make sure that a bad guy can't alter the operating system on your computer. If the bad guy alters your operating system where it's stored on your device, your device just won't boot.  If the bad guy alters your running, in-memory operating system (which is hard to do), his changes will disappear at the next reboot.

3. If a bad guy has unrestricted physical access to your computer, it's not your computer any more.

This is only somewhat true.  Android moves some important work out of the Android system entirely and into isolated environments that are much tougher to compromise through physical access.  Nothing is perfectly secure against an attacker who has complete control of the hardware, but it can be very, very difficult.  The Android security team puts a great deal of effort into ensuring that an attacker who steals or finds your phone can get basically no data out of it, and even has a very hard time wiping it and using it himself.

4. If you allow a bad guy to run active content in your website, it’s not your website anymore.

I'm less up to date on website security these days, but there are many tools that could be used to mitigate this risk.

5. Weak passwords trump strong security.

This is very context-dependent.  If we're talking about your Android lockscreen, it's arguably not true.  The device hardware imposes exponentially-increasing delays between authentication attempts, making brute force search of password spaces pointless.  Of course, if the attacker has some way to observe you entering your password, or some way to guess what password(s) you might use, that may not matter.
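To see why the delays matter, here's a rough illustration in Python. The delay schedule below is entirely made up (the real schedule varies by device and Android version); the point is just how quickly escalating delays make an exhaustive search of even a tiny four-digit PIN space impractical.

```python
# Illustration only: the delay schedule here is hypothetical, not Android's
# actual policy. It shows why escalating delays make brute force pointless.

def brute_force_days(pin_space=10_000, attempts_per_window=5,
                     initial_delay=30, growth=2, max_delay=86_400):
    """Days needed to try every PIN when each burst of attempts_per_window
    guesses triggers a delay that doubles, capped at max_delay seconds."""
    total, delay, attempts = 0, initial_delay, 0
    while attempts < pin_space:
        attempts += attempts_per_window   # a burst of quick guesses...
        total += delay                    # ...then an enforced wait
        delay = min(delay * growth, max_delay)
    return total / 86_400

print(f"~{brute_force_days():.0f} days to try every 4-digit PIN")
# Even with the delay capped at one day, an exhaustive search takes years.
```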

6. A computer is only as secure as the administrator is trustworthy.

On mobile devices, the administrator is the user... and Android does not trust the user to make good security decisions.  There are some 2-3 billion Android users in the world, and there's no way all of them are sufficiently educated and conscientious to make good security decisions.  In fact, hardly any of them are. As a result, the system very much does try to protect against administrator untrustworthiness. There are limits to what can be done, of course.  We can't protect users who decide they really want to post all their personal data on Facebook, but we can make it hard for them to inadvertently screw up.

7. Encrypted data is only as secure as its encryption key.

This one is actually something I'm willing to call an "immutable law", but only because the whole purpose of encryption is to turn large secrets (encrypted data) into small secrets (encryption keys).  It's basically a tautology. That doesn't mean it's not worth keeping in mind. Any time anyone says they're protecting data with encryption, the very next question should always be "Where's the key and how is it secured?".

8. An out-of-date antimalware scanner is only marginally better than no scanner at all.

Anti-malware scanners are more of a threat than a help in the Android world. Lots of them want users to root their devices in order to let the scanner break out of the sandbox to "protect" the device... but even the scanners that aren't actively malicious (and there are more than a few of those) are buggier and less secure than Android itself. Android has a built-in scanner which just checks the apps you have installed to see if any of them are known to be harmful. That's always up to date. Everything else is just a bad idea.

9. Absolute anonymity is practically unachievable, online or offline.

This is true.  That said, you can often get pretty close given enough knowledge and effort.

10. Technology is not a panacea.

Also self-evidently true.

Monday, March 30, 2020

First (house) furniture project

Since I just finished it yesterday, I thought I'd post some photos of my first woodworking project intended to be good enough to go inside the house, albeit in a closet.

My granddaughter, Aislynn, has been sleeping in my wife's walk-in closet when she's at our house, more or less since she was born.  It's nice that it's very close to our bedroom, but not in our bedroom.  For a while there was a crib in there, then a playpen, then she outgrew those and we just put a small mattress on the floor.

But Kris (my wife) has been complaining that it takes up too much of her not-enormous closet, and Aislynn has been complaining that her bed "doesn't have legs.  Beds should have legs." (her words, and yes, she is that articulate, very good for a 2.5 year-old). So I decided to build a Murphy bed for Aislynn, a bed that folds up against the wall when not in use. With legs!

The bed is basically an open-topped box to hold the mattress, connected to a small platform via a piano hinge.  The platform is supported by sides that have a gap on the side against the wall, so I didn't have to remove the baseboard, and is attached to the wall with a couple of angle brackets.  The box and platform are made primarily of 3/4" Baltic birch plywood, and the box has a 3/4" strip of cherry around the top as a decorative trim.  I cheaped out on the floor of the box and used regular plywood there; I wish I'd used the hardwood ply.  There are two legs (per Aislynn's requirements!) supporting the far end of the box, on hinges so they fold flat when the bed is up against the wall.

I used a router to round all of the edges, so there are no sharp corners anywhere on it, and then varnished the whole thing first with Danish oil and then with a couple of layers of oil-based polyurethane.  I largely botched that, with lots of spots that didn't get even coverage and other spots that have globs of poly that I had to sand off.  I'm learning.  My shop furniture I just finish with paste wax, a more tedious but more forgiving finish.

After I installed it, Aislynn said it needed flowers.  Kris found some vinyl stickers of flowers, butterflies, etc., to put on it.  No, she didn't have to go buy them, she has lots of that sort of thing around.

I sized the box to fit her mattress, but I now see it's really too small.  She can just barely lie straight without either her head or her feet hitting the box, but that will change, probably by next week, at the rate she's growing.

Here are some pictures:


Aislynn on her new bed. Happy girl, happy grandpa!


The bed in upright position.

I think I'm going to add two more legs near the hinge to make sure it has good support.  You can see the lower-quality plywood of the bottom.  It would look nicer if I'd used hardwood ply.  I suppose I could still take a sheet of thin ply (I have some) and cover the cheap stuff.  Maybe I will.


The platform and the hinge

Note that line of holes on the side against the wall. Oops.  After it was all varnished I drilled and screwed the hinge into the wrong side.  Argh.  Maybe I'll try filling the holes and then sanding and re-varnishing.

You can just barely see the two brackets holding it to the wall.  They're off-center because they needed to be where the wall studs are.

Tuesday, May 16, 2017

The Internet is eroding free speech

An article in The Economist about a year ago pointed out that free speech protections around the world have been eroding the last few years. I had already recognized that, but the article highlighted a cause that I hadn't considered: The Internet. How is it possible that the single greatest communication technology ever invented, one that was specifically designed to remove central controls and chokepoints, and one that has become ubiquitous in the developed world and has given every single person connected to it the ability to publish their ideas to the world, could actually damage free speech? If true, it certainly seems like the mother of all unintended consequences.

The mechanisms are simple and in hindsight pretty obvious. There are two major elements, but they both boil down to one issue: If everyone has unfettered free speech, unlimited even by social constraints, some people will say stuff that really offends the majority, to the point that the majority decides to take action to shut it down.

In the weaker and less worrisome of these mechanisms, the majority in question acts through government or other central power mechanisms. Many European nations ban many forms of Nazi-supporting speech, for example, as an arguably-reasonable reaction to a war that killed something approaching 100 million people. The Internet exacerbates the tendency towards these sorts of speech restrictions by bringing disparate cultures with different ideas of what is offensive into collision. European ideas about what sorts of restrictions on speech are appropriate are very different from American ideas, and both are quite different from Asian (particularly Chinese) ideas. So the Chinese have their Great Firewall, and Europe has used legal action to force American Internet companies to comply with some of their ideas. An even starker contrast exists between the norms of Muslim theocracies and Western secular democracies. Taken to its logical conclusion, this collision of views on what speech should be muzzled could result in a "least common denominator" version of speech on the Internet, which allows only that which offends no one. But technology does enable services to distinguish, albeit imperfectly, between people in different locations, so it seems more likely that we'll just continue building out a balkanized Internet, with content restricted by region. That's a bad thing, but it's much less worrisome than the other mechanism, which seems likely to erode free speech even in regions of the world that prize it highly.

The United States, since its inception, has always considered the freedom of political speech as the highest principle and deepest foundation of freedom and democratic self-government. The Internet may destroy that, or at least seriously weaken it.

Back in the early days of the Internet, we had a marvelous thing called USENET. I started using it in 1990, about a decade after it came into existence. USENET is a massive set of online discussion forums, covering almost every conceivable topic, and with rare exceptions these forums are complete free-for-alls. Anyone with access (which was primarily through university computers) can post anything they like. Back in those early days, users of USENET and mailing lists created an idea which now seems quaint: "netiquette", a set of rules for online interaction intended to keep discussions civil and the "signal to noise ratio" high. New users often violated these rules, but they were called out by old hands and by and large they quickly learned how to behave.
Those few who didn't were shut down simply by giving them the cold shoulder.

An interesting phenomenon arose during those early days. Each September a new batch of freshmen started their university educations and discovered USENET. This annual surge of new users led to a pattern of battles over netiquette violations each fall, which settled down as the new users either became accustomed to the rules of on-line civility, or got bored and wandered away. In 1993, though, "Eternal September" began, and online conversation has never returned to the civil, educated dialogue of those early days. The cause of Eternal September was that America Online (AOL), a large online service provider, gave its user base access to USENET. This created a massive and perpetual influx of new users, at a pace that meant that the forums were always populated with them. In addition, the new users came from all walks of life, not just university students, staff, faculty and alumni, and they were posting from user accounts they paid AOL for, rather than accounts that were provided by a university -- and could be taken away for egregious violations.

Justice Louis D. Brandeis famously wrote something in his opinion on Whitney v. California which is often paraphrased as "the answer to bad speech is more speech", to educate and correct the speaker of the bad speech. His actual statement, though, is more nuanced, and recognizes that such a simple view doesn't always work. What he really said (emphasis mine) was:
If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.
What has been discovered time and again by operators of online forums is that if enough users are present, and absent some other sort of strong social mechanism, there is no time to avert the evil by education. Trying to educate, or drown out, bad speech without enforcing silence just descends into an unending shouting match. The signal-to-noise ratio approaches zero, leaving no room for any constructive discussion.

The solution is various forms of control and moderation: Some people are empowered to remove the speech of others, or even remove their ability to speak entirely. In its immediate effects, this is a good thing. The fact that there are many, many online forums means that if people find the moderation in one place to be unacceptable, either because it's too heavy or too light, they can find somewhere else more to their liking, and they can speak there. As such, the prevalence of moderation mechanisms in online discussion doesn't appear to seriously impact freedom of speech.

But it does.

It does so by teaching people that it's not only okay but a Good Thing to have authorities who are empowered to silence bad speech. The generation growing up with social media sees online bullying and abuse as problems which can and should be shut down by invoking authoritarian mechanisms to silence them. The mechanisms range from users who delete or suppress comments on their own posts to intervention by the forum operator to ban users.

At that level, it is a good thing to silence bad speech. There's no social value in facilitating abuse and bullying. The problem is that the experience of people growing up seeing such control mechanisms as normal and desirable seems to be leading them to expect that the same approach should be applied to all bad speech, in all contexts.

The clearest example of this is the recent spate of protests by university students, who are demanding that administrators shut down all sorts of real-world speech they find unpleasant. While from one perspective it might seem like this is a logical extension of online protections against bullying and mechanisms to improve signal-to-noise, it's actually very different. University regulations have real-world teeth with no analogue in the online world.

Moreover, the profusion of online forums has enabled people to create or seek out "safe spaces" where their ideas and beliefs can remain unchallenged... and that, too, is something that we're seeing university students demand be created for them. This goes well beyond simply preventing bullying or abuse, to avoiding or silencing substantive debate or discussion on topics of great importance, merely because they're uncomfortable or controversial.

Universities are struggling with whether they should accede to these demands, and to what degree. That's an important battle, but the really important battles are those that are going to be occurring a decade or two in the future. Whatever university administrators do, in the future we're going to live in a world where policymakers see suppression of unpleasant speech as normal and expected.

Who knows what they'll choose to suppress?

I'm going to take some time to have a serious conversation about these issues with my teenage and young adult children. We need to ensure that they see very clearly that there's a vast difference between moderation on Facebook and legal restrictions on speech that some may find offensive. We need to teach them that it is critically important that intolerant, obnoxious and even extremely offensive speech be permitted.

Along the way it'd be nice to teach them netiquette, because social rules that lack central enforcement mechanisms are much less dangerous to speech, but that may be wishing for too much.

Friday, April 21, 2017

Fingerprint security

Hardly anyone understands fingerprint security

Yesterday there was a post on slashdot about MasterCard adding fingerprint scanners to credit cards. Predictably, to me anyway, the post generated a host of dismissive comments saying it's a stupid idea... and in the process revealing that they do not understand biometric security. I replied at length, and, as I always do, thought "I really need to write a blog post to explain this, so in the future I can just post a link rather than typing a ton."

This is that blog post.

Claim: Fingerprint authentication is serious James Bond shizzle and it's totally secure.

No. No, it's not. See below.

Claim: Fingerprint authentication is insecure because you only have ten fingers, and when you've used them all you have no more new "passwords".

This is wrong, because it assumes that fingerprints (or other biometrics) are just a slightly different sort of password. They're not. Biometric authenticators are nothing at all like passwords; the security model is completely different. To understand how and why, we first need to understand the password security model.

Why are passwords secure? Passwords are secure when the attacker doesn't know them. That seems simple and obvious, but subtleties arise when you think about how an attacker might get them. There are two primary ways: stealing copies, and repeated guessing, also known as a "brute force search". These interact (in some cases the attacker can steal part and guess the rest), and there are many methods of optimizing both, but it all boils down to getting a copy, or guessing.

Suppose the attacker has obtained a copy of your password, without your knowledge. Your security is compromised, but now the attacker has a choice. He can change your password, lock you out of your own account/device and use it for his own purposes, or he can leave your password and make covert use of your account. In many cases, the attacker opts for the latter approach because the former is too noticeable and the account/device often quickly gets shut down. Or suppose the attacker has obtained a copy of your password but hasn't gotten around to using it yet. In either case, changing your password shuts off the attacker's access, closing the window of vulnerability.

There's another reason to change your password from time to time, to protect it against compromise by guessing. Depending on how the system is built, what information the attacker has to start with and the attacker's resources, the attacker will be able to make guesses at some rate. If you change your password before the attacker can guess your password, the attacker has to start over. Another way to look at it is that as the attacker guesses, he gains knowledge about your password, by knowing what it is not. When you change your password, that knowledge is invalidated.

In a nutshell: Password security derives from password secrecy, and you remove whatever knowledge the attacker has when you change it (assuming you don't just change a character or two). Another way of looking at it is that password secrecy erodes over time, and rotation restores it.

But your fingerprints are not secret. You leave them on almost everything you touch. From a security perspective the only reasonable way to think about biometrics is that they are public information. We have to assume the attacker already has your fingerprints. In the case of a smartphone or a credit card, odds are good that there are nice fingerprints on the device itself.

The purpose of password rotation is to restore eroding secrecy, but fingerprints aren't secret to begin with, so rotating would serve no purpose. It's completely irrelevant that you only have a limited number of fingerprints. Also, if fingerprint authentication security relies on the secrecy of non-secret information, it's broken. So either biometrics are just insecure or the security comes from something other than secrecy.

Claim: Fingerprints aren't passwords, they're usernames!

People who sort of recognize that fingerprints really aren't like passwords often fall into this trap, aided by some widely-shared blog posts like this one. This idea that fingerprints are identifiers seems to be buttressed by the fact that the criminal justice system often uses fingerprints to identify people (except it really doesn't). So if fingerprints don't seem to fit the model of passwords, maybe they're usernames?

No. They're not. Biometrics are lousy identifiers. Good identifiers should have uniqueness guarantees; biometrics don't. Good identifiers should always either match or not match; biometric matching is fuzzy, and every match is a judgement call. If your database of potential identities is at all large, this fuzziness invokes an interesting little statistical fact known as the Birthday Paradox.

In the context of birthdays, the paradox goes like this: Suppose you're at a party with 30 people. What are the odds that two of them have the same birthday? Most people guess that the odds are low, since there are many more days in the year than people. Actually, assuming a uniform distribution of birthdays (no days more likely than others), there is a 71% chance that at least one same-birthday pair exists. If you can get someone to give you an even-odds bet at such a party (and you know the other person doesn't have knowledge of the attendees' birthdays), take it. You may lose (29% chance), but over the course of a few such parties you're all but guaranteed to come out ahead.

Why is the probability of a match so high? While there are only 30 people at the party, there are 30 × 29 / 2 = 435 distinct pairs of people, and still only 366 possible birthdays. That's a very handwavy justification; see the Wikipedia article for the math if you're really interested.
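If you want to check the number yourself, here's a quick sketch, assuming 365 equally likely birthdays:

```python
from math import comb

def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming birthdays are uniformly distributed."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(f"{p_shared_birthday(30):.1%} chance of a shared birthday among 30 people")
print(f"{comb(30, 2)} distinct pairs of people at the party")
```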

What does this have to do with biometrics? Well, birthdays are one way of classifying people into sets, and biometrics are another.

If you think about the space of all possible fingerprints then my right index finger is a point in that space. There may in fact be no other person with a finger occupying that same point. But measurement of fingerprints is imprecise, so a fingerprint matcher actually accepts any point sufficiently close to my finger as being my finger. How close is close enough?

It's a tradeoff. A very tight bound means that very often when I put my finger on the scanner, the matcher will say it's not close enough to mine to be me. This is a false reject, and the rate at which it happens is called, sensibly enough, the false reject rate, or FRR. A very loose bound means that often when someone else puts their finger on the scanner, the matcher will say it's close enough to be me. This is a false accept, and the rate is the FAR. Tuning the bound allows trading FAR for FRR and vice versa.

So, for any given bound, within the space of all fingerprints there is a set of people with prints who match me, and I them, though not every time because remember that the scanning process is imprecise. It's not quite the same as the very crisp categorization of birthdays, but it's close enough, and it's definitely the case that the Birthday Paradox applies.

Of course, fingerprint matchers distinguish much more finely than birthday categorization. Common systems have FAR values of 1:50,000 or less, whereas birthdays are 1:365.2425. But people want to create databases with far larger numbers than attend a party. If you have a database with 1,000 people in it, you have nearly 500,000 pairs of people in your database and that 1:50,000 FAR looks pretty skimpy. Bump this up to databases with millions, or hundreds of millions, or billions of people and the FAR would have to be impossibly low to reliably and uniquely identify every one of them.
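To get a feel for how fast this blows up, here's a rough calculation. It treats every pairwise comparison as independent, which real matchers aren't, but it gives the flavor:

```python
from math import comb

def p_any_false_match(n_people, far=1/50_000):
    """Rough chance that at least one pair in an n-person gallery falsely
    matches, treating each pairwise comparison as independent."""
    pairs = comb(n_people, 2)
    return 1 - (1 - far) ** pairs

for n in (100, 500, 1_000):
    print(f"{n:>5} people: {p_any_false_match(n):.1%} chance of a false match")
```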

With usernames we address this problem by enforcing uniqueness. If you try to create an account with an already-taken username, the system demands that you pick a different username. We can't do that with biometrics.

So biometrics in general, and fingerprints in particular, are not good usernames.

Claim: Fingerprints are bad usernames (not unique, fuzzy) and bad passwords (not secrets), so fingerprint authentication is useless.

This is also wrong. This view implicitly assumes that the only possible authentication security model is the password model, which relies on secrecy. It's not. The reason passwords have to be secret is because if the attacker knows the password, the attacker can present the password to "prove" his identity. Biometrics are different. Merely knowing what your fingerprint looks like does not enable the attacker to present it to the system. More is required... and that more is the source of security provided by biometrics.

So... just how hard is it for an attacker to fake your fingerprint? It depends. On a lot of things. Can the attacker bypass the scanner and provide a digital image of your fingerprint directly to the matcher? If so, then the fingerprint is a password, and we've already seen that fingerprints are not secret. But, systems can and do implement countermeasures to prevent this attack, such as having the scanner cryptographically sign the images it sends and having the matcher reject any that don't have the correct signature. Plus, this sort of attack requires hardware hacking that is beyond the skill level of many potential attackers (I'll come back to this point).

If the attacker can't inject digital data directly, that means he must somehow create a fake fingerprint and get the scanner to accept it as a real one. Scanners implement some "liveness detection" countermeasures that attempt to make this difficult, with varying degrees of success. (Liveness detection also hopes to defeat the more gruesome stolen-finger attack). Again, though, creating a fake finger that will work takes some skill and some effort which is beyond the capability of many attackers. In addition, getting it right often requires some trial and error, especially if the attacker doesn't actually know the fingerprint to use, but only has a set of prints lifted from surfaces you touched, some of which may not be yours, and some of which may be yours, but not the right finger.

In some contexts, stronger countermeasures can be implemented. For example, military access control systems that use biometric authentication often have an armed guard who is trained to look for finger fakery. This makes using a fake (or stolen) finger harder and increases the consequences of failure. Luckily for him, the systems Mr. Bond encounters always seem to be unattended, or attended by an easily-subdued guard.

Claim: Fingerprint authentication isn't useless in all circumstances, but the way it has to be implemented in a smartphone or a credit card or a personal computer makes it useless.

Fingerprint security depends not on secrecy but on the difficulty of presenting a known fingerprint that is not the attacker's own. How hard that is depends on the details of the system. Whether that is hard enough depends on the attacker: motivation, tolerance for risk and ability. What sorts of attackers are interested in defeating authentication is determined by how much value they find in defeating it. If the fingerprint auth is the only thing protecting a billion dollars, or nuclear weapons, or any other very high-value target (to some attacker), then motivation, risk tolerance and ability will all be high. If it's protecting my contact list... not so much. Especially since if the attacker can drum up some plausible reason for needing to know my contact list, he can just ask me (this is called social engineering).

So, how valuable is a credit card? A few thousand dollars at the outside, and there are non-trivial risks and difficulties in getting that money and getting away with it. There are certainly people willing to brave the risks for the rewards, but they tend not to be people with high levels of technical skill or a taste for the tedious, detailed work required to lift good prints and make good fake fingers. Those sorts of people can generally acquire thousands of dollars in risk-free, socially-approved ways. Some choose not to, but they're very unusual.

Also, you have to consider what the fingerprint authentication is replacing, because this is an augmentation of an existing, well-understood and reasonably well-functioning system. What is it replacing? Essentially... nothing. The US does not use chip-and-PIN so the only form of user authentication we have now is signature. Which is nothing. No one checks it, and no one knows how to check it if they want to, except at the crudest level. So in the context of credit cards, fingerprint authentication is an unambiguous improvement, as long as the existing backend-based risk management systems are retained.

What about smartphones? Their value varies tremendously. At the low end, there is the resale value of the device itself. At the high end, they may contain immensely valuable secrets. Donald Trump's Twitter password can move stock markets. Larry Page's email likely contains the details of multi-million dollar acquisition proposals. Somewhere in between the low and high end, attackers are willing and able to hack hardware and fake fingers.

But, again, you have to consider the alternative. At the low end, the majority of smartphones without fingerprint scanners have no password, and so no security at all, other than some degree of care to retain physical possession. People don't password their phones because it's inconvenient to enter a password many times per day. Others are willing to put up with a little inconvenience in the name of security so they use a password, but choose a very weak one, easily guessed or shoulder-surfed. Or if they choose a middling password (basically no one chooses a good one for their phone), they set the lockscreen timeouts to be very high so that they don't have to enter it often — very convenient for the attacker who finds/steals the device.

For the vast majority of smartphone users, then, a fingerprint is an unambiguous improvement in the security of their device.

Bottom line:

Biometric authentication is not perfect, nor is it useless. It works differently from password authentication and has a different security model with different tradeoffs. Whether it's workable for a particular context depends on the details; you have to understand the security model and analyze the situation in detail, considering risks, the expected sorts of attackers, and the alternatives. And it's actually pretty good for most smartphone users and most credit card holders.

Wednesday, February 24, 2016

Web security

People don't understand web security, even many who should

Yesterday I posted a Google+ poll about what is and isn't safe on an unknown Wifi access point, and 93% of the 30 responses were wrong, in spite of the fact that many of the people who follow me are technical and even have a security bent. This did not surprise me.

The question was "You're connected to some random public Wifi hotspot. You know nothing about who owns it or why they make it public (the SSID says it's STARBUCKS and there is a nearby Starbucks, but who knows?), but it gives you a functional Internet connection. Which of the following is safe?"

The three possible answers and their response rates were:
  • Logging onto your bank to transfer money (7%)
  • Reading the news on CNN, or similar (27%)
  • None of the above, unknown Wifi is risky (67%)
The right answer is "Logging onto your bank to transfer money". Congrats to the two voters who got it right.

The fact is that your bank's web site uses TLS to provide an end-to-end secure connection, encrypted and authenticated. There is a possibility that your bank has TLS configured incorrectly, for example by still allowing the old SSL3 or even SSL2 protocol versions, which have known weaknesses, but that's actually pretty unlikely. Almost all banks get this right. If you wonder about yours, there are some free services that check for you. For example, https://www.ssllabs.com/ssltest/.

(Note that the SSLLabs testing protocol is fairly harsh, which is good but perhaps a bit alarmist. If your bank gets a B or a C, scroll down in the report and take a look at the results of simulations with the browser you use. One of the banks I use (Chase) gets a B because it supports use of RC4. Now RC4 isn't actually insecure when used correctly, and TLS does use it correctly, but it's old and no longer recommended. However, my browser (Chrome 47), does not use RC4, so the site/browser combination is green, meaning secure. Some sites also get dinged for not supporting forward secrecy. That's a nice property, but not essential. Don't worry about it.)
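If you'd rather poke at a site yourself, you can at least see what your own machine negotiates with a few lines of Python. This only shows the one connection your client makes, not the full server configuration the way SSL Labs does, and the hostname here is just a placeholder:

```python
import socket
import ssl

def tls_info(host, port=443):
    """Connect to host and report the negotiated TLS version and cipher."""
    ctx = ssl.create_default_context()          # also verifies the cert chain
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

print(tls_info("www.example.com"))
# e.g. ('TLSv1.3', ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256))
```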

So why were the others wrong?

The second option, reading CNN or similar, is wrong because almost no news sites use TLS at all. They're all unencrypted and unauthenticated. This makes them risky all the time, because any attacker anywhere in the chain of routers and servers between you and the origin site can alter the data you receive from them, injecting malware to exploit your browser, tracking cookies (from any origin), drive-by downloads, etc. An attacker can also extract authentication cookies from your browser, for any non-TLS sites you use even without any browser vulnerabilities. But non-TLS sites are particularly risky when you're connected to a random Wifi AP because that is an ideal way for someone to mount these sorts of attacks.

The third option, "just say no", is one I have some sympathy for. And, as the comments on my original post pointed out, it actually may be a correct answer depending on how you interpret "safe". TLS doesn't prevent the Wifi operator from seeing what sites you visit, for example, and that list can say a lot about you even if none of your actual information is revealed and even if the data sent to your browser can't be manipulated in dangerous ways. However, if being tracked online is your major concern, you probably just shouldn't use the web at all, unfortunately, so I'm going to discard that definition of "safe". Under the more common interpretation, which is that your traffic can't be snooped and your computer (or mobile device) can't be harmed by what you receive, this option is wrong because connecting to your bank, with an up-to-date browser and assuming decent configuration on the bank side, *is* safe, and the "just say no" stance would unnecessarily restrict you.

Others in the comments tried to add additional options to the poll, suggesting that it was safe to use a VPN to a trusted gateway. This is true, but HTTP sites are still inherently less secure than HTTPS sites. Use of the VPN removes the possibly-unscrupulous Wifi operator from the equation, but an attacker who manages to get between the VPN server and CNN can still do anything. Security really needs to be end-to-end.

An interesting point related to VPNs is that Google's security team is eliminating their use for Google employees. Google used to deploy VPN software on almost all portable corporate devices so that employees could securely connect to company systems while traveling, or from home. A couple of years ago they recognized that this was an unnecessary complication and instead they've worked to expose all of the internal systems directly on the Internet, but accessible only via HTTPS (and with tight TLS configurations). Client certificates are issued to corporate devices, and the TLS configuration is set to reject any connection that doesn't have a valid certificate. Actually "exposed" isn't quite the right word, because what's really directly connected to the Internet is a reverse proxy which does the TLS and also implements user authentication (after the connection is established and client cert is verified) using user password and two-factor authentication. The 2FA used in most cases is cryptographic, not a password, from a hardware token (a "security key"). The actual systems receive information from the proxy about what user connected, and don't need to handle any of the TLS or authentication themselves. It's quite elegant, very secure, all standards-based, and doesn't require VPN software, just a modern web browser. Also, _exactly_ the same technology is used for connections coming from inside the corporate network. There's no need to distinguish between "outside" and "inside"; just secure it all.
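This is not Google's actual code, obviously, but the client-certificate piece is standard TLS and easy to sketch. Here's roughly what requiring a corporate-CA-issued client certificate looks like with Python's ssl module; the file names are placeholders:

```python
import ssl

# Hypothetical file names; a real deployment would use certs issued by a
# corporate CA and terminate TLS at a reverse proxy, not in this toy context.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem", "server.key")   # the proxy's own identity
ctx.load_verify_locations("corporate_ca.pem")     # CA that issues device certs
ctx.verify_mode = ssl.CERT_REQUIRED               # refuse clients without a valid cert

# ctx can now wrap a listening socket (or be handed to an HTTPS framework);
# any connection without a certificate signed by corporate_ca.pem is rejected
# during the handshake, before user authentication even starts.
```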

By the way, I sometimes run into people who don't believe that everything on the web should be encrypted, because lots of stuff just isn't that important. They're wrong. We really need to encrypt and authenticate all Internet traffic.

Thursday, August 27, 2015

Jace's Grade Monitor

My youngest son is a bright kid, but doesn't have much interest in school. He's in 8th grade. It's easy enough for him, but he'd rather play games or watch TV. That has caused him to struggle in school, and resulted in lots of tear-filled evenings when he gets to do nothing but homework because he's behind.

Our solution is to ground him from video games and TV if he's behind in school... and to specify that he isn't allowed to do any of that stuff after school until he's done his homework, but it's pretty hard for us to enforce that, since we don't have up-to-date information on what he's assigned and what he's turned in. We can find out if he's behind or his grades are slipping, though, by looking at the school's on-line gradebook. So, the rule is that if that shows he's behind, or his grades are too low, then he's grounded.

But even that is something of a pain, because it requires us (well, Kristanne) to regularly check the web site, which is often down and almost always slow. And if she's not home when he gets home from school he doesn't know if he's currently behind.

I decided that this problem can be solved with technology. There are several ways I could go about it. I could write a phone app, or put something on the kids' computer. I decided to go with something less subtle: A multi-color LED in the kitchen that shows his current status at all times. If his grades are good, it's green. If they begin to fall, it shades towards yellow and then red. If he's missing assignments, it flashes a count of the missing assignments in blue.

The result is a crude, hack-and-slash assembly of some pretty cool technology. I threw it together in an evening, including learning how to install and configure all the components and write all the code. The components are:
  • A Raspberry Pi 2 (RP2), a small quad-core 900 MHz ARM computer with 1 GiB of RAM and a 32 GiB SD card. It runs Debian Linux.
  • A Spark Core, which is an Arduino-compatible board with built-in Wifi and an interesting development toolset.
  • A red-green-blue LED.
  • Some resistors to avoid blowing up the LED. I used 330 ohm resistors, which are too big, which means the LED is dimmer than it could be. Oh, well.
My original plan was not to use the Spark, but to wire the LED directly to the GPIO pins on the RP2. But the GPIO headers are male, which means I'd need a ribbon cable or something else to make it easier to wire them (or I could have soldered wires to them, but that sounded too much like work). So instead I connected the LED to the Spark and I'm using the RP2 to control it.

Here's what the RP2 looks like, in the nifty black case I got for it:


The dongle hanging out the right end is a USB Wifi adapter. It's also got an Ethernet jack, if I want to put it where it can be wired.

And here's the LED connected to the Spark core, on a breadboard, with the all-important grade status LED:



For now I just set the whole thing on top of the printer. I'll think about doing something nicer later.

How does it work? The RP2 has a cron job that runs a small script. The script uses curl to download the web page with the grade and assignment information, then pipes it to a Python program that parses the HTML and extracts and summarizes the grade and missing assignment info. The output from the Python program is then used for two more curl invocations, which post the computed values to a server run by Particle.  They make the Spark core.
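Here's a rough sketch of that flow in pure Python instead of curl plus a separate parser. Every URL, token, HTML class name and function name below is a placeholder, not the real school gradebook or Particle endpoint:

```python
# Rough sketch of the cron job's flow. All URLs, tokens and HTML patterns
# are placeholders; the real gradebook page obviously looks different.
import re
import urllib.parse
import urllib.request

GRADEBOOK_URL = "https://gradebook.example.org/student/1234"              # placeholder
PARTICLE_URL = "https://api.example.com/v1/devices/DEVICE_ID/setStatus"   # placeholder
ACCESS_TOKEN = "secret-token"                                             # placeholder

def fetch_summary():
    """Download the gradebook page and boil it down to (grade %, missing count)."""
    html = urllib.request.urlopen(GRADEBOOK_URL).read().decode()
    grades = [float(g) for g in re.findall(r'class="grade">([\d.]+)%', html)]
    missing = len(re.findall(r'class="missing-assignment"', html))
    return (sum(grades) / len(grades) if grades else 0.0), missing

def post_to_particle(grade, missing):
    """Hand the summary to the cloud service, which relays it to the Spark."""
    data = urllib.parse.urlencode({
        "access_token": ACCESS_TOKEN,
        "args": f"{grade:.1f},{missing}",
    }).encode()
    urllib.request.urlopen(PARTICLE_URL, data=data)

if __name__ == "__main__":
    post_to_particle(*fetch_summary())
```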

The Particle server routes the data to the Spark, which is running a trivial Arduino program. Normally you compile Arduino code on your computer and download it to the device via USB. You can do that with the Spark, but for small stuff it's easier to use their web-based tools. They provide a web-based editor into which you type your code, then you click a button and they compile the code to a binary and send it over the Internet to the Spark, which flashes it and reboots.
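That trivial program is mostly just the color mapping described at the top. Here's roughly what the mapping looks like, expressed in Python for illustration; the thresholds are made up, not the ones we actually settled on:

```python
# Sketch of the status-to-color logic. Thresholds are illustrative only.

def grade_to_rgb(grade_percent, good=90, bad=70):
    """Solid green at/above `good`, solid red at/below `bad`,
    fading through yellow in between."""
    if grade_percent >= good:
        return (0, 255, 0)
    if grade_percent <= bad:
        return (255, 0, 0)
    frac = (grade_percent - bad) / (good - bad)        # 0.0 = bad, 1.0 = good
    red = 255 if frac <= 0.5 else round(255 * (1 - frac) * 2)
    green = 255 if frac >= 0.5 else round(255 * frac * 2)
    return (red, green, 0)

def missing_assignment_blinks(missing_count):
    """Number of blue flashes to show; the LED loops through them."""
    return min(missing_count, 9)   # cap it so it stays readable at a glance
```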

All very hack and slash... but it works and it was really easy.  The Sparks are pretty cheap, too ($12, IIRC). I have another I'm going to wire into my garage door controller to do some automation on it (because the kids keep leaving the garage open).
