Category Archives: security

Pretty Good Phishing

PGP is not broken.  It has long been the best framework most of us have for digital identity, and a secure means of communication.

Sadly the same cannot be said for certain popular PGP tools, nor for vast numbers of tutorials out there.  The usage we enjoyed and became accustomed to for a quarter century will now lead at best to confusion, and at worst to mistakes that could defeat the entire purpose of PGP and leave users wide open to spoofing.  That applies both to longstanding users who understand it well, and to the newbie who has read and understood a tutorial.

The underlying problem is that 32-bit (8 hex character) key IDs are comprehensively broken.  The story of that is told at evil32.com, by (I think) the people who originally demonstrated the issue.  It’s developed further since I last paid attention to it (and drew my colleagues’ attention to the need to stop using those 32-bit key IDs), in that an entire ‘shadow strong set’ has now been uploaded to the keyservers.  Those imposters were revoked by the evil32 folks, but with the idea being out there, anyone could now repeat that exercise and generate their own fake identities and fake Web of Trust.  And when a real malefactor does that, they’ll have the private keys, so there’ll be no-one to revoke them.

Let’s take a look at a recent sequence of events, when I rolled a release candidate for an Apache software package, and PGP-signed it.  Bear in mind, this is all happening in a techie community: people who have been happily using PGP for years.

[me] Signs a software bundle, uploads it with the signature to web space.
[colleague] Checks the software, comes back with a number of comments.  Among them:

- Key B87F79A9 is listed as "revoked: 2016-08-16" in key server

Where does that come from?  I take great care of my PGP keys, and I certainly don’t recollect revoking that one.  To have revoked it, someone needs to have had access to both my private key and my passphrase, which is kind-of equivalent to having both the chip and the PIN to use my bank card (and that’s ignoring risks like someone tampering with my post on its way from the bank).  This is … impossible … alarming!

Yet this is exactly what happens if you RTFM:

% gpg --verify bundle.asc
gpg: Signature made Sun 16 Apr 2017 00:00:14 BST using RSA key ID B87F79A9
gpg: Can't check signature: public key not found

We don’t have the release manager’s public key (B87F79A9) in our local system. You now need to retrieve the public key from a key server.

% gpg --recv-key B87F79A9
gpg: requesting key B87F79A9 from HKP keyserver pgpkeys.mit.edu
gpg: key B87F79A9: public key "Nick Kew <me>" imported
gpg: Total number processed: 1
gpg:           imported: 1

That’s a paraphrased extract from a real tutorial (which I intend to update, if no-one else gets there first).  It was fine when it was written, but now imports not one but two keys.  Here they are:

$ gpg --list-keys B87F79A9
 pub 4096R/B87F79A9 2011-01-30
 uid Nick Kew <niq@apache...>
 uid Nick Kew (4096-bit key) <nick@webthing...>
 sub 4096R/862BA082 2011-01-30

pub 4096R/B87F79A9 2014-06-16 [revoked: 2016-08-16]
 uid Nick Kew <niq@apache...>

Both appear to be me; one is really me, the other an imposter from the evil32 set.  It’s easy to see when we know what we’re looking for, but could be confusing if unexpected!

The problem goes away if we use 64-bit Key IDs, or (nowadays strongly recommended) the full 160-bit (40 character) fingerprint.  It is computationally infeasible that anyone could impersonate that, and indeed, they haven’t.

$ gpg --fingerprint B87F79A9
 pub 4096R/B87F79A9 2011-01-30
 Key fingerprint = 3CE3 BAC2 EB7B BC62 4D1D 22D8 F3B9 D88C B87F 79A9
 uid Nick Kew <niq@apache...>
 uid Nick Kew (4096-bit key) <nick@webthing...>
 sub 4096R/862BA082 2011-01-30

pub 4096R/B87F79A9 2014-06-16 [revoked: 2016-08-16]
 Key fingerprint = C74C 8AA5 91CB 3766 9D6F 73C0 2DF2 C6E4 B87F 79A9
 uid Nick Kew <niq@apache...>

The imposter’s fingerprint is completely different from mine.  It’s not PGP that’s broken, it’s the use of 32-bit/8-character key IDs in our tools, our tutorials, and our minds, that’s at fault.
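Tools can at least be configured to stop showing the broken short form.  In GnuPG, a couple of lines in gpg.conf make long IDs and fingerprints the norm (these options exist in GnuPG 1.4 and 2.x; recent releases default to long IDs anyway, so check your version’s manual):

```
# ~/.gnupg/gpg.conf
keyid-format 0xlong   # display 64-bit key IDs, not the broken 32-bit form
with-fingerprint      # show the full fingerprint when listing keys
```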

However, the problem is a whole lot worse than that.  It’s not just my key (and everyone else in the Strong Set at the time of the evil32 demo) that has an imposter, it’s the entire WoT.  Let’s see if WordPress will let me present these side-by-side if I truncate the lines a bit.  The commandline used here is

$ gpg --list-sigs [fingerprint] | egrep '^sig' | cut -c14-50 | sort | uniq | head -5

which lists me:

My key:
 010D6F3A 2012-04-11  dirk astrath (mo
 02D1BC65 2011-02-07  Peter Van Eynde 
 0AA3BF0E 2011-02-06  Christophe De Wo
 16879738 2011-02-07  Markus Reichelt 
 1DFBA164 2011-02-07  Bernhard Wiedema

Imposter:
 010D6F3A 2014-08-05  dirk astrath (mo
 02D1BC65 2014-08-05  Peter Van Eynde 
 0AA3BF0E 2014-08-05  Christophe De Wo
 16879738 2014-08-05  Markus Reichelt 
 1DFBA164 2014-08-05  Bernhard Wiedema

The first field there holds the culprit 8-hex-char key IDs for my signatories and their evil32 doppelgangers.  The only clue is in those dates, which would be easy to overlook.  Otherwise we have a complete imposter WoT.  Those IDs offer no more security than a checksum (such as MD5 or SHA) if used without due care, and without a chain of trust right back to the user’s own signature (which is something you probably don’t have if you’re not a geek).

There are a lot of tools and tutorials out there that need updating to prevent this becoming yet another phisher’s playground.  Tools should not merely stop displaying 8-character key IDs, they shouldn’t even accept them.  I don’t think mere disambiguation is enough when an innocent user might thoughtlessly just select, say, the first of competing options.
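And wherever a tutorial says --recv-key with an 8-character ID, it should give the full fingerprint instead.  Using my genuine fingerprint from the listing above (spaces removed):

```shell
# Fetch and display a key by its full 160-bit fingerprint; an evil32
# doppelganger cannot match this, so there is nothing to disambiguate.
gpg --recv-keys 3CE3BAC2EB7BBC624D1D22D8F3B9D88CB87F79A9
gpg --fingerprint 3CE3BAC2EB7BBC624D1D22D8F3B9D88CB87F79A9
```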

I’ve already been diving into some of those tutorials where I have write access to update them, but the task is complicated by having to work in the context of a document that deals with more than just the one thing, and without adding too much complexity for readers.  So I decided to work through the story here first!

Under attack

Yesterday morning I woke up to several hundred (or was it thousand?) messages from the online contact form on my website.  They came from what was clearly an automated dumb probe: all within a few minutes just before 4 a.m.  The probe had tried filling different fields with all kinds of payloads: fishing for Unix paths, fishing for Windows paths, escaped and unescaped commandline sequences including shellshock, SQL injection attacks, Javascript/XSS fragments, attempts to send mail or proxy HTTP.  Oh, and some fragments whose potential purpose eludes me.

OK, no big deal: just a few minutes of my time.  Dumb bots attack websites all the time.  Whatever vulnerabilities my server has (and I’m sure there are some), that kind of bot probing my contact form is no threat – except insofar as it could become a DoS.

This morning, another 740 messages.  From an even briefer probe: all at 03:59 and 04:00.  Checked the IP they all came from, and firewalled it off.  With a DROP rule, of course.  If it recurs from elsewhere, I’ll have to take a view on whether this approach can be extended or is useless.
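For the record, the sort of rule I mean, with a documentation address (198.51.100.23) standing in for the real culprit:

```shell
# Silently discard everything from the probing host.  DROP rather than
# REJECT: give the scanner no feedback whatsoever.
iptables -A INPUT -s 198.51.100.23 -j DROP
```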

If I can be arsed, maybe I’ll stay up and tail the log tonight, starting 03:50 or so.  Wonder if the perpetrator can be pwned while in action?  On second thoughts, maybe not at that hour, doubly not after the couple of pints I regularly enjoy on a Thursday evening.

A little secret

Yahoo admits to a billion customer records being compromised.  The numbers are staggering, but the news of the exploit is mundane.

Doubtless the raw numbers are very largely inactive accounts.  People who long since stopped using Yahoo accounts.  People who signed up with some other company that subsequently got borged by Yahoo.  People who once signed up to access some service but never used the accounts.  Etcetera.  Just as with social media numbers (even just the number of followers of this humble blog), these are to be taken with a big pinch of salt.

Nevertheless, that’s a billion signups.  Allowing for fakes and duplicates, that might be a nine-digit number of real people who once answered security questions.  That’s a bunch of answers that, unlike passwords, travel with the user across multiple services, not just online but also those you might access by other means such as the ‘phone or even face-to-face.  The name of your first pet or your primary school are no more secure than the classic mother’s maiden name.

And now a billion such records have leaked.  Give or take: we don’t know how many users ever were genuine, nor how many such questions and answers each genuine user disclosed.

So what does it mean if you’re one of the billion?  If someone wants to steal your identity, your security questions and answers have passed from the realm of something they have to research to something easily automated.  Well, we don’t know that for certain, but it’s certainly a risk that can no longer be dismissed.

You’d better change your security questions everywhere that matters.  What do you mean, you can’t remember which questions you signed up to Yahoo with twenty years ago?  Don’t tell me you can’t change the city of your birth, or the initials of your first lover.  Oh dear [shakes head].

And even if you’re not one of the billion, you may already have started to get the phishing emails purporting to be from Yahoo (or others) about changing passwords.

I’ve argued here before that security questions are not fit for purpose.  Perhaps the Yahoo leak might help persuade the world to stop using them for things that matter!

Public wifi menace

A couple of days ago, I was looking up a bus timetable from my ‘phone.  All perfectly mundane.

The address I thought I wanted failed: I don’t have it bookmarked and I’ve probably misremembered.  So I googled.

Google failed too.  WTF?  Google annoyingly[1] use https, and I got a message about an invalid certificate.  Who is sitting in the middle?  Surely they can’t really be eavesdropping: with browsers issuing strong warnings, they’re never going to catch anything sensitive.  Must be just a hopelessly misconfigured network.
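Had I been at a desktop rather than on the ’phone, one way to see who was actually answering would be to inspect the certificate the network presents:

```shell
# Show the subject and issuer of the certificate actually served for
# Google; an intercepting proxy's certificate will not chain to a
# trusted CA, and its issuer gives the intermediary away.
openssl s_client -connect www.google.com:443 -servername www.google.com \
    </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
```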

I don’t care if someone watches as I look up a bus time, I just want to get on with it!  But it’s not obvious with android how I can override that warning and access google.  Or even an imposter: if they don’t give me the link I wanted from google, nothing lost!

So has my mobile network screwed up horribly?  Cursing at the hassle, I go into settings and see it’s picked up a wifi network.  BT’s public stuff: OpenZone, or something like that (from memory).  This is BT, or someone on their network, playing sillybuggers.  Just turn wifi off and all works well again as the phone reverts to my network.

Except, now I have to remember to re-enable wifi before doing anything a bit data-intensive, like letting the ‘phone update itself, or joining a video conference.  All too easy to forget.

Hmm, come to think of it, that broken network is probably also what got between me and the bus timetable in the first place.  That wasn’t https.

[1] There are good reasons to encrypt, but search is rarely one of them.  Good that google enables it (at least if you trust google more than $random-shady-bod), but it’s a pain that they enforce it.

Identity and Trust

Folks who know me will know that I’ve been taking an interest for some time in the problems of online identity and trust:

  • Passwords (as we know them today) are a sick joke.
  • Monolithic certificate authorities (and browser trust lists) are a serious weakness in web trust.
  • PGP and the Web of Trust remain the preserve of geekdom.
  • People distrust and even fear centralised databases.  At issue are both the motivations of those who run them, and security against intruders.
  • Complexity and poor practice open doors for phishing and identity theft.
  • Establishing identity and trust can be a nightmare, to the extent that a competent fraudster might find it easier than the real person to establish an identity.

I’m not a cryptographer.  But as a mathematician, software developer, and old cynic, I have the essential ingredients.  I can see that things are wrong and could so easily be a whole lot better at many levels.  It’s not even a hard problem: merely a more rational deployment of existing technology!  Some time back I thought about setting myself up in the business of making it happen, but was put off by the ghost of what happened last time I tried (and failed) to launch an innovative startup.

Recently – starting this summer – I’ve embarked on another mission towards improving the status quo.  Instead of trying to run my own business, I’ve sought out an existing business doing good work in the field, to which I can hope to make a significant contribution.  So the project’s fortunes tap into my strengths as techie rather than my weaknesses as a Suit.

I should add that the project does rather more than just improve the deployment of existing technology, as it significantly advances the underlying cryptographic framework.  Most importantly it introduces a Distributed Trust Authority model, as an alternative to the flawed monolithic Certificate Authority and its single point of failure.  The distributed model also makes it particularly well-suited to “cloud” applications and to securing the “Internet of Things”.

And it turns out, I arrived at an opportune moment.  The project has been single-company open source for some time and generated some interest at github.  Now it’s expanding beyond that: a second corporate team is joining development and I understand there are further prospects.  So it could really use a higher-level development model than github: one that will actively foster the community and offer mutual assurance and protection to all participants.  So we’ve put it forward as a candidate for incubation at Apache.  The proposal is here.

If all goes well, this could be the core of my work for some time to come.  Here’s hoping for a big success and a better, safer online world.

Saved from Visa

I’ve written before about the Fraudster’s Friend misleadingly named “Verified by Visa”.  Most directly in my post Phished by Visa, though Bullied by Visa perhaps also deserves a mention.

Today I went to place an order with Argos, who I’ve used several times before and who have always – in contrast to some of their competitors – delivered very efficiently.  This time, alas, the shopping process has become significantly more hassle, and they’ve introduced the VBV cuckoo into the process.  But I was pleased to note that, when I came to the VBV attack, Firefox flagged it up as precisely what it is: an XSS attack, and in the context of secure data (such as credit card numbers) a serious security issue.

I hope Firefox does that by default, rather than just with my settings.  Though it would be courageous to take the blame from the unwashed masses who might think VBV serves their interests when it doesn’t work.  Doing the Right Thing against an enemy with ignorance on its side has a very bad history in web browsers, as Microsoft in the late 1990s killed off the opposition by exposing their users to a whole family of “viruses” in a move designed to make correct behaviour a loser in the market (specifically, violation of MIME standards documented since 1992 as security-critical).

Alas, while Firefox saved me from the evil phishing attack, the combination of that and other Argos website trouble pushed me to a thoroughly insecure and less than convenient medium: the telephone.  Bah, Humbug.

To phish, or not to phish?

I recently had email telling me my password for $company VPN is due to expire, and directing me to a URL to update it.

Legitimate or phishing?  Let’s examine it.

It follows the exact form of similar legitimate emails I’ve had before.  Password expires in 14 days.  Daily updates decrementing the day count until I change it.  So far so good.

However, it’s directing me to an unfamiliar URL: https://$company.okta.com/.   Big red flag!  But $company outsources a range of admin functions in this manner, so it’s entirely plausible.

It appears to come from a legitimate source.  But since all $company email is outsourced to gmail, the information I can glean from the headers is limited.  How much trust can I place in gmail’s SPF telling me the sender is valid?

A look on $company’s intranet fails to find anything relevant (though in the absence of a search function I probably wouldn’t find it anyway without a truly gruelling trawl).  OK, let’s google for evidence of a legitimate connection between $company and okta.com.  I’ve resolved similar problems to my own satisfaction that way before both for $company and other such situations (e.g. here or here), but the hurdle for a $company-VPN password – even one I’m about to change – has to be high.

Googling finds me only inconclusive evidence.  There’s a linkedin page for $company’s sysop, only it turns out he’s moved on and the linkedin page is just listing both $company and okta skills in his CV.  There’s a PDF at $company’s website with instructions for setting up some okta product (though it’s one of those that insults you with big cuddly pictures of selecting a series of menu options without actually saying anything non-obvious).

Hmmm …

OK, maybe I can get okta.com to prove itself, with the kind of security question your bank asks when you ‘phone it.  Let’s use okta’s “Password Reset”.  I expect it’ll send a one-off token I can use to set a new password.  If legit, that’ll work; if not then the newly-minted password is worthless and I just abandon it.  But no such thing: instead of sending me such a token, it tells (emails) me:

Your Okta account is configured to use the same password you currently use for logging in to your organization’s Windows network. Use your Windows account password to sign in to Okta. Please use the password reset function in Windows to reset your password.

Well, b***er that.  Windows account password?  Windows network?  I have no such thing, and neither does $company expect me to.  I expect $company may have a few windows boxes, but they’re certainly not the norm.  No doubt it just means the LDAP password I’m supposed to be changing, but if I know that then why should I be asking it for password reset?  Bah, Humbug!

One more thing to try before a humiliating request for help over something I should be able to deal with myself.  Somewhere in my gmail I can dig up previous password reset reminders, with a URL somewhere on $company’s own intranet.  Try that URL.  Yes, it still works, and I can reset my VPN password there.  All that investigation for … what?

Well, there’s a value to it.  Namely the acid test: does the daily password reminder stop after I’ve reset the password?  If it’s genuine then it shares information with $intranet and knows I’ve reset my password.  If it’s a phish then it knows nothing.  So now I’m getting some real evidence: if the password reminders stop then it’s genuine.

They do stop.  So I conclude it is indeed genuine.

Unless it’s so ultra-sophisticated that it’s been warned off by my having visited the site and used password reset, albeit unsuccessfully.  Waiting to try again in a few months?  Hmmm ….

Well, if $company hasn’t outsourced it then the intranet-based password reset will continue to work next time.  If it doesn’t work next time then there’s one more piece of evidence it’s genuine.

Defending against shell shock

I started writing a longer post about the so-called shell shock, with analysis of what makes a web server vulnerable or secure.  Or, strictly speaking, not a webserver, but a platform an attacker might access through a web server.  But I’m not sure when I’ll find time to do justice to that, so here’s the short announcement:

I’ve updated mod_taint to offer an ultra-simple defence against the risk of shell shock attacks coming through Apache HTTPD, versions 2.2 or later.  A new simplified configuration option is provided specifically for this problem:

    LoadModule taint_module modules/mod_taint.so
    Untaint shellshock

mod_taint source and documentation are at http://people.apache.org/~niq/mod_taint.c and http://people.apache.org/~niq/mod_taint.html respectively.

Here’s some detail from what I posted earlier to the Apache mailinglists:

Untaint works in a directory context, so can be selectively enabled for potentially-vulnerable apps such as those involving CGI, SSI, ExtFilter, or (other) scripts.

This goes through all Request headers, any PATH_INFO and QUERY_STRING, and (just to be paranoid) any other subprocess environment variables. It untaints them against a regexp that checks for “()” at the beginning of a variable, and returns an HTTP 400 error (Bad Request) if found.

Feedback welcome, indeed solicited. I believe this is a simple but sensible approach to protecting potentially-vulnerable systems, but I’m open to contrary views. The exact details, including the shellshock regexp itself, could probably use some refinement. And of course, bug reports!
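For anyone who wants to check their own bash before worrying about the web server in front of it, the widely-circulated one-liner probe plants a function definition with a trailing command in an environment variable:

```shell
# On a vulnerable bash, the trailing command in the environment variable
# executes when the shell starts, so "vulnerable" is printed before
# "test".  A patched bash treats x as an ordinary variable and prints
# only "test".
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```

That leading "() {" is exactly the pattern mod_taint’s shellshock regexp looks for in incoming request data.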

Scripting with gpg

I have a build script that may, as a matter of convenience, download and build a third-party software package.  Before the build script goes into any release, I want to tighten up its security to ensure it verifies the PGP signature on the package.

OK, I can do that in a Makefile using two separate targets: the tarball, and the verified tarball.  I thought I could make the latter a link to the former, using something like:

gpg --verify $(TARBALL).asc $(TARBALL) \
&& ln $(TARBALL) $(TARBALL-VERIFIED) \
|| (echo "### Please verify $(TARBALL) ###" && exit 1)

However, this is failing me, because gpg is too trusting:

$ gpg --verify nginx-1.7.3.tar.gz.asc nginx-1.7.3.tar.gz
gpg: Signature made Tue 8 Jul 14:22:56 2014 BST using RSA key ID A1C052F8
gpg: Good signature from "Maxim Dounin <email.suppressed>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: B0F4 2533 73F8 F6F5 10D4 2178 520A 9993 A1C0 52F8

$ echo $?
0

(OK, now you know the identity of $(TARBALL))

It has not verified that the signature is trusted, but it still thinks all’s well.  Ouch!  I can verify the signature manually (if rather weakly) but I’d rather not try to script that.  Nor do I want to concern myself with issues that might change with each new nginx release, or with changes of pgp keys.

A bit of googling finds this message, from which it appears this was a known bug but fixed in GnuPG version 1.4.2.1 back in 2006 (and yes, my gpg version is more recent than that)!  Was that a non-fix that only tells you if it’s a BAD signature or no PGP data at all?  That would be no more useful than an MD5 or SHA checksum!

OK folks, what am I missing?  What do you use to script the verification of a package?

Bleeding Heart

The fallout from heartbleed seems to be manifesting itself in a range of ways.  I’ve been required to set new passwords for a small number of online services, and expect I may encounter others as and when I next access them.

The main contrast seems to be between admins who tell you what’s happening, vs services that just stop working.  Contrast Apache and Google:

Apache: email arrives from the infrastructure folks: all system passwords will have to be reset.  Then a second email: if you haven’t already, you’ll have to set a new password via the “forgot my password” mechanism (which sends you PGP-encrypted email instructions).  All very smooth and maximally secure – unless some glitch has yet to manifest itself.

Google: @employer email address, which is hosted on gmail, just stopped working without explanation.  But this is the weekend, and similar things have happened before at weekends, so I ignore it.  But when it’s still not back on Monday, I try logging in with my web browser.  It allows me that, and insists I set a new password, whereupon normal imap access is also restored.  Hmmm … In the first place, no explanation or warning.  In the second place, if the password had been compromised then anyone who had it could trivially have reset it.  Bottom of the class both for insecurity and for the user experience.

There is also secondary fallout: worried users of products that link OpenSSL asking or wondering what they have to upgrade: for example, here.  For most, the answer is that you just upgrade your OpenSSL installation and then restart any services that link it (or reboot the whole system if you favour the sledgehammer approach).  Exceptions to that will be cases where you have custom builds with statically linked OpenSSL, or multiple OpenSSL installations (as might reasonably be the case on a developer’s machine).  If in doubt, restart your services and check for the OpenSSL version appearing in its startup messages: for example, with Apache HTTPD you’ll see it in the error log at startup.