EFF and Open Rights Group today submitted formal comments to the
British Treasury, urging restraint in applying anti-money-laundering
regulations to the publication of open-source software.
The UK government sought public feedback on proposals to update its
financial regulations pertaining to money laundering and terrorism in
alignment with a larger European directive. The consultation asked for
feedback on applying onerous customer due diligence regulations to the
cryptocurrency space as well as what approach the government should take
in addressing “privacy coins” like Zcash and Monero. Most worrisome,
the government also asked “whether the publication of open-source
software should be subject to [customer due diligence] requirements.”
We’ve seen these kinds of attacks on the publication of open-source
software before, in fights dating back to the 90s, when the Clinton
administration attempted
to require that anyone merely publishing cryptography source code
obtain a government-issued license as an arms dealer. Attempting to
force today’s open-source software publishers to follow financial
regulations designed to go after those engaged in money laundering is
equally obtuse.
In our comments, we describe the breadth of free, libre, and open
source software (FLOSS) that benefits the world today across industries
and government institutions. We discuss how these regulatory proposals
could have large and unpredictable consequences not only for the
emerging technology of the blockchain ecosystem, but also for the FLOSS
ecosystem at large. As we stated in our comments:
If the UK government was to determine that open source
software publication should be regulated under money-laundering
regulations, it would be unclear how this would be enforced, or how the
limits of those falling under the regulation would be determined.
Software that could, in theory, provide the ability to enable
cryptocurrency transactions, could be modified before release to remove
these features. Software that lacked this capability could be quickly
adapted to provide it. The core cryptographic algorithms that underlie
various blockchain implementations, smart contract construction and
execution, and secure communications are publicly known and relatively
trivial to express and implement. They are published, examined and
improved by academics, enthusiasts, and professionals alike…
The level of uncertainty this would provide to FLOSS use and
provision within the United Kingdom would be considerable. Such
regulations would burden multiple industries to attempt to guarantee
that their software could not be considered part of the infrastructure
of a cryptographic money-laundering scheme.
Moreover, source code is a form of written creative expression, and
open source code is a form of public discourse. Regulating its
publication under anti-money-laundering provisions fails to honor the
free expression rights of software creators in the United Kingdom, and
their collaborators and users in the rest of the world.
EFF is monitoring the regulatory and legislative reactions to new blockchain technologies, and we’ve recently spoken out about misguided ideas for banning cryptocurrencies and overbroad regulatory responses to decentralized exchanges.
Increasingly, the regulatory backlash against cryptocurrencies is being
tied to overbroad proposals that would censor the publication of
open-source software, and restrict researchers’ ability to investigate,
critique and communicate about the opportunities and risks of
cryptocurrency.
This issue transcends controversies surrounding blockchain tech and
could have significant implications for technological innovation,
academic research, and freedom of expression. We’ll continue to watch
the proceedings with HM Treasury, but fear similar anti-FLOSS proposals
could emerge—particularly as other member states of the European Union
transpose the same Anti-Money Laundering Directive into their own laws.
America, Canada, New Zealand, the UK and Australia are in a surveillance
alliance called The Five Eyes, through which they share much of their
illegally harvested surveillance data.
In a recently released Statement of Principles on Access to Evidence and Encryption,
the Five Eyes powers have demanded, again, that strong cryptography be
abolished and replaced with defective cryptography so that they can spy
on bad guys.
They defend this by saying “Privacy is not absolute.”
But of course, working crypto isn’t just how we stay private from
governments (though god knows all five of the Five Eyes have, in very
recent times, proven themselves to be catastrophically unsuited to
collect, analyze and act on all of our private and most intimate
conversations). It’s how we make sure that no one can break into the
data from our voting machines, or push lethal fake firmware updates to
our pacemakers, or steal all the money from all of the banks, or steal
all of the kompromat on all 22,000,000 US military and government
employees and contractors who’ve sought security clearance.
Also, this is bullshit.
Because it won’t work.
Here’s the text of my go-to post about why this is so fucking stupid. I just can’t be bothered anymore. Jesus fucking christ. Seriously? Are we still fucking talking about this? Seriously? Come on, SERIOUSLY?
It’s impossible to overstate how bonkers the idea of sabotaging
cryptography is to people who understand information security. If you
want to secure your sensitive data either at rest – on your hard drive,
in the cloud, on that phone you left on the train last week and never
saw again – or on the wire, when you’re sending it to your doctor or
your bank or to your work colleagues, you have to use good cryptography.
Use deliberately compromised cryptography, that has a back door that
only the “good guys” are supposed to have the keys to, and you have
effectively no security. You might as well skywrite it as encrypt it
with pre-broken, sabotaged encryption.
There are two reasons why this is so. First, there is the question of
whether encryption can be made secure while still maintaining a “master
key” for the authorities’ use. As lawyer/computer scientist Jonathan
Mayer explained,
adding the complexity of master keys to our technology will “introduce
unquantifiable security risks”. It’s hard enough getting the security
systems that protect our homes, finances, health and privacy to be
airtight – making them airtight except when the authorities don’t want
them to be is impossible.
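A toy sketch makes the escrow problem concrete. The "cipher" below is deliberately fake (a repeating-key XOR that no one should ever use for real); the point is only that a mandated master-key database is a single target whose theft unlocks every user's data at once:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR, for illustration only -- NOT real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Every user gets their own key...
alice_key = secrets.token_bytes(16)

# ...but a mandated back door escrows a copy with the authorities.
escrow_database = {"alice": alice_key}

ciphertext = xor(b"private medical records", alice_key)

# Anyone who obtains the escrow database -- a crooked insider, a foreign
# spy, a thief -- can read everything, not just one message.
stolen_key = escrow_database["alice"]
assert xor(ciphertext, stolen_key) == b"private medical records"
```

The same structure holds no matter how strong the underlying cipher is: the escrow database, not the mathematics, becomes the weakest link.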
What these leaders think they’re saying is, “We will command all the
software creators we can reach to introduce back-doors into their tools
for us.” There are enormous problems with this: there’s no back door
that only lets good guys go through it. If your WhatsApp or Google
Hangouts has a deliberately introduced flaw in it, then foreign spies,
criminals, and crooked police (like those who fed sensitive information
to the tabloids implicated in the phone-hacking scandal – and like the
high-level police who secretly worked for organised crime for years)
will eventually discover this vulnerability. They – and not just the
security services – will be able to use it to intercept all of our
communications: everything from the pictures of your kids in the bath
that you send to your parents, to the trade secrets you send to your
co-workers.
But this is just for starters. These officials don’t understand
technology very well, so they don’t actually know what they’re asking
for.
For this proposal to work, they will need to stop Britons, Canadians,
Americans, Kiwis and Australians from installing software that comes
from software creators who are out of their jurisdiction. The very best
in secure communications are already free/open source projects,
maintained by thousands of independent programmers around the world.
They are widely available, and thanks to things like cryptographic
signing, it is possible to download these packages from any server in
the world (not just big ones like GitHub) and verify, with a very high
degree of confidence, that the software you’ve downloaded hasn’t been
tampered with.
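To illustrate the tamper-detection half of this, here's a minimal sketch using a SHA-256 digest. Real package signing (with tools like GPG or minisign) goes further, adding an asymmetric signature over such a digest so the publisher's identity can be verified too; the package contents here are made up for the example:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a blob of bytes."""
    return hashlib.sha256(data).hexdigest()

# Pretend this is a release, and the project published its digest
# alongside it on its own site.
package = b"secure-messenger-1.0.tar.gz contents..."
published_digest = sha256_digest(package)

# Any mirror anywhere in the world can serve the file; we verify
# the copy against the published digest before trusting it.
downloaded = package
assert sha256_digest(downloaded) == published_digest  # untampered

tampered = downloaded + b"backdoor"
assert sha256_digest(tampered) != published_digest  # tampering is detectable
```

Because the check depends only on the bytes and the published digest, it doesn't matter which server the download came from.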
Australia is not alone here. The regime they propose is already in
place in countries like Syria, Russia, and Iran (for the record, none of
these countries have had much luck with it). There are two means by
which authoritarian governments have attempted to restrict the use of
secure technology: by network filtering and by technology mandates.
Australian governments have already shown that they believe they can
order the nation’s ISPs to block access to certain websites (again, for
the record, this hasn’t worked very well). The next step is to order
Chinese-style filtering using deep packet inspection, to try and
distinguish traffic and block forbidden programs. This is a formidable
technical challenge. Intrinsic to core Internet protocols like IPv4/6,
TCP and UDP is the potential to “tunnel” one protocol inside another.
This makes the project of figuring out whether a given packet is on the
white-list or the black-list transcendentally hard, especially if you
want to minimise the number of “good” sessions you accidentally
blackhole.
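A toy sketch shows why tunnelling defeats naive filtering: any forbidden payload can be dressed up as a packet the filter has to allow. The "protocol" below, a fake HTTP request with the payload smuggled into the query string, is invented for illustration; real tunnels (over DNS, TLS, ICMP and so on) are more sophisticated but rest on the same trick:

```python
import base64

def tunnel_in_http(payload: bytes) -> bytes:
    """Wrap an arbitrary payload so it looks like an ordinary HTTP request."""
    token = base64.urlsafe_b64encode(payload).decode()
    return (f"GET /search?q={token} HTTP/1.1\r\n"
            f"Host: example.com\r\n\r\n").encode()

def extract(packet: bytes) -> bytes:
    """Recover the payload from the fake query string."""
    first_line = packet.split(b"\r\n", 1)[0].decode()
    token = first_line.split("q=", 1)[1].split(" ", 1)[0]
    return base64.urlsafe_b64decode(token)

secret = b"encrypted chat message"
packet = tunnel_in_http(secret)
assert packet.startswith(b"GET ")  # looks like plain web traffic
assert extract(packet) == secret   # but carries the forbidden bytes
```

A filter that blocks this has to block ordinary web searches too, which is exactly the "good sessions blackholed" trade-off described above.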
More ambitious is a mandate over which code operating systems in the 5
Eyes nations are allowed to execute. This is very hard. We do have, in
Apple’s iOS platform and various games consoles, a regime where a single
company uses countermeasures to ensure that only software it has
blessed can run on the devices it sells to us. These companies could,
indeed, be compelled (by an act of Parliament) to block secure software.
Even there, you’d have to contend with the fact that other states are
unlikely to follow suit, and that means that anyone who bought her
iPhone in Paris or Mexico could come to the 5 Eyes countries with all
their secure software intact and send messages “we cannot read.”
But there is the problem of more open platforms, like GNU/Linux
variants, BSD and other unixes, Mac OS X, and all the non-mobile
versions of Windows. All of these operating systems are already designed
to allow users to execute any code they want to run. The commercial
operators – Apple and Microsoft – might conceivably be compelled by
Parliament to change their operating systems to block secure software in
the future, but that doesn’t do anything to stop people from using all
the PCs now in existence to run code that the PM wants to ban.
More difficult is the world of free/open operating systems like
GNU/Linux and BSD. These operating systems are the gold standard for
servers, and widely used on desktop computers (especially by the
engineers and administrators who run the nation’s IT). There is no legal
or technical mechanism by which code that is designed to be modified by
its users can co-exist with a rule that says that code must treat its
users as adversaries and seek to prevent them from running prohibited
code.
This, then, is what the Five Eyes are proposing:
* All 5 Eyes citizens’ communications must be easy for criminals, voyeurs and foreign spies to intercept
* Any firms within reach of a 5 Eyes government must be banned from producing secure software
* All major code repositories, such as GitHub and SourceForge, must be blocked in the 5 Eyes
* Search engines must not answer queries about web-pages that carry secure software
* Virtually all academic security work in the 5 Eyes must cease –
security research must only take place in proprietary research
environments where there is no onus to publish one’s findings, such as
industry R&D and the security services
* All packets in and out of 5 Eyes countries, and within those
countries, must be subject to Chinese-style deep-packet inspection and
any packets that appear to originate from secure software must be
dropped
* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software
* Anyone visiting a 5 Eyes country from abroad must have their smartphones held at the border until they leave
* Proprietary operating system vendors (Microsoft and Apple) must be
ordered to redesign their operating systems as walled gardens that only
allow users to run software from an app store, which will not sell or
give secure software to Britons
* Free/open source operating systems – that power the energy, banking,
ecommerce, and infrastructure sectors – must be banned outright
The Five Eyes officials will say that they don’t want to do any of
this. They’ll say that they can implement weaker versions of it – say,
only blocking some “notorious” sites that carry secure software. But
anything less than the programme above will have no material effect on
the ability of criminals to carry on perfectly secret conversations that
“we cannot read”. If any commodity PC or jailbroken phone can run any
of the world’s most popular communications applications, then “bad guys”
will just use them. Jailbreaking an OS isn’t hard. Downloading an app
isn’t hard. Stopping people from running code they want to run is – and
what’s more, it puts every 5 Eyes nation – individuals and
industry – in terrible jeopardy.
That’s a technical argument, and it’s a good one, but you don’t have to
be a cryptographer to understand the second problem with back doors: the
security services are really bad at overseeing their own behaviour.
Once these same people have a back door that gives them access to
everything that encryption protects, from the digital locks on your home
or office to the information needed to clean out your bank account or
read all your email, there will be lots more people who’ll want to
subvert the vast cohort that is authorised to use the back door, and the
incentives for betraying our trust will be much more lavish than
anything a tabloid reporter could afford.
If you want a preview of what a back door looks like, just look at the
US Transportation Security Administration’s “master keys” for the locks
on our luggage. Since 2003, the TSA has required all locked baggage
travelling within, or transiting through, the USA to be equipped with
Travelsentry locks, which have been designed to allow anyone with a
widely held master key to open them.
What happened after Travelsentry went into effect? Stuff started going
missing from bags. Lots and lots of stuff. A CNN investigation into
thefts from bags checked in US airports found thousands of incidents of
theft committed by TSA workers and baggage handlers. And though
“aggressive investigation work” has cut back on theft at some airports,
insider thieves are still operating with impunity throughout the
country, even managing to smuggle stolen goods off the airfield in
airports where all employees are searched on their way in and out of
their work areas.
The US system is rigged to create a halo of buck-passing
unaccountability. When my family picked up our bags from our Easter
holiday in the US, we discovered that the TSA had smashed the locks off
my nearly new, unlocked, Travelsentry-approved bag, taping it shut after
confirming it had nothing dangerous in it, and leaving it “completely
destroyed” in the words of the official BA damage report. British
Airways has sensibly declared the damage to be not their problem, as
they had nothing to do with destroying the bag. The TSA directed me to a
form that generated an illiterate reply from a government subcontractor,
sent from a do-not-reply email address, advising that “TSA is not
liable for any damage to locks or bags that are required to be opened by
force for security purposes” (the same note had an appendix warning me
that I should treat this communication as confidential). I’ve yet to
have any other communications from the TSA.
Making it possible for the state to open your locks in secret means that
anyone who works for the state, or anyone who can bribe or coerce
anyone who works for the state, can have the run of your life.
Cryptographic locks don’t just protect our mundane communications:
cryptography is the reason why thieves can’t impersonate your fob to
your car’s keyless ignition system; it’s the reason you can bank online;
and it’s the basis for all trust and security in the 21st century.
In her Dimbleby lecture, Martha Lane Fox recalled Aaron Swartz’s words:
“It’s not OK not to understand the internet anymore.” That goes double
for cryptography: any politician caught spouting off about back doors is
unfit for office anywhere but Hogwarts, which is also the only
educational institution whose computer science department believes in
“golden keys” that only let the right sort of people break your
encryption.
The method is a steganographic technique, meaning it hides secret information in plain sight such that only its intended recipient knows where to look for it and how to extract it. FontCode can be applied to hundreds of common fonts, like Helvetica or Times New Roman, and works in word processors like Microsoft Word. Data encoded with FontCode can also endure across any image-preserving digital format, like PDF or PNG. The secret data won’t persist after, say, copying and pasting FontCode text between text editors.
[…]
The text perturbations FontCode uses to embed a message involve slightly changing curvatures, widths, and heights—but crucially it’s all imperceptible to the naked eye. You can intuit that some letters, like capital “I”s or “J”s, don’t have a lot of complexity in which to hide subtle variations. But lowercase “a”s and “g”s, for example, have lots of edges and curves that can be elongated or shortened and bulked up or pared down.
The only easy way to extract the hidden information in all those tiny tweaks is with the research team’s decoding algorithm. A recipient of a FontCode message could use their smartphone to take a picture of text manipulated with FontCode, then run the photo through a dedicated mobile app that decodes the image to pull out the hidden message. It would also be possible to set up decoding schemes that use a webcam, a scanner, or any other image digitization method. You can see how it works in the video below.
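FontCode's actual technique perturbs glyph outlines, which can't be reproduced in a few lines of code. But the general steganographic idea it rests on – hiding bits in features a human reader never notices – can be sketched with zero-width Unicode characters. This is an assumption-laden toy standing in for the concept, not the researchers' method:

```python
ZERO = "\u200b"  # zero-width space     -> encodes bit 0
ONE = "\u200c"   # zero-width non-joiner -> encodes bit 1

def hide(cover: str, secret: bytes) -> str:
    """Append the secret as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in secret)
    return cover + "".join(ONE if b == "1" else ZERO for b in bits)

def reveal(text: str) -> bytes:
    """Collect the invisible characters and turn them back into bytes."""
    bits = "".join("1" if c == ONE else "0" for c in text if c in (ZERO, ONE))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stego = hide("An ordinary sentence.", b"hi")
assert stego.startswith("An ordinary sentence.")  # visually unchanged
assert reveal(stego) == b"hi"                     # secret survives
```

Like FontCode, this survives as long as the carrier is preserved exactly, and breaks the moment the hidden features are stripped – which is why FontCode's glyph-level encoding, surviving print and re-photography, is the harder and more interesting trick.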