Tag: Long read

The emerging split in modern trustbusting: Alexander Hamilton’s Fully Automated Luxury Communism vs Thomas Jefferson’s Redecentralization

mostlysignssomeportents:

From the late 1970s on, the Chicago School economists worked with the
likes of Ronald Reagan, Margaret Thatcher, Augusto Pinochet and Brian
Mulroney to dismantle antitrust enforcement, declaring that the only
time government should intervene is when monopolists conspire to raise
prices – everything else was fair game.

Some 40 years later, a new generation of trustbusters has emerged,
with their crosshairs centered on Big Tech, the first industry to grow
up under these new, monopoly-friendly rules, and thus the first (but far
from the only) example of an industry in which markets, finance and
regulatory capture were allowed to work their magic to produce a highly
concentrated sector of seemingly untame-able giants who are growing
bigger by the day.

The new anti-monopolism is still in its infancy, but a very old division has emerged within its ranks: the Jeffersonian ideal of decentralized power
(represented by trustbusters who want to break up Big Tech into small,
manageable firms that are too small to crush emerging rivals or capture
regulators) versus the Hamiltonian ideal of efficiency at scale,
tempered by expert regulation that forces firms to behave in the public
interest (with the end-game sometimes being “Fully Automated Luxury Communism,”
in which the central planning efficiencies of Wal-Mart and Amazon are
put in the public’s hands, rather than their shareholders’).

There are positions in between these two, of course: imagine, say, a set
of criteria for evaluating whether some element of scale or some feature
makes a company a danger to new competitors or its users, with forced
spinoffs of those parts of the business, while allowing the rest to grow
to arbitrarily large scales (agreeing on those criteria might be hard!
Also: one consequence of getting it wrong is that we’ll end up with new
giants whom we’ll have to defeat in order to correct our mistake).

“Fully Automated Luxury Communism” contains two assumptions: first, that
late-stage capitalism has finally proved that markets aren’t the only
way to allocate resources efficiently. Wal-Mart’s internal economy is
larger than the USSR’s was at its peak, and it doesn’t use markets to figure
out how to run its internal allocations or supply chain (this is
explored in detail in an upcoming book called “The People’s Republic of
Wal-Mart,” by Leigh Phillips, author of the superb “Austerity Ecology & the Collapse-Porn Addicts: A Defence of Growth, Progress, Industry and Stuff”; it’s also at the heart of Paul Mason’s analysis in the excellent “Postcapitalism”); second, that the best way to capture and improve this ability to allocate resources is by keeping these entities very large.

You see this a lot in the debate about AI: advocates for keeping firms
big (but taming them through regulation) say that all the real public
benefits of AI (accurate medical diagnosis, tailored education,
self-driving cars, etc.) are only possible with massive training-data
sets of the sort that require concentration, not decentralization.

There’s also a version of it in the debate about information security:
Apple (the argument goes) exercises a gatekeeper authority that keeps
malicious apps out of the App Store, while simultaneously maintaining a
hardware, parts and repair monopoly that keeps exploitable, substandard
parts (or worse, parts that have been deliberately poisoned) out of
their users’ devices.

More recently, the Efail vulnerability
has some security researchers revisiting the wisdom of federated
systems like email, and pondering whether a central point of control over
end-point design is the only way to make things secure (security is a
team sport: it doesn’t matter how secure your end of a conversation is,
if the other end is flapping wide open in the breeze, because your
adversaries get to choose where they attack, and they’ll always choose
the weakest point).

The redecentralizers counter by pointing out the risks of having a
single point of failure: when a company has the power to deliver perfect
control to authoritarian thugs, expect those thugs to do everything in their power to secure the company’s cooperation. They point out that the benefits of AI are largely speculative, and that really large sets of training data don’t do anything to root out AI’s potentially fatal blind spots or to prevent algorithmic bias – those require transparency, peer review, disclosure and subservience to the public good.

They point out that regulatory capture is an age-old problem: Carterfone
may have opened the space for innovation in telephone devices (leading,
eventually, to widespread modem use and the consumer internet), but it
didn’t stop AT&T from becoming a wicked monopolist, and subsequent attempts to use pro-competitive rules (rather than enforced smallness) to tame AT&T have failed catastrophically.

It’s very hard to craft regulations that can’t be subverted
by dominant giants and converted from a leash around their own necks
into a whip they use to fight off new entrants to their markets.

I have been noodling with the idea
of regulating monopolies by punishing companies whenever their effect
on their users is harmful, rather than prescribing how they should
behave. My prototype for this, regarding Facebook, goes like
this:

Step one: agree on some measure of “people feel they must use Facebook even though they hate it” (this is hard!)

Step two: give Facebook a year to make that number go down

Step three: punish Facebook if the number doesn’t go down

Step four: lather, rinse, repeat
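The loop above can be sketched in code, purely as an illustration: the coercion metric, the simulated platform and the revenue-proportional fine are all invented for the sketch, and step one’s hard problem – actually measuring how coerced users feel – is assumed away behind a single number.

```python
# Hypothetical sketch of outcome-based regulation: measure a coercion
# score each year, and fine the platform whenever the score fails to fall.
# Every name and number here is a stand-in, not a real policy instrument.

def coercion_score(platform):
    """Stand-in for an agreed-upon measure of 'people feel they must
    use this platform even though they hate it' (the hard part)."""
    return platform["score"]

def regulate(platform, years=3, penalty_rate=0.1):
    """Each year, demand the score go down; fine the platform when it doesn't."""
    history = []
    for year in range(years):
        before = coercion_score(platform)
        # Simulated response: a compliant platform reduces coercion,
        # a defiant one lets it keep creeping up.
        platform["score"] *= 0.9 if platform["complies"] else 1.05
        after = coercion_score(platform)
        fined = after >= before
        if fined:
            platform["fines"] += penalty_rate * platform["revenue"]
        history.append((year, round(after, 2), fined))
    return history

defiant = {"score": 100.0, "complies": False, "revenue": 1000.0, "fines": 0.0}
print(regulate(defiant))   # fined every year; fines accumulate
```

The point of the sketch is only that the regulator never specifies *how* the score must come down – the platform picks its own remedies, and the rule bites on the outcome alone.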

The devil is up there in step one, agreeing on a way to measure
Facebook’s coercive power. But that is, after all, the thing we really
care about. Jeffersonian Decentralizers want Facebook made smaller
because smaller companies are less able to coerce their users.
Technocratic Hamiltonians want to regulate Facebook to prevent it from
abusing its users. Both care, ultimately, about abuse – if Facebook were
split into four business units that still made their users miserable,
Jeffersonians would not declare victory; if it were made to operate under
a set of rules that still inflicted pain on billions of Facebook users,
Hamiltonians would share their pain.

Size and rules are a proxy for an outcome: harm.

https://boingboing.net/2018/05/22/too-big-to-fail-2.html