Category: Uncategorized

dzamie:

shoshi-miriam:

amisbro:

edwardspoonhands:

rakugaki-otoko:

snarkydiscolizard:

“i’m sad and idk how to feel better”

image

“i don’t know what to draw”

image

“i always mess up”

image

“BUT I SUCK”

image

LISTEN TO BOB ROSS.

Bob Ross was paid $0 to make his series. He made a living giving lessons IRL and later selling his own line of paints and brushes.

I apologize for not reblogging him as much but everyone needs this on their dash daily.  Seriously everyone needs this on their blog or wherever.

Do they rerun this anymore or no?

Words of wisdom!

@amisbro Twitch Always On still runs it, I think. Shouldn’t be too hard to find episodes online, either.

elodieunderglass:

thesilentdarkangel:

elodieunderglass:

flamethrowing-hurdy-gurdy:

elodieunderglass:

flamethrowing-hurdy-gurdy:

I have had this on my mind for days, someone please help:

Why are dogs dogs?

I mean, how do we see a pug and then a husky and understand that both are dogs? I’m pretty sure I’ve never seen a picture of a breed of dog I hadn’t seen before and wondered what animal it was.

Do you want the Big Answer or the Small Answers cos I have a feeling this is about to get Intense

Oooh okay are YOU gonna answer this, hang on I need to get some snacks and make sure the phone is off.

The short answer is “because they’re statistically unlikely to be anything else.”

The long question is “given the extreme diversity of morphology in dogs, with many subsets of ‘dogs’ bearing no visual resemblance to each other, how am I able to intuit that they belong to the ‘dog’ set just by looking?”

The reason that this is a Good Big Question is because we are broadly used to categorising Things as related based on resemblances. Then everyone realized about genes and evolution and so on, and so now we have Fun Facts like “elephants are ACTUALLY closely related to rock hyraxes!! Even though they look nothing alike!!”

These Fun Facts are appealing because they’re not intuitive.
So why is dog-sorting intuitive?

Well, because if you eliminate all the other possibilities, most dogs are dogs.

To process Things – whether animals, words, situations or experiences – our brains categorise the most important things about them, and then compare these to our memory banks. If we’ve experienced the same thing before – whether first-hand or through a story – then we know what’s happening, and we proceed accordingly.

If the New Thing is completely New, then the brain pings up a bunch of question marks, shunts into a different track, counts up all the Similar Traits, and assigns it a provisional category based on its similarity to other Things. We then experience the Thing, exploring it further, and gaining new knowledge. Our brain then categorises the New Thing based on the knowledge and traits. That is how humans experience the universe. We do our best, and we generally do it well.

This is the basis of stereotyping. It underlies some of our worst behaviours (racism) and some of our most challenging problems (trauma); it helps us survive (stories); and our attempts to share the ability with things that don’t have it lead to some of our most whimsical creations (artificial intelligence).

In fact, one reason that humans are so wonderfully successful is that we can effectively gain knowledge from experiences without having experienced them personally! You don’t have to eat all the berries to find the poisonous ones. You can just remember stories and descriptions of berries, and compare those to the ones you’ve just discovered. You can benefit from memories that aren’t your own!

On the other hand, if you had a terribly traumatic experience involving, say, an eagle, then your brain will try to protect you in every way possible from a similar experience. If you collect too many traumatic experiences with eagles, then your brain will not enjoy eagle-shaped New Things. In fact, if New Things match up to too many eagle-like categories, such as

* pointy
* Specific!! Squawking noise!!
* The hot Glare of the Yellow Eye
* Patriotism?!?
* CLAWS VERY BAD VERY BAD

Then the brain may shunt the train of thought back into trauma, and the person will actually experience the New Thing as trauma. Even if the New Thing was something apparently unrelated, like being generally pointy, or having a hot glare. (This is an overly simplistic explanation of how triggers work, but it’s the one most accessible to people.)

So the answer rests in how we categorise dogs, and what “dog” means to humans. Human brains associate dogs with universal categories, such as

* four legs
* Meat Eater
* Soft friend
* Doggo-ness????
* Walkies
* An Snout,
* BORK BORK

Anything we have previously experienced and learned as A Dog gets added to the memory bank. Sometimes it brings new categories along with it. So a lifetime’s experience results in excellent dog-intuition.

And anything we experience with, say, a 90% match is officially a Dog.

Brains are super-good at eliminating things, too. So while the concept of physical doggo-ness is pretty nebulous, and has to include greyhounds and Pekingese and mastiffs, we know that even if an animal LOOKS like a bear, if the other categories don’t match up in context (bears are not usually soft friends, they don’t Bork Bork, they don’t have long tails to wag) then it is statistically more likely to be a Doggo. If it occupies a dog-shaped space then it is usually a dog.
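The trait-counting and thresholding described above can be sketched as a toy classifier. This is purely illustrative, not a cognitive model: the trait names and the 90% cutoff are borrowed playfully from the post, and the whole schema is a made-up example.

```python
# Toy sketch of schema matching: compare a new Thing's observed traits
# against a stored "Doggo" schema and call it a dog when the overlap
# crosses a threshold (the post's "90% match").

DOG_SCHEMA = {"four legs", "meat eater", "soft friend", "walkies",
              "snout", "bork bork", "tail wag"}

def doggo_quotient(observed_traits):
    """Fraction of the Doggo schema that the new Thing matches."""
    return len(DOG_SCHEMA & observed_traits) / len(DOG_SCHEMA)

def classify(observed_traits, threshold=0.9):
    """Provisional category: Dog if the match clears the threshold."""
    if doggo_quotient(observed_traits) >= threshold:
        return "Dog"
    return "unknown soft friend?"

# A mop-like floof on a leash matches the whole schema once you look closely;
# a bear matches only a few traits, so it gets shunted elsewhere.
komondor = {"four legs", "meat eater", "soft friend", "walkies",
            "snout", "bork bork", "tail wag"}
bear = {"four legs", "meat eater", "snout"}

print(classify(komondor))  # Dog
print(classify(bear))      # unknown soft friend?
```

In this sketch the "PING! NEW CATEGORIES ADDED" step would just be adding the confusing Snout and mopness traits to the set, which is roughly what a schema update is.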

So if you see someone dragging a fluffy whatnot along on a string, you will go,

* Mop?? (Unlikely – seems to be self-propelled.)
* Alien? (Unlikely – no real alien ever experienced.)
* Threat? (Vastly unlikely in context.)
* Rabbit? (No. Rabbits hop, and this appears to scurry.) (Brains are very keen on categorising movement patterns. This is why lurching zombies and bad CGI are so uncomfortable to experience: brains just go “INCORRECT!! That is WRONG!” without consciously knowing why. Anyway, very few animals move like domestic dogs!)
* Very fluffy cat? (Maybe – but not quite. Shares many characteristics, though!)
* Eldritch horror? (No, it is obviously a soft friend of unknown type)
* Robotic toy? (Unlikely – too complex and convincing.)
* alert: amusing animal detected!!! This is a good animal!! This is pleasing!! It may be appropriate to laugh at this animal, because we have just realized that it is probably a …
* DOG!!!! Soft friend, alive, walks on leash. It had a low doggo-ness quotient! and a confusing Snout, but it is NOT those other Known Things, and it occupies a dog-shaped space!
* Hahahaha!!! It is extra funny and appealing, because it made us guess!!!! We love playing that game.
* Best doggo.
* PING! NEW CATEGORIES ADDED TO “Doggo” set: mopness, floof, confusing Snout.

And that’s why most dogs are dogs. You’re so good at identifying dog-shaped spaces that they can’t be anything else!

This is sooo CUTE!

I love this!

@elodieunderglass thank you for teaching me a New Thing™️

You’re very welcome!

Technically the cognitive process of quantifying Doggo-ness is called a schema. But I wrote it a while ago, on mobile, at about 4 am, while nursing a newborn baby with the other arm, and I’m frankly astonished that I was able to continue a single train of thought for that long, let alone remembering Actual Names For Things (That Have Names.) I strongly encourage you to learn more about schemata if you are interested in this sort of thing!

livyscuteurl:

tzikeh:

purplemonkfish:

If I could rise up and applaud I would. THIS! fucking THIS.

You know what bullied people do? They take it out on themselves. They go home and hang themselves; they don’t pick up a gun and think “I’ll show them!” That’s anger, entitlement, resentment. That’s not bullying doing it, that’s being a fucking asshole.

The last shooting? He killed that girl because she fucking dumped him. How, just HOW is that not fucking entitlement right there?

“White men from prosperous families grow up with the expectation that our voices will be heard. We expect politicians and professors to listen to us and respond to our concerns. We expect public solutions to our problems. And when we’re hurting, the discrepancy between what we’ve been led to believe is our birthright and what we feel we’re receiving in terms of attention can be bewildering and infuriating. Every killer makes his pain another’s problem. But only those who’ve marinated in privilege can conclude that their private pain is the entire world’s problem with which to deal. This is why, while men of all races and classes murder their intimate partners, it is privileged young white dudes who are by far the likeliest to shoot up schools and movie theaters.” 

Why Most Mass Murderers Are Privileged White Men

I feel the need to point out that while the continued portrayal and normalization of violence in media is a thing (and a thing marketed especially to men, and especially to white men), I almost think that the issue is the lack of any action against initial mass shootings.

Once these same entitled men saw that shootings would achieve what they wanted to achieve in terms of making their problem the world’s problem and getting their point across with little backlash (at least compared to what the backlash should’ve been) and little prevention, mass shootings became more prevalent and preferred as a means of entitled men getting their point across.

The emerging split in modern trustbusting: Alexander Hamilton’s Fully Automated Luxury Communism vs Thomas Jefferson’s Redecentralization

mostlysignssomeportents:

From the late 1970s on, the Chicago School economists worked with the likes of Ronald Reagan, Margaret Thatcher, Augusto Pinochet and Brian Mulroney to dismantle antitrust enforcement, declaring that the only time government should intervene is when monopolists conspired to raise prices – everything else was fair game.

Some 40 years later, a new generation of trustbusters has emerged, with their crosshairs centered on Big Tech, the first industry to grow up under these new, monopoly-friendly rules, and thus the first (but far from the only) example of an industry in which markets, finance and regulatory capture were allowed to work their magic to produce a highly concentrated sector of seemingly untame-able giants who are growing bigger by the day.

The new anti-monopolism is still in its infancy, but a very old division has emerged within its ranks: the Jeffersonian ideal of decentralized power (represented by trustbusters who want to break up Big Tech into small, manageable firms that are too small to crush emerging rivals or capture regulators) versus the Hamiltonian ideal of efficiency at scale, tempered by expert regulation that forces firms to behave in the public interest (with the end-game sometimes being “Fully Automated Luxury Communism,” in which the central planning efficiencies of Wal-Mart and Amazon are put in the public’s hands, rather than their shareholders’).

There are positions in between these two, of course: imagine, say, a set of criteria for evaluating whether some element of scale or feature makes a company a danger to new competitors or its users, and force spinoffs of those parts of the business, while allowing the rest to grow to arbitrarily large scales (agreeing on those criteria might be hard! Also: one consequence of getting it wrong is that we’ll end up with new giants whom we’ll have to defeat in order to correct our mistake).

“Fully Automated Luxury Communism” contains two assumptions: first, that late-stage capitalism has finally proved that markets aren’t the only way to allocate resources efficiently. Wal-Mart’s internal economy is larger than the USSR at its peak, and it doesn’t use markets to figure out how to run its internal allocations or supply chain (this is explored in detail in an upcoming book called “The People’s Republic of Wal-Mart,” by Leigh Phillips, author of the superb Austerity Ecology & the Collapse-Porn Addicts: A Defence Of Growth, Progress, Industry And Stuff; it’s also at the heart of Paul Mason’s analysis in the excellent Postcapitalism). Second, that the best way to capture and improve this ability to allocate resources is by keeping these entities very large.

You see this a lot in the debate about AI: advocates for keeping firms big (but taming them through regulation) say that all the real public benefits of AI (accurate medical diagnosis, tailored education, self-driving cars, etc.) are only possible with massive training-data sets of the sort that require concentration, not decentralization.

There’s also a version of it in the debate about information security: Apple (the argument goes) exercises a gatekeeper authority that keeps malicious apps out of the App Store, while simultaneously maintaining a hardware, parts and repair monopoly that keeps exploitable, substandard parts (or worse, parts that have been deliberately poisoned) out of their users’ devices.

More recently, the Efail vulnerability has some security researchers revisiting the wisdom of federated systems like email and pondering whether a central point of control over end-point design is the only way to make things secure (security is a team sport: it doesn’t matter how secure your end of a conversation is if the other end is flapping wide open in the breeze, because your adversaries get to choose where they attack, and they’ll always choose the weakest point).

The redecentralizers counter by pointing out the risks of having a single point of failure: when a company has the power to deliver perfect control to authoritarian thugs, expect those thugs to do everything in their power to secure its cooperation. They point out that the benefits of AI are largely speculative, and that really large sets of training data do nothing to root out AI’s potentially fatal blind spots or to prevent algorithmic bias – those require transparency, peer review, disclosure and subservience to the public good.

They point out that regulatory capture is an age-old problem: Carterfone may have opened the space for innovation in telephone devices (leading, eventually, to widespread modem use and the consumer internet), but it didn’t stop AT&T from becoming a wicked monopolist, and subsequent attempts to use pro-competitive rules (rather than enforced smallness) to tame AT&T have failed catastrophically.

It’s very hard to craft regulations that can’t be subverted by dominant giants and converted from a leash around their own necks into a whip they use to fight off new entrants to their markets.

I have been noodling with the idea of regulating monopolies by punishing companies whenever their effect on their users is harmful, rather than regulating how they should behave. My prototype for this, regarding Facebook, goes like this:

Step one: agree on some measure of “people feel they must use Facebook even though they hate it” (this is hard!)

Step two: give Facebook a year to make that number go down

Step three: lather, rinse, repeat

The devil is up there in step one, agreeing on a way to measure
Facebook’s coercive power. But that is, after all, the thing we really
care about. Jeffersonian Decentralizers want Facebook made smaller
because smaller companies are less able to coerce their users.
Technocratic Hamiltonians want to regulate Facebook to prevent it from
abusing its users. Both care, ultimately, about abuse – if Facebook was
split into four business units that still made their users miserable,
Jeffersonians would not declare victory; if it was made to operate under
a set of rules that still inflicted pain on billions of Facebook users,
Hamiltonians would share their pain.

Size and rules are a proxy for an outcome: harm.

https://boingboing.net/2018/05/22/too-big-to-fail-2.html