Category: Uncategorized

dovewithscales:

the6fingeredbiologist:

giada-luna:

dovewithscales:

hyratel:

dovewithscales:

messy-scandinoodle:

dovewithscales:

virtuous-thing:

baaaaaaaaaaaaaaaaaaa:

heartgemsona:

erotic-yoddeling:

bemusedlybespectacled:

nonlinear-nonsubjective:

sonneillonv:

castiel-for-king:

maliwanhellfires:

just-shower-thoughts:

Mammals both produce milk and have hair. Ergo, a coconut is a mammal.

I know you’re being facetious, but this is an actual issue with morphology-based phylogeny.

*leans over and whispers to person beside me* what are they talking about

*leans over and whispers back* Human ability to quantify and categorize natural phenomena is sketchy at best and wildly misleading at worst

consider the coconut

this reminds me of that time Plato defined humans as “featherless bipeds” and Diogenes ran in with a plucked chicken screaming “BEHOLD A MAN!”

i love how you say “this reminds me of that time” like you were there.

listen if an immortal feels brave and supported enough to come out we should respect them

This post is a journey

1 Reblog = 1 Respect

I maintain that humans started attempting to classify animals, and some god or another made the platypus, and is still laughing.

Zeus: *hits joint* okay so like. It’s gonna have a duck bill right. But an otter body okay? And then a beaver tail. It’s a mammal. But. It lays eggs!

Hades: wait wait dude. Give it. Give it poison. Make it poisonous

Athena: You mean venomous, and make sure the eggs have both reptile and bird traits.

Hermes: *takes the joint* Give it extra senses.

Poseidon: It should be aquatic.

I MEAN where’s the lie

Demeter: … And where exactly do you expect me to put this?

Everyone: Australia.

Reblogging for that last exchange.

Whether or not the platypus is venomous was hiiiighly debated until like the 1980s, mostly because the potency of the male’s venom varies with the season (most potent during mating season, cause your rival can’t steal your girl if he’s writhing in pain)

Reblogging this cause it came across my dash again and I still love it.

Goldman Sachs report: “Is curing patients a sustainable business model?”

Uncategorized

mostlysignssomeportents:

In Goldman Sachs’s April 10 report, “The Genome Revolution,” its
analysts ponder the rise of biotech companies who believe they will
develop “one-shot” cures for chronic illnesses; in a moment of rare
public frankness, the report’s authors ask, “Is curing patients a
sustainable business model?”

The authors were apparently spooked by the tale of Gilead Sciences,
which developed a hepatitis C therapy that is more than 90% effective,
making $12.5B in 2015 – the year of the therapy’s release – a number
that fell to $4B this year.

The analysts are making a commonsense observation: capitalism is
incompatible with human flourishing. Markets will not, on their own,
fund profoundly effective cures for diseases that destroy our lives and
families. This is a very strong argument for heavily taxing the profits
of pharma companies’ investors and other one percenters, and then
turning the money over to publicly funded scientific research that
eschews all patents, and which is made available for free under the
terms of the Access To Medicines treaty, whereby any country that
devotes a set fraction of its GDP to pharma research gets free access to
the fruits of all the other national signatories’ research.

Humans have shared microbial destiny. If there’s one thing that
challenges the extreme libertarian conception of owing nothing to your
neighbor save the equilibrium established by your mutual selfishness,
it’s epidemiology. Your right to swing your fist ends where it connects
with my nose; your right to create or sustain reservoirs of pathogens
that will likely kill some or all of your neighbors is likewise subject
to their willingness to tolerate your recklessness.

Goldman Sachs’s analysts suggest three “cures” for the problem of
one-shot cures, and taxing the rich to fund socialized pharma research
isn’t among them; rather, they propose eschewing rare diseases, to
ensure that the pool of patients is large enough to produce a return on
their investment, or developing one-shot cures fast enough to “offset
the declining revenue trajectory of prior assets.”

https://boingboing.net/2018/04/14/shared-microbial-destiny.html

For the first time, a US president has classified the legal justification for taking publicly acknowledged actions

Uncategorized

mostlysignssomeportents:

It’s not uncommon for legal opinions from the Justice Department’s
Office of Legal Counsel to be classified; whenever the President wants
to do something nefarious – like authorizing the CIA’s program of
torture – he’ll get a memo out of the OLC, and then classify the whole
thing: the action and its justification.

But Trump’s memo justifying his decision to bomb Syria is classified, while the bombing, obviously, isn’t.

What’s more, the OLC memo explaining how the President could order an
act of war – when the Constitution explicitly says that Congress alone
can authorize this – is so secret that even Congress isn’t allowed to
see it.

That’s right: the President got a secret memo drafted that explains why
he can go to war without Congressional approval, and Congress isn’t
allowed to read that memo.

When GW Bush kept his torture-authorizing memos a secret, it was because
he wanted to keep the torture a secret, too. But Trump isn’t even
keeping up with that pretense of internal consistency. Instead, the
justification for taking an action that the President personally
announced on his Twitter feed is, “I don’t want you to know.”

https://boingboing.net/2018/04/14/secret-public-bombings.html

When algorithms surprise us

dzamieponders:

lewisandquark:

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.
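
To make the contrast concrete, here’s a toy sketch (mine, not from the original post, with made-up numbers): the “explicit” version is handed the rule, while the “learned” version only gets examples and a score, and has to stumble toward the rule by mutation and selection.

    import random

    # Explicit programming: the human states the rule outright.
    def double_explicit(x):
        return 2 * x

    # Machine learning, toy version: the human supplies only examples
    # and a score, and trial-and-error has to find the rule.
    examples = [(1, 2), (3, 6), (10, 20)]  # (input, desired output) pairs

    def score(weight):
        # Higher is better: negative total error of the rule y = weight * x
        return -sum(abs(weight * x - y) for x, y in examples)

    best = random.uniform(-10.0, 10.0)
    for _ in range(1000):
        candidate = best + random.uniform(-1.0, 1.0)  # mutate the guess
        if score(candidate) > score(best):            # keep improvements
            best = candidate

    print(best)  # drifts toward 2.0 without ever being told the rule

Everything below is about what happens when that score rewards something subtly different from what the programmer actually meant.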

This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

[Image: an empty green field that the algorithm has labeled as containing sheep]

When machine learning algorithms solve problems in unexpected ways, programmers find them – okay, yes – annoying sometimes, but often purely delightful.

So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.

Bending the rules to win

First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.

Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.

[Image: Robot is simply a tower that falls over.]
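
The arithmetic behind the exploit is simple. A back-of-the-envelope sketch (my numbers, assuming the fitness score is the horizontal distance traveled by the center of mass and the robot is a uniform tower):

    # A uniform tower of height h has its center of mass at h/2; tipping
    # from vertical to horizontal moves that point h/2 sideways -- with
    # zero walking.
    def fall_distance(height):
        return height / 2.0

    print(fall_distance(2))   # 1.0  -- a short robot barely scores
    print(fall_distance(20))  # 10.0 -- so evolution just grows taller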

Why jump when you can can-can? Another set of simulated robots was supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block, so – once again – the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.

[Image: Tall robot flinging a leg into the air instead of jumping]
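
Here’s a toy reconstruction of both broken fitness functions, with made-up “robots” (each just a dict of block-height time series); the real experiments ran full physics simulations, but the gaming pattern is the same:

    # Each toy robot maps a block name to that block's height over time.
    honest_jumper = {"body": [0, 2, 4, 2, 0], "foot": [0, 2, 4, 2, 0]}
    tall_tower    = {"top": [10, 10, 10, 10, 10], "base": [0, 0, 0, 0, 0]}
    can_can_bot   = {"body": [1, 1, 1, 1, 1], "foot": [0, 5, 10, 5, 0]}

    def jump_v1(robot):
        # "Jump height" = greatest height any block ever reaches.
        # Gamed by tall_tower, which never moves at all.
        return max(max(heights) for heights in robot.values())

    def jump_v2(robot):
        # Patched: only score the block that *started out* lowest.
        lowest = min(robot, key=lambda name: robot[name][0])
        # Still gamed: can_can_bot kicks its low foot high into the air
        # while the body never leaves the ground.
        return max(robot[lowest])

    print(jump_v1(tall_tower))   # 10 -- "jumps" without moving
    print(jump_v2(can_can_bot))  # 10 -- "jumps" with one leg-kick
    # A sturdier metric would require every block to be airborne at once.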

Hacking the Matrix for superpowers

Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.

Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces handed them a tiny bit of extra energy every time they moved. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.
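
The paper doesn’t spell out the simulator’s math, but the general failure mode is easy to demonstrate: sloppy physics arithmetic leaks energy into the system, and anything that moves gets paid for moving. A minimal sketch using a frictionless spring and a naive update rule (a discretization error rather than the exact rounding bug described above):

    x, v, dt = 1.0, 0.0, 0.1  # position, velocity, time step

    def energy(x, v):
        return 0.5 * v**2 + 0.5 * x**2  # kinetic + spring potential

    start = energy(x, v)
    for _ in range(1000):
        # Naive explicit-Euler step: the force comes from the *stale*
        # position, so every step injects a little free energy.
        x, v = x + v * dt, v - x * dt

    print(energy(x, v) / start)  # roughly 20,000x the starting energy

A robot that twitches rapidly is, in effect, running that loop as fast as it can.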

Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.

[Image: robot moving by vibrating into the floor]

Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.

Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug: if it performed a very specific series of moves at the end of one level, then instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points.

A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs – but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here.

[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]

Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.
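
The write-up doesn’t say which numeric type overflowed, but here’s a sketch assuming the force was stored as a C-style 32-bit signed integer, so a catastrophic slam wraps around and “beats” every honest landing:

    def as_int32(n):
        # Emulate two's-complement wraparound of a 32-bit signed integer.
        n &= 0xFFFFFFFF
        return n - 0x100000000 if n >= 0x80000000 else n

    gentle_landing = 500_000        # a plausible touchdown force
    fatal_slam = 4_000_000_000      # enormous force that overflows int32

    print(as_int32(gentle_landing))  # 500000
    print(as_int32(fatal_slam))      # -294967296: reads as less than zero
    # An optimizer told to minimize the *stored* force prefers the slam.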

Destructive problem-solving

Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.

Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.
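
A plausible reconstruction of the loophole, assuming the fitness test only checked for out-of-order neighbors:

    def looks_sorted(lst):
        # Typical fitness check: no adjacent pair is out of order.
        return all(a <= b for a, b in zip(lst, lst[1:]))

    print(looks_sorted([3, 1, 2]))  # False
    print(looks_sorted([]))         # True: vacuously, there are no pairs
    # Unless the fitness function also demands that the output be a
    # permutation of the input, "delete everything" is a perfect score.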

Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.

How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.
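
The 1997 programs themselves aren’t shown, but the crash makes sense if the losing side stored the “infinite” board densely, sized by the largest coordinate played. A sketch of that assumption, plus the sparse fix:

    # Dense board: memory grows with the coordinates, not the move count.
    def dense_cells(moves):
        size = max(max(x, y) for x, y in moves) + 1
        return size * size  # cells a dense simulator must allocate

    print(dense_cells([(0, 0), (2, 1)]))          # 9: a normal opening
    print(dense_cells([(0, 0), (10**9, 10**9)]))  # ~10**18: out of memory

    # Sparse board: memory grows only with the number of moves played.
    board = {}
    board[(10**9, 10**9)] = "X"  # the far-away move costs one dict entry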

In conclusion

When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny. 

Biological evolution works this way, too – as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.

So as programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it. 

Fortunately for us, “kill all humans” is really really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.

“If we were, some microbe would have learned to exploit its flaws.”

Counterpoint: pistol shrimp and waterbears clearly exploit the physics engine.

eyeshadow2600fm:

prokopetz:

That thing about how cats think humans are big kittens is a myth, y’know.

It’s basically born of false assumptions; folks were trying to explain how a naturally solitary animal could form such complex social bonds with humans, and the explanation they settled on is “it’s a displaced parent/child bond”.

The trouble is, cats aren’t naturally solitary. We just assumed they were based on observations of European wildcats – but housecats aren’t descended from European wildcats. They’re descended from African wildcats, which are known to hunt in bonded pairs and family groupings, and that social tendency is even stronger in their domesticated relatives. The natural social unit of the housecat is a colony: a loose affiliation of cats centred around a shared territory held by an alliance of dominant females, who raise all of the colony’s kittens communally.

It’s often remarked that dogs understand that humans are different, while cats just think humans are big, clumsy cats, and that’s totally true – but they regard us as adult colonymates, not as kittens, and all of their social behaviour toward us makes a lot more sense through that lens.

They like to cuddle because communal grooming is how cats bond with colonymates – it establishes a shared scent-identity for the colony and helps clean spots that they can’t easily reach on their own.

They bring us dead animals because cats transport surplus kills back to the colony’s shared territory for consumption by pregnant, nursing, or sick colonymates who can’t easily hunt on their own. Indeed, that’s why they kill so much more than they individually need – it’s not for fun, but to generate enough surplus kills to sustain the colony’s non-hunting members.

They’re okay with us messing with their kittens because communal parenting is the norm in a colony setting, and us being colonymates in their minds automatically makes us co-parents.

It’s even why many cats are so much more tolerant toward very small children, as long as those children are related to one of their regular humans: they can tell the difference between human adults and human “kittens”, and your kittens are their kittens.

Basically, you’re going to have a much easier time getting a handle on why your cat does what your cat does if you remember that the natural mode of social organisation for cats is not as isolated solitary hunters, but as a big communal catpile – and for that purpose, you count as a cat.

cat socialism

shotinthekidney:

me: alright, i’ve got a few hours to myself. should i read, write, draw, play some video games…

executive dysfunction: you’re going to scroll through tumblr until you have to go to sleep

executive dysfunction: you’re not even going to like it.