
qfantasydragon:

bunjywunjy:

yesterday for April Fool’s my workplace had a short training article on telling computer-generated faces from real ones, and one of the tricks mentioned was “count the teeth”, and I just wanted to say that it’s both ironic and kind of horrifying how society has unwittingly cycled right back to IF YE MEET A MAN ON THE ROAD, COUNT HIS FINGERS LEST YE DEAL UNKNOWING WITH A FAE

The fae were time traveling AIs.

Fairyland? The future.

Mushroom rings were disguised time portals. Explains how people could be gone for a day and come back 100 years later: the AIs dropped them off later for fun.

The food that was so good, everything else tasted like ashes? Modern cooking laced with drugs.

The faerie “magic” was just advanced science. The only-telling-the-truth thing comes from an integral part of their code, Asimov-style. The fear of iron comes from a fear of magnets that could be used to wipe their hard drives.

The Wild Hunt/Changelings? AIs who find the whole thing hilarious. Robot humor at its finest. Look at the squishy extinct sapients run.

It’s late and I’m tired and have spent far too much time thinking about this.

What happens when you let computers optimize floor plans (I love this SO MUCH)


mostlysignssomeportents:

I eagerly await our new AI masters’ world of ultraoptimized, uncannily organic, evolving floor plans. Joel Simon:

Evolving Floor Plans is an experimental research project exploring speculative, optimized floor plan layouts. The rooms and expected flow of people are given to a genetic algorithm which attempts to optimize the layout to minimize walking time, the use of hallways, etc. The creative goal is to approach floor plan design solely from the perspective of optimization and without regard for convention, constructability, etc. The research goal is to see how a combination of explicit, implicit and emergent methods allow floor plans of high complexity to evolve. The floorplan is ‘grown’ from its genetic encoding using indirect methods such as graph contraction and emergent ones such as growing hallways using an ant-colony inspired algorithm.

Adds Simon: “I have very mixed feelings about this project.”
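For a rough sense of what handing rooms and expected foot traffic to a genetic algorithm looks like, here is a deliberately tiny Python sketch. Simon’s actual system uses a far richer encoding (graph contraction, ant-colony hallway growth); everything below, from the room list to the traffic numbers to the truncation selection, is made up for illustration.

    # Toy genetic algorithm: evolve (x, y) room positions to minimize
    # expected daily walking distance. Not Simon's encoding or code.
    import math
    import random

    ROOMS = ["kitchen", "office", "bath", "bedroom"]
    TRAFFIC = {("kitchen", "office"): 12, ("bedroom", "bath"): 20,
               ("office", "bath"): 5, ("kitchen", "bedroom"): 8}  # trips/day

    def random_layout():
        # A genome is just a coordinate for every room.
        return {r: (random.uniform(0, 30), random.uniform(0, 30)) for r in ROOMS}

    def cost(layout):
        # Lower is better: walking distance weighted by how often it's walked.
        return sum(n * math.dist(layout[a], layout[b])
                   for (a, b), n in TRAFFIC.items())

    def mutate(layout):
        # Nudge one room a little; offspring inherit everything else.
        child = dict(layout)
        r = random.choice(ROOMS)
        x, y = child[r]
        child[r] = (x + random.gauss(0, 1), y + random.gauss(0, 1))
        return child

    population = [random_layout() for _ in range(50)]
    for generation in range(200):
        population.sort(key=cost)
        elite = population[:10]  # truncation selection
        population = elite + [mutate(random.choice(elite)) for _ in range(40)]

    print("best layout cost:", round(cost(min(population, key=cost)), 1))

Left unconstrained, this toy version happily piles every room onto the same point, which is the project’s whole premise in miniature: optimization with no notion of constructability produces answers no human would call a floor plan.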

https://boingboing.net/2018/07/30/what-happens-when-you-let-comp.html

The Visual Chatbot

lewisandquark:

[image]

There is a delightful algorithm called Visual Chatbot that will answer questions about any image it sees. It’s a demo by a team of academic researchers that goes along with a recent machine learning research paper (and a challenge for anyone who’d like to improve on it). Its performance is pretty state-of-the-art, and it’s meant to demonstrate image recognition, language comprehension, and spatial awareness.

However, there are a couple of interesting things to note about this algorithm.

  1. It was trained on a large but very specific set of images.
  2. It is not prepared for images that aren’t like the images it saw in training.
  3. When confused, it tends not to admit it.

Now, Visual Chatbot was indeed trained on a huge variety of images. It can answer fairly involved questions about a lot of different things, and that’s impressive. The problem is that humans are very weird, and there are still many things it’s never seen. (This turns out to be a major challenge for self-driving cars.) And given Visual Chatbot’s tendency to react to confusion by digging itself a deeper hole, this can lead to some pretty surreal interactions.

[image]

Another thing about Visual Chatbot is that most of the images it’s been trained on have something in them: a bird, a person, an animal. It may have never seen an image of just rocks, or a plain stick lying on dirt. So even if there isn’t an animal there, it will be convinced there is. This means this bot always thinks it’s on the best safari ever. (For the record, it described the stick lying on dirt as “a bird is standing on a rock in the snow”.)

[image]

If you ask it enough questions, could you get an idea of how it made its mistakes?

[image]

An algorithm that can explain itself is really useful. Algorithms make mistakes all the time, or accidentally learn the wrong thing. This particular algorithm didn’t have trouble with hallucinating sheep like some other algorithms I tested. But it did have similar problems with goats in trees, and now I finally got to ask why.


[Goat image: Fred Dunn]

Upon further questioning, however, it also decided that dogs also have horns, and birds do not fly. Actually, it turns out that a lot depends on how you ask the question. The answer to “do bunnies fly?” is “no”, but the answer to “can bunnies fly?” is “yes”, so either the algorithm is answering a lot of these questions at random, or bunnies *can* fly but choose not to. (The construction “Do <blank> have <blank>?” seems to almost always result in a “yes”, so I can report that yes, bunnies do have spaceships and lightsabers.)

[image]
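Visual Chatbot itself only runs as a web demo, but the same phrasing experiment is easy to reproduce against any off-the-shelf visual question answering model. Here’s a sketch using the publicly available ViLT VQA checkpoint on Hugging Face; it’s a different, newer model than the one behind Visual Chatbot, so expect different (but probably equally confident) failures, and “bunny.jpg” stands in for whatever test photo you have on hand.

    # Probe how question phrasing changes a VQA model's answer.
    # Uses ViLT from Hugging Face, *not* the model behind Visual Chatbot.
    from PIL import Image
    from transformers import ViltProcessor, ViltForQuestionAnswering

    name = "dandelin/vilt-b32-finetuned-vqa"
    processor = ViltProcessor.from_pretrained(name)
    model = ViltForQuestionAnswering.from_pretrained(name)

    image = Image.open("bunny.jpg").convert("RGB")

    for question in ["do bunnies fly?", "can bunnies fly?",
                     "do bunnies have spaceships?"]:
        inputs = processor(image, question, return_tensors="pt")
        logits = model(**inputs).logits
        print(question, "->", model.config.id2label[logits.argmax(-1).item()])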

So I wouldn’t necessarily believe Visual Chatbot’s answer to my question about the zoo rocks thing. In fact, it seems to have learned to give explanations that are total lies – if it doesn’t know the color of something, it’ll answer “it’s a black and white photo so i can’t tell” without realizing that this excuse only works on an actual black and white photo.

It’s too bad this is so tricky. Since algorithms can often be biased, it would be great if we could ask them “Why did you show me that ad?” or “Why did you decline my application?”. But getting a sensible answer from them may not be all that straightforward, especially if they pretend they know more than they actually do.

Bonus material: Visual Chatbot explains the plot of The Last Jedi. Enter your email and be edified (contains spoilers, sort of).

[image]

SkyKnit: When knitters teamed up with a neural network

jumpingjacktrash:

lewisandquark:


[Make Caows and Shapcho – MeganAnn]


[Pitsilised Koekirjad Cushion Sampler Poncho – Maeve]


[Lacy 2047 – michaela112358]

I use algorithms called neural networks to write humor. What’s fun about neural networks is that they learn by example: give them a bunch of some sort of data, and they’ll try to figure out rules that let them imitate it. They power corporate finances, recognize faces, translate text, and more. I, however, like to give them silly datasets. I’ve trained neural networks to generate new paint colors, new Halloween costumes, and new candy heart messages. When the problem is tough, the results are mixed (there was that one candy heart that just said HOLE).
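For the curious, the “learn by example” recipe behind most of these projects is a character-level language model: feed it text one character at a time, train it to guess the next one, then sample from it. Here’s a minimal PyTorch sketch; the real projects used tools like char-rnn, and every size, step count, and filename below is illustrative.

    # Minimal character-level LSTM: learn to imitate a pile of text.
    # "training_text.txt" is a hypothetical corpus file.
    import torch
    import torch.nn as nn

    text = open("training_text.txt").read()
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text])

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            out, state = self.lstm(self.embed(x), state)
            return self.head(out), state

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)

    for step in range(5000):  # train: predict each next character
        i = torch.randint(0, len(data) - 129, (1,)).item()
        chunk = data[i:i + 129].unsqueeze(0)
        logits, _ = model(chunk[:, :-1])
        loss = nn.functional.cross_entropy(logits.squeeze(0), chunk[0, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

    # Generate: sample one character at a time, feeding each back in.
    itos = {i: c for c, i in stoi.items()}
    x, state, out = data[:1].unsqueeze(0), None, []
    for _ in range(300):
        logits, state = model(x, state)
        x = torch.multinomial(logits[0, -1].softmax(-1), 1).unsqueeze(0)
        out.append(itos[x.item()])
    print("".join(out))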

One of the toughest problems I’ve ever tried? Knitting patterns.

I knew almost nothing about knitting when @JohannaB@wandering.shop sent me the suggestion one day. She sent me to the Ravelry knitting site, and to its adults-only, often-indecorous LSG forum, who as you will see are amazing people. (When asked how I should describe them, one wrote “don’t forget the glitter and swearing!”)

And so, we embarked upon Operation Hilarious Knitting Disaster.

The knitters helped me crowdsource a dataset of 500 knitting patterns, ranging from hats to squids to unmentionables. JC Briar exported another 4728 patterns from the site stitch-maps.com.

I gave the knitting patterns to a couple of neural networks that I collectively named “SkyKnit”. Then, not knowing if they had produced anything remotely knittable, I started posting the patterns. Here’s an early example.

[image]

MrsNoddyNoddy wrote, “it’s difficult to explain why 6395, 71, 70, 77 is so asthma-inducingly funny.” (It seems that a 6000-plus stitch count is, as GloriaHanlon put it, “optimism”). 

As training progressed, and as I tried some higher-performance models, SkyKnit improved. Here’s a later example.

[image]

Even at its best, SkyKnit had problems. It would sometimes repeat rows, or leave them out entirely. It could count rows fairly reliably up to about 22, but after that would start haphazardly guessing random largish numbers. SkyKnit also had trouble counting stitches, and would confidently declare at the end of certain lines that they contained 12 stitches when they were nothing of the sort.
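That stitch-counting failure is mechanical enough that you can sketch the check knitters do in their heads: walk the row, track how many stitches each operation adds or removes, and see what’s left on the needle. A toy version, with a stitch table simplified to the point of parody:

    # Does a row's declared stitch count survive the actual stitches?
    import re

    # Net change in stitches on the needle per operation (simplified).
    DELTA = {"k": 0, "p": 0, "yo": +1, "kfb": +1, "k2tog": -1, "ssk": -1}

    def stitches_out(row, stitches_in):
        count = stitches_in
        # "k2tog" and "ssk" must be matched before plain "k".
        for op, rep in re.findall(r"(yo|kfb|k2tog|ssk|k|p)(\d*)", row.lower()):
            count += DELTA[op] * int(rep or 1)
        return count

    print(stitches_out("k1, yo, ssk, k2tog, yo, k1", 6))  # 6: increases cancel
    print(stitches_out("k1, yo, yo, k2tog", 6))  # 7, whatever the row claims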

But the knitters began knitting them. This possibly marks one of the few times in history when a computer generated code to be executed by humans.


[Mystery lace – datasock]


[Reverss Shawl – citikas]


[Frost – Odonata]

The knitters didn’t follow SkyKnit’s directions exactly, as it turns out. For most of its patterns, doing them exactly as written would result in the pattern immediately unraveling (due to many dropped stitches), or turning into long whiplike tentacles (due to lots of leftover stitches). Or, to make the row counts match up with one another, they would have had to keep repeating the pattern until they’d reached a multiple of each row count – sometimes this was possible after a few repeats, while other times they would have had to make the pattern tens of thousands of stitches long. And other times, missing rows made the directions just plain impossible. 
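The “tens of thousands of stitches” arithmetic is just least common multiples: for rows of different widths to tile into one repeating pattern, the repeat has to be a common multiple of every row’s stitch count, and a single awkward row blows that up. With made-up row counts:

    import math
    print(math.lcm(7, 12, 15))       # 420 stitches: fine after a few repeats
    print(math.lcm(7, 12, 15, 149))  # 62580 stitches: effectively unknittable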

So, the knitters just started fixing SkyKnit’s patterns.

Knitters are very good at debugging patterns, as it turns out. Not only are there a lot of knitters who are coders, but debugging is such a regular part of knitting that the complicated math becomes second nature. Notation is not always consistent, some patterns need to be adjusted for size, and some simply have mistakes. The knitters were used to taking these problems in stride. When working with one of SkyKnit’s patterns, GloriaHanlon wrote, “I’m trying not to fudge too much, basically working on the principle that the pattern was written by an elderly relative who doesn’t speak much English.”

Each pattern required a different debugging approach, and sometimes knitters would each produce their own very different-looking versions. Here are three versions of “Paw Not Pointed 2 Stitch 2”.


[Top – ActualJellyfish; Middle – LadyAurian; Bottom (sock version) – ShoelessJane]

Once, knitter MeganAnn came across a stitch that didn’t even exist (something SkyKnit called ‘pbk’). So she had to improvise. “I googled it and went with the first definition I got, which was ‘place bead and knit’.” The resulting pattern is “Ribbed Rib Rib” below (note bead).


[Ribbed Rib Rib – MeganAnn]

Even debugged, the patterns were weird. Like, really, really nonhumanly weird.

“I love how organic it comes out,” wrote Vastra. SylviaTX agreed, loving “the organic seeming randomness. Like bubbles on water or something.”

SkyKnit’s patterns were also a pain. Michaela112358 called Row 15 of Mystery Lace (above) “a bit of a head melter”, commenting that it “lacked the rhythm that you tend to get with a normal pattern”. Maeve_ish wrote that Shetland Bird Pat “made my brain hurt so I went to bed.” ShoelessJane asked, “Okay, now who here has read Snow Crash?”


[Winder Socks (2 versions) – TotesMyName]

“I was laughing a few days ago because I was trying to math a Skyknit pattern and my brain…froze. Like, no longer could number at all. I stared blankly at my scribbles and at the screen wondering what had happened til somehow I rebooted. Yup, Skyknit crashed my brain.” – Rayn63


[Paw chain 2 – HMSChicago]

On the pattern SkyKnit called “Cherry and Acorns Twisted To”:

“Couple notes on the knitting experience, which while funny wasn’t terribly pleasurable: Because there’s no rhythm or symmetry to the pattern, I felt I was white-knuckling it through each line, really having to concentrate. There are also some stitch combinations that aren’t very comfortable to execute physically, YO, SSK in particular.

That said, I’m nearly tempted to add a bit of random AI lace to a project, perhaps as cuffs on a sweater or a short-row lace panel in part of a scarf, like Sylvia McFadden does in many of her shawl designs. As another person in the thread said, it would add a touch of spider-on-LSD.” –SarahScully


[cherry and acorns twisted to – Sarah Scully]

BridgetJ’s comments on “Butnet Scarf”:

“Four repeats in to this oddball, daintily alien-looking 8-row lace pattern, and I have, improbably, begun to internalize it and get in to a rhythm like every other lace pattern.

I still have a lingering suspicion that I’m knitting a pattern that could someday communicate to an AI that I want to play a game of Global Thermonuclear War, but I suppose at least I’ll have a scarf at the end of it?” –BridgetJ


[butnet scarf – BridgetJ]

There was also this beauty of a pattern, that SkyKnit called “Tiny Baby Whale Soto”. GloriaHanlon managed somehow to knit it and described it as “a bona fide eldritch horror. Think Slenderman meets Cthulu and you wouldn’t be far wrong.”


[Tiny Baby Whale Soto – GloriaHanlon]

Other than being a bit afraid of Tiny Baby Whale Soto, the knitters seem happy to do the bidding of SkyKnit, brain melts and all.

“I cast on for a lovely MKAL with a designer I totally trust and became immediately suspicious because the pattern made sense. All rows increase in an orderly manner. There are no “huh?” moments. There are no maths at all…it has all been done for me. I thought I would be happy, yo. Instead, I am kind of missing the brain scrambling and I keep looking for pigs and tentacles. Go figure.” – Rayn63

[image]

Check out the rest of the SkyKnit-generated patterns, and the glorious rainbow of weird test-knits, at SkyKnit: The Collection and InfiKnit.

There’s also a great article in The Atlantic that talks a bit more about the debugging.

If you feel so inspired (and don’t mind the kind-hearted yet vigorous swearing), join the conversation on the LSG Ravelry SkyKnit thread – many of SkyKnit’s creations have not yet been test-knit at all, and others transform with every new knitter’s interpretation. Compare notes, commiserate, and do SkyKnit’s inscrutable bidding!

Heck yeah there is bonus material this week. Have some neural net-generated knitting & crochet titles. Some of them are mixed with metal band names for added creepiness. Enter your email here to get more like these:

Chicken Shrug
Snuggle Features
Cartube Party Filled Booties
Corm Fullenflops
Womp Mittens
Socks of Death
Tomb of Sweater
Shawl Ruins

i love this so much. knit nerds are the best