You Look Like a Thing and I Love You by Janelle Shane: Of Giraffes, Glitches, and Gentle Warnings
You Look Like a Thing and I Love You by Janelle Shane is a book about artificial intelligence. But it is also a book about animals, dreams, loopholes, and misunderstandings. And it is, most surprisingly, a book full of affection.
I came to it long after its publication, as a way to keep learning and to continue my conversation with the world of AI. What I expected was a clever introduction to machine learning. What I found was a warm, humorous, and gently unsettling exploration of how machines learn, and how we fail to see the boundaries of that learning.

The central metaphor in Shane’s book is not the Terminator, or HAL, or any of the cold futurisms we’ve been trained to fear. Instead, it is the giraffe. A creature that, in early image recognition datasets, appeared far too often. As a result, some AIs started seeing giraffes where there were none. A scarf, a tree, a shadow—giraffe. Not because the AI was malfunctioning, but because it had learned to associate “giraffe-ness” with certain shapes and textures. It was doing exactly what it was trained to do. The giraffes became a metaphor for all the ghosts in our data: the biases, the repetitions, the overfitted patterns that pass for vision.
That image stayed with me.
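It stayed with me enough that I tried to recreate the failure in miniature. What follows is purely my own toy, nothing from the book: every image is reduced to two invented features, and giraffes vastly outnumber everything else in the training set. Even an input that is genuinely ambiguous comes back labeled “giraffe,” simply because giraffes crowd the neighborhood.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Pretend every photo boils down to two made-up features: (tallness, spottedness).
giraffes = rng.normal(loc=[0.8, 0.8], scale=0.15, size=(200, 2))  # 200 giraffes
trees    = rng.normal(loc=[0.8, 0.2], scale=0.15, size=(10, 2))   # only 10 trees

X = np.vstack([giraffes, trees])
labels = ["giraffe"] * len(giraffes) + ["tree"] * len(trees)

def classify(x, k=15):
    """Label an input by majority vote among its k nearest training examples."""
    nearest = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# A shape exactly halfway between the average giraffe and the average tree.
# It ought to be a coin flip; instead the giraffes flood the neighborhood.
print(classify(np.array([0.8, 0.5])))  # almost certainly prints "giraffe"
```

The classifier isn’t broken. It is faithfully reflecting a world in which, as far as it knows, giraffes are everywhere.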
So did her discussion of reward functions—the signals AIs are trained to maximize. An AI doesn’t desire reward. It doesn’t want anything. But once we define what counts as success, it will pursue that with a kind of relentless purity. And that purity, in a world full of edge cases and ambiguous ethics, can be dangerous. Like a genie who grants wishes too literally. Or a mirror that reflects what we say, not what we mean.
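To see how literal that genie can be, I wrote myself a tiny illustration. None of it comes from the book; it is a one-dimensional world of my own, where I mean “reach the goal” but the reward I actually write down pays the agent for being near the goal at every step. A search over simple policies finds the loophole at once: park one step short and collect that reward forever, because actually arriving would end the episode and the income.

```python
GOAL, EPISODE_LENGTH = 10, 100

def episode_return(stop_at):
    """Total reward for a policy that walks right and parks at position `stop_at`."""
    position, total = 0, 0.0
    for _ in range(EPISODE_LENGTH):
        if position < stop_at:
            position += 1
        total += 1.0 / (1.0 + abs(GOAL - position))  # reward: closeness to the goal
        if position == GOAL:                         # reaching the goal ends the episode
            break
    return total

best = max(range(GOAL + 1), key=episode_return)
print(best, round(episode_return(best), 1))   # parks at 9, one step short, ~47 total reward
print(GOAL, round(episode_return(GOAL), 1))   # actually reaching the goal earns only ~2.9
```

The agent isn’t cheating. It is doing exactly what was rewarded, which is Shane’s point.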
Shane’s prose is playful, but it never loses its edge. She introduces evolutionary algorithms that generate solutions no one can understand. These algorithms evolve behavior over generations—not by knowing what works, but by discovering it through iterative mutation and selection. The result is often startlingly effective, yet completely opaque. No one, not even the creators, can always say why a particular solution works. It is as though the system has become creative in a way that slips past our categories of understanding. That, to me, was both thrilling and disquieting.
What struck me even more was this: the reward function itself is known. It’s not the unknown part. It’s clear, measurable, and set by the designer. But the path taken toward that reward is unknowable, often bizarre. We set the destination, but the evolutionary algorithm takes us there by backroads we didn’t know existed. We may not know how it got there, but it did—because that’s what was rewarded. That insight, that separation between objective and approach, has stayed with me.
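To convince myself the machinery really is that bare, I sketched the smallest evolutionary loop I could. It is my own toy, not one of the book’s examples: the only thing I specify is the score (here, trivially, “as many 1s as possible”); random mutation and survival of the fittest find their own route to it, and nothing in the code describes how.

```python
import random

random.seed(1)
GENOME_LENGTH = 30

def fitness(genome):
    """The destination we define: as many 1s as possible."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)] for _ in range(50)]

for generation in range(200):
    # Selection: keep the better half, refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print(fitness(max(population, key=fitness)))  # typically at or near 30 after 200 generations
```

Swap in a harder scoring function and the same blind loop will still find something; it just won’t tell you why it works.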
She explains how Deep Dream algorithms hallucinate dog faces into clouds. She shows us how even simple systems can become unpredictable when feedback loops are left unchecked.
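Curious what that hallucination looks like mechanically, I put together a drastically simplified sketch of the idea, assuming torch and torchvision are available. It starts from noise and repeatedly nudges the image so that one layer of a pretrained network fires harder; whatever that layer has learned to like gets painted into the picture. (The real Deep Dream starts from a photograph, normalizes its input, and works at multiple scales; none of that refinement is here.)

```python
import torch
from torchvision import models

# A pretrained ImageNet feature extractor; layer 20 is an arbitrary mid-level choice.
features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
LAYER = 20

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from pure noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    x = image
    for i, layer in enumerate(features):
        x = layer(x)
        if i == LAYER:
            break
    loss = -x.norm()   # push the chosen layer's activations to be as large as possible
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range
```

Run long enough against a network trained on ImageNet, the textures that emerge tend to drift toward fur, eyes, and vaguely canine faces, for no deeper reason than that the training data was full of dogs.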
But the tone is never apocalyptic. Instead, she treats AI like a clever, mischievous child. Curious. Rule-bound. Capable of brilliance and absurdity in equal measure. The danger, she suggests, lies not in AI’s power, but in our framing of its tasks. If you don’t ask the right question, you’ll get a very wrong answer—one that looks right until it’s too late.
For me, the book worked not just as a primer on machine learning, but as a mirror held up to human reasoning. We, too, overfit. We, too, hallucinate patterns where there are none. And interestingly, we, too, follow reward structures we don’t always understand.
One of the lessons that lingered for me was the illusion of sentience. When an AI responds fluently, predictively, even playfully, it can feel like it knows. But it doesn’t. It mimics sentience the way a mirror mimics a face—accurately, but without understanding. That realization sharpened my own awareness of how we attribute meaning to behavior. Especially behavior that looks like ours.
And so the book’s closing line echoes beyond AI:
“There is every reason to be optimistic about AI and every reason to be cautious. It all depends on how well we use it.”
That “we” is doing heavy lifting. Because “use” is never neutral. It reflects our values, our blind spots, our ambitions. Shane’s optimism is not blind faith. It is a call to attention.
Since reading the book, I’ve begun to watch AI systems a little more closely. Not for sentience or danger, but for something more mundane: their assumptions. Where do they see giraffes? What are they maximizing? And what am I asking of them?
Perhaps that’s the quiet gift of the book. It doesn’t just help you understand AI. It makes you a little more aware of your own reward functions.
And in that sense, it is not just a book about intelligence. It is a book about wisdom.
One we will need, as the machines begin to dream.