Google’s engineers say the effect is not unlike the way a person might find meaning in a cloudscape. When asked to look for something recognizable, people—and computers, it turns out—identify and “over-interpret” the outlines of things they already know.
“This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals. But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features,” wrote Google engineers Alexander Mordvintsev, Christopher Olah, and Mike Tyka in a blog post. “The results vary quite a bit with the kind of image, because the features that are entered bias the network towards certain interpretations. For example, horizon lines tend to get filled with towers and pagodas. Rocks and trees turn into buildings. Birds and insects appear in images of leaves.”
And because neural networks assess images in layers—by color, by the sorts of lines or shapes depicted, and so on—the complexity of the generated image depends on which layer the engineers ask the computer to enhance. The lowest layers respond to the contours of things, lines and shadows, whereas the highest layers are where more sophisticated imagery emerges. “For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations,” the engineers wrote.
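To make the layer-enhancement idea concrete, here is a minimal sketch of that kind of procedure in PyTorch: pick one layer of a pretrained network and repeatedly nudge the input image so that the layer responds more strongly. The choice of GoogLeNet (a relative of the networks Google described), the specific layer, and the step size and iteration count are illustrative assumptions, not settings from Google's post.

```python
import torch
import torchvision.models as models

# A pretrained image classifier; its intermediate layers hold the
# learned features the engineers describe.
model = models.googlenet(weights="DEFAULT").eval()

# Capture the output of whichever layer we choose to "enhance".
activations = {}

def hook(module, inputs, output):
    activations["target"] = output

# Assumption: any intermediate layer works. Early layers yield
# stroke-like patterns; deeper layers yield objects the network knows.
model.inception4c.register_forward_hook(hook)

def enhance(image, steps=20, lr=0.05):
    """Gradient ascent: push the image toward whatever the layer 'sees'."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image.unsqueeze(0))
        # The "loss" is the strength of the chosen layer's response,
        # which we maximize rather than minimize.
        loss = activations["target"].norm()
        loss.backward()
        with torch.no_grad():
            # Normalized gradient step keeps the updates stable.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Even random noise works as a starting image; the network fills it
# with the shapes it was trained to recognize.
dream = enhance(torch.rand(3, 224, 224))
```

Swapping the hooked layer for an earlier one (say, an edge-detecting convolution near the input) should, per the engineers' description, turn the output from animal-like shapes into simple strokes and ornaments.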