Imagine you’re Edwin Hubble in 1923, about to prove that the Milky Way is just one galaxy in a universe filled with them. You have just spotted a faraway variable star. You write down a note about that star on a photographic plate: “VAR!”
Or imagine it’s 1977, and you’re reading a printout from a radio telescope that listens for aliens, red pen in hand, when you find a long, strange, still-unexplained signal. “Wow!” you write.
Or imagine it’s August 2017, you’re signed on to Slack, and you’ve just seen the smoldering wreckage of a collision of two neutron stars. “!” you type to your colleagues, unable to muster anything else.
Each of these astronomical classics highlights one particular aspect of discovery: the thrill of knowing something about nature that no one else does. But these moments from the highlight reel of astronomy’s history minimize the more prosaic aspects of research, the tedium of peering at a screen for hours on end, blinking, clicking, or executing a computer script, again and again, forever, and maybe not finding anything noteworthy at all.
But now AI is here to do the boring part.
In a new paper published in the journal Monthly Notices of the Royal Astronomical Society, a neural network has successfully flipped through images of more than 20,000 galaxies and pulled out a few hundred of the most intriguing. “I think it will become the norm since future astronomical surveys will produce an enormous quantity of data,” said Carlo Petrillo of the University of Groningen in the Netherlands, in a statement. “We don’t have enough astronomers to cope with this.”
Petrillo’s specific quarry was gravitational lenses: rare patterns in the night sky that let astronomers fathom the depths of many of modern physics’ most pressing mysteries. Picture a galaxy so massive that space is distorted by its gravity, as Einstein’s theory of general relativity predicts. As rays of light from galaxies far beyond it pass through that warped space around the foreground galaxy, they curve, bending toward the Earth. This makes the foreground galaxy like a lens made of space, not glass.
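The strength of the bending has a standard textbook summary (not from Petrillo’s paper): the angular radius of the ring of light a perfectly aligned lens would produce, the Einstein radius,

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
```

where $M$ is the mass of the foreground galaxy and $D_L$, $D_S$, and $D_{LS}$ are the distances to the lens, to the background source, and between the two. More mass means a wider ring, which is why the shapes of lenses encode the masses that make them.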
Astronomers often call gravitational lenses cosmic telescopes: they amplify the light from very distant galaxies, opening a window to older, more primitive parts of the universe than we could otherwise see. That is, if astronomers can find them. Since two galaxies need to line up, one almost directly behind the other, by random chance, lenses are rare, and graduate students have only so much time to comb through endless images hunting them down. Lenses all look a certain way, like broken arcs of light ringing a galaxy or a galaxy cluster. But each one is different, and the arms of spiral galaxies can look like arcs too, confounding the search. It’s the kind of task you’d think demands human intuition. Until an AI pulls it off, that is.
Petrillo’s group turned to convolutional neural networks, which are often used for other image-analysis tasks like facial recognition. They showed their network 100,000 images of gravitational lenses. Since fewer than a thousand real lenses are known, they used fake ones simulated in a computer. They let their network build up a feeling for how lenses look. Then they turned it loose on real galaxies, where it found 761 candidates, which the team winnowed down to 56 after checking by human eye.
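The idea can be caricatured in a few lines of Python. This toy uses a single hand-built, zero-mean “ring detector” filter as a matched filter; it is my illustration of the convolutional principle, not the paper’s actual network, which learns thousands of filters on its own from the simulated training images.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def make_fake_lens(size=21, radius=6):
    """Toy simulated lens: a thin bright ring, like an Einstein-ring arc.
    Real simulated training images are far more realistic than this."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    return np.exp(-(r - radius) ** 2)

def make_non_lens(size=21, width=3.0):
    """Toy non-lens: an ordinary, centrally concentrated galaxy."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    return np.exp(-(r / width) ** 2)

# One fixed ring filter stands in for the many filters a real network
# would learn for itself from the 100,000 training images.
template = make_fake_lens()
ring_filter = template - template.mean()

def lens_score(img):
    """Filter response: higher means more ring-like, i.e. more lens-like."""
    return conv2d(img, ring_filter).max()

# The ringed image excites the filter far more than the plain galaxy does.
print(lens_score(make_fake_lens()), lens_score(make_non_lens()))
```

Ranking thousands of real images by a score like this, then having humans inspect only the top candidates, is essentially the winnowing step that cut 761 candidates down to 56.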
According to astrophysicist Brian Nord at Fermilab, Petrillo’s work is part of a boom in applying artificial intelligence to this exact problem. From January to March, he estimates, about eight papers on the subject came out from different groups, including a draft of this one. While most of these efforts tested themselves only on simulated pictures, as a proof of concept, Petrillo’s network and another made by graduate student Colin Jacobs seem to have identified real lenses that no human had ever seen.
And AI can do more than just find lenses. It can use them. In April, Nord says, he visited Stanford and talked about how making changes at just “two little places” in a neural network’s code could let a network switch from recognizing lenses to measuring them—specifically, it could allow the network to add up the mass creating the lensing effect. By August, Stanford researchers did it. Their network analyzes lenses some 10 million times faster than older, simulation-based methods. Like any good scientist, the network had graduated from simply classifying—lens or not a lens?—to measuring.
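Nord doesn’t spell out what the “two little places” are, but a plausible reading is a network’s final layer and its loss function: swap a sigmoid output and cross-entropy loss (lens or not?) for a linear output and squared-error loss (how much mass?). A toy numpy sketch of that swap, with random made-up feature vectors standing in for the features a CNN would actually learn:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8))  # stand-in for CNN features of 4 images

# Classification head: sigmoid output + cross-entropy loss -> "lens or not?"
w_cls = rng.normal(size=8)
p_lens = 1.0 / (1.0 + np.exp(-(features @ w_cls)))   # probabilities in (0, 1)
labels = np.array([1.0, 0.0, 1.0, 0.0])              # made-up true labels
cross_entropy = -np.mean(labels * np.log(p_lens)
                         + (1 - labels) * np.log(1 - p_lens))

# Regression head: linear output + squared-error loss -> "how much mass?"
w_reg = rng.normal(size=8)
mass_estimate = features @ w_reg                     # unbounded real values
true_mass = np.array([1.2, 0.0, 0.8, 0.0])           # made-up targets
mse = np.mean((mass_estimate - true_mass) ** 2)

print(cross_entropy, mse)
```

Everything upstream of the head—the convolutional layers that learned what lenses look like—can stay exactly as it was, which is what makes the switch so small.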
How far this goes is an open question. In the past few years, cosmologists have started touting gravitational lenses as the solution to many of the field’s woes. Besides just magnifying distant galaxies, lenses work like cosmic bathroom scales: Their exact curvature traces the mysterious, unidentified dark matter that surrounds all galaxies. They can also serve as hyper-accurate yardsticks for astronomers hoping to clock the speed at which the universe is expanding. Better still, it seems like new AI methods and faster-than-ever computers can understand them.
Sending AI after lenses might only be a fad, a technique with pros and cons that is folded into other kinds of computer classification, Nord thinks. Just another part of the astronomer’s tool kit. Or it could transform discovery. It might soon be possible to give a neural network images of galaxies and simply set it free—allowing it to find the lenses, assess them, and report back with some overarching measurement like the expansion rate of the universe. Provided astronomers could trust it, of course.
“Now you’re getting into the era that is scary for scientists, or may someday be scary,” Nord says. There would be no written notes for the history books, no exclamation points. The excitement of finding something no one else had would go unfelt, or would at least be unreadable.
It’s striking to consider what gravitational lenses and neural networks have in common. Science uses them both as tools—whether made by random chance or by design, they give us a way to pry open nature’s secrets. But here it might get recursive. Someday, not too far from now, we could be empowering a scientific instrument to find telescopes out in the void, to look through them on its own, and to tell us what it learned.