For days, rumors about the biggest advance in years in so-called complexity theory have been lighting up the Internet. That’s only fitting, as the breakthrough involves comparing networks just like researchers’ webs of online connections. László Babai, a mathematician and computer scientist at the University of Chicago in Illinois, has developed a mathematical recipe or “algorithm” that supposedly can take two networks—no matter how big and tangled—and tell whether they are, in fact, the same, in far fewer steps than the previous best algorithm. Computer scientists are abuzz, as the task had been something of a poster child for hard-to-solve problems.

“If this is correct, it’s probably the theoretical computer science result of the decade,” says Scott Aaronson, a computer scientist and blogger at the Massachusetts Institute of Technology in Cambridge.

Complexity theory is basically the study of what’s hard or easy to solve with a computer. The key question is how the number of steps it takes to solve a problem grows with the size of the input. Suppose, for example, that you want to determine whether a given number, say 983 or 105,227, is prime, meaning it is evenly divisible only by 1 and itself. The number of computational steps in that calculation grows relatively slowly with the number of digits in the number. Generally, it grows with the number of digits, n, raised to a fixed power—something akin to n^2. Expressions like that are called polynomials, so the problem is said to be solvable in “polynomial time” and is in the complexity class “P.”
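Primality really can be decided in a number of steps polynomial in the digit count. As a sketch (this is the standard Miller–Rabin test with a fixed witness set known to be deterministic for 64-bit integers, not anything described in the article):

```python
def is_prime(n: int) -> bool:
    """Miller-Rabin primality test, deterministic for n < 2**64.

    Each round is one modular exponentiation, so the total work
    grows polynomially with the number of digits of n."""
    if n < 2:
        return False
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small_primes:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small_primes:  # these witnesses suffice below 2**64
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Both of the article’s examples check out: `is_prime(983)` and `is_prime(105227)` return `True`.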

On the other hand, suppose you want to break a given number such as 21,112,331 down into all of its factors. So far, nobody has found a polynomial time algorithm to do that. So factoring is thought to be harder and to reside in a broader class of problems known as NP, for “nondeterministic polynomial.” Crudely speaking, for the harder NP problems, the number of steps is thought to blow up exponentially with the number of digits in the input—growing as something like 2^n, which gets bigger much faster than n^2. (For example, 100^2 is a mere 10,000; 2 to the 100th power is more than a million trillion trillion.)
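For contrast with primality testing, here is the most naive factoring method, trial division. It is a sketch, not a serious factoring algorithm: its worst-case step count grows exponentially with the digit count, and even the best known factoring algorithms remain superpolynomial.

```python
def factor(n: int) -> list[int]:
    """Return the prime factorization of n by trial division.

    Trying divisors up to sqrt(n) can take roughly 10**(digits / 2)
    steps: exponential in the number of digits of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors
```

Multiplying the returned factors back together recovers the original number; for example, the product of `factor(21112331)` is 21,112,331.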

Now, Babai has shown that a problem that was thought to be on the harder NP end of the spectrum is closer to the easier P end, instead. Any network—whether it represents people spreading the flu or proteins interacting within an organism—can be depicted as a set of points, or nodes, connected by straight lines, or edges, that symbolize interactions between the nodes. Because the nodes can be plopped down any old which way, two graphs that have the same arrangement of connections can appear different (see figure), especially as the graph gets bigger and more complicated. In the “graph isomorphism problem,” the challenge is to determine whether two graphs are really the same or not. Babai has found a new algorithm to solve that problem, as he announced today.
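Brute force conveys why the problem looks hard: checking every possible relabeling of one graph against the other takes factorial time. A minimal sketch using adjacency matrices (this is the naive approach, emphatically not Babai’s algorithm):

```python
from itertools import permutations

def isomorphic(adj1: list[list[int]], adj2: list[list[int]]) -> bool:
    """Naive graph isomorphism test on adjacency matrices.

    Tries all n! relabelings of the first graph's nodes, so it is
    feasible only for tiny graphs; Babai's result is about doing
    dramatically better than this."""
    n = len(adj1)
    if n != len(adj2):
        return False
    nodes = range(n)
    for perm in permutations(nodes):
        # Does relabeling node i as perm[i] make the graphs match?
        if all(adj1[i][j] == adj2[perm[i]][perm[j]]
               for i in nodes for j in nodes):
            return True
    return False
```

For example, two differently drawn three-node paths are reported isomorphic, while a three-node path and a triangle are not.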

For the previous best method, invented in 1983 by Eugene Luks, a mathematician and computer scientist at the University of Oregon in Eugene, the number of steps grows almost exponentially with the number of nodes in the networks to be compared. In Babai’s algorithm, the number of steps grows only slightly faster than polynomial time. That may sound like comparing shades of gray, but for experts that qualitative difference is thrilling. “Assuming this result holds, it would be a gem of [theoretical computer science], and an incredible thing to witness in real time,” one reader wrote on Aaronson’s blog days before Babai’s talk. “Super exciting!” wrote a reader on another blog.
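To see why “almost exponential” versus “slightly faster than polynomial” is more than shades of gray, one can compare illustrative stand-ins for the growth rates. The exact published bounds differ (Luks’s is usually quoted as exp(O(sqrt(n log n))), Babai’s as quasipolynomial, roughly n raised to a power of log n); the specific curves below are chosen for illustration only.

```python
import math

def log10_steps(n: int) -> tuple[float, float, float, float]:
    """log10 of four illustrative step counts at input size n:
    polynomial n**2, quasipolynomial n**log2(n),
    almost-exponential exp(sqrt(n * ln n)), and exponential 2**n.
    Working in log10 avoids overflowing floats for large n."""
    poly = 2 * math.log10(n)
    quasi = math.log2(n) * math.log10(n)  # n ** log2(n)
    almost_exp = math.sqrt(n * math.log(n)) / math.log(10)
    expo = n * math.log10(2)
    return poly, quasi, almost_exp, expo
```

At n = 1000 the four counts come to roughly 10^6, 10^30, 10^36, and 10^301 steps: the quasipolynomial curve sits far closer to the polynomial one than to the exponential one, and as n grows it falls ever further below the almost-exponential curve.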

Ironically, even though networks and graphs are everywhere you look nowadays, the new algorithm isn’t likely to have broad practical applications, Aaronson says. That’s because current algorithms already can quickly solve the graph isomorphism problem for the vast majority of graphs. If it holds up, the new algorithm simply proves that the tough cases that stymie the current algorithms can also be solved efficiently, Aaronson explains. And the new algorithm also does not touch on the biggest question in complexity theory: whether the class NP is really different from the class P. Researchers generally assume that’s true, and if they’re wrong, many things—such as Internet cryptography techniques—would be incredibly vulnerable. But there’s no proof that NP does not equal P.

The real advance in the new work is to shift a key problem from the hard category to the easy one, Aaronson says. The graph isomorphism problem had been something of an oddball, he says: It was thought to be hard yet known to have certain technical properties often associated with easier problems. Now, it may just be that the problem isn’t that hard after all.

Of course, other researchers still have to check Babai’s work. He is presenting the algorithm in two talks at the university, one today and one on 24 November. “You still need to see the details,” Aaronson says. “It’s still possible that someone of the stature of Babai could make a mistake.”

Source: Mathematician claims breakthrough in complexity theory | Science/AAAS | News