I recently gave a talk at the Open Data Science Conference that took place in London on the 8th and 9th of October. Being a graph analyst, I wanted to highlight the latest advancements in the field. Amongst them, the greatest that comes to mind is Laszlo Babai's quasi-polynomial algorithm for the Graph Isomorphism (GI) problem (paper published early this year).

Though very excited about the idea of presenting the study, I could not help but be discouraged by the lengthy (89-page) paper Babai released. Not so much because of the complexity of the notions or theorems presented in it, but more due to the difficulty of conveying them in a 45-minute talk. My concern was twofold: first, to convey the importance of this result, and second, to approach the algorithmic analysis from a high-level point of view in order to provide the audience with an intuitive understanding of what was done. Below I provide a brief overview of the ideas presented in the talk. The presentation is embedded at the end of the blog post.

# The problem

GI is a simple problem to define. Informally, it asks whether two graphs with the same number of nodes and edges share the same topology, i.e. whether one can be turned into the other simply by relabelling its nodes.
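To make the definition concrete, here is a minimal brute-force sketch (my own illustration, not anything from Babai's paper): it tries every relabelling of one graph's nodes and checks whether the edges line up with the other graph's edges.

```python
from itertools import permutations

def are_isomorphic(adj_a, adj_b):
    """Brute-force isomorphism check for two graphs given as
    adjacency sets keyed by node indices 0..n-1.

    Tries every relabelling (permutation) of adj_a's nodes and
    checks whether the relabelled edge set equals adj_b's edge set.
    """
    n = len(adj_a)
    if n != len(adj_b):
        return False
    nodes = range(n)
    edges_b = {frozenset((u, v)) for u in nodes for v in adj_b[u]}
    for perm in permutations(nodes):
        mapped = {frozenset((perm[u], perm[v]))
                  for u in nodes for v in adj_a[u]}
        if mapped == edges_b:
            return True
    return False

# A triangle with a tail, and the same shape with nodes relabelled.
g1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
g2 = {0: {3}, 1: {2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(are_isomorphic(g1, g2))  # True
```

Note that this sketch tries up to n! permutations, which is exactly the "astronomical number of ways" that makes the problem hard in practice: for a graph with just 60 nodes, n! already exceeds the number of atoms in the observable universe.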

Unfortunately, simple problem definitions rarely imply simple problem solutions. Due to the astronomical number of ways in which a finite number of nodes in a graph can be linked to one another, this problem is regarded as one of the hardest in Complexity Theory today, with state-of-the-art methods requiring huge amounts of processing power, time and money. More importantly, the problem relates to a vast number of practical applications in fields like chem- and bio-informatics, electronic design automation, security, cryptography, social structure analysis and many more. Thus, the impact of solving this problem faster is huge. But that is not all there is to it. With his algorithm, Babai not only managed to beat a record held by Eugene M. Luks for more than 30 years, but also got the Computer Science community to ponder once more on the holy grail of Computer Science: the P vs. NP problem.

# Complexity classes

See, all problems that can be solved by a computer belong to different complexity classes. Informally, these classes represent a way of defining a problem's level of difficulty. You can think of **P** as the class of problems that are easy to solve, while **NP** concerns problems that are hard to solve.

There are, of course, problems so hard that they go beyond these two classes. The intriguing aspect of **P** and **NP** is that we are not really sure they are distinct. It has been shown in the past that problems thought to belong to **NP** were in fact **P** problems (primality testing is a famous example). This could also be the case with the GI problem. As researchers continue to show this for more and more problems, one day we may be able to demonstrate that the two classes collapse into one, which would have huge implications as it would challenge our understanding of the world in many ways.

# What Babai did

The most interesting part of Babai's algorithm is that it is intended for solving a different problem: that of string isomorphism (SI).

Of course, in order to solve GI using that algorithm, one has to first convert a graph into a string. The approach of converting one problem into another is well known amongst complexity theorists as reduction. Since the reduction becomes part of the solution, it must also be included in the complexity analysis of the problem. Intuitively, if it takes more time to convert the problem than to solve it, then what is the gain? In his paper, Babai shows that the GI problem is polynomial-time ('fast') reducible to the SI problem. He then moves on to explain the quasi-polynomial algorithm he developed.
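To give a feel for what such a reduction looks like, here is a toy sketch (my own simplification, not Babai's actual construction, which works with arbitrary permutation groups): a graph becomes the row-major flattening of its adjacency matrix, and two such strings are "isomorphic" if some relabelling of the n nodes, acting on the n*n positions, maps one onto the other.

```python
from itertools import permutations

def graph_to_string(adj, n):
    """Encode a graph (adjacency sets keyed by 0..n-1) as the
    row-major flattening of its adjacency matrix: a '0'/'1'
    string of length n*n."""
    return "".join("1" if j in adj[i] else "0"
                   for i in range(n) for j in range(n))

def string_isomorphic(s, t, n):
    """Toy SI check: the permutation group acting on the n*n string
    positions is the one induced by relabelling the n graph nodes.
    Brute force only; Babai's algorithm handles this step in
    quasi-polynomial time with far deeper group-theoretic machinery.
    """
    for perm in permutations(range(n)):
        if all(s[perm[i] * n + perm[j]] == t[i * n + j]
               for i in range(n) for j in range(n)):
            return True
    return False

# The same triangle-with-a-tail pair: isomorphic as graphs,
# hence their strings are equivalent under a node relabelling.
g1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
g2 = {0: {3}, 1: {2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(string_isomorphic(graph_to_string(g1, 4),
                        graph_to_string(g2, 4), 4))  # True
```

The encoding itself takes only polynomial time (one pass over all node pairs), which is what makes it a valid polynomial-time reduction: all the hardness is pushed into the string-isomorphism question.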

Hope you will enjoy the talk. Until next time…