Generative Adversarial Networks (GANs): An Overview | Hacker Noon


Samadrita Ghosh (@samadritaghosh)

AI & Data Science Writer | Co-Author of Data Science for Enterprises | Mentor @upGrad

GAN, or Generative Adversarial Network, is one of the most fascinating inventions in the field of AI. Many of the amazing news articles we come across every day about machines achieving splendid human-like tasks are, in fact, the work of GANs!

For instance, if you have ever heard of AI bots that create human-like paintings, it is essentially GANs behind the awe-inspiring strokes. Or if you have heard of AI bots that create human faces from scratch, faces that do not even exist, that too is entirely the imaginative work of powerful GANs.

“GANGogh”: Paintings of flowerpots by GANs; Courtesy: towardsdatascience

GANs have many applications, and one is often led to wonder how machines can achieve such fascinating and, indeed, extensively creative feats so efficiently.

If you observe the real world, you might have noticed that an individual, whether from the animal or plant kingdom, often grows stronger when it faces competition. A seed that beats its siblings by absorbing more of the nutrients and water in the soil grows into a strong and healthy tree, increasing the chances of stronger descendants as the generations proceed.

A boxing contender estimates the quality of his or her fellow contenders and prepares accordingly. The stronger the contenders, the higher the quality of preparation.

Almost mimicking the real world, GANs follow suit and are built on the architecture of adversarial training. Adversarial training is essentially "learning by comparison": the information possessed by the adversary is studied so that new instances of the same category can be produced with the fewest flaws, such that the adversary cannot detect that the information comes from an alien source.

There are two counterparts in a GAN: the generator and the discriminator. The generator's job is to tweak its outputs such that they are indistinguishable from the originals. The discriminator's job is to judge whether the generated piece is up to the mark and can be classified at par with the already present samples. Otherwise, the cycle repeats until the generator produces a good enough image that the discriminator passes as a credible sample.
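This generator-versus-discriminator loop can be sketched in a few lines. Below is a minimal, illustrative toy (not a production GAN): the "real" data are samples from a 1-D Gaussian, the generator is a simple linear map of noise, the discriminator is logistic regression, and the gradients of the usual GAN losses are written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: a Gaussian centred at 4 (a stand-in for real images).
    return rng.normal(4.0, 0.5, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + c)
lr = 0.02

for step in range(5000):
    z = rng.normal(0.0, 1.0, 64)
    x_real, x_fake = real_batch(64), a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)

    # Discriminator update: push d(real) -> 1 and d(fake) -> 0.
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update: fool the updated discriminator, pushing d(fake) -> 1.
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(float(np.mean(samples)))  # should drift toward the real mean of 4
```

After training, the generator's samples cluster near the real data's mean, which is exactly the "good enough to fool the judge" equilibrium described above.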

GAN Architecture; Courtesy: GeeksforGeeks

For instance, if a GAN is trained on a set of poems, it should be able to come up with poems of its own. However, as splendid as it sounds, training GANs is not an easy task. There are several ongoing research initiatives to identify ways to minimize the flaws in the generated data.

Applications of GANs are manifold, and among these, the healthcare industry has benefited majorly. For instance, GANs are used effectively to provide super-resolution in medical imaging. Radiography, ultrasound, elastography, magnetic resonance imaging, and several other such salient tests generate varied images which often need refining, especially if the imaging equipment is of relatively low quality.


Imagine using a GAN to provide super-resolution to, say, radiography images. This can be done by training the GAN on several previously acquired high-resolution radiography images. The information from high-quality radiography equipment can then be leveraged even in areas where only relatively poorer-quality equipment is available.

To use the GAN architecture, the resolution of high-quality images is usually brought down, and the low-resolution versions are fed to the generator. The generator tries to increase the resolution such that the discriminator passes the result as a credible image. The cycle continues until the generator's loss with respect to the discriminator (the adversarial loss) is minimized.
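The data-preparation step described above can be sketched as follows. This is a simplified illustration (the helper names are hypothetical, and a full super-resolution GAN would use convolutional networks): high-resolution images are average-pooled down to low resolution to form (low-res, high-res) training pairs, and the generator's adversarial loss is small only when the discriminator is fooled.

```python
import numpy as np

def downsample(hr, factor=4):
    """Average-pool a (H, W) high-res image by `factor` to simulate a low-res input."""
    h, w = hr.shape
    return hr[: h - h % factor, : w - w % factor].reshape(
        h // factor, factor, w // factor, factor
    ).mean(axis=(1, 3))

def adversarial_loss(d_fake, eps=1e-8):
    """Generator's loss: near zero when the discriminator is fooled (d_fake -> 1)."""
    return float(-np.mean(np.log(d_fake + eps)))

# Build (low-res, high-res) training pairs from a batch of high-res "scans".
rng = np.random.default_rng(0)
hr_batch = rng.random((8, 64, 64))
pairs = [(downsample(hr), hr) for hr in hr_batch]

lr_img, hr_img = pairs[0]
print(lr_img.shape, hr_img.shape)  # (16, 16) (64, 64)
```

In training, the generator would take `lr_img` as input, emit a 64x64 candidate, and be penalized both by this adversarial loss and, typically, by a pixel-wise distance to `hr_img`.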

It requires a high degree of patience to train GANs for super-resolution because the generator has to learn to reproduce both finer details and larger structures at the same time. This makes training the generator difficult, since it is very easy for the discriminator to spot tiny differences in the finer structures. An optimum convergence point must therefore be reached at which neither the generator nor the discriminator overwhelms the other.

Indeed, it is a delicate process, but once achieved, it has the potential to change the face of not only healthcare imaging but also a variety of use cases across major sectors worldwide.

