Anime images to color
Keras implementation of different algorithms to color gray images of anime characters.

Data Used (Source)

Since the danbooru image dataset is too big, only the moeimouto-faces.zip dataset has been used.

Preprocessing

For a better colorization algorithm, I've converted the RGB images to LAB images and used the L channel as input and the AB channels as output. The reason is that using the L channel as input lets you keep as much of the general information of the image as possible, whereas using one of the RGB channels as input would exclude the information of the two other channels. For more information on LAB images, you can go to the links below.

The diagram below shows the data preprocessing process I take for the analysis.
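As a rough illustration of the preprocessing step described above (not the exact repository code), the sketch below converts RGB images to LAB with scikit-image and splits the L channel from the AB channels; the helper name, image size, and scaling ranges are my own assumptions.

```python
# Minimal sketch of the LAB preprocessing: read RGB images, convert to LAB,
# and split the L channel (input) from the AB channels (output).
import numpy as np
from skimage.color import rgb2lab
from skimage.io import imread
from skimage.transform import resize

def load_lab_pairs(image_paths, size=(128, 128)):
    """Return (X, Y): L-channel inputs and AB-channel targets, scaled to roughly [-1, 1]."""
    X, Y = [], []
    for path in image_paths:
        rgb = resize(imread(path), size)        # float image in [0, 1], channels preserved
        lab = rgb2lab(rgb)                      # L in [0, 100], A/B roughly in [-128, 127]
        X.append(lab[:, :, 0:1] / 50.0 - 1.0)   # L channel -> [-1, 1]
        Y.append(lab[:, :, 1:] / 128.0)         # AB channels -> approx. [-1, 1]
    return np.array(X), np.array(Y)
```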

Algorithms Used (with Reference)

Alpha Version algorithm by Emil Wallner

This is a simple CNN encoder-decoder algorithm that Emil Wallner created to colorize images. Roughly, the algorithm has the structure shown in the diagram below, and you can check the detailed Keras code at the website link above.
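The sketch below is a minimal Keras encoder-decoder in the spirit of the Alpha version; the filter counts, depths, and image size are illustrative choices of mine, not Emil Wallner's exact model.

```python
# Illustrative encoder-decoder colorizer: strided convolutions compress the
# L-channel input, upsampling + convolutions expand back to the AB channels.
from tensorflow.keras.layers import Conv2D, Input, UpSampling2D
from tensorflow.keras.models import Model

def build_alpha_colorizer(size=128):
    inp = Input(shape=(size, size, 1))                                   # L channel
    # Encoder: strided convolutions shrink the spatial resolution.
    x = Conv2D(64, 3, strides=2, activation='relu', padding='same')(inp)
    x = Conv2D(128, 3, strides=2, activation='relu', padding='same')(x)
    x = Conv2D(256, 3, strides=2, activation='relu', padding='same')(x)
    # Decoder: upsampling + convolutions restore the resolution.
    x = UpSampling2D()(x)
    x = Conv2D(128, 3, activation='relu', padding='same')(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, 3, activation='relu', padding='same')(x)
    x = UpSampling2D()(x)
    out = Conv2D(2, 3, activation='tanh', padding='same')(x)             # AB channels in [-1, 1]
    return Model(inp, out)

model = build_alpha_colorizer()
model.compile(optimizer='adam', loss='mse')
```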

U-Net

According to the inventor of the algorithm, U-Net was originally intended as a convolutional network architecture for fast and precise segmentation of images. While searching for colorization algorithms, however, I've seen quite a few people using U-Net for colorization, and since U-Net also has an encoder-decoder structure suitable for colorization, I've edited the U-Net Keras code ( ) a little so that it can be used for colorization. The diagram below shows the architecture of the U-Net algorithm.
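Below is a minimal U-Net-style sketch adapted to the L-to-AB setup, meant only to show the characteristic skip connections; the depths and filter counts are assumptions, not the code I actually adapted.

```python
# Minimal U-Net-style colorizer sketch: the defining feature is the skip
# connection that concatenates encoder features into the decoder at the
# same spatial resolution.
from tensorflow.keras.layers import Concatenate, Conv2D, Input, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

def build_unet_colorizer(size=128):
    inp = Input(shape=(size, size, 1))                           # L channel
    c1 = Conv2D(64, 3, activation='relu', padding='same')(inp)
    p1 = MaxPooling2D()(c1)
    c2 = Conv2D(128, 3, activation='relu', padding='same')(p1)
    p2 = MaxPooling2D()(c2)
    b  = Conv2D(256, 3, activation='relu', padding='same')(p2)   # bottleneck
    u2 = UpSampling2D()(b)
    u2 = Concatenate()([u2, c2])                                 # skip connection
    c3 = Conv2D(128, 3, activation='relu', padding='same')(u2)
    u1 = UpSampling2D()(c3)
    u1 = Concatenate()([u1, c1])                                 # skip connection
    c4 = Conv2D(64, 3, activation='relu', padding='same')(u1)
    out = Conv2D(2, 1, activation='tanh')(c4)                    # AB channels
    return Model(inp, out)
```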

Full Version by Emil Wallner (Alpha Version algorithm with Fusion Layer)

In Emil's post, he didn't end up with just the Alpha algorithm, but further improved it by applying a concept called a 'Fusion Layer'. Basically, what the fusion layer does (in my case of coloring anime faces) is that when an input comes into the algorithm, it adds information about which anime character the input is to the encoded vector. Simply put, it is based on the assumption that classifying the input first would help the algorithm give a better colorization result. Note: Emil used the vector output of InceptionV3 as the fusion layer, whereas I used the vector output of ResNet as the fusion layer.

If you are interested in the concept of the fusion layer, below is a diagram briefly showing the concept of the fusion layer (for more details, visit the link above).
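One common way to realize this fusion idea in Keras is sketched below: tile the classifier's embedding vector over the encoder's spatial grid and concatenate it channel-wise. The feature-map size, channel counts, and embedding dimension are assumptions, and the classifier embedding (e.g. a pooled ResNet output) is taken as given rather than computed here.

```python
# Illustrative fusion step: repeat the classifier embedding across the
# spatial grid of the encoder output and concatenate it channel-wise,
# then mix the channels back down with a 1x1 convolution.
from tensorflow.keras.layers import Concatenate, Conv2D, Input, RepeatVector, Reshape
from tensorflow.keras.models import Model

def build_fusion_block(h=16, w=16, enc_channels=256, embed_dim=2048):
    enc = Input(shape=(h, w, enc_channels))          # encoder feature map
    emb = Input(shape=(embed_dim,))                  # classifier embedding vector
    tiled = RepeatVector(h * w)(emb)                 # (h*w, embed_dim)
    tiled = Reshape((h, w, embed_dim))(tiled)        # (h, w, embed_dim)
    fused = Concatenate(axis=-1)([enc, tiled])       # (h, w, enc_channels + embed_dim)
    fused = Conv2D(enc_channels, 1, activation='relu')(fused)
    return Model([enc, emb], fused)
```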

GAN

Originally introduced by Ian Goodfellow in 2014, GAN is still a popular deep-learning algorithm used for various purposes. Recently, I read a paper (1) that used DCGAN (a GAN with a CNN architecture) for image colorization, so I also decided to apply the algorithm to my anime face colorization. As GAN has become more popular as a deep-learning algorithm, people have also been focusing on its disadvantages, and many new versions of GAN trying to remove those disadvantages have been invented. WGAN (Wasserstein GAN) by Arjovsky and Bottou (2017) (1) is one of those new versions; it applies the concept of the Wasserstein distance (Earth Mover's Distance). The Python code for the GAN I've written is originally from Erik Linder-Norén's GitHub (2).
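As a rough sketch of what the Wasserstein-distance idea changes in practice (not Erik Linder-Norén's actual implementation), the critic in the original WGAN is trained with the loss below and kept approximately Lipschitz by clipping its weights; label conventions and the clip value are illustrative.

```python
# Minimal sketch of the WGAN critic loss and weight clipping.
import tensorflow as tf

def wasserstein_loss(y_true, y_pred):
    # One common Keras convention: y_true = -1 for real, +1 for generated samples.
    # Minimizing this raises the critic's score on real images and lowers it on fakes,
    # so the gap approximates the Wasserstein (Earth Mover's) distance.
    return tf.reduce_mean(y_true * y_pred)

def clip_critic_weights(critic, clip_value=0.01):
    # Original WGAN keeps the critic roughly 1-Lipschitz by clipping its weights.
    for w in critic.trainable_weights:
        w.assign(tf.clip_by_value(w, -clip_value, clip_value))
```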