The ESRGAN AI Upscale non-Duke thread
#1 Posted 10 January 2019 - 08:31 AM
https://drive.google...3P4?usp=sharing
Some examples:
This one was taken from a hi-res version of the original scanned painted artwork (512x356) found on the KQ6 game CD-ROM, and it upscales to 2048x1460 with incredible results!:
The in-game image:
#2 Posted 10 January 2019 - 09:55 AM
#3 Posted 10 January 2019 - 12:54 PM
One could recreate an entire game in AGS with the scaled up graphics...but one would be crazy. SCI Companion will allow you to create your own SCI games and even decompile and edit the scripts of existing SCI games so you can port, say, 16 colour SCI games to VGA SCI games. But currently there are no tools for SCI32 640x480 SVGA.
This post has been edited by MusicallyInspired: 10 January 2019 - 12:55 PM
#4 Posted 10 January 2019 - 07:31 PM
Riven Upscale Album
This post has been edited by MusicallyInspired: 10 January 2019 - 07:32 PM
#5 Posted 12 January 2019 - 03:31 AM
Here's a shot from Kyrandia fed straight to ESRGAN (then scaled down to 640x400 with Sinc interpolation):
And here's the same image that was first scaled to 4x its original size with the xBRZ Scaler Test tool, then back to 320x200 with Sinc before processing by the Manga 109 model:
You can notice that the first image has some jagged, rough edges (e.g. at the foot of the mountain on the bottom left) that are basically leftovers of the low-res pixels, while the second one has smoother lines in those areas.
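For reference, the softening step itself is easy to reproduce; a minimal Pillow sketch might look like this (it assumes the 4x xBRZ image has already been saved from the Scaler Test tool, and the filenames are just placeholders):

```python
# Softening pass before ESRGAN: take the 4x xBRZ output saved from the Scaler
# Test tool and scale it back down to the original 320x200 with Pillow's
# Lanczos filter (a windowed-sinc resampler, standing in for GIMP's Sinc option).
from PIL import Image

xbrz_4x = Image.open("kyrandia_xbrz_4x.png")          # 1280x800 xBRZ result
softened = xbrz_4x.resize((320, 200), Image.LANCZOS)  # back to the source resolution
softened.save("kyrandia_softened.png")                # this goes into the Manga 109 model
```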
This post has been edited by MrFlibble: 12 January 2019 - 03:31 AM
#8 Posted 12 January 2019 - 06:50 AM
Quote
One could recreate an entire game in AGS with the scaled up graphics...but one would be crazy. SCI Companion will allow you to create your own SCI games and even decompile and edit the scripts of existing SCI games so you can port, say, 16 colour SCI games to VGA SCI games. But currently there are no tools for SCI32 640x480 SVGA.
Ooow.
This post has been edited by Fantinaikos: 12 January 2019 - 06:53 AM
#9 Posted 12 January 2019 - 07:57 AM
https://www.reddit.c...sets_i_trained/
Sadly it produces nearly complete garbage from video game graphics and doesn't compete with Manga 109 at all. However, Manga itself is not without flaws, because for some reason it was trained on images with JPEG compression artifacts (such as mosquito noise), which it diligently reproduces in upscaled images. I wonder, though, why no one has tried training a model on actual video game assets like sprites; I've got a few ideas on possible sources for the data.
MusicallyInspired, are you registered at the ResetEra forums? They don't allow free emails for registration, so I can't join. If you are, maybe you could tell the folks over there about the softening method?
#10 Posted 12 January 2019 - 08:14 AM
MrFlibble, on 12 January 2019 - 03:31 AM, said:
Going to put myself out there, but I kinda like the first one better - there's just a charm about the pixelation that produces a certain sentimentality.
This post has been edited by Forge: 12 January 2019 - 08:15 AM
#11 Posted 12 January 2019 - 10:02 AM
Forge, on 12 January 2019 - 08:14 AM, said:
Personally, I'd prefer pixelation that is spread uniformly. In the original image (the above is scaled down), there's a noticeable contrast between the poorly processed "pixelated" areas and, for example, the trees in the foreground that got enhanced detail:
(this is part of the original image zoomed in 2x with nearest neighbour for better viewing)
#12 Posted 12 January 2019 - 10:08 AM
#13 Posted 12 January 2019 - 10:50 AM
MrFlibble, on 12 January 2019 - 03:31 AM, said:
I just process straight through. There was one scene from Gabriel Knight that translated awfully and I did a bit of pixel editing before upscaling to fix it up. I included both versions though. Other than that, I've just been faithfully putting the images straight through without any alterations.
Quote
And here's the same image that was first scaled to 4x its original size with the xBRZ Scaler Test tool, then back to 320x200 with Sinc before processing by the Manga 109 model:
Did you apply aspect ratio correction to account for tall pixels (4:3 as opposed to 16:10)? Some games were made with square pixels in mind, but most in the DOS days treated pixels as tall.
That's a problem I had. I tried fixing the aspect before upscaling, but it seems to work better to fix it after upscaling; otherwise you sometimes get violently skewed and jagged warps in the final image. I stretched mine with Lanczos filtering.
Quote
Yes. Manga 109 seems to respect contrasting lights and colours strongly while blending shading and texture more. Because of this it preserves a lot of the staircasing effect of aliased pixel lines.
Fantinaikos, on 12 January 2019 - 06:50 AM, said:
It takes several seconds to render one image with a high-end GPU. Won't be real-time playable in anything for a while yet.
MrFlibble, on 12 January 2019 - 07:57 AM, said:
https://www.reddit.c...sets_i_trained/
Sadly it produces nearly complete garbage from video game graphics and doesn't compete with Manga 109 at all. However, Manga itself is not without flaws, because for some reason it was trained on images with JPEG compression artifacts (such as mosquito noise), which it diligently reproduces in upscaled images. I wonder, though, why no one has tried training a model on actual video game assets like sprites; I've got a few ideas on possible sources for the data.
Much of the original scanned artwork is available for Monkey Island 2, Police Quest 1 VGA, Eco Quest 2, and a few shots for the King's Quest games. I was planning on using it to train my own model; I just haven't gotten around to it yet. I've also seen artists on Twitter planning to upscale with Manga, make some artistic touch-ups afterwards, and then re-train the model with those touch-ups, which I think is an awesome idea too. But I'm no artist and I can't do that. The best I can do is touch up at the low-res pixel level, which, thankfully, can have some meaningful effects on the final upscale.
I'd like to see a model trained to handle dithering properly and transform it into an actual high-colour gradient. As of right now, anything EGA or with a lot of dithering just comes out looking like bead art or something.
Quote
No, I'm not.
Jimmy 100MPH, on 12 January 2019 - 10:08 AM, said:
Agreed! I hated scaler effects. This stuff is black magic, though!
This post has been edited by MusicallyInspired: 12 January 2019 - 10:52 AM
#14 Posted 12 January 2019 - 12:18 PM
MusicallyInspired, on 12 January 2019 - 10:50 AM, said:
That's a problem I had. I tried fixing the aspect before upscaling, but it seems to work better to fix it after upscaling; otherwise you sometimes get violently skewed and jagged warps in the final image. I stretched mine with Lanczos filtering.
I never tried doing aspect ratio correction before scaling, as the source image is too small. It's better done afterwards, when there's more than enough data to fill the extra lines that have to be added to give the result its proper dimensions. I too settled on Lanczos interpolation, as it seems to produce the best results.
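In case anyone wants to replicate this, a rough Pillow sketch of correcting the aspect ratio after the upscale rather than before it could look like the following; the filenames and sizes (a 4x result from a 320x200 source, stretched to 4:3) are only an example:

```python
# Aspect ratio correction applied after upscaling: stretch the 4x ESRGAN output
# of a 320x200 source (1280x800) vertically to 1280x960 so it displays at 4:3,
# using Lanczos interpolation as discussed above.
from PIL import Image

upscaled = Image.open("screenshot_esrgan_4x.png")        # 1280x800 ESRGAN output
corrected = upscaled.resize((1280, 960), Image.LANCZOS)  # tall-pixel 4:3 correction
corrected.save("screenshot_esrgan_4x_43.png")
```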
MusicallyInspired, on 12 January 2019 - 10:50 AM, said:
I came up with the xBR softening solution when playing around with waifu2x. It seems that some contrasts are too high in indexed images, so they are not processed in the same way as the true-colour images with more continuous colour transitions that both models were trained on.
MusicallyInspired, on 12 January 2019 - 10:50 AM, said:
Both approaches sound wonderful. I've seen some comparisons of processed screenshots with original art from Monkey Island; it would be interesting to train the model on this kind of data.
MusicallyInspired, on 12 January 2019 - 10:50 AM, said:
This will most certainly require indexed low-res images with dithering, produced from high-res true-colour images. I guess you can achieve some good results with existing tools like mtPaint or that Image Analyzer Jimmy suggested.
What I didn't get is whether ESRGAN is supposed to be trained on low-res images that are 25% the size of the ground truth (the original models were trained on images scaled down with bicubic interpolation in MATLAB) or 50%; the former would mean a lot of detail being lost when it comes to video game art.
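If the pairs are prepared the same way as for the stock 4x models, generating them is simple enough. Here's a rough Pillow sketch of producing a 25%-size LR image with bicubic scaling (standing in for the MATLAB imresize), plus an indexed, dithered variant along the lines of the dithering-aware model idea above; the paths and the 16-colour palette are made up:

```python
# Sketch of building an LR/HR training pair for a 4x model: downscale the
# ground-truth image to 25% with bicubic interpolation, and also save an
# indexed, dithered version of the LR image for a hypothetical model that
# learns to turn dither patterns back into smooth gradients.
from PIL import Image

hr = Image.open("train_hr/scanned_artwork.png").convert("RGB")
lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
lr.save("train_lr/scanned_artwork.png")

# 16-colour indexed LR with Floyd-Steinberg dithering
lr_dithered = lr.quantize(colors=16, dither=Image.FLOYDSTEINBERG)
lr_dithered.save("train_lr_dithered/scanned_artwork.png")
```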
I also think that, apart from the JPEG artifacts, there's the problem that the Manga 109 set contains both drawings and text. I've noticed a tendency in this model to extrapolate small details into vertical lines; for example, in this Warcraft image the Wizard's ear became something weird (otherwise the image is very good!):
For comparison, the waifu2x result for the same image is generally not as good, but it has no trouble with the shape of the ears and the like:
Also, if you look closely at this Daggerfall lady's right hand, the little finger is completely detached:
This post has been edited by MrFlibble: 12 January 2019 - 12:18 PM
#16 Posted 13 January 2019 - 09:58 AM
#17 Posted 13 January 2019 - 10:36 AM
HulkNukem, on 12 January 2019 - 03:12 PM, said:
The person who does this told me they only use Topaz Gigapixel for the upscale... but then they edited the response, and now the first post in that thread references the Doom neural upscale, which is a combination of results produced by the proprietary NVIDIA network and Gigapixel. At least, this is what the author of the Hexen upscale seems to imply.
I know that the Hexen upscale was mentioned in the ESRGAN thread at the ResetEra forums, but from all of the above it doesn't seem to be using ESRGAN. (Well, to be fair, that thread is not exclusively about ESRGAN but about neural network upscaling in general.)
#18 Posted 13 January 2019 - 11:02 AM
Many otherwise quite nice end results end up having much harsher contrast due to this.
ESRGAN results look cool, but then suddenly you notice it trying to apply its weird patterns in places where they don't belong.
These are still miles better than typical upscales, since the added detail ends up being surprisingly close to something you could imagine being there.
#19 Posted 13 January 2019 - 11:35 AM
#20 Posted 15 January 2019 - 04:48 AM
This post has been edited by leilei: 15 January 2019 - 04:49 AM
#22 Posted 15 January 2019 - 08:17 AM
https://forum.dune2k...neural-upscale/
https://forums.cncnz...it-graphics-cc/
And Nyerguds ran all his high-res C&C renders through the Manga model, which mostly turned out very well:
https://imgur.com/a/2HhPlOz
#23 Posted 15 January 2019 - 12:09 PM
https://imgur.com/a/kbBSdmZ
All the images were processed in the same way. I first softened them by scaling to 400% with xBRZ in the Scaler Test tool, then applied the Pixelise filter in GIMP at a 4-pixel radius and scaled back to the original dimensions with nearest-neighbour interpolation. I chose this method over Sinc interpolation because that one would result in additional sharpening when going from the ESRGAN output to the 640x480 size.
I then fed the softened images to ESRGAN with the Manga 109 model, cleaned up the result in GIMP using Selective Gaussian Blur with a 2-pixel radius and a threshold of 15, scaled down to 640x480 (960x736 in the case of the Warcraft II shot) with Sinc interpolation, and converted the images to their respective original palettes using mtPaint with the Dithered (effect) dithering method. In a few cases it didn't get some colours quite right, but it's rather neat overall.
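For anyone who'd rather script the post-processing, most of it can be approximated in Python. The sketch below stands in for GIMP's Pixelise with a box downscale plus nearest-neighbour upscale, for Selective Gaussian Blur with OpenCV's bilateral filter, and for mtPaint's dithered palette conversion with Pillow's quantize(); the xBRZ 400% pass still comes from the Scaler Test tool, and the filenames, sizes and filter parameters are only illustrative, not exact matches for the GIMP/mtPaint settings:

```python
# Rough Python approximation of the pipeline described above. GIMP's Pixelise
# filter is emulated with a 4x box downscale followed by a nearest-neighbour
# upscale, Selective Gaussian Blur with OpenCV's edge-preserving bilateral
# filter, and mtPaint's dithered palette conversion with Pillow's quantize().
import cv2
from PIL import Image

# --- softening pass before ESRGAN (xBRZ 400% comes from the Scaler Test tool)
xbrz = Image.open("shot_xbrz_400.png")                        # e.g. 1280x960
blocks = xbrz.resize((xbrz.width // 4, xbrz.height // 4), Image.BOX)
pixelised = blocks.resize(xbrz.size, Image.NEAREST)           # ~Pixelise, 4 px
softened = pixelised.resize((320, 240), Image.NEAREST)        # back to source size
softened.save("shot_softened.png")                            # feed this to ESRGAN

# --- clean-up after the Manga 109 model
esrgan = cv2.imread("shot_softened_esrgan_x4.png")            # 4x ESRGAN output
smoothed = cv2.bilateralFilter(esrgan, 5, 30, 5)              # ~Selective Gaussian Blur
result = Image.fromarray(cv2.cvtColor(smoothed, cv2.COLOR_BGR2RGB))
result = result.resize((640, 480), Image.LANCZOS)             # sinc-style downscale

# --- convert back to the original palette with dithering
palette_img = Image.open("shot_original.png").convert("P")    # original 256-colour shot
final = result.quantize(palette=palette_img, dither=Image.FLOYDSTEINBERG)
final.save("shot_final_640x480.png")
```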
#25 Posted 16 January 2019 - 03:10 AM
What are the odds of applying this upscaling method (or any other that seems to suit the need) to all Duke3D textures? A similar project is currently WIP over at the Doom forums with very promising results.
This could increase our chances of getting an all-new HRP with faithful textures - and a realistic probability of actually getting it completed. It could be done for textures first, which would probably be easier/faster, then sprites. It wouldn't help with models, though.
This post has been edited by NightFright: 16 January 2019 - 03:22 AM
#26 Posted 16 January 2019 - 04:06 AM
MusicallyInspired, on 15 January 2019 - 01:40 PM, said:
That's because this is not the PC version. I found a set of images with pre-rendered sprites on Interplay's website using the Wayback Machine (link). I honestly have no idea what version this is supposed to be; maybe the Sega 32X one with the image cropped to 320x200. I just tried doing this with a Sega 32X shot from MobyGames, and the cropped interface looked exactly the same.
NightFright, on 16 January 2019 - 03:10 AM, said:
What are the odds of applying this upscaling method (or any other that seems to suit the need) to all Duke3D textures? A similar project is currently WIP over at the Doom forums with very promising results.
Sure, here's a quick test:
https://imgur.com/a/BjKVMNi
All three were xBRZ-softened (initially for a waifu2x test), no edits after applying the Manga 109 model. They'll probably look a bit better when scaled to 2x the original size.
#27 Posted 16 January 2019 - 04:17 AM
This post has been edited by NightFright: 16 January 2019 - 04:17 AM
#28 Posted 16 January 2019 - 04:21 AM
MrFlibble, on 16 January 2019 - 04:06 AM, said:
https://imgur.com/a/BjKVMNi
All three were xBRZ-softened (initially for a waifu2x test), no edits after applying the Manga 109 model. They'll probably look a bit better when scaled to 2x the original size.
interesting.
not a fan. it seems too pastel(?) for my taste, but interesting.
#29 Posted 16 January 2019 - 10:39 AM
Forge, on 16 January 2019 - 04:21 AM, said:
not a fan. it seems too pastel(?) for my taste, but interesting.
I think it looks better when scaled back down to 2x and converted to the original palette:
Definitely way better than my earlier attempts with waifu2x:
NightFright, on 16 January 2019 - 04:17 AM, said:
The results hidfan got with the Doom assets are a blend of three processes: a regular NVIDIA texture upscale, NVIDIA PhotoHallucination, and Topaz Gigapixel. Initially the results of these separate processes were available at the neural upscale Google Drive (but they have since been removed, or the link to them is no longer available); the straightforward NVIDIA upscaling (no hallucination) seemed pretty bland to me, here's an example:
You can notice the pixel staircase effect in some areas of the texture. I'm not sure whether this would be smoothed out if the image received prior softening, though.
Also, the initial results added some noise due to the PhotoHallucination process, which seemed like a good thing (for creating a "pixelised" look) but turned out rather badly when actually used in the game, so hidfan had to clean it up; I think it was done manually.
#30 Posted 16 January 2019 - 04:18 PM
MrFlibble, on 16 January 2019 - 10:39 AM, said:
I don't mean the work itself is bad. It's neat and interesting, but to me it looks a little too clean(?)
I don't know how to put it properly. Like the discoloration on the wall tiles - it looks like somebody walked up with a putty knife and scraped the colors onto the wall.
I'm not a graphics artist, just a casual. I don't belong in this gallery. Somebody put a tuxedo on me and stood me in the corner.