Duke4.net Forums: The ESRGAN AI Upscale non-Duke thread - Duke4.net Forums


The ESRGAN AI Upscale non-Duke thread

User is offline   Phredreeke 

#271

Ok, if you prefer a cleaner upscale

Attached Image: tile0007_rlt.png

This is upscaled with an interpolation of the XBRDD and Spongebob Reloaded-SWAG models, then downscaled with a one-pixel offset in both the X and Y directions.
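The "downscale with a one-pixel offset" step can be sketched in numpy. This is only a guess at the mechanics (shift the sampling grid by one pixel, then box-average each block), not Phredreeke's actual script:

```python
import numpy as np

def offset_downscale(img: np.ndarray, factor: int = 2, offset: int = 1) -> np.ndarray:
    """Box-downscale an HxWxC image by `factor`, with the sampling grid
    shifted by `offset` pixels in both X and Y (edge pixels are repeated)."""
    h, w = img.shape[:2]
    # Shift right/down: pad the top-left edge, then crop back to HxW.
    shifted = np.pad(img, ((offset, 0), (offset, 0), (0, 0)), mode="edge")[:h, :w]
    # Box filter: average each factor-by-factor block.
    trimmed = shifted[: h - h % factor, : w - w % factor].astype(np.float64)
    blocks = trimmed.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).round().astype(img.dtype)
```

The shift means each output pixel averages a different neighbourhood than a plain box downscale would, which changes which source pixels dominate the result.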

Edit: Took the opportunity to tweak my redithering script. Do you like this better?

Attached Image: tile1352.png

Comparison here https://imgsli.com/NjA0ODc

This post has been edited by Phredreeke: 09 July 2021 - 07:13 PM

0

User is offline   MrFlibble 

#272

Phredreeke, on 09 July 2021 - 05:40 PM, said:

Ok, if you prefer a cleaner upscale

Doesn't appear to look much different from an XBR upscale (only blurrier), or an HQ2x upscale.
Posted Image Posted Image
Where's the point in using AI upscales then?

Phredreeke, on 09 July 2021 - 05:40 PM, said:

Edit: Took the opportunity to tweak my redithering script. Do you like this better?

I guess the new version is an improvement, but... why not just use ordered dithering and call it a day? At least it'll look consistent.

Also the wheel looks like it was modelled in clay :)
0

User is offline   Phredreeke 

#273

Redithering may be a misnomer; it doesn't use any dithering algorithm. It just subtracts one image from another, scales it up 2x, then spreads the pixels around.
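One reading of that description, sketched in numpy. This is an interpretation, not the actual script: take the signed difference of the two images, nearest-upscale the base 2x, then scatter each difference value into a random cell of its 2x2 block (the "spread the pixels around" part):

```python
import numpy as np

def redither(detail: np.ndarray, base: np.ndarray, seed: int = 0) -> np.ndarray:
    """Subtract `base` from `detail`, upscale the base 2x, and scatter each
    difference value into a random cell of its 2x2 block."""
    rng = np.random.default_rng(seed)
    diff = detail.astype(np.int16) - base.astype(np.int16)      # signed difference
    out = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1).astype(np.int16)
    h, w = diff.shape[:2]
    for y in range(h):
        for x in range(w):
            dy, dx = rng.integers(0, 2, size=2)                 # random cell in block
            out[2 * y + dy, 2 * x + dx] += diff[y, x]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Scattering rather than smearing the difference keeps the result from looking like ordered dithering, at the cost of some noise.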

Anyway here's the steel door again but with a better choice of model (Nickelback + preprocessing with Pixelperfect)
Attached Image: tile0007.png

Here are a few more tiles processed the same way
Attached Image: tile0064.png Attached Image: tile0118.png Attached Image: tile2469.png

I feel I'm onto a winner here
0

User is offline   MrFlibble 

#274

While technically not using ESRGAN, these AI transformations of video game image input are really amazing, and I'm reluctant to start up a new topic.

So this guy takes character portraits from Daggerfall and uses a network to render them into photorealistic faces (which do not look exactly like the source material though), then interpolates the result with the original image to get closer to the source material. Example:
Posted Image
The third image, arguably looking the best, is a "slight" interpolation of the photorealistic face with the source image.

Here's a video of the process:


This is all still being fine-tuned, but the prospects the technology offers are quite breathtaking.
1

User is offline   deus-ex 

#275

As of today, I can finally use the voting system. So the honour of receiving my first vote falls upon you, MrFlibble. Something to tell your kids and grandkids about. Posted Image

It took me long enough to reach that 50-post barrier, which I think is set way too high. If such a limit really is necessary, I suggest setting it to 5 or 10 posts at most.
0

User is offline   Phredreeke 

#276

It's not too high, you just need to shitpost more
0

User is offline   Phredreeke 

#277

A couple of Blood upscales, sorry no comparisons with the original tile or the previous upscale.

Posted Image Posted Image Posted Image

Here is one compared to the previous background upscale pack (left is new)
Posted Image

These used a rather complex series of models and scripts, where one very clean upscale (using pixelperfect and nickelback) is overlaid on top of a rather rough redithered upscale (using farthestneighbor and fatalphotos)
0

User is offline   Phredreeke 

#278

No step on snek

Posted Image
1

User is offline   Phredreeke 

#279

Shadow was having problems with his dequantising models not blending greys and tans such as in this tile

Posted Image

I got the silly idea to use an inpainting model to inpaint all the grey pixels

Posted Image

Honestly, I don't think the result is half bad. Could probably be improved by blending some of the grey back into it.
0

User is offline   MrFlibble 

#280

I discovered that the Pixel Perfect model can be used to repair images damaged by dithering. Here's a sample of dragon sprites from Aleona's Tales:

original sprites
Posted Image
upscaled with 4x_PixelPerfectV4_137000_G.pth
Posted Image
downscaled to original size in GIMP and palettised with Image Analyzer
Posted Image
0

User is offline   Phredreeke 

#281

Yeah, I've been using it for that for a while now (including the first of the Powerslave texture packs I sent you), although it sometimes makes it appear a bit cartoony
1

User is offline   Phredreeke 

#282

I just learned of this site that lets you run GFPGAN in your browser https://huggingface..../akhaliq/GFPGAN

Here is Duki Nuki processed by it

Posted Image

Note that you may need to blur the picture prior to upscaling
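The pre-blur mentioned above can be a one-liner with Pillow. Gaussian blur and a radius of 1.0 are assumptions; the idea is just to soften the hard pixel edges so GFPGAN's face detector has something photograph-like to work with:

```python
from PIL import Image, ImageFilter

def preblur(img: Image.Image, radius: float = 1.0) -> Image.Image:
    """Soften hard pixel edges before handing the image to GFPGAN."""
    return img.filter(ImageFilter.GaussianBlur(radius))
```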
1

User is offline   MrFlibble 

#283

Phredreeke, on 12 October 2021 - 06:56 PM, said:

Here is Duki Nuki processed by it

I read "possessed by it".

But an image like that would undoubtedly have many uses, wouldn't it?
0

User is offline   Phredreeke 

#284

Do you ever expect Duki Nuki to look "good"?

Anyway, here's a somewhat better outcome (aside from the weird looking glasses)

Posted Image
3

User is offline   MusicallyInspired 

  • The Sarien Encounter

#285

Pretty neat stuff. Better than I would have anticipated. For both.
0

User is offline   Kralex 

  • Removed

#286

Tried it out with a sprite from Marathon 2 and the results are quite... interesting.

Posted Image

This post has been edited by Kralex: 13 October 2021 - 09:45 PM

-1

User is offline   MrFlibble 

#287

Phredreeke, on 13 October 2021 - 04:50 AM, said:

Do you ever expect Duki Nuki to look "good"?

So you keep posting it because you think it's amusing? Surely you're not trying to discredit the entire AI upscale scene by showing mostly bad results?

A while ago I mentioned the results of pixel2style2pixel and StyleGAN applied to Daggerfall portraits and that system appears a bit more versatile than what you're showing here. But no idea how it would handle glasses or a face viewed at an angle.

Posted Image
More examples here.

Apparently I was wrong in my previous post about this, the photorealistic transformation was via pixel2style2pixel, not NVIDIA.

This post has been edited by MrFlibble: 13 October 2021 - 11:37 PM

2

User is offline   Kralex 

  • Removed

#288

MrFlibble, on 13 October 2021 - 11:37 PM, said:

So you keep posting it because you think it's amusing? Surely you're not trying to discredit the entire AI upscale scene by showing mostly bad results?



I wouldn't worry about that, Phredreeke has done some phenomenal upscale packs for Build engine games that show just how great the results from this tech can be :)
0

User is offline   Phredreeke 

#289

I guess you need to be a Duke4 Discord regular to appreciate Duki Nuki.

GFPGAN can be used by anyone thanks to the site I linked to. BTW don't underestimate the value of mishaps. They can give ideas of how to refine the process.
2

User is offline   MrFlibble 

#290

Phredreeke, on 14 October 2021 - 03:38 AM, said:

BTW don't underestimate the value of mishaps. They can give ideas of how to refine the process.

I don't, but you seem to regularly post here one ugly monstrosity after another (I'm still not quite over this one, thank you very much). I suppose you can be congratulated on creating a meme of your own out of "Duki Nuki", but I'm sorry, I fail to find it fun.

Also, why so much emphasis on photorealistic faces? It's a major topic in AI upscales in general, but of little practical application in FPS games. You seem to have invested considerable time and effort in making Lo Wang's face look better, while the player spends only an infinitesimal fraction of their time looking in a mirror in SW, and I guess there's no time to appreciate the opponents' faces in WangBang matches at all.

If only there were analogous models available to upscale monsters, right? But I guess training such a model would be a more complicated matter than it is for human faces.

Not to mention that there's a certain consistency to low-res pixel art, including faces. From what I know, AI upscales of low-res characters like in old point-and-click adventure games, where facial expressions are conveyed with a handful of pixels, fail miserably because they do nothing but mangle the original image. In the best case, the result would be little better than using algorithmical pixel scalers. For a proper high-resolution version, those would need to be redrawn manually by an artist.

Kralex, on 14 October 2021 - 01:12 AM, said:

I wouldn't worry about that, Phredreeke has done some phenomenal upscale packs for Build engine games that show just how great the results from this tech can be :)

From my experience, ESRGAN upscales only feel great as long as the novelty lasts. After that, the shortcomings of the tech become all too evident. You can sure get good depixelised art from low-res images, but that's about it. Good as a template for an artist to manually refine the results, and in certain cases images work well as they are (e.g. natural surface textures). You can see that some of the more detailed textures in the Enhanced Resource Pack had to be derived from manually-drawn HRP textures because ESRGAN just doesn't cut it, no matter the model.

But I understand that some users have low tolerance for big pixels so upscales are an improvement to them.
0

User is offline   Phredreeke 

#291

Given the hours I put into the Duke and Blood upscale packs, the AI facial monstrosities are a bit of fun to take my mind off things (at least until I start thinking about how to utilise them). I could post more generic updates, but there's only so many times I could write "upscaled with pixelperfect, downscaled to 25%, upscaled with nickelback again, downscaled and ran repalettisation script"

Posted Image
This one I made yesterday, it used a mixture of Degif and Pixelperfect as pre-processors, then upscaled with Nickelback and... you get the idea.

Here's an example of GFPGAN getting a good result (from Duke Nukem's boxart)
Posted Image
6

User is offline   MrFlibble 

#292

Phredreeke, on 16 October 2021 - 06:02 AM, said:

I could post more generic updates, but there's only so many times I could write "upscaled with pixelperfect, downscaled to 25%, upscaled with nickelback again, downscaled and ran repalettisation script"

I'm probably spoiled by the standards of academic writing, but it would certainly be neat if you made a series of posts showing some good results and an explanation of how you achieved them so that these results are at least theoretically repeatable by others. But of course no one urges you to post any updates at all.

I know that AI upscale discussion basically moved on to Discord, and I guess you are aware of the disadvantages that Discord poses, such as the information being less permanent and not publicly available/searchable (concerns of this kind are summed up here for example, but I suppose you're familiar with them). Unless of course the AI upscale community is not really interested in widely sharing the knowledge it produces and is content with being closed and esoteric, you could make these updates for the sake of knowledge preservation if nothing else. In fact, I'm a bit surprised at the apparent reluctance to go into details when you made past updates -- with a quickly-evolving scene such as AI upscales the process is at least as important as the result, if not more.

Phredreeke, on 16 October 2021 - 06:02 AM, said:

Here's an example of GFPGAN getting a good result (from Duke Nukem's boxart)

Impressive for sure (at least, before we get used to AI upscales of this kind, which is not too far into the future I guess), but what are the practical applications of this? Surely you do not intend to replace the original box art with this artificial mockup? When we talk Daggerfall portraits, there's some reason for upscales even if they alter the originals quite drastically, because there are only low-res originals. But the Duke3D box art is available in high-res, isn't it?
0

User is offline   Phredreeke 

#293

MrFlibble, on 16 October 2021 - 09:51 AM, said:

Unless of course the AI upscale community is not really interested in widely sharing the knowledge it produces and is content with being closed and esoteric, you could make these updates for the sake of knowledge preservation if nothing else. In fact, I'm a bit surprised at the apparent reluctance to go into details when you made past updates -- with a quickly-evolving scene such as AI upscales the process is at least as important as the result, if not more.


The problem is that, aside from my repalettisation script, my scripts rely on Paint Shop Pro. I've had talks with a guy about reimplementing them in ImageMagick, and I will post anything that comes out of that, particularly my image blending script.

Here's an example of my blending script used to combine two upscales
1

User is offline   MrFlibble 

#294

Phredreeke, on 30 October 2021 - 09:29 AM, said:

The problem is that aside from my repalettisation script, my scripts rely on Paint Shop Pro.

Well, you're not the only person using that software, are you? And you could always at least describe the principles of what you're doing and why, without going into exact details.

My own impression of the current state of affairs on the ESRGAN upscaling scene -- mostly based on updates of the Model Database page in the upscale wiki -- is that we now have a good number of models trained with a great degree of precision (compared to the first models, when people were only learning the ropes), which do more or less what the users who created them intended. As I mentioned elsewhere, none of those seem to be particularly tailored for the task of upscaling Build/2.5D FPS art. However, while some of these are quite specific and will produce garbage from raw 8-bit sprites or textures, others are more "universal" and fare pretty well with more or less any kind of input.

With this in mind, I'm inclined to think that input image pre-processing and model interpolation should theoretically be taking a back seat now, if not getting completely eliminated from the upscaling process. On the other hand, you've previously shown some very good results with the ThiefGold model, which required either pre-processing or model interpolation, but it looks like you have stopped using it in favour of other models like Nickelback (?), which I'm not 100% sold on being better.

I'll tell you why I am reluctant to employ model interpolation: it increases the complexity of an already complex matter, yet does not create a substantial improvement in upscaling results. Basically, an interpolated model produces images which are a blend of what the two individual models produce on their own. However, if one model is better suited to the particular kind of input and the other worse, the interpolated model will produce something better than the bad model but worse than the better one. I don't see a good reason to trade the better model's results for slightly worse ones just to patch up the areas where the better model falls short.
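For reference, the interpolation under discussion is mechanically simple: the original ESRGAN repo's net_interp.py just blends the two checkpoints' tensors parameter by parameter. A minimal sketch, with a dict of numpy arrays standing in for a torch state dict:

```python
import numpy as np

def interpolate_models(state_a: dict, state_b: dict, alpha: float) -> dict:
    """Per-parameter linear blend of two checkpoints sharing one
    architecture: result = (1 - alpha) * A + alpha * B."""
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}
```

Since the blend is linear in weight space rather than image space, the output is not strictly a pixel-wise average of the two models' outputs, but in practice it behaves much as described above.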

It's not that model interpolation has not worked in the past, or does not work now. It does, but it seems like we'd not be harnessing the full potential of even ESRGAN with all its limitations until there's a model which has been specifically trained for upscaling the kind of art that is used in Duke3D and co.

Ideally, this should be a 2x model if that is the target resolution of the upscale packs. Anything sampled down from 4x bears the signs of it -- blurry parts here and there, and those pixel patterns produced by even the most advanced resampling algorithms. Sharpening does not generally help, as it produces even more artifacts, which are again noticeable.

Of course, many of the deficiencies in the current upscales are less noticeable, if at all, in-game (in Build FPSs that is), although if we go down that route, it inevitably raises the question of whether the upscales make a difference at all compared to the original textures and sprites.
0

User is offline   Phredreeke 

#295

Here's the process simplified as much as I could

1. Take image 1 and subtract image 2 (with a bias of 128, i.e. grey is "neutral").
2. Take the result of the above and apply an edge-preserving smooth (twittman believes it to be a bilateral blur).
3. Add the above back to image 2.

The result has fine detail of image 1 on top of image 2.
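The three steps translate almost directly to numpy. The box blur below is only a placeholder for the edge-preserving smooth; a bilateral filter (e.g. cv2.bilateralFilter, per twittman's guess) would be the faithful choice:

```python
import numpy as np

def box3(a: np.ndarray) -> np.ndarray:
    """Crude 3x3 box blur -- a stand-in for the edge-preserving smooth."""
    p = np.pad(a.astype(np.float64), ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = a.shape[:2]
    return sum(p[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0

def detail_transfer(img1: np.ndarray, img2: np.ndarray, smooth=box3) -> np.ndarray:
    """Steps 1-3 above: biased difference, smooth, add back onto image 2."""
    diff = np.clip(img1.astype(np.int16) - img2.astype(np.int16) + 128, 0, 255)
    diff = smooth(diff.astype(np.uint8))                  # step 2
    out = img2.astype(np.float64) + diff - 128.0          # step 3
    return np.clip(out, 0, 255).round().astype(np.uint8)
```

With the bias of 128, a zero difference maps to mid-grey, so smoothing the difference image never drags the base image toward black or white.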


Edit: You could always join the Discord and discuss your findings there... We also have a wiki, where you could document the results if you want something a bit more persistent than Discord chats.

This post has been edited by Phredreeke: 30 October 2021 - 03:06 PM

0

User is offline   Phredreeke 

#296

I did some experiments with Styletransfer attempting something like what MrFlibble had suggested earlier

I took sprites from Alien Armageddon and classic Duke. I then upscaled the classic sprite first using Spongebob-Reloaded (reason being that if I used the original sprite it would stay the same size). I then used that upscale as content, and the corresponding AA sprite as style.

Here are the results

Attached thumbnail(s)

  • Attached Image: styletransfer-aa-commander.png
  • Attached Image: styletransfer-aa-octo2.png
  • Attached Image: styletransfer-aa-enforcer.png
  • Attached Image: styletransfer-aa-overlord.png
  • Attached Image: styletransfer-aa-babes.png

1

User is offline   MrFlibble 

#297

On the surface these results appear rather similar to an ESRGAN model blend with DeToon (don't remember the exact mixture) that produced shiny-looking surfaces on sprites.

I tried this with a few sprites myself and the whole system does not seem to be much suited for this task. Everything is too blurry and has a halo around it.
0

User is offline   Phredreeke 

#298

New AI tool for combining faces https://huggingface....khaliq/BlendGAN

Spoiler

1

User is offline   Phredreeke 

#299

Alright, after having had upscales brought up in the DF-21 Discord, I gave Dark Forces a new try

https://imgsli.com/ODYyNDE
https://imgsli.com/ODYyNTI
https://imgsli.com/ODYyNDk


The process here is pretty convoluted: it involves first preprocessing with pixelperfect and xbrz downscaling, then blending the result using my custom script. That is then upscaled with Nickelback, downscaled back to original size, and subtracted from the original sprite. The resulting difference is then upscaled with an XBRDD-Spongebob Reloaded interpolation and downscaled to 50% size with a one-pixel shift on both axes. That image is then added back on top of the prior Nickelback upscale.

This post has been edited by Phredreeke: 14 December 2021 - 05:40 AM

0

User is offline   MrFlibble 

#300

Those are pretty decent results.

Phredreeke, on 14 December 2021 - 03:57 AM, said:

The resulting difference is then upscaled with a XBRDD-Spongebob Reloaded interpolation

Interesting, what effect did you achieve by using this combination?

I notice that you've been regularly using Nickelback recently (while apparently not having found much use for the ThiefGold models that, IIRC, produced some very good-looking results, at least with textures). Nickelback is described by its developer as intended for photorealistic imagery rather than pixel art, and my own trials suggested it is not very suitable for this kind of input, at least not as the only model in the upscale process. It does generate a noise texture over images, though (somewhat like slight JPEG compression artifacts); is this what you use it for?
0



All copyrights and trademarks not owned by Voidpoint, LLC are the sole property of their respective owners. Play Ion Fury! ;) © Voidpoint, LLC
