
Duke3D HRP: new/updated art assets thread  "Post and discuss new or updated textures/models for the HRP here"

User is offline   Jimmy 

  • Let's go Brandon!

#4591

Different models do different things; it's like comparing apples and oranges.

User is offline   MrFlibble 

#4592

Mark, on 14 December 2020 - 12:49 PM, said:

I wonder how long before we see just one GREAT upscaler stand out in front of the rest.

Intuitively I, too, would want a one-size-fits-all universal model for ESRGAN, but the entire area is very new and still developing rapidly. The people who train models have accumulated considerable experience over the past two or so years, so the newer models keep getting more sophisticated and versatile as their creators' skills and knowledge improve.

Some ESRGAN models are trained for specific tasks from the start, as you can see in this section. I only tried out a few of those, and can only say that they are poorly suited for Duke3D upscales. Others are of a more universal variety (meaning that their output will look like a coherent image, not garbage, regardless of the type of input), but their limitation is that they produce rather uniform output, which may look good on some kinds of images and meh on others (e.g. Rebout and Rebout Blend are good for pixel-art style images, much less impressive on anything else).

It does make more sense to train ESRGAN models for specific tasks, given the nature of this AI system, than to hope for a universal model for all kinds of images. In fact, it is not implausible that a model trained specifically to upscale Duke3D textures would work better than anything we have now. But one of the big problems here is that you need an adequate data set for training such a model. Unlike the downscaled photographic material that ESRGAN was actually created for, in many cases high-resolution versions of Duke3D textures do not exist even in theory. You would need to find an analogous data set of large and small 8-bit image pairs, or downscale existing textures so that they share the relevant features (bit depth, resolution, detail level etc.) of the images you intend to upscale. It would not do, for this purpose, to scale original Duke3D textures down to half size and train a model on those pairs, because your source and target resolutions would differ from those of the intended upscale.
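For reference, pairs for this kind of training are normally generated by downscaling the high-resolution set; a minimal Python sketch of that step (the folder names are placeholders, and 4x bicubic is just the conventional SISR setup, not anything specific to the HRP):

    from pathlib import Path
    from PIL import Image

    # build LR counterparts for a folder of HR images via 4x bicubic downscale
    Path('train_LR').mkdir(exist_ok=True)
    for hr_path in Path('train_HR').glob('*.png'):
        hr = Image.open(hr_path).convert('RGB')
        lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
        lr.save(Path('train_LR') / hr_path.name)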

User is online   Phredreeke 

#4593

Upscaling is one half. The other half is taking the upscaled result and palettising it so that it looks good in-game when alternate palettes are in use.
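Roughly, that palettising step can look like this (a sketch assuming Pillow; duke_palette.png is a hypothetical file holding the game palette, and quantize is only one way to do it):

    from PIL import Image

    # snap an RGB upscale to the game's 256-colour palette, no dithering
    pal = Image.open('duke_palette.png').convert('P')
    up = Image.open('results/tile0450_rlt.png').convert('RGB')
    up.quantize(palette=pal, dither=Image.Dither.NONE).save('tile0450_pal.png')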

User is offline   Jimmy 

  • Let's go Brandon!

#4594

There's no way you could create a one-size-fits-all model that produces coherent results, because Duke's art is all over the place. Some is computer generated, some is hand drawn, some is from photographs.

User is online   Phredreeke 

#4595

Even the best model isn't gonna beat using the original photos to reconstruct tiles (as with the Bobby and Levelord tiles).

Of course, such photos are not always available, so you have to resort to other era-appropriate photos (Doug's wanted poster, for example) made to fit the tile in question with some techno-wizardry...

User is offline   Mark 

#4596

Jimmy, on 17 December 2020 - 08:22 AM, said:

There's no way you could create a one-size-fits-all model that produces coherent results, because Duke's art is all over the place. Some is computer generated, some is hand drawn, some is from photographs.

OK. I don't think I've ever seen a screenshot of any upscaler program, so I have no idea how they operate. I just assumed you opened up a pic, chose an AI model and ticked off the boxes for the features you wanted to use. Therefore, if an "all in one" highly trained model existed, you would only enable the parts of its AI you wanted to apply, depending on the source and your finished texture needs. Like someone else mentioned, maybe that will happen later on as the software improves.

User is online   Phredreeke 

#4597

ESRGAN works like this: you put the images you want to upscale in the LR folder, then you run test.py with the desired model as an argument. It goes through everything in the LR folder and saves the results in the results folder.
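In practice that's just something like the following, assuming a standard ESRGAN checkout (the tile and model names here are only examples):

    cp tile0450.png LR/
    python test.py models/4x_Faithful.pth
    # the upscale comes out as results/tile0450_rlt.png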

User is offline   Mark 

#4598

Ah, no fine-tune controls yet?

User is online   Phredreeke 

#4599

Fine-tuning is done by using net_interp.py to interpolate models. You can see in an earlier post how I ran the same image through a model with and without interpolation with a different one. In general, interpolated models have fewer artifacts but are also softer. Note that not all models are compatible with each other. Most new models use existing models as pretrains, and if the two models being interpolated share a lineage, interpolation should work. Otherwise the result is more or less a muddy outline of what you're trying to upscale.
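Under the hood, net_interp.py is essentially just a weighted average of two checkpoints' weights; a minimal sketch of the idea (the file names are placeholders, and this assumes both .pth files come from the same architecture):

    import torch

    alpha = 0.8  # weight of the first model, i.e. an 80/20 blend
    net_a = torch.load('models/pixelscale.pth')
    net_b = torch.load('models/detoon.pth')

    # only meaningful when both checkpoints share the same keys/architecture,
    # which is why unrelated models interpolate into mud
    net_interp = {k: alpha * v + (1 - alpha) * net_b[k] for k, v in net_a.items()}
    torch.save(net_interp, 'models/interp_08.pth')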

There's a new tool that's supposed to make installing and using ESRGAN easier. I haven't tried it myself.

New models tend to be added here.

As you can see, they're not limited to just upscaling; they cover other forms of image processing as well.

This post has been edited by Phredreeke: 17 December 2020 - 03:50 PM


User is online   Phredreeke 

#4600

I've been toying around a bit with the liztroop, trying to work in some of the improvements I made for the upscale pack while remaining true to the original.

Attached Image: spritesheet.png
For comparison, here's the old liztroop upscale (currently used in Alien Armageddon):
Attached Image: spritesheet-aa.png

User is offline   MrFlibble 

#4601

Mark, on 17 December 2020 - 09:45 AM, said:

OK. I don't think I've ever seen a screenshot of any upscaler program, so I have no idea how they operate. I just assumed you opened up a pic, chose an AI model and ticked off the boxes for the features you wanted to use. Therefore, if an "all in one" highly trained model existed, you would only enable the parts of its AI you wanted to apply, depending on the source and your finished texture needs. Like someone else mentioned, maybe that will happen later on as the software improves.

Here's a screenshot:
Attached Image: ESRGAN.png

ESRGAN is actually a test programme/AI that was written for an academic paper on the single image super-resolution (SISR) problem, a research area in computer science. It's just that the ability to train your own models, and the ease of use (no proprietary software like MATLAB required), made it a good tool to experiment with for videogame upscales.

The key part here is that there are no "features" you could pick for an upscale -- any model is only as good as the data set it was trained on. There is fine-tuning here, but it occurs at the training stage, not when using the end product.

Model interpolation is more like producing a model that outputs what would be a blend of two images produced by the models being interpolated (if that makes any sense to you).

Phredreeke, on 17 December 2020 - 03:46 PM, said:

There's a new tool that's supposed to make installing and using ESRGAN easier. I haven't tried it myself.

Personally I try to steer clear of any frontends (e.g. that DOSBox thing, I forget what it's called) because what they tend to do is introduce an opaque layer between the user and the actual tool. But some might prefer this if they like GUIs. (But what could be easier than typing python test.py model/<paste model name here>.pth anyway?)

User is online   Phredreeke 

#4602

MrFlibble, on 19 December 2020 - 11:27 AM, said:

Model interpolation is more like producing a model that outputs what would be a blend of two images produced by the models being interpolated (if that makes any sense to you).


It's not quite that simple. Compare these two.

https://cdn.discorda...elscaleonly.png
https://cdn.discorda...452_rlt_rlt.png

These are the Pixelscale and Detoon models used on their own. Now compare this to when the two models are interpolated:

https://cdn.discorda...452_rlt_rlt.png

Don't ask me to explain it because I have no clue myself

MrFlibble, on 19 December 2020 - 11:27 AM, said:

Personally I try to steer clear of any frontends (e.g. that DOSBox thing, I forget what it's called) because what they tend to do is introduce an opaque layer between the user and the actual tool. But some might prefer this if they like GUIs. (But what could be easier than typing python test.py model/<paste model name here>.pth anyway?)


I was suggesting it more because actually installing it can be quite a headache. Once you've got it running, then yeah, it's simple.

User is offline   MrFlibble 

#4603

Phredreeke, on 19 December 2020 - 12:24 PM, said:

It's not quite that simple.
<...>
Don't ask me to explain it because I have no clue myself

The only problem here is that Pixelscale produces those weird whine noise artifacts, which got nullified when interpolating the two models. From the look of it I'd say alpha around 0.9-0.7 (or 0.1-0.3 depending on which model you put in which slot in net_interp.py) with 70-90% Pixelscale and 10-30% Detoon.

If you look at Jon's right arm, the central part of the vein on the biceps got blurred out somewhat in the interpolated image, while being very pronounced in the raw Pixelscale output; whereas in Detoon, the vein is almost entirely blurred out apart from the extreme parts of the line. You'd get roughly similar results if you blended these images as layers in an image editing programme.

The original intent of the ESRGAN authors, at least as I understood it from their section on network interpolation, was to produce results that would be more accurate but essentially similar to image interpolation.

I don't know if you ever noticed this, but there's a weird effect with some (thankfully non-essential) models: if you interpolate them, the output invariably fades out the colours to the point of being barely visible, even though neither model produces notable colour alterations on its own.

User is online   Phredreeke 

#4604

MrFlibble, on 20 December 2020 - 10:39 AM, said:

The only problem here is that Pixelscale produces those weird whine noise artifacts, which got nullified when interpolating the two models. From the look of it I'd say alpha around 0.9-0.7 (or 0.1-0.3 depending on which model you put in which slot in net_interp.py) with 70-90% Pixelscale and 10-30% Detoon.


Correct, it's 20% Detoon and 80% Pixelscale. The noise, and its absence in the interpolated model, is the entire point here: it's not what you would get from simply blending the outputs of the two models.

MrFlibble, on 20 December 2020 - 10:39 AM, said:

I don't know if you ever noticed this, but there's a weird effect with some (thankfully non-essential) models: if you interpolate them, the output invariably fades out the colours to the point of being barely visible, even though neither model produces notable colour alterations on its own.


This is victorca25's explanation of why this happens:

Quote

If filters are correlated, you get a proper combination of their weights as a result of the interpolation. If they are not correlated, you get the interpolation of pseudo random values between -1 and 1. If the interpolation is set to 0.5 you get exactly the average of -1 and 1, that is 0, which corresponds to black
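A quick way to see the effect he describes, as an illustrative Python snippet (my own toy example, not from victorca25):

    import torch

    torch.manual_seed(0)
    # stand-ins for two uncorrelated filters, weights uniform in [-1, 1]
    w_a = torch.rand(64, 64) * 2 - 1
    w_b = torch.rand(64, 64) * 2 - 1

    blend = 0.5 * w_a + 0.5 * w_b
    print(w_a.abs().mean(), blend.abs().mean())
    # the blended weights sit noticeably closer to 0 (about 0.33 vs 0.5),
    # i.e. the interpolated filters push the output towards black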


User is offline   Jimmy 

  • Let's go Brandon!

#4605

Would it change results to convert the images to greyscale, then run them through the upscale, then recolor them?
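Something like this, roughly (a hypothetical Pillow sketch of the idea; taking brightness from the upscale and hue/saturation from the original is just one way the recolouring step could work):

    from PIL import Image

    src = Image.open('tile0450.png').convert('RGB')
    src.convert('L').save('LR/tile0450_grey.png')  # greyscale copy goes through ESRGAN

    # after upscaling: luminance from the upscale, hue/saturation from the original
    up = Image.open('results/tile0450_grey_rlt.png').convert('L')
    h, s, _v = src.resize(up.size, Image.BILINEAR).convert('HSV').split()
    Image.merge('HSV', (h, s, up)).convert('RGB').save('tile0450_recoloured.png')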

User is online   Phredreeke 

#4606

Hmmm... interesting idea. As a side note, one model I used would colorise smaller greyscale graphics I ran through it!

Posted Image

(this is nmkd's degif, one of the few models that work at 2x scale)

Edit: I just realised... I actually did what you suggested not long ago. This was the result:

Posted Image Posted Image Posted Image

Posted Image

This post has been edited by Phredreeke: 20 December 2020 - 08:04 PM


User is offline   MrFlibble 

#4607

Phredreeke, on 20 December 2020 - 11:15 AM, said:

The noise, and its absence in the interpolated model, is the entire point here: it's not what you would get from simply blending the outputs of the two models.

True, but this entire case is an outlier, because the noise results from an error in the model's performance and is not its intended behaviour. Once you cancel the error out by interpolating Pixelscale with something else (I'd suppose rebout_interp or CleanPixels would be neutral enough), you basically see the intended result. E.g. an interpolation of Pixelscale at 90% and Rebout/CleanPixels at 10% would be as close as possible to what you'd get from Pixelscale alone, were it not for the error.

The error itself appears to be caused by some arrangement of pixels in the source image. This could be useful knowledge if we happen upon a model that produces desirable results but is error-prone in the same vein as Pixelscale.

But if you take error-free models, the interpolation will be generally similar to image blending, though of course not identical, because it has different inner workings. I think the ESRGAN devs made it so that the details produced by the PSNR model would not be as blurred out when interpolated with the default ESRGAN output as they would be with simple image interpolation.

User is online   Phredreeke 

#4608

Thought I'd share my attempt at tile 0450 (the hatch that gave MrFlibble trouble earlier in the thread). This gallery includes the low-res tile, the raw upscale, the edited (and downscaled) version, and, for reference, tile 0447, which I used in part to add extra detail.

User is offline   MrFlibble 

#4609

Among the obvious things, the yellow stripes at the bottom need a uniform shape -- I don't think any model I tried could render them correctly because of the colour variation on the individual stripes, but they are definitely meant to be uniform.

Also, judging from the comparison with the rotated tile, the keypad has a different shape: namely, it is no longer on an embossed trapezoid base but rather two flat panels resting on one another? Your result looks like a blend of both shapes, although they are kind of mutually exclusive.

In the meantime I remembered the old xBRZ preprocessing method I'd used with older models and decided to try it out instead of dedither. However, this time I used the HQx scaler from the Scaler Test, because it adds a kind of dedither effect of its own.

Apologies for snatching your example with the hatch, but here's the result I got with ThiefGoldMod_100000:
Posted Image

And this is the result of blending two images: the straight HQx upscale from above, and the upscale of the HQx image rotated 90 degrees clockwise:
Posted Image

User is online   Phredreeke 

#4610

The unaltered version upscales really badly, which is why I resorted to blending with the other frame.

Here's my attempt at mixing in the yellow stripes from tile 0354:

Attached Image: tile0450.png

Edit: Update to the liztroop poster by me, with some edits by Tea Monster: https://imgsli.com/MzUwOTc

Edit 2: Mind blown
Posted Image

This post has been edited by Phredreeke: 25 December 2020 - 05:05 PM


User is offline   MrFlibble 

#4611

Phredreeke, on 25 December 2020 - 02:40 PM, said:

The unaltered version upscales really badly, which is why I resorted to blending with the other frame.

I know what you mean, but I'd say the upscales of the raw texture are not that bad compared to the original; it's just that the details are more faded than in the rotated version. It might not be that noticeable in-game at all.

I had a further idea that seemed worth trying. I made two separate preprocessed images: one with dedither, another with HQx softening. I upscaled both with ThiefGoldMod, then blended the two in GIMP using G'MIC's Blend [median]. I also made a separate 2x upscale of the raw texture, without preprocessing, using the Faithful model. I scaled the blended TGM result down to 2x using Pixellate -> resize without interpolation in GIMP, and then blended that with the Faithful result, again using Blend [median]. Palettised with ImageAnalyzer. Here's the result:


If anything, this seems pretty accurate to the source material, and it has enough of the "pixel art" quality to it. Or does it look too blurry? Maybe I should ditch the dedither part.

Here's the HQx/Faithful-only blend:
Posted Image
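Incidentally, the Blend [median] step is easy to reproduce outside GIMP too; a rough numpy/Pillow sketch (the file names are placeholders, and with only two same-sized layers the per-pixel median reduces to a plain average):

    import numpy as np
    from PIL import Image

    def median_blend(paths, out_path):
        # pixel-wise median across the stacked layers, like G'MIC's Blend [median]
        layers = [np.asarray(Image.open(p).convert('RGB'), dtype=np.float32) for p in paths]
        Image.fromarray(np.median(np.stack(layers), axis=0).astype(np.uint8)).save(out_path)

    median_blend(['tgm_dedither_up.png', 'tgm_hqx_up.png'], 'tgm_blend.png')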

User is online   Phredreeke 

#4612

Does anyone know who made these two tiles? My attempts at upscaling the various TV/monitor tiles end up mangling the lower part of the TV frame, and I could patch over those parts using sections from these.

Attached thumbnail(s)

  • Attached Image: 0263.jpg
  • Attached Image: 4125.png


User is offline   Tea Monster 

  • Polymancer

#4613

I believe the top picture is from a magazine that featured the original image used for the LA skyline. I could be wrong; it's been quite a few years.

The lower image is a retouched render of Parkar's original HRP Duke model.

Searching the 3DR sprightly appearance thread with the tile number should give you a name.

This post has been edited by Tea Monster: 30 December 2020 - 11:31 PM


User is online   Phredreeke 

#4614

Yeah, but those aren't the parts needed :P

I attempted using ssantialias9x instead of dedither for my detoon-thiefgold/unresize combo. Results are pretty good, IMO.

Attached Image: tile0365.pngAttached Image: tile0398.png

https://imgsli.com/MzU4Mzk
https://imgsli.com/MzU4NDE

Edit: just noticed the first of these two has the labels mixed up; the first pic is SSAA9X.

This post has been edited by Phredreeke: 31 December 2020 - 07:07 AM


User is offline   MrFlibble 

#4615

The sliding comparisons feel as if the anti-aliasing model made things blurrier, with some detail lost?

User is online   Phredreeke 

#4616

The sharp one is SSAA9X; the blurry one is dedither.

Edit: another comparison, using a different model: Fatal Photos MK2.

https://imgsli.com/MzU4ODI

This post has been edited by Phredreeke: 31 December 2020 - 11:30 AM


User is offline   MrFlibble 

#4617

Fatal Photos seems to be pretty good. Where can I download it?

User is online   Phredreeke 

#4618

Here. It's Fatal Photos by twittman:

https://de-next.ownc...czkdmGqmRBFFdjB

User is offline   MrFlibble 

#4619

I experimented with another variable in the HQx "softening" approach, namely whether to scale the image up to 2x, 3x or 4x using the ScalerTest. It seems that 3x gives sharper results than both 2x and 4x; 2x is the blurriest of all.

Here's a door texture upscaled with ThiefGoldMod, dedither vs. HQ3x:
https://imgsli.com/MzYxMzA

However, Fatal Photos is too noisy if used with HQ3x preprocessing.

User is offline   Tea Monster 

  • Polymancer

#4620

HQ3x generally works better on forms and preserves background detail better.
