Duke4.net Forums: PolymerRTX


PolymerRTX

User is offline   Romulus 

#31

No. It doesn't work that way.

The $499 MSRP is for the reference card, which has lower clocks than the Founders Edition while using the same cooling solution. The Founders Edition card will not drop to MSRP after launch, because it is priced at $599. And once these cards launch, you likely won't find many reference cards.

Spoiler



And there will be no AIB cards within a three-month period. It's likely you won't find a single reference 2070 at the stated MSRP, because they'll be sold as FE cards for the extra $100, and the lower-clocked reference card will probably be available only to OEMs who sell complete systems.

This post has been edited by Romulus: 12 September 2018 - 02:18 PM

0

#32

Romulus, on 12 September 2018 - 02:11 PM, said:

No. It doesn't work that way.

The $499 MSRP is for the reference card, which has lower clocks than the Founders Edition while using the same cooling solution. The Founders Edition card will not drop to MSRP after launch, because it is priced at $599. And once these cards launch, you likely won't find many reference cards.

I don't want to turn this topic into a debate on GPU prices and what they're going to be in four months. In the end it really doesn't matter; performance-wise, the delta between Nvidia FE, reference boards, and AIB partner boards is so insignificant it's ridiculous. I believe RTX prices will return to MSRP in January; you don't. Let's move on.

This post has been edited by icecoldduke: 12 September 2018 - 02:20 PM

-1

User is offline   Romulus 

#33

icecoldduke, on 12 September 2018 - 02:16 PM, said:

In the end it really doesn't matter; performance-wise, the delta between Nvidia FE, reference boards, and AIB partner boards is so insignificant it's ridiculous.


I've said it once before, and I'll say it again: when it comes to hardware, you're completely clueless. The reference card operates at a base clock of 1410 MHz with a boost clock of 1620 MHz, and beyond if there's thermal and board TDP headroom, whereas the FE has a boost clock of 1710 MHz, which indicates that the FE chips are better binned to sustain the thermals and TDP headroom needed to maintain that boost clock and go beyond it. That overclock can mean the difference between dipping into the low 50s and holding 57~60 FPS at all times.

As for AIB partner boards being insignificant? Really, dude, then why do you think they carry a higher price tag than MSRP? The custom board design ensures better power delivery with more phases, improved cooling with lower acoustics, and of course higher overclocks.

Of course, in your PolymerNG thread you once said that you haven't come across a single game that you can't run at 4K on a GTX 1080 and that statement alone told me all there was to know about you.

And the point I am trying to make is that the price of a GTX 1080 Ti is never going to drop below $450~500 unless you're buying a used card. Nvidia wants to phase out their GTX 1000-series cards, but they don't want to do it at a loss. If you're going to go ahead with this project, don't make it into something that only works for owners of a niche range of GPU products.

This post has been edited by Romulus: 12 September 2018 - 02:36 PM

-1

User is online   Danukem 

  • Duke Plus Developer

#34

Romulus, on 12 September 2018 - 02:29 PM, said:

Of course, in your PolymerNG thread you once said that you haven't come across a single game that you can't run at 4K on a GTX 1080 and that statement alone told me all there was to know about you.


He makes some inaccurate statements about hardware and that tells you everything you need to know about him, period? It seems like you have a beef with him that goes well beyond his hardware statements and you are using that as an excuse to shit on his thread about his new renderer project.
1

#35

Romulus, on 12 September 2018 - 02:29 PM, said:

And the point I am trying to make is that the price of a GTX 1080 Ti is never going to drop below $450~500 unless you're buying a used card. Nvidia wants to phase out their GTX 1000-series cards, but they don't want to do it at a loss. If you're going to go ahead with this project, don't make it into something that only works for owners of a niche range of GPU products.

A ray-tracing engine in 2018 is by definition aimed at a niche range of GPU products.

Even if you could get a new card for $500 as opposed to $600, not everyone upgrades their GPU every year, nor spends $500 on a video card.

Less than 3% of users in the Steam hardware survey have a 1080 currently.

I'd say that is pretty niche.
1

User is offline   Romulus 

#36

Trooper Dan, on 12 September 2018 - 02:45 PM, said:

He makes some inaccurate statements about hardware and that tells you everything you need to know about him, period? It seems like you have a beef with him that goes well beyond his hardware statements and you are using that as an excuse to shit on his thread about his new renderer project.


I am sorry if it came across to you that I have some kind of "beef" with him, which clearly isn't the case.

It's not the first time he's said something wrong about hardware recently, and I believe I made a relevant point about his new project. If you're going to make a renderer that requires a GTX 1070 or a GTX 1070 Ti as a bare minimum, not many people will be able to try it out. If you honestly believe that's like taking a dump on his thread, then I'll unsubscribe from this one. Peace.
-1

#37

Romulus, on 12 September 2018 - 02:29 PM, said:

I've said it once before, and I'll say it again: when it comes to hardware, you're completely clueless. The reference card operates at a base clock of 1410 MHz with a boost clock of 1620 MHz, and beyond if there's thermal and board TDP headroom, whereas the FE has a boost clock of 1710 MHz, which indicates that the FE chips are better binned to sustain the thermals and TDP headroom needed to maintain that boost clock and go beyond it. That overclock can mean the difference between dipping into the low 50s and holding 57~60 FPS at all times.

As for AIB partner boards being insignificant? Really, dude, then why do you think they carry a higher price tag than MSRP?

Of course, in your PolymerNG thread you once said that you haven't come across a single game that you can't run at 4K on a GTX 1080 and that statement alone told me all there was to know about you.

And the point I am trying to make is that the price of a GTX 1080 Ti is never going to drop below $450~500 unless you're buying a used card. Nvidia wants to phase out their GTX 1000-series cards, but they don't want to do it at a loss. If you're going to go ahead with this project, don't make it into something that only works for owners of a niche range of GPU products.

I'm going to respond to this as professionally as I can, but before I do that I'd like to touch on the terminology so everyone knows what he's talking about. An AIB partner is a board partner like EVGA, MSI, etc. A brief summary of "GPU binning": IHVs sort GPUs into bins based on how well they perform during automated testing, which is why it's called "binning". Some people claim certain AIB partners get higher-quality bins than other AIB partners; this mostly comes from the hardcore overclocking crowd, who try to squeeze every last bit out of their boards, and I've seen little to no evidence that one AIB has better bins than another. I never talked about the 1080 Ti, I was only talking about the 1080, so please read more carefully next time :rolleyes:. I still haven't come across many games that my 1080 couldn't handle at 4K with reduced settings; however, I switched over to a 1080p 144 Hz monitor, and I personally like 144 Hz more than 4K, but that's just me.

Games can be bottlenecked by many different factors, and overclocking is not guaranteed to help performance at all. The core architecture of the GPU is more important than the overall clock speed; if you were the "hardware genius" you claim to be, you would know that. Also, just because someone charges more for something doesn't mean it's better :/. Different AIB boards have different cooling and other such features, and if that appeals to you, great, but for most people I always recommend picking up the cheapest 1080 they can find, because they're not going to overclock it, and 5 fps isn't worth it except to a small niche crowd. In fact, on most projects I've worked on, when we need a GPU we just go to the store and pick up the cheapest 1080 (or whatever) on the shelf, regardless of brand. I've never encountered any meaningful performance deltas between workstations with different AIB-branded GPUs.

Romulus, on 12 September 2018 - 03:02 PM, said:

If you're going to make a renderer that requires a GTX 1070 or a GTX 1070 Ti as a bare minimum, not many people will be able to try it out. If you honestly believe that's like taking a dump on his thread, then I'll unsubscribe from this one. Peace.

Now this is a conversation we can have, but I'm not going to respond to aggressive nonsense; I'm simply getting too old for that :P.

This post has been edited by icecoldduke: 12 September 2018 - 03:47 PM

2

User is offline   Forge 

  • Speaker of the Outhouse

#38

Posted Image
5

User is offline   Romulus 

#39

Spoiler


Spoiler


Spoiler


This post has been edited by Romulus: 12 September 2018 - 04:20 PM

2

User is offline   Micky C 

  • Honored Donor

#40

One thing we should keep in mind is that ICD hasn't even started coding this thing yet. I think the debate about what near-future hardware will be able to support the renderer is a bit academic, since we don't know when it's going to be finished (potentially a long way off, if ever). I view PolymerRTX as a renderer for the future and the mid-to-long term. In the meantime, we still have the highly stable classic and Polymost renderers.

icecoldduke, on 12 September 2018 - 01:14 PM, said:

RTX 2080 TBD(my card should be arriving on the 21st).
980ti/1080 - Will hold > 100 fps in 1080p with single ray hit.
1070 - Should hold framerate > 60 fps with single ray hit.


How dependent will the framerate be on the complexity of the scene? Number of sectors etc. If the ultimate goal is to not bother creating triangles for everything (if I understand correctly), does that mean we could get away with more detail?
1

#41

Micky C, on 12 September 2018 - 05:28 PM, said:

How dependent will the framerate be on the complexity of the scene? Number of sectors etc. If the ultimate goal is to not bother creating triangles for everything (if I understand correctly), does that mean we could get away with more detail?

In PolymerRTX I wouldn't worry about polycount/vertex count as much. Here's a demo with 300 billion triangles running at 30 fps on a single Quadro RTX 6000. I don't know how well this will scale to the RTX 2080/2080 Ti, or down to the 1080 and lower.



However you will still need to worry about how many lights you have in view at once.
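
To make the light-count point concrete, here is a minimal C++ sketch (not PolymerRTX code; traceShadowRay and shadePoint are hypothetical names standing in for whatever the real trace call ends up being). Each shaded point needs at least one shadow ray per visible light, so the ray budget grows linearly with the number of lights in view, while BVH traversal keeps the per-ray cost roughly logarithmic in triangle count.

// Minimal sketch: why lights in view drive ray cost more than triangle count.
// traceShadowRay() stands in for the renderer's actual trace call (e.g. an
// OptiX launch against the BVH); it is stubbed out here so the example runs.
#include <cstdio>
#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 pos; Vec3 color; };

// Stub: a real raytracer would walk the BVH and report whether the segment
// from 'point' to 'lightPos' is blocked by geometry.
static bool traceShadowRay(const Vec3& point, const Vec3& lightPos)
{
    (void)point; (void)lightPos;
    return false; // pretend every light is unoccluded
}

// One shadow ray per light per shaded point: cost scales linearly with the
// number of lights, independent of how many triangles sit in the BVH.
static Vec3 shadePoint(const Vec3& point, const std::vector<Light>& lights)
{
    Vec3 result = {0.0f, 0.0f, 0.0f};
    for (const Light& light : lights)
        if (!traceShadowRay(point, light.pos))
        {
            result.x += light.color.x;
            result.y += light.color.y;
            result.z += light.color.z;
        }
    return result;
}

int main()
{
    const std::vector<Light> lights = { {{0, 5, 0}, {1, 1, 1}},
                                        {{3, 2, 1}, {1, 0, 0}} };
    const Vec3 c = shadePoint({0, 0, 0}, lights);
    std::printf("shaded color: %.2f %.2f %.2f\n", c.x, c.y, c.z);

    // Rough ray budget at 1080p: primary rays plus one shadow ray per light per pixel.
    const long long pixels = 1920LL * 1080LL;
    for (long long numLights : {1LL, 8LL, 32LL})
        std::printf("%2lld lights -> %lld rays/frame\n",
                    numLights, pixels * (1 + numLights));
    return 0;
}

Under that simplified assumption, 32 lights at 1080p is already around 68 million rays per frame before any bounces, so culling lights aggressively will likely matter more than trimming triangle counts.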

This post has been edited by icecoldduke: 12 September 2018 - 05:53 PM

2

User is offline   Mark 

#42

Nothing to worry about since many maps won't be pushing any limits compared to the above video. :rolleyes:

Attached thumbnail(s)

  • Attached Image: example1.jpg

0

User is offline   MusicallyInspired 

  • The Sarien Encounter

#43

Just anecdotally, I have a GTX 970 and I'm not planning on upgrading any time soon, within the next five years at least. That could just be me, but I don't have $700-$800 CAD to waste on a video card.

This post has been edited by MusicallyInspired: 13 September 2018 - 07:33 AM

0

#44

MusicallyInspired, on 13 September 2018 - 07:32 AM, said:

Just anecdotally, I have a GTX 970 and I'm not planning on upgrading any time soon, within the next five years at least. That could just be me, but I don't have $700-$800 CAD to waste on a video card.

It will be interesting to see how well this tech scales across various GPUs, and I hope you guys will try stuff out and report back performance metrics. I've preordered the RTX 2080, and I will be using it as an external GPU over Thunderbolt 3 (I no longer own a desktop, since I have to travel so much). This basically means my 2080 is going to run roughly 20%-30% slower as an external GPU. I will also test on my internal GPU, which is a 1080. Hopefully, with everyone testing on a broad range of hardware, we can figure out a solid spec range.
1

User is offline   Forge 

  • Speaker of the Outhouse

#45

I have a 650ti and a fire extinguisher
5

User is offline   Romulus 

#46

Another question I have in mind: DXR has a fallback emulation layer; does OptiX have anything similar?
1

#47

Romulus, on 14 September 2018 - 07:12 AM, said:

Another question I have in mind: DXR has a fallback emulation layer; does OptiX have anything similar?

Yes, I'm currently running OptiX on my 1080, which doesn't have RT acceleration on the die (I can't pick up my 2080 until the 21st).

Forge, on 14 September 2018 - 06:55 AM, said:

I have a 650ti and a fire extinguisher

Posted Image
1

User is offline   Tea Monster 

  • Polymancer

#48

Forge, on 14 September 2018 - 06:55 AM, said:

I have a 650ti and a fire extinguisher


I raise you a TNT Vanta and a dewar of liquid nitrogen.
0

User is offline   MusicallyInspired 

  • The Sarien Encounter

#49

Diamond Monster 3D Voodoo 1 PCI add-on card with a Matrox Mystique buried in the ice in Antarctica.

Posted Image

But seriously, I still have it. It's in my old Pentium rig. Great for running Unreal and Half-Life. I miss 3Dfx.

Sorry for derailing...

This post has been edited by MusicallyInspired: 14 September 2018 - 11:09 AM

1

#50

Here's my equivalent of a .plan update:

I've committed quite a bit to my GitHub over the past couple of days. I've got OptiX initializing, and the API setup is ready to go. The different raytracing compute shaders now compile from CUDA to PTX (basically a low-level compute language, which OptiX requires) and are currently saved in an "nv_compute" folder in the root folder structure. The raytracing implementation currently lives in its own DLL (ref_optix), with a generic API interface (see ref_common for more details); basically, all a hardware raytracer needs is a texture to render to.

There are a couple of reasons I chose to put the raytracer in a DLL rather than embedding the system inside eduke32. First, as I mentioned, I want to leave the door open to test different raytracing APIs, and having the implementation in its own DLL makes that a lot easier. Second, I'm not exactly sure how to set up CUDA shader compilation with makefiles and have it work nicely (e.g. only recompile when a shader has changed); the Visual Studio IDE handles that well. When this project moves past the research stage we'll have to figure all of that out, but for now this was the path of least resistance that gives me the functionality I need.
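
For anyone wondering what a generic interface along those lines might look like, here is a hedged, header-style C++ sketch. Every name in it (IRaytraceBackend, RaytraceFrameDesc, CreateRaytraceBackend) is invented for illustration; the actual ref_common API may be structured quite differently.

// Hypothetical sketch of a "raytracer backend in a DLL" interface. None of
// these names come from the real ref_common code; they only illustrate the
// idea that the engine hands the backend a scene and a texture to render to,
// so ref_optix could later be swapped for a DXR or Vulkan RT backend.
#include <cstddef>
#include <cstdint>

struct RaytraceFrameDesc
{
    uint32_t width  = 0;          // dimensions of the output image
    uint32_t height = 0;
    uint32_t outputTextureId = 0; // texture the backend writes its result into
};

class IRaytraceBackend
{
public:
    virtual ~IRaytraceBackend() = default;
    virtual bool Init() = 0;                          // create the OptiX/DXR context
    virtual void UploadScene(const float* positions,  // (re)build the acceleration structure
                             size_t vertexCount) = 0;
    virtual void RenderFrame(const RaytraceFrameDesc& desc) = 0;
    virtual void Shutdown() = 0;
};

// Exported from the backend DLL (e.g. ref_optix.dll) and resolved by the host
// at runtime via LoadLibrary/GetProcAddress.
extern "C" IRaytraceBackend* CreateRaytraceBackend();

One appeal of hiding everything behind a C-style factory export like this is that the host never has to link against OptiX or the CUDA toolchain, so a second DLL built on a different raytracing API could drop in later without touching the engine.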

This post has been edited by icecoldduke: 14 September 2018 - 11:23 AM

4

User is offline   Paul B 

#51

MusicallyInspired, on 14 September 2018 - 11:03 AM, said:

Diamond Monster 3D Voodoo 1 PCI add-on card with a Matrox Mystique buried in the ice in Antarctica.

Posted Image

But seriously, I still have it. It's in my old Pentium rig. Great for running Unreal and Half-Life. I miss 3Dfx.

Sorry for derailing...


If I recall correctly, don't these Voodoo cards lack full OpenGL support, instead running their own stripped-down version of OpenGL called MiniGL? Not sure how compatible these cards would be now. Yeah, I'm just as bad; I don't want to derail this thread either. Sorry.

This post has been edited by Paul B: 14 September 2018 - 12:33 PM

1

User is offline   MusicallyInspired 

  • The Sarien Encounter

#52

Yeah, MiniGL. Those were the days.
0

#53

Nvidia just posted documentation on Turing (the RTX 20xx architecture). I found a couple of things:

Spoiler


So it looks like Turing provides hardware acceleration for iterating over the BVH acceleration structure during ray hit testing. More thoughts on this later.
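
For anyone unfamiliar with what that traversal involves, below is a rough C++ sketch of the stack-based BVH walk that the RT cores take over from shader code. All of the structures and helpers are invented for illustration (the triangle test is a stub), and the real traversal inside OptiX/DXR is opaque to the application.

// Rough sketch of the BVH traversal that Turing's RT cores offload from shader
// code. Every structure and helper below is invented for illustration.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Aabb { float lo[3], hi[3]; };
struct BvhNode
{
    Aabb    bounds;
    int32_t left  = -1;   // child node indices; -1 marks a leaf
    int32_t right = -1;
    int32_t triFirst = 0; // leaf only: range of triangle indices
    int32_t triCount = 0;
};

// Standard slab test: does the ray (origin, 1/direction) cross the box?
static bool rayHitsAabb(const float orig[3], const float invDir[3], const Aabb& b)
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int axis = 0; axis < 3; ++axis)
    {
        float t0 = (b.lo[axis] - orig[axis]) * invDir[axis];
        float t1 = (b.hi[axis] - orig[axis]) * invDir[axis];
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmin > tmax) return false;
    }
    return true;
}

// Stub: pretends every leaf triangle is hit at t = 1 so the example produces
// output. A real tracer would run an actual ray-triangle intersection here.
static bool rayHitsTriangle(int32_t triIndex, float& tHit)
{
    (void)triIndex;
    tHit = 1.0f;
    return true;
}

// Iterative traversal: cull whole subtrees with the AABB test, descend into
// surviving children, and only intersect triangles at the leaves. This loop,
// plus the triangle tests, is what Turing runs in fixed-function hardware.
static bool traceClosestHit(const std::vector<BvhNode>& nodes,
                            const float orig[3], const float invDir[3],
                            float& tClosest)
{
    bool hit = false;
    std::vector<int32_t> stack = { 0 }; // start at the root node
    while (!stack.empty())
    {
        const BvhNode& node = nodes[stack.back()];
        stack.pop_back();
        if (!rayHitsAabb(orig, invDir, node.bounds))
            continue;                   // whole subtree culled
        if (node.left < 0)              // leaf: test its triangles
        {
            for (int32_t i = 0; i < node.triCount; ++i)
            {
                float t;
                if (rayHitsTriangle(node.triFirst + i, t) && t < tClosest)
                {
                    tClosest = t;
                    hit = true;
                }
            }
        }
        else
        {
            stack.push_back(node.left);
            stack.push_back(node.right);
        }
    }
    return hit;
}

int main()
{
    // One-leaf "BVH": a unit box around the origin holding a single triangle.
    std::vector<BvhNode> nodes(1);
    nodes[0].bounds   = { {-1, -1, -1}, {1, 1, 1} };
    nodes[0].triCount = 1;

    const float orig[3]   = { 0.0f, 0.0f, -5.0f };
    const float invDir[3] = { 1e30f, 1e30f, 1.0f }; // direction (0, 0, 1); 1e30f stands in for 1/0
    float tClosest = 1e30f;
    const bool hit = traceClosestHit(nodes, orig, invDir, tClosest);
    std::printf("hit: %d, t = %f\n", hit ? 1 : 0, tClosest);
    return 0;
}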

Spoiler


I wonder if NGX can be an alternative to DLSS for indie devs (like us) who want to enable modders to create custom content. DLSS seems to have a huge downside: it doesn't look like it can accelerate user-generated content (since the neural networks have to be trained on already-developed content). So could apps render only a segment of a frame with the key elements and let NGX's neural network fill in the blanks with tensor core acceleration?

This post has been edited by icecoldduke: 14 September 2018 - 02:57 PM

0

User is offline   TON 

#54

icecoldduke, on 14 September 2018 - 02:51 PM, said:

I wonder if NGX can be an alternative to DLSS for indie devs (like us) who want to enable modders to create custom content.


https://youtu.be/YNnDRtZ_ODM?t=22m51s
1

#55

TON, on 15 September 2018 - 06:40 AM, said:


I saw that interview live. The biggest issue I see for DLSS in eduke32 is mod content. The neural network can't upscale something it's never been trained to see. It will be interesting to see how Nvidia deals with DLSS and mods going forward. That's why I was interested in the above tech; I wonder what heuristics have to be fed into that neural network to have it fill in those gaps, or if it's the same thing as DLSS and I'm simply not reading the whitepaper properly :rolleyes:.

This post has been edited by icecoldduke: 15 September 2018 - 07:40 AM

0

User is offline   lamduck 

#56

This sounds great! I can't wait to see what you come up with!

Here is a Quake 2 real-time path-tracing engine:

http://amietia.com/q2pt.html

It may give you some hints or insight. I hope the person who wrote it updates it for the new Nvidia cards. I run it on a GTX 780 with settings that give me about 30 fps at 1024x768; it's quite good even on an older card.
1

#57

lamduck, on 15 September 2018 - 11:00 AM, said:

This sounds great! I can't wait to see what you come up with!

Here is a Quake 2 real-time path-tracing engine:

http://amietia.com/q2pt.html

It may give you some hints or insight. I hope the person who wrote it updates it for the new Nvidia cards. I run it on a GTX 780 with settings that give me about 30 fps at 1024x768; it's quite good even on an older card.

That's awesome, I completely forgot about that project. Thanks for posting it :rolleyes:.
0

User is offline   Micky C 

  • Honored Donor

#58

Hold on I'm confused, why is there talk of ANNs here?
0

#59

Micky C, on 15 September 2018 - 05:35 PM, said:

Hold on I'm confused, why is there talk of ANNs here?

ANN?
1

User is offline   Mark 

#60

MickyC forgot to take his meds again :rolleyes:
0
