Wall of text....
The basis of 3D is models, split into triangles.
Take a room, split the geometry into triangles. That's step #1.
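To make "split into triangles" concrete: a convex wall or floor polygon can be reduced to triangles with a simple triangle fan. This is a minimal illustrative sketch, not actual engine code, and it assumes the polygon is convex (concave shapes need real triangulation):

```c
#include <stddef.h>

/* A triangle as three indices into a polygon's vertex list. */
typedef struct { int a, b, c; } Tri;

/* Fan-triangulate a convex polygon with n vertices.
   Emits n - 2 triangles into out[]; returns the count. */
size_t fan_triangulate(int n, Tri *out) {
    if (n < 3) return 0;
    size_t count = 0;
    for (int i = 1; i < n - 1; i++) {
        out[count].a = 0;      /* every triangle shares vertex 0 */
        out[count].b = i;
        out[count].c = i + 1;
        count++;
    }
    return count;
}
```

A quad becomes 2 triangles, a pentagon 3, and so on; this is the kind of output the map compiler ultimately produces in bulk.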
Build/Doom/etc. software rendering doesn't need to do this yet.
Quake is 3D, but it would be unreasonable to use a modelling tool to do this. The map editor is more of a tool to create "pseudo models" using building blocks that are better suited for 3D geometry.
This then gets chopped into a far more optimized 3D model (and triangles) by an automated compiling tool.
Specially tagged geometry bits (e.g. doors) are treated in isolation and get flagged as an "entity"; otherwise something like a closed door would have its bottom/top removed, as they couldn't be seen.
This is our basis for "dynamic vs. static" geometry in Quake as now we have objects that aren't welded to the main geometry. Think of them as "props / models", except done with the same editor.
Next you have the issue of rendering this in 3D: what is visible and what is not, and determining what is in front and what is behind.
3D can be extremely heavy if you don't have this data, as even 3D accelerators only handled the drawing; you had to use the CPU to build those calls.
Here is where the approaches diverge radically between conventional 3D and something like Build.
Long story short: you can either approach this by pre-computing everything, or work around it through other methods.
If you pre-compute, then you can dedicate hours to finding the right rendering order for given angles and camera positions.
This is what the BSP and VIS passes typically do in Quake/Doom.
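The payoff of that precomputation is that, at run time, getting the right drawing order is just a cheap tree walk. Here is a toy sketch of back-to-front BSP traversal (painter's algorithm); the node layout is invented for illustration and uses a 1D "plane" for brevity, nothing like the real Doom/Quake structures:

```c
#include <stddef.h>

/* Toy BSP node: a splitting plane (just a 1D threshold here)
   and front/back children. Leaves hold a polygon id. */
typedef struct Node {
    double split;               /* plane position along one axis */
    struct Node *front, *back;
    int poly;                   /* valid when both children are NULL */
} Node;

/* Back-to-front: recurse into the far side first, then the near side,
   so nearer polygons overdraw farther ones. */
void draw_back_to_front(const Node *n, double camera, int *order, int *count) {
    if (!n) return;
    if (!n->front && !n->back) {          /* leaf */
        order[(*count)++] = n->poly;
        return;
    }
    if (camera < n->split) {              /* camera on the back side */
        draw_back_to_front(n->front, camera, order, count);
        draw_back_to_front(n->back,  camera, order, count);
    } else {                              /* camera on the front side */
        draw_back_to_front(n->back,  camera, order, count);
        draw_back_to_front(n->front, camera, order, count);
    }
}
```

The expensive part — choosing the split planes so this walk is always correct — happens offline, which is exactly why the compile can take hours while the traversal is nearly free.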
With Doom there is only 2D to worry about, so the complexity is vastly simpler than in Quake, where these computations can take hours to complete.
This exact same procedure is still required for Source 2 maps.
The tradeoff? As everything is pre-computed, there are very limited or nonexistent provisions for doing this in real time. If you move a door, for example, you're better off treating it as a simple "open"/"closed" state and storing those in the calculations. You can't have something like a huge vehicle, walk next to it as it moves, and have it not render what's behind it, even if it's taking up your whole screen. It's a reasonable trade-off, however.
But imagine carefully calculating a boxy room and having an earthquake transform its shape in real time. All of a sudden the square angles you have stored no longer apply. You might be able to swap between "start" and "end" states like a door, but you can see that there is much less flexibility already. Storing every possible position the map can be in isn't realistic.
This is why Doom/Quake/etc. have a limited set of "known entities" like doors and platforms, to make sure that any special optimizations are caught. It took until the expansion packs to even get basic rotating objects, and HL1 took this much further while still abiding by the same limitations.
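The "known entities" idea boils down to this: movable things are only allowed a handful of discrete states, and anything the offline passes computed gets stored per state. A hypothetical sketch (all names invented, not actual Quake code):

```c
#include <stdbool.h>

/* A door entity is restricted to discrete states so that whatever the
   offline passes computed (visibility, sealing) can be stored per state
   instead of per arbitrary transform. */
typedef enum { DOOR_CLOSED, DOOR_OPEN } DoorState;

typedef struct {
    DoorState state;
    bool blocks_visibility[2];   /* hypothetically baked per state */
} Door;

/* Runtime only ever switches between the baked states; a door frozen
   halfway has no precomputed answer -- which is exactly the limitation. */
bool door_blocks_visibility(const Door *d) {
    return d->blocks_visibility[d->state];
}
```

Anything that can't be expressed as a small set of baked states — the earthquake-warped room above — simply has no slot in this scheme.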
And here is where the big difference comes in: Build has no concept of "entities".
Build doesn't do ANY precomputation for geometry, visibility, lights, etc. It was built like this. It has a very clever way to accomplish this, and it's fast too!
I won't go into details on how this is done, but the end result is simply that the concepts of "static" and "dynamic" never existed, as no such "cheating" was done; it cheated differently, in that the output didn't need to be true 3D.
This is why there is zero extra tax when it comes to wildly heavy changes between frames: it doesn't rely on any pre-computed data and can simply approach each frame "fresh".
Now we are getting to the root cause of the problem...
Build gave you total freedom in how to handle geometry manipulation. Ken wrote some example "effects" that manipulate walls & sectors to create what we perceive as doors, elevators, etc., behaving much like entities.
Each one of these ran its own bit of code, completely separate from the engine (on the game side).
What about the games? Well, imagine 1994: you have 5+ developers who now want a "doom door" in their game. We're still on planet Earth, and just like any software development project, of course the result is that we now have 5 different incompatible implementations of the doom door, each with its own quirks, visual identity and tagging system. Rinse and repeat this for any "effects" a game might have, and suddenly you find yourself comparing your 300+ "dynamic objects" against a handful from Quake (or even games like HL2). But that's just part of the problem. Even accounting for just one game, each "effect" is merely a macro against the geometry, which means there is no real restriction on having multiple modifiers overlap (execute simultaneously, after each other, etc.), potentially giving very unpredictable results. Let's not even start on effect extensions via modding.
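To make the "effects are just macros against the geometry" point concrete, here is a hypothetical Build-style door effect, heavily condensed: game code directly mutates the live sector data every tick, and nothing in the engine stops a second effect from touching the same sector at the same time (all names and fields are invented for illustration):

```c
/* A wildly simplified Build-style sector: the live map data that
   effect code mutates directly every tick. */
typedef struct {
    int floorz, ceilingz;   /* heights in map units */
} Sector;

/* A "doom door" effect: raise the ceiling toward a target each tick.
   There is no entity system; this is just a macro over the geometry,
   and another effect editing the same sector would simply stack. */
void doom_door_tick(Sector *s, int target_ceiling, int speed) {
    if (s->ceilingz < target_ceiling) {
        s->ceilingz += speed;
        if (s->ceilingz > target_ceiling)
            s->ceilingz = target_ceiling;
    }
}
```

Each studio wrote its own variant of exactly this kind of function, with its own tags and quirks — hence five incompatible doom doors.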
Solid "dynamic" vs. "static" prediction is near impossible to define reliably. Even Duke64 simplified some effects to gain some extra shortcuts here (e.g. the actually-moving subway doesn't exist).
This is why Polymost gave up on this for the most part: it relies on some software-renderer code to do the setup and then has its own way of cutting the geometry into triangles in real time.
As "static" doesn't exist, you get computationally heavier 3D preparation that needs to occur constantly, and you get a gazillion draw calls, as it's just difficult to quickly build a reliable model out of this every frame. But it's still possible.
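The draw-call arithmetic is easy to illustrate: with no baked static mesh, every visible surface tends to become its own small batch every frame, while precompiled geometry can be merged into a handful of big batches. A back-of-the-envelope sketch — the numbers and the per-surface assumption are illustrative, not profiled from any engine:

```c
/* Rebuilding from live sector data each frame tends toward one call
   per visible surface (walls plus floor and ceiling per sector). */
int draw_calls_rebuilt(int visible_sectors, int walls_per_sector) {
    return visible_sectors * (walls_per_sector + 2);
}

/* Baked static geometry: merged clusters, one call each. */
int draw_calls_baked(int visible_clusters) {
    return visible_clusters;
}
```

Fifty visible sectors at eight walls each is already hundreds of calls per frame, versus a handful for the precompiled case — the "gazillion draw calls" problem in miniature.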
Polymer does some extra work on top of this, but as there is zero precomputation done, light passes like Quake's are impossible to perform. Lights basically always have to be dynamic, and that can be quite heavy.
A light pass on a Quake map is extremely heavy; imagine having to do these in real time. (Quake has on/off states and very limited dynamic glow for things like rockets.)
Dynamic lights in Build also don't have visibility data or the like to fall back on.
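Compare a baked lightmap — where the falloff math ran once, offline — with what a fully dynamic light has to do: re-evaluate attenuation against every lit surface point, every frame, with no visibility data to cull by. A minimal sketch of that per-surface work, using inverse-square falloff purely for illustration (real engines use various falloff models):

```c
#include <math.h>

/* Per-frame cost of one dynamic light: evaluate falloff at a surface
   point. A baked Quake-style light pass did this offline, once, and
   stored the result in a lightmap texel instead. */
double dynamic_light_at(double lx, double ly, double lz,
                        double px, double py, double pz,
                        double intensity) {
    double dx = px - lx, dy = py - ly, dz = pz - lz;
    double dist2 = dx * dx + dy * dy + dz * dz;
    if (dist2 < 1.0) dist2 = 1.0;        /* avoid blow-up at the source */
    return intensity / dist2;            /* inverse-square falloff */
}
```

Multiply this by every surface in range of every light, with no PVS to skip the hidden ones, and the cost of "always dynamic" becomes obvious.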
In short, the clever decisions that make Build what it is just don't translate well into de facto rendering practices, and it's a constant fight against the current, as you have to go against sanity in order to get the real output.