
Conversation

@portsmouth (Contributor) commented Apr 15, 2025

It was noted that the "hazy gloss" look on the right below, achieved with Standard Surface, is no longer achievable in OpenPBR:

This is because we have the physical effect of coat roughening baked in, i.e. a rough coat in OpenPBR will always roughen an underlying smooth metal/spec. That was done since it matches how it would work on a real material (a shiny metal under a rough coat can't magically appear shiny through the coat), but for a “hazy” look you do want the coat to effectively leave the base highlight undisturbed and add the blur on top.

That would most likely correspond physically not to a coat at all, but to the metal having an NDF with a wider tail than GGX, rather than a rough coat sitting on top. We can add this haze/tail control to the specular lobe, specifically to support this hazy gloss look, without having to "break" the physics by leaving a mirror spec unphysically unaffected by an overlying rough coat. (This breakage was supported in Standard Surface via "coat affect roughness", but we removed it. I think it would be much preferable to have a physically sound "GGX tail" control than to somehow restore that unphysical disabling of the coat roughening to fake the effect.)

This can easily be supported simply by having the base microfacet lobe be a blend of two lobes with different roughnesses (but otherwise identical anisotropy), which is how it is done in a number of other schemes.
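As an illustrative sketch of this dual-lobe construction (the function and parameter names here are placeholders, not the final spec names, and the α parametrization is an assumption), the blended NDF is just a convex combination of two isotropic GGX lobes:

```python
import math

def D_ggx(cos_theta_m, alpha):
    # Isotropic GGX / Trowbridge-Reitz NDF as a function of cos(theta_m).
    c2 = cos_theta_m * cos_theta_m
    t = c2 * (alpha * alpha - 1.0) + 1.0
    return (alpha * alpha) / (math.pi * t * t)

def D_dual(cos_theta_m, alpha_primary, alpha_secondary, tail_mix):
    # Convex blend of two GGX lobes with different roughnesses but
    # (in the full model) identical anisotropy.
    return ((1.0 - tail_mix) * D_ggx(cos_theta_m, alpha_primary)
            + tail_mix * D_ggx(cos_theta_m, alpha_secondary))
```

At `tail_mix = 0` this reduces exactly to the primary lobe, so the feature is a strict superset of the current single-lobe model.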


@portsmouth (Contributor Author) commented Apr 15, 2025

Here for example is the current situation, where one attempts to make a hazy metal look by varying the coat roughness $r_C$ (on top of smooth spec):

| $r_C$=0.1 | $r_C$=0.2 | $r_C$=0.3 |
| --- | --- | --- |
| (render `coat_0 1`) | (render `coat_0 2`) | (render `coat_0 3`) |

Instead (with no coat now, just the base metal) we can set the specular_tail_mix to 0.5, and vary the specular_tail_roughening $\Delta_r$:

| $\Delta_r$=0 | $\Delta_r$=0.3 | $\Delta_r$=0.6 |
| --- | --- | --- |
| (render `tm_0_tr_0 3`) | (render `tm_0 5_tr_0 3`) | (render `tm_0 5_tr_0 6`) |

Note how the coat also changes the look at grazing angles due to Fresnel, which may not be wanted, while the tail control does not affect that. This increases the degrees of freedom of the model in a useful way.

@portsmouth (Contributor Author):

This has been noted as a desirable feature multiple times. See below for some examples.

The exact details of the specular_tail_roughening parametrization and defaults need to be agreed on of course.


@portsmouth (Contributor Author) commented Apr 15, 2025

It's worth considering whether to add such a tail control to the coat lobe as well, though I think it's probably unnecessary. The look of the combination of coat and specular tail would also be interesting to check.

@portsmouth portsmouth marked this pull request as draft April 15, 2025 17:58
@chadgsg commented Apr 17, 2025

I'd like to advocate for the inclusion of a GGX tail in the OpenPBR material. As a user, I've become quite dependent on the Standard Surface's handling of the Coat parameter, which allows me to create a realistic haze or dust effect where it makes the most sense on surfaces. Breaking up the specular response on the coat enables more nuanced looks that enhance realism. I believe that implementing a GGX tail would significantly improve the versatility and quality of materials in OpenPBR. Thank you for considering this suggestion!

@portsmouth (Contributor Author) commented Apr 29, 2025

I'd note that an alternative parametrization is a single "shape" parameter for the GGX profile.

The Student's t-distribution NDF has such a parameter, $\gamma$. There is also "generalized Trowbridge-Reitz" (GTR), where Trowbridge-Reitz ($\gamma=2$ case) is GGX, and $\gamma\rightarrow\infty$ is Beckmann. (As far as I understand, the Student's t-distribution NDF is essentially GTR but properly taking into account shadowing-masking as well).

https://mribar03.bitbucket.io/projects/eg_2017/


For example in Rombo tools they call this the "reflection slope": https://www.rombo.tools/2021/12/12/reflection-slope/

In VRay they have "GGX tail falloff", with minimum value 2 corresponding to regular GGX: https://docs.chaos.com/display/VMAYA/VRayMtl+Reflection

Chaos implements GTR: https://www.chaos.com/cn/improvements-to-the-gtr-brdf

The Disney Principled BRDF used GGX for the specular base, but GTR (with $\gamma=1$) for the coat:

> For our BRDF, we chose to have two fixed specular lobes, both using the GTR model. The primary lobe uses γ = 2, and the secondary lobe uses γ = 1. The primary lobe represents the base material and may be anisotropic and/or metallic. The secondary lobe represents a clearcoat layer over the base material, and is thus always isotropic and non-metallic.
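For concreteness, here is a sketch of the GTR NDF (my own Python transcription, not code from any of the products above; the normalization constants are the ones given in the Disney course notes, and $\gamma = 2$ recovers GGX exactly):

```python
import math

def D_gtr(cos_theta_m, alpha, gamma):
    # Generalized Trowbridge-Reitz NDF; gamma = 2 recovers GGX.
    # Normalization constants per the Disney course notes (alpha != 1 assumed).
    a2 = alpha * alpha
    c2 = cos_theta_m * cos_theta_m
    if gamma == 1.0:
        norm = (a2 - 1.0) / (math.pi * math.log(a2))
    else:
        norm = (gamma - 1.0) * (a2 - 1.0) / (math.pi * (1.0 - a2 ** (1.0 - gamma)))
    return norm / (a2 * c2 + (1.0 - c2)) ** gamma

def gtr_mass(alpha, gamma, n=20000):
    # Numerical check of the projected-area normalization:
    # 2*pi * int_0^{pi/2} D(cos t) cos t sin t dt, which should be ~1.
    dt = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += D_gtr(math.cos(t), alpha, gamma) * math.cos(t) * math.sin(t) * dt
    return 2.0 * math.pi * total
```

Note that, unlike the two-lobe mix, this is a one-parameter family of tail shapes for a given roughness.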

Using GTR/Student-t would be more involved to specify though as we'd effectively be dictating a specific mathematical formula for the NDF which is a bit complicated to work with (probably having performance implications as well).

Also, though the theory of Student's t-distribution is nice, it's not really clear what this model brings visually that a simple mix of different roughnesses does not. It's plausible that the mix could actually be more expressive, since it's a two-parameter model, so it can e.g. combine a very sharp NDF with a very rough one, unlike GTR. (It also seems quite reasonable on physical grounds, as representing a microstructure consisting of a statistical mix of two "mesoscale" phases with different GGX roughness.)

The most general thing would be to have the "dual" GGX lobes, as well as the GTR shape parameter for each lobe. This seems probably like overkill though, and we could always add the tail parameters at a later date.

@virtualzavie (Contributor):

As we've said, we'd need artist input.

Although it is difficult to quantify, I think an important metric is how much additional expressivity is brought, with regard to the additional parameters. From that point of view, the $\gamma$ parameter doesn't seem to bring much expressivity, and the two lobe alternative looks more worthwhile to me.

@portsmouth (Contributor Author) commented May 9, 2025

Here is a wedge of specular_haze_mix $w_h$ versus specular_haze_roughening $\Delta_r$ (on a smooth metal):

| specular_haze_mix $w_h$ | $\Delta_r$=0 | $\Delta_r$=0.2 | $\Delta_r$=0.4 | $\Delta_r$=0.6 |
| --- | --- | --- | --- | --- |
| 0 | `m0` | `m0` | `m0` | `m0` |
| 0.1666 | `m0` | `m0 166_r0 2` | `m0 166_r0 4` | `m0 166_r0 6` |
| 0.3333 | `m0` | `m0 333_r0 2` | `m0 333_r0 4` | `m0 333_r0 6` |
| 0.5 | `m0` | `m0 5_r0 2` | `m0 5_r0 4` | `m0 5_r0 6` |
| 0.666 | `m0` | `m0 666_r0 2` | `m0 666_r0 4` | `m0 666_r0 6` |
| 0.8333 | `m0` | `m0 8333_r0 2` | `m0 8333_r0 4` | `m0 8333_r0 6` |
| 1.0 | `m0` | `m1_r0 2` | `m1_r0 4` | `m1_r0 6` |

(The top row and left-hand column are all equivalent to the original metal).

The specular_haze_roughening controls the broadness of the tail of the specular highlight.

As the specular_haze_mix increases (going down the columns), the brightness of this tail (of given width) varies, roughly speaking, presumably borrowing some energy from the primary specular highlight since energy is still conserved. Clearly there is a significant visual difference between the columns (i.e. in how the highlight behaves as the roughening is varied, for a given mix).

So it does seem quite natural to need a 2-dimensional parametrization.
I will check with e.g. a 1-dimensional GTR parametrization, but I don't see how it could provide the same level of expressivity.



Also to note, implementing the NDF mixture in a single BSDF seems quite trivial, since according to d'Eon's "A Hitchhiker's Guide to Multiple Scattering" the resulting shadowing is simply given by mixing the $\Lambda$ factors. Importance sampling taking the visible normals into account can be done using the formula he gives (or, more naively, one could just sample based on the mix weights).


(EDIT: though the multiple-scattering compensation taking the mix into account also needs to be computed, which would require two lookups into the albedo table etc. A more naive implementation can always just blend and sample the two separate BRDF lobes.)

These implementation details should be mentioned in the spec.
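The $\Lambda$-mixing idea can be sketched as follows (an illustrative Python fragment, assuming the mixture's $\Lambda$ is the weighted blend of the component $\Lambda$ factors as described above; names are mine):

```python
import math

def lambda_ggx(cos_theta, alpha):
    # Smith Lambda factor for isotropic GGX.
    if cos_theta >= 1.0:
        return 0.0
    tan2 = (1.0 - cos_theta * cos_theta) / (cos_theta * cos_theta)
    return 0.5 * (math.sqrt(1.0 + alpha * alpha * tan2) - 1.0)

def G2_mixed(cos_i, cos_o, alpha1, alpha2, w2):
    # Height-correlated Smith G2 for the mixed microsurface, with the
    # mixture Lambda taken as the blend of the component Lambdas.
    def lam(c):
        return (1.0 - w2) * lambda_ggx(c, alpha1) + w2 * lambda_ggx(c, alpha2)
    return 1.0 / (1.0 + lam(cos_i) + lam(cos_o))
```

With `w2 = 0` this collapses to the ordinary single-lobe G2, and increasing the weight of the rougher lobe monotonically increases shadowing, as expected.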

@portsmouth (Contributor Author) commented May 9, 2025

Note that I think Lama has the specular_tail_mix range over $[0, 1]$, with the top end meaning the 50/50 mix. This might be more intuitive ($[0,1]$ sliders are always nice).

Ah no -- the secondary lobe is always higher roughness. So specular_tail_mix > 0.5 is required to get the full parameter space. (I updated the grid of renders above to show this.)

Other possibilities would be to either:

  • have independent roughness sliders for primary and secondary lobes (so the primary lobe can be rougher), then the $[0, 0.5]$ mix range is sufficient.

  • keep the specular_tail_roughness as additive to the primary roughness, but allow it to be negative, thus decreasing the primary roughness.

I think both of these are probably less intuitive than having the specular_tail_roughness be additive (since the "tail" should really correspond to the higher roughness), with specular_tail_mix ranging over $[0, 1]$ (to allow the rougher secondary lobe to dominate).

Mention that the total roughness of the secondary lobe is clamped to [0,1].
@portsmouth (Contributor Author) commented May 10, 2025

A relevant paper that needs to be discussed is

"A Composite BRDF Model for Hazy Gloss", Pascal Barla, Romain Pacanowski, Peter Vangorp, EGSR 2018

In this paper they work with the same mixture model we're talking about, i.e. a blend of a primary lobe and secondary lobe differing in roughness. However they figured out how to re-write this as a sum of a "specular core" and a "surrounding halo", in such a way that the halo width can be adjusted independently of the core brightness, which they claim is important for artistic control:

> However, the manipulation of haze is only indirect in this case and requires trial and error. In particular, it is extremely difficult to create materials where haze varies spatially over a surface without affecting other material properties. When dealing with anisotropic materials, control becomes even more tedious since the number of parameters is increased.

In the figure in the paper, the top row varies the naive mixture weight (so the sharp reflections are overpowered by the secondary lobe as it gets rougher), and the bottom row varies their new "haziness" parameter (sharp reflections maintained).

In addition to haziness, they allow for independent anisotropy so the haze has two extra roughnesses (or haze "extents") for the tangent and binormal directions.

This is a well written paper and a nice contribution, but I feel it probably isn't appropriate to use in the OpenPBR spec:

  • Their method does not produce any look that is not achievable with the mixture model, except that in the latter, to maintain the "core brightness", the user would need to alter the reflectivity, via specular_weight say.

  • It actually seems less physically plausible, in the sense that in a real material with varying haziness due to differences in the local density of rough patches, the effect will not be to produce this preservation of the specular core brightness, but instead it will be like the simple mix where the hazy areas have less bright sharp reflections. Is keeping the specular core brightness constant really a desirable behavior (given that it probably is less realistic)? The requirement to keep the specular core uniformly bright is perhaps useful in some situations, but seems rather artificial, and the total energy is conserved in the simple mix scheme as well. (Perhaps they would argue that real materials do not really consist of a mixture of patches with different roughness, so one can think of the haziness purely as a visual effect to be designed, but it seems a reasonable first-order approximation of what is happening physically that would be good to correctly model).

  • Their method works technically by having the Fresnel of the primary lobe tied to the haziness, in order to conserve energy while keeping the core brightness invariant. This ends up making the primary Fresnel no longer simply related to the underlying material properties via the dielectric and conductor formulas we use in the spec. It also involves some slightly ad-hoc choices about how to define the exact color of the haze and core. So it adds a fair bit of perceptually-based complexity, that will make it a bit hard to reason about other physical aspects. (For example, how would nested dielectrics work if the dielectric Fresnel is just a heuristic designed to be perceptually nice, not based on the underlying Fresnel formulas for a physical material? Admittedly, we have a similar issue with the F82-tint model, but I'd rather not add another such difficulty).

  • Also, presumably this "haziness" parametrization could still be implemented (if needed) as a user-facing control on top of the underlying mixture-model based shader, since our model essentially describes the behavior of a physical material, which can be adjusted to produce the heuristic they describe. This seems a better approach to take, as it keeps our shader easy to implement and clearly defined, while still allowing for the Barla heuristic to be done (if needed) as a post-process for the artist facing controls, in a particular DCC or shader system.

  • Their allowance for the haze lobe to have independent anisotropy is nice, though I question how significant a visual difference it produces. (The renders they show suggest it's not very noticeable). That anyway is easily doable in the naive mix model, and could be added later if needed.

One thing that is clarified in their paper is the treatment of shadowing. As they note:

> One may assume the mixture to be constructed of relatively large patches (on a micro-scale) of the component microsurfaces. The masking-shadowing effects across different patches may then be considered as negligible. In contrast, if the two distributions are intertwined, then it becomes necessary to consider a compound masking-shadowing term.

The formula I quoted from d'Eon for the shadowing (i.e. blend the $\Lambda$ factors, not the $G$ factors) seems to apply to this "intertwined" case only. In the "large patch" regime (which seems more plausible to me, i.e. the scale of the vertical height variations is negligible compared to the horizontal scale of the patches), the inter-patch shadowing is negligible, so the BRDF is simply a blend of the separate BRDFs. In that case, when evaluating the BRDF, one simply blends (with NDF $D$ and shadowing $G$ per patch) via:

$$F (w_1 D_1 G_1 + w_2 D_2 G_2)$$

and sampling would simply stochastically choose between the two terms with probabilities $w_1$, $w_2$. The albedo is simply the blend of the separate albedos. I'd suggest we stipulate this explicitly, to make it easy to implement. (It seems doubtful that allowing for the inter-patch shadowing adds anything useful, but it could be experimented with later).
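This "large patch" evaluation can be sketched directly (an illustrative fragment with my own names; the standard $1/(4\cos\theta_i\cos\theta_o)$ microfacet denominator, omitted from the formula above, is included, and $F$ is treated as a scalar for brevity):

```python
import math

def D(c, a):
    # Isotropic GGX NDF, c = cos(theta_m).
    t = c * c * (a * a - 1.0) + 1.0
    return a * a / (math.pi * t * t)

def Lam(c, a):
    # Smith Lambda for GGX.
    t2 = (1.0 - c * c) / (c * c)
    return 0.5 * (math.sqrt(1.0 + a * a * t2) - 1.0)

def G(ci, co, a):
    # Height-correlated Smith shadowing-masking, per patch.
    return 1.0 / (1.0 + Lam(ci, a) + Lam(co, a))

def f_blend(ci, co, ch, F, a1, a2, w2):
    # Large-patch regime: f = F (w1 D1 G1 + w2 D2 G2) / (4 cos_i cos_o),
    # with ch the cosine of the half-vector angle.
    w1 = 1.0 - w2
    spec = w1 * D(ch, a1) * G(ci, co, a1) + w2 * D(ch, a2) * G(ci, co, a2)
    return F * spec / (4.0 * ci * co)
```

Sampling would then stochastically pick lobe 1 or 2 with probabilities $w_1$, $w_2$, and the albedo is the matching blend of the per-lobe albedos.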

@portsmouth (Contributor Author) commented May 14, 2025

Just to clarify why the parameter space for the specular tail is inherently 2-d and not 1-d, I found it helpful for intuition to think about it this way:

In the mixture model, we are mixing two separate BSDFs representing the specular core and specular haze. These have independently normalized NDFs say $D_c$ and $D_h$, controlled by roughnesses $r_c$ and $r_h$. We then mix these with haze mix weight $w_h$, producing (ignoring shadowing) the effective NDF

$$(1 - w_h) D_c + w_h D_h$$

which is also normalized.

So there are 3 independent parameters in total, i.e. $r_c$, $r_h$, and $w_h$ (two controlling the haze).

The mix weight $w_h$ controls effectively the ratio of the energy in the haze relative to the core, while the haze roughness $r_h$ can be varied completely independently of this.
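The normalization claim is easy to check numerically: each GGX lobe satisfies the projected-area normalization, so any convex mix of them does too. A sketch (the α values are arbitrary illustrative choices):

```python
import math

def D_ggx(c, alpha):
    # Isotropic GGX NDF as a function of cos(theta_m).
    t = c * c * (alpha * alpha - 1.0) + 1.0
    return alpha * alpha / (math.pi * t * t)

def ndf_mass(D, n=50000):
    # Projected-area integral 2*pi * int_0^{pi/2} D(cos t) cos t sin t dt
    # (midpoint rule); equals 1 for a properly normalized NDF.
    dt = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += D(math.cos(t)) * math.cos(t) * math.sin(t) * dt
    return 2.0 * math.pi * total

w_h = 0.5
D_core = lambda c: D_ggx(c, 0.1)                          # sharp core lobe D_c
D_haze = lambda c: D_ggx(c, 0.6)                          # broad haze lobe D_h
D_mix  = lambda c: (1 - w_h) * D_core(c) + w_h * D_haze(c)
```

So increasing $w_h$ only redistributes the (fixed) total projected area between core and haze, which is exactly the energy-borrowing behavior seen in the wedge renders.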

So these 4 regimes are all distinct (with energy balance maintained automatically via the core energy adjusting):

  • low $w_h$, low $r_h$: "weak, low roughness haze"
  • high $w_h$, low $r_h$: "strong, low roughness haze"
  • low $w_h$, high $r_h$: "weak, high roughness haze"
  • high $w_h$, high $r_h$: "strong, high roughness haze"

In principle, we could have both the core and haze further parametrized as Student-t lobes with independent $\gamma_c$, $\gamma_h$, for full flexibility, though most likely the mixing functionality provides a sufficient level of control.

@portsmouth (Contributor Author) commented May 20, 2025

I think we should really add this functionality while we still can. Since:

  • It is essentially a regression compared to Standard Surface, since there is no way to get the hazy look that was possible before via coat_affect_roughness. If people were relying on this, it would prevent them from switching to OpenPBR (as noted by Chad).
  • As evidence that it is useful/required, it exists in various forms in other uber-shader systems such as Lama, Disney Principled, VRay, Rombo tools.
  • The specific mixture model proposed is pretty standard and simple, matching what is done in Lama. This model has some theoretical pedigree, having been suggested/studied in papers [1], [2].
  • It adds only two extra parameters.
  • It will not be enabled by default, in which case there is no performance cost (other than that due to extra parameters/code). If enabled, it amounts to an extra evaluation of NDF & shadowing (in return for great expressivity/control).
  • An initial implementation for MaterialX as a blend of closures is easy to do. A deeper integration requires a change to the MaterialX spec, which can be done at a later date.

I would propose adding this to the (1.2) spec officially, getting an implementation into beta builds ASAP, and getting artist feedback to ensure we have the best choice of parameter naming, defaults, ranges etc. (As we have done for several other improvements, that were later iterated on).

[1] "A Composite BRDF Model for Hazy Gloss", Barla et al., EGSR 2018
[2] "The perception of hazy gloss", Vangorp et al., Journal of Vision 2017

@portsmouth portsmouth marked this pull request as ready for review May 21, 2025 03:11
@portsmouth (Contributor Author) commented May 31, 2025

A sensible suggestion from @masuosuzuki is to replace the specular_tail_roughening $\Delta_r$ (the extra roughness of the tail lobe) by a parameter specular_tail (say $\xi_t$) which dials the haze roughness between the base value and 1, such that the tail lobe has the total roughness $r_t$ given by:

$$r_t = (1 - \xi_t) r_c + \xi_t$$

given core/base roughness $r_c$.

Or in GLSL pseudo-code:

```glsl
float tail_roughness = mix(base_roughness, 1.0, specular_tail);
```

So then no roughness clamping is needed. This seems a bit more natural since the roughening parameter then applies over the whole $[0, 1]$ range. (Whereas with the previous approach, for a high core roughness, the roughening would not do anything for most of the range, due to the clamp).
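A side-by-side sketch of the two parametrizations (illustrative Python, with placeholder names) makes the difference concrete:

```python
def tail_roughness(base_roughness, specular_tail):
    # Masuo's proposal: lerp the tail roughness from the base value to 1,
    # r_t = (1 - xi_t) * r_c + xi_t.  Result always stays in [0, 1].
    return (1.0 - specular_tail) * base_roughness + specular_tail

def tail_roughness_additive(base_roughness, delta_r):
    # The earlier additive proposal, for comparison: needs an explicit
    # clamp, and saturates early when the core roughness is already high.
    return min(base_roughness + delta_r, 1.0)
```

With the lerp form, the full $[0, 1]$ range of the control remains meaningful at any core roughness, whereas the additive form goes dead once `base_roughness + delta_r` exceeds 1.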

I will re-do the render wedge above with this new parametrization.

@masuosuzuki suggested naming the $\xi_t$ parameter specular_tail. But if we use the name specular_tail_mix for the mix weight, this seems potentially confusing (since the mix has nothing to do with the roughness). Clearer naming might be:

  • specular_tail_weight: the mix weight of the two lobes
  • specular_tail: the $\xi_t$ parameter controlling the width of the tail.

This seems best to me, as we use our standard "weight" nomenclature for a genuine mix weight, and indeed $\xi_t$ controls the width of the "tail" (i.e. the extra roughness from minimal to maximal) so we can reasonably identify it with specular_tail.

Or alternatively, we could stick with:

  • specular_tail_mix / specular_tail_weight: the mix weight of the two lobes
  • specular_tail_roughening: the $\xi_t$ parameter controlling the width of the tail.

@portsmouth (Contributor Author) commented Jun 9, 2025

Obviously, if the core specular_roughness $r_c$ is 0 (as in the wedge done above), there is no difference between Masuo's parametrization and the additive-roughness one.

Here instead I fix the mix weight at 50/50, and vary the core roughness $r_c$, and Masuo's haze "spread" parameter $\xi_h$, i.e. the haze lobe roughness is defined to be

$$r_h = (1 - \xi_h) r_c + \xi_h = r_c + \xi_h (1 - r_c)$$

So in words, the "spread" $\xi_h$ is the fractional amount of "available" extra roughness to add.

| $r_c$ | $\xi_h$=0 | $\xi_h$=0.25 | $\xi_h$=0.5 | $\xi_h$=0.75 | $\xi_h$=1.0 |
| --- | --- | --- | --- | --- | --- |
| 0 | `c0 0_r0 0` | `c0 0_r0 25` | `c0 0_r0 5` | `c0 0_r0 75` | `c0 0_r1 0` |
| 0.1 | `c0 1_r0 0` | `c0 1_r0 25` | `c0 1_r0 5` | `c0 1_r0 75` | `c0 1_r1 0` |
| 0.2 | `c0 2_r0 0` | `c0 2_r0 25` | `c0 2_r0 5` | `c0 2_r0 75` | `c0 2_r1 0` |
| 0.3 | `c0 3_r0 0` | `c0 3_r0 25` | `c0 3_r0 5` | `c0 3_r0 75` | `c0 3_r1 0` |
| 0.4 | `c0 4_r0 0` | `c0 4_r0 25` | `c0 4_r0 5` | `c0 4_r0 75` | `c0 4_r1 0` |

@portsmouth (Contributor Author) commented Jun 9, 2025

I worry that specular_haziness would be confusing, as a high haziness would do nothing if the haze weight is zero. (Haziness only really works if there is a single parameter, as in the Barla paper.)

Instead, a naming proposal that both @masuosuzuki and I like, is:

  • specular_haze_weight - haze lobe mix weight ($w_h$ above)
  • specular_haze_spread - haze lobe roughening param ($\xi_h$ above)

As then:

  • The two parameters have a consistent prefix (and same length of name).
  • "spread" seems a reasonable metaphor for both the visual blurring, and the widening of the NDF function.
  • "spread" also sort of vaguely implies "additional roughening/shaping" (relative to the core) rather than "absolute roughness/shape", if read as a verb.

@AdrienHerubel AdrienHerubel marked this pull request as draft June 10, 2025 15:15
@portsmouth (Contributor Author) commented Jun 10, 2025

As noted by @ld-kerley, specular_haze_weight might be confusing since we generally use weight to control the mix weight of some independent lobe (e.g. coat, fuzz), whereas specular_haze_weight would have no effect if e.g. specular_weight is zero.

@AdrienHerubel's suggestion in the meeting was to instead use specular_haze without the "weight" suffix. So:

  • specular_haze - haze lobe mix amount ($w_h$ above)
  • specular_haze_spread - haze lobe roughening param ($\xi_h$ above)

I think that works very well, as then it reads quite clearly as specular_haze dialing the strength of the haze, and specular_haze_spread modifying the resultant haze. (And the specular_weight killing the haze would presumably be expected).

@peterkutz (Contributor):

Regarding the premise of this feature, I understand that it's not possible to achieve the desired effect by putting a rough coat on top of a smooth base, but it is possible to put a smooth coat on top of a rough base. That approach has drawbacks in practice, such as being difficult to tune with dielectrics due to the coat affecting the relative IOR of the base, but I thought it was worth pointing out that it is not entirely unachievable in the current model.

@portsmouth (Contributor Author) commented Jun 12, 2025

> Regarding the premise of this feature, I understand that it's not possible to achieve the desired effect by putting a rough coat on top of a smooth base, but it is possible to put a smooth coat on top of a rough base. That approach has drawbacks in practice, such as being difficult to tune with dielectrics due to the coat affecting the relative IOR of the base, but I thought it was worth pointing out that it is not entirely unachievable in the current model.

A smooth coat on a rough base does not quite produce the same look though. What is wanted is the unroughened base highlight, which the coat's Fresnel reflection won't necessarily capture (e.g. the color and Fresnel shape of a metallic base).

The coat will also generate Fresnel reflections, and darkening due to internal reflections, which may not be wanted if the user is using it only as a means to modulate the base highlight.

In the user's reference image, the goal is a perfectly sharp base metal, with the rougher lobe superimposed on it. This wouldn't be approximated very well by the Fresnel reflection of the coat combined with the rougher metal.

Of course, what he wants is not possible physically via a coat (if the coat is fully present), since if the coat has any finite roughness, the base must be roughened, in standard light transport. But what he wants is exactly what the tail/haze mixture model does, in a physically correct manner.

One possible approach is to try a partially present rough coat though. That should in theory produce a blend of the sharp and roughened metal highlights. Though it still seems a bit of a kludge, to achieve approximately (with all the, probably unwanted, layering effects) what is expressed more cleanly and generally with the haze model.


@peterkutz (Contributor):

That all makes sense. The haze model seems like a better solution for all but the few cases where a smooth coat on top of a rough base might produce satisfactory results.

@peterkutz (Contributor) commented Jun 17, 2025

The latest proposal with specular_haze and specular_haze_spread sounds good to me. Making the specular_haze_spread blend the haze roughness from specular_roughness to 1 seems reasonable, mainly because it's simple and doesn't involve clamping.

One practical downside of this feature is that it sounds like it requires a second specular lobe, whereas all current features can be implemented with a single specular lobe because they all share the same microfacet distribution. Introducing a second specular lobe could reduce performance, though it seems relatively straightforward at least.

Luckily, as long as the haze applies to all base specular effects (both metallic and dielectric), it seems like it should be possible to stochastically select which of the two lobes to use for any given shading point (as if the two lobes are not superimposed but are instead each covering a certain fraction of the surface area). This stochastic option would not only avoid an extra specular lobe, it could also avoid a second microfacet-multiple-scattering-compensation lobe.
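This stochastic-selection idea can be sketched abstractly (an illustrative fragment; `eval_core`/`eval_haze` are placeholder stand-ins for evaluating the respective lobe's contribution at a shading point):

```python
import random

def stochastic_blend(eval_core, eval_haze, haze_weight, rng, n=200000):
    # Per shading point, treat the surface as covered by exactly one of the
    # two lobes, chosen with probability given by the haze weight (as if the
    # lobes each cover a fraction of the surface area).  The average over
    # many points converges to the deterministic blend
    # (1 - w) * core + w * haze, so only one lobe is ever evaluated at once.
    acc = 0.0
    for _ in range(n):
        acc += eval_haze() if rng.random() < haze_weight else eval_core()
    return acc / n
```

In a renderer the same trick means only one microfacet distribution (and one multiple-scattering compensation lookup) is active per shading point, at the cost of some extra noise.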

@portsmouth (Contributor Author) commented Jul 8, 2025

It seems like we have a proposal that is a candidate for merging. We need to update the text to the current proposal.

@portsmouth portsmouth changed the title Specular tail proposal Specular haze Jul 8, 2025
@AdrienHerubel AdrienHerubel marked this pull request as ready for review November 25, 2025 17:17
@portsmouth (Contributor Author) commented Dec 5, 2025

In dd9e01e and c76565d I updated the text to match the decided-upon parametrization.

The default for specular_haze_spread might need adjustment after some testing/feedback, I'm not sure.


@portsmouth (Contributor Author):

@jstone-lucasfilm I recall you mentioned you will be working on the MaterialX graph implementation. Possibly that could be done in a separate PR?

@jstone-lucasfilm (Member):

@portsmouth Yes, I think that's the right approach here, and I'll plan to develop the MaterialX graph update in a separate PR.
