New Surfaces Materials?
-
On 22/04/2013 at 15:05, xxxxxxxx wrote:
Originally posted by xxxxxxxx
Q1 : How would I go about adding MIP delta to these shaders to make them look better?
Check the cd->d values and use them to drive your level of detail. The smaller the values, the greater your detail should be. The exact value range depends on your shader and how you design it. Clamp the value into a sensible range.
Especially for the checkerboard, you should read something about MIP mapping. You have to come up with a clever idea to blur the checkerboard. The greater the d values, the stronger the blur.
With Noise-based shaders, I usually just reduce the noise octaves with increasing d values.
Originally posted by xxxxxxxx
Q2 : This is an example I made using the VolumeData method to create a checkerboard shader. But the problem is that it's in world coordinates, so the pattern doesn't stick to the object when it moves. How do I write this kind of VolumeData shader so that it sticks to the object?
You can use the inverse texture matrix cd->tex->im and multiply cd->vd->p with it. That should transform it into local space. Actually, the local texture space (controlled by the coordinates in the Texture Tag), which is what you want to use in Cinema.
-
On 23/04/2013 at 08:55, xxxxxxxx wrote:
I can't figure out how to use cd->d to do that.
I also can't find anything resembling cd->tex->im in the SDK.
I understand the theory about inverting a matrix to change global to local. I do that fairly often with mesh points.
But I don't see how to write that code for VolumeData textures in the SDK.
-ScottA
-
On 23/04/2013 at 09:29, xxxxxxxx wrote:
Originally posted by xxxxxxxx
I can't figure out how to use cd->d to do that.
For the checkerboard shader example you could blend the result color with the mixed vector of colA and colB as the MIP radius increases. Distant points would become gray for a black/white checkerboard, which somewhat reflects the natural human perception of high-contrast patterns.
I would like to bring up again Texturing & Modeling: A Procedural Approach, a really good book which is worth the money. All topics covered in this thread are described in the book (procedural patterns, various noise types / fractal noises, anti-aliasing, MIP mapping and much more).
-
On 23/04/2013 at 10:35, xxxxxxxx wrote:
I appreciate the help.
But please don't point me towards any more Renderman-related resources. They aren't helping me.
I can't use these various Renderman resources until I know how to use the C4D SDK code first.
That book does discuss general theory. But it uses Renderman code to explain them. And the Renderman SDK code is very different from the C4D SDK.
Sure. I can convert some of the things like: Renderman: colors == C4D: Vector
But there are far, far too many Renderman-specific things (like noises and proprietary methods) that these books and websites use, which I'm having trouble converting to the C4D SDK.
I do have some Renderman tutorials, which show me how to do things like making a volumetric shader stick by parenting Maya shader nodes. But C4D doesn't work the same way.
At this point in time, I need to see only C4D SDK code to understand how this stuff works.
I know the basic principles between Renderman & C4D are the same. But the code is just too different for me to make the connection yet.
So please guys. I really do thank you for the help.
But these kinds of things do not help me at all:
-Renderman references.
-Broad generalizations like: Mix this color with that color vector. And then invert its matrix.
If you can't provide C4D-specific code, please don't waste your time trying to explain it to me in lecture form, because I won't understand how to write it, and you'll be wasting your time on me.
I don't want you guys to waste your valuable time on me.
I'm sure I will be able to use those resources later on to do more advanced things.
But first I need to see the C4D code on how to do these things. Not generalized theories.
Not only to understand the general theory. But to see how to write the actual C4D SDK code.
This is why I share so many plugins with the source code. So people like me who can't understand the SDK can see a working example of how to write the code.
It really does make all the difference in the world to see the actual code used in examples.
Looking at the large view count for this thread, I'm guessing that this is something many other people are also interested in, and that they have lots of questions about how to write the C4D code, like me.
-ScottA
-
On 23/04/2013 at 11:40, xxxxxxxx wrote:
Originally posted by xxxxxxxx
I also can't find anything resembling cd->tex->im in the SDK.
Oh, sorry. It's cd->vd->tex->im.
Originally posted by xxxxxxxx
I understand the theory about inverting a matrix to change global to local. I do that fairly often with mesh points.
But I don't see how to write that code for VolumeData textures in the SDK.
Just do exactly as I said: Multiply your cd->vd->p with the inverted matrix. It doesn't matter if it's the position of a mesh vertex or of a ray hit, the math is the same.
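In code, that is a single line inside Output(). A minimal sketch (check that cd->vd and cd->vd->tex are valid before using them):
    if (cd->vd && cd->vd->tex)
    {
        // tex->im is the inverse texture matrix: it transforms the global
        // ray hit position into the local texture space of the Texture Tag
        Vector localPos = cd->vd->p * cd->vd->tex->im;
        // ...generate your pattern from localPos instead of cd->vd->p...
    }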
-
On 23/04/2013 at 15:39, xxxxxxxx wrote:
Thanks Frank.
That's got it sticking to the object now.
Here's the code I have for people lurking:
Vector MyShader::Output(BaseShader *chn, ChannelData *cd)
{
    Vector colors;
    if(cd->vd)
    {
        Real r,s,t;
        Bool rs = cd->vd->GetRS(cd->vd->lhit,cd->vd->p,&r,&s);  //Use volume data to do the rendering
        Vector pos = cd->vd->p;                                 //Get the texture position
        Vector localPos = pos * cd->vd->tex->im;                //Multiply by the inverse texture matrix so the texture sticks to the object when it's moved
        localPos.x *= 2;                                        //The number of times to tile the shader
        localPos.y *= 2;
        localPos.x -= (LONG)Floor(localPos.x);
        localPos.y -= (LONG)Floor(localPos.y);
        //'offset' is a LONG class member (taken from the Mandelbrot example)
        if(localPos.x < offset && localPos.y > offset || localPos.x > offset && localPos.y < offset)
            colors = Vector(0,0,0);
        else
            colors = Vector(1,1,1);
    }
    return colors;
}
-ScottA
-
On 24/04/2013 at 01:07, xxxxxxxx wrote:
Good work! You see, it's not that difficult, once you actually dive into it.
Sorry about all those Renderman examples, but those are really the most suitable examples. It's like learning the Blues scale before becoming a metal guitarist. It makes you understand the basics
Your code can still be optimized. This is how I would do it:
Vector MyShader::Output(BaseShader *chn, ChannelData *cd)
{
    // hard-coded parameters (ideally, you get these from the BaseContainer in InitRender() and store them in private class members)
    const Real tileX = 2.0;
    const Real tileY = 2.0;
    const Real offset = 0.4;
    const Vector col1 = Vector(0.0);
    const Vector col2 = Vector(1.0);

    // Declare (but don't construct) sample position
    Vector pos(DC);

    if(cd->vd)
    {
        // Sample from 3D space
        pos = cd->vd->p * cd->vd->tex->im;  // Get the sample position
    }
    else
    {
        // Sample from UV space
        pos = cd->p;                        // Get the sample position
    }

    // The number of times to tile the shader
    pos.x *= tileX;
    pos.y *= tileY;

    // Get local coordinates inside tile
    pos.x -= (LONG)Floor(pos.x);
    pos.y -= (LONG)Floor(pos.y);

    // Decide color
    if((pos.x < offset) != (pos.y < offset))
        return col1;
    else
        return col2;
}
Changes from your code:
- Hard-coded parameters: I moved all relevant parameters to variables to make the code more readable.
- cd->vd->p vs. cd->p: the shader can now sample both 2D and 3D space. That way, the 2D shader preview will work. Depending on whether cd->vd is NULL or not, only the way 'pos' is retrieved changes. The rest of the code is the same for both cases.
- Vector pos(DC): I declare the variable, but (via the DC parameter) I don't construct it yet. Constructing it wouldn't make sense, as I copy values into it later anyway.
- Where is 'Vector colors'? It's gone, you don't need it. Instead of copying the resulting color into 'colors' and then returning 'colors' at the end of the function, I simply return the resulting color immediately.
- Where are 'Real r, s, t'? Also gone, you don't need those. You didn't even use them in your code.
- Color decision: I changed that back to the shorter form. It's less comparison work, and you can achieve exactly the same with it.
- offset: You used the 'LONG offset' from the Mandelbrot example to compare it to the local tile coordinates (which are Reals between 0.0 and 1.0). That does not make much sense. It's now a const Real variable, along with the other parameters. Using a value of 0.5 will create even tiles; other values will change the look.
Now some general notes...
So please guys. I really do thank you for the help. But these kinds of things do not help me at all:
-Renderman references.
-Broad generalizations like: Mix this color with that color vector. And then invert its matrix.
If you can't provide C4D-specific code, please don't waste your time trying to explain it to me in lecture form, because I won't understand how to write it.
The thing is, you will never find ready-made code for exactly your problem. At least the probability is very low. And if you limit your search to C4D-specific code, you decrease your chances even more. You need to learn the methodology of all this: find *any* example code that is easy to understand (and it won't get any easier than those 80s Renderman examples), use it to learn the maths behind it (because the maths is always the same), look up the functions you don't know, and then write your own shader code as a Cinema plugin. That is how we all learned it.
And if someone tells you, as you say, to "Mix this color with that color vector" or "And then invert its matrix", then those things are not at all broad generalizations, but precise descriptions of what you have to do. If you don't know how to mix two vectors, then search for it on the web, or take a look into the SDK documentation. It's just math, there's nothing application-specific to it. But you can't expect people to write your shader for you.
Long story short: You have to do research yourself. If you don't know the maths, and don't know the functions you need, and you are not willing to use Google, read code from other systems, use the documentation and generally suck up knowledge like a sponge, you won't get very far.
I know the whole topic of shader programming is not the easiest to grasp. It is complex, and it involves quite some background knowledge that you have to look up and learn. But you must have the patience to do it.
Cheers,
Frank
-
On 24/04/2013 at 07:43, xxxxxxxx wrote:
Thanks again Frank,
I just put hard values in my example because it's a lot shorter to post a working example that way.
In a normal project I wouldn't hard-code them like that.
But thanks for pointing it out. Because that brings up another SDK method I'm having trouble with.
You removed GetRS() from my example. And I'm still not sure exactly how that method works.
It's not explained properly enough in the SDK for me to understand how it works.
This is what the SDK says:
Bool GetRS(const RayHitID& hitid, const LVector& p, Real* r, Real* s)
Calculate the R/S parameters for a point.
Parameters:
const RayHitID& hitid: The global RayHitID.
const LVector& p: The point.
Real* r: The returned R parameter. The caller owns the pointed value.
Real* s: The returned S parameter. The caller owns the pointed value.
Based on my prior experiences with GeData, I think what this method does is some kind of ray sending, and then it stores some data found from the ray in the variables r & s?
But what the heck are r & s?!
The SDK never says what these data values are, or what they can be used for.
This is why I need to keep talking in C4D SDK terms only as much as possible. Because in addition to being brand new at shader theory, I also don't understand many of the methods Maxon provides for us to use to make them. Once I have the SDK decoded, those Renderman examples will probably come in handy.
-ScottA
-
On 24/04/2013 at 08:20, xxxxxxxx wrote:
Hm, I just wanted to say "why not use Google", but everything I found was saying R and S are vectors in shading theory. But GetRS() returns two floating point numbers.
-
On 24/04/2013 at 13:08, xxxxxxxx wrote:
I agree, it's not really well explained in the SDK docs. However, the documentation does contain a little code example that you didn't quote in your posting (make sure you are using an up-to-date version of the SDK docs). And that does give us a hint of what exactly the function does:
result = [color a] * (1.0-r-s) + [color d]*r + [color c]*s
The following applies: 0.0 <= (r+s) <= 1.0. So r and s are barycentric coordinates that define a position within a polygon (the result of GetRS() tells us if the hit has occurred in the first or the second triangle of a quadrangle).
You can use them to weight any kind of values between the points of a polygon, be it colors or other values (e.g. the values of a vertex map), and construct interpolated values for any position in the polygon. Here is an explanation about barycentric coordinates that might be easier to understand than the article on Wikipedia:
http://mathworld.wolfram.com/BarycentricCoordinates.html
If we read on in the docs, the next function mentioned is GetWeights(). And there it is said more clearly:
Returns barycentric coordinates for a point on the surface of a polygon.
...blahblah...
Works similar to GetRS(), but has a higher quality.
Barycentric coordinates are useful for a lot of things, from texture mapping and distributing things on polygons up to physical calculations; and they are not a C4D-specific invention, by the way. Anyway, for a simple shader that generates a pattern, they are not needed. You can ignore GetRS().
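But just so the idea becomes a bit more tangible, here is a quick sketch of how r and s could be used to interpolate values across the hit polygon. The three corner colors are made-up placeholders, so treat this as a sketch only:
    Real r = 0.0, s = 0.0;
    if (cd->vd && cd->vd->GetRS(cd->vd->lhit, cd->vd->p, &r, &s))
    {
        // Placeholder values for three of the polygon's points
        const Vector colA = Vector(1.0, 0.0, 0.0);
        const Vector colC = Vector(0.0, 1.0, 0.0);
        const Vector colD = Vector(0.0, 0.0, 1.0);

        // Same weighting as in the SDK example quoted above:
        // result = a * (1.0 - r - s) + d * r + c * s
        return colA * (1.0 - r - s) + colD * r + colC * s;
    }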
-
On 25/04/2013 at 08:25, xxxxxxxx wrote:
"r and s are barycentric coordinates that define a position within a polygon"
Thanks. This is the missing information I was looking for.
I've picked up quite a lot of good new information. And at this point I think the only thing that I really, really need to know is how to control the MIP samples.
I'm puzzled by what you said before about using cd->d to do that. Because that's a Read-Only function. So I don't understand how that can be of any help. Other than just a means to monitor the d value?
Apparently I'm supposed to mix and blend something to do that. But I don't know what.
I do know how to use the math functions in the SDK:
Clamp(LReal a, LReal b, LReal x)
Mix(const LVector& v1, const LVector& v2, LReal t)
But what values do I plug in to these math functions?
Do I have to somehow sample the colors of the shader... then somehow blend the colors where I find two different colors?
I have no idea how I could do that.
If I can just get the shader code to do the MIP sampling and make them look better, I'll stop pestering you with questions.
At least for a little while.
-ScottA
-
On 25/04/2013 at 11:42, xxxxxxxx wrote:
Originally posted by xxxxxxxx
"r and s are barycentric coordinates that define a position within a polygon"
Thanks. This is the missing information I was looking for. I've picked up quite a lot of good new information.
You're welcome.
Originally posted by xxxxxxxx
And at this point I think the only thing that I really, really need to know is how to control the MIP samples. I'm puzzled by what you said before about using cd->d to do that. Because that's a Read-Only function. So I don't understand how that can be of any help.
The SDK says about d: "The MIP sample radius in UVW coordinates."
So it's what I said: The value tells you how much of the UV space is covered by your ray. That is why the values in d become greater when you move the camera further away from the rendered surface, or when the ray hits the surface at a rather flat angle. And the values get smaller when you move closer towards the surface, and/or view it from a steeper angle.
The value does not do any of the work for you, it just gives you an idea of how much you should reduce your shader's output detail.
Let's say your camera is really far away from the rendered surface. Actually, it is so far away that you reckon your shader should have minimum detail now. You hit render, and watch the d value that is passed to your shader. Let's say it's 0.005 (just a fantasy value).
Now you move the camera closer to the surface until you think, now would be a good distance to have your shader show full detail. You hit render, watch the d value, and find it's e.g. 0.0000001.
(d is a vector, and its components will most likely not have the exact same value, but I'm trying to keep this example simple).
To make it simpler, let's put the values of d into a single Real. Let's just average them. It's less precise and you might be happier with using both values separately, but for this example, it's ok.
Real easy_d = (cd->d.x + cd->d.y) * 0.5;
You should now clamp the easy_d values between the two values we looked up before. That will prevent your shader from becoming either unnecessarily detailed or ridiculously dull.
const Real min_delta = 0.0000001;
const Real max_delta = 0.005;
Real delta = Clamp(min_delta, max_delta, easy_d);
Now let's say your shader is based on a noise (easier to explain than with a pattern). If the noise in your shader has an 'octaves' value of 8.0 it looks superb, and with a value of 2.0 it looks rather crappy but still halfway acceptable (again, these are fantasy values. It all depends on your shader.).
You could now control the octaves value with your delta, making it look crappy if it's far away or seen from a flat angle, and making it look good when it's very near.
For this you could simply map the value of delta from a range of [0.0000001 ... 0.005] to an inverted target range of [8.0 ... 2.0], so that a greater delta means fewer octaves. The following code is a short form of the Range Mapper code, as shown e.g. here:
http://c4dprogramming.wordpress.com/2012/09/13/range-mapping/
const Real min_octaves = 2.0;
const Real max_octaves = 8.0;
Real octaves = (delta - min_delta) / (max_delta - min_delta);   // 0.0 (near) ... 1.0 (far)
octaves = max_octaves - (max_octaves - min_octaves) * octaves;  // many octaves near, few octaves far
Voilà, here you have a super simple example of how to use cd->d to control a shader's level of detail. In practice, you would probably still spend some time tweaking the values until the shader looks exactly the way you want.
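For completeness, this is roughly how those pieces could sit together in an Output() method. Treat it as a sketch: Turbulence() only stands in for whatever noise your shader is actually built on, and all the value ranges are still fantasy values:
    Vector MyNoiseShader::Output(BaseShader *chn, ChannelData *cd)
    {
        // Fantasy ranges; tweak them for your own shader
        const Real min_delta   = 0.0000001;
        const Real max_delta   = 0.005;
        const Real min_octaves = 2.0;
        const Real max_octaves = 8.0;

        // Average the MIP sample radius into a single Real and clamp it
        Real easy_d = (cd->d.x + cd->d.y) * 0.5;
        Real delta  = Clamp(min_delta, max_delta, easy_d);

        // Map the clamped radius to an octave count: many octaves near, few octaves far
        Real t = (delta - min_delta) / (max_delta - min_delta);
        Real octaves = max_octaves - (max_octaves - min_octaves) * t;

        // Sample position: local texture space when rendering, UV space for the 2D preview
        Vector pos(DC);
        if (cd->vd)
            pos = cd->vd->p * cd->vd->tex->im;
        else
            pos = cd->p;

        // Placeholder noise call; replace it with your shader's own pattern function
        Real n = Turbulence(pos, octaves, FALSE);
        return Vector(n);
    }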
Originally posted by xxxxxxxx
If I can just get the shader code to do the MIP sampling and make them look better. I'll stop pestering you with questions. At least for a little while.
Only you know what pattern your shader produces, and how it is done. There is no standard way of doing it efficiently. If you write an algorithm that creates a pattern, you have to write it in a way that its level of detail can be controlled.
If you rewrite a shader that already exists and that someone else originally wrote, then you should ask them.
I cannot give you the code of the CINEMA 4D checkerboard shader. First of all, because it's internal code; second, because the code in there is specific to the checkerboard pattern and does not necessarily apply to any other pattern; third, because it is highly mathematical and rather abstract, and I am under the impression that you are still struggling with the basics
There is, however, a good example of a filtered checkerboard shader. You'll probably hate me for that (or you recognize the sweet irony), but it's a renderman example:
http://www.renderman.org/RMR/Shaders/LGShaders/index.html
I found that and many other results that might be helpful with my very first Google search. You should be able to achieve the same. You have probably noticed that none of the code shown in this posting was in any way C4D-specific.
-
On 25/04/2013 at 15:30, xxxxxxxx wrote:
I get the idea how to create the delta value now.
But I still don't know how to apply this value to my own shader, which does not use a noise.
I looked at the Renderman checkerboard example in that link. And I think I can vaguely see how they are doing it: by comparing pixel spaces to the number of tiles ("frequency") to find the places in the shader where to apply this delta value, which will make those areas where the two colors meet a little bit fuzzy, which in turn will make the seams look better (smoother).
But those Renderman variables are getting in my way from converting it to C4D.
Du --> c4d?
du --> c4d?
dv --> c4d?
s --> c4d?
t --> c4d?
-ScottA
-
On 26/04/2013 at 01:08, xxxxxxxx wrote:
Originally posted by xxxxxxxx
I get the idea how to create the delta value now. But I still don't know how to apply this value to my own shader. Which does not use a noise.
That is something nobody can give you a ready-to-go solution for. You write the shader, so you have to come up with a solution for detail reduction.
For noise-based stuff, that's usually the octaves. For patterns that have gradients in them (like e.g. the tiny gradient between mortar and brick color in a brick shader), I would increase the size of the gradients, thus making the shader softer and less likely to produce aliasing artifacts in the distance. For a hard-edged shader like a checkerboard, you have to implement a kind of filtering to interpolate between all colors that theoretically lie in a rendered pixel.
But really, how you do that entirely depends on your shader. You have cd->d which gives you the MIP sample radius. What you do with it, only you can know.
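To make that last point a little less abstract, here is a naive sketch of the checkerboard from earlier with an edge softness that grows with the MIP radius. This is not how the built-in checkerboard shader works, and all values are made up; it only shows the 'widen the edge with cd->d' idea in code:
    Vector MyShader::Output(BaseShader *chn, ChannelData *cd)
    {
        const Real   tileX  = 2.0;
        const Real   tileY  = 2.0;
        const Real   offset = 0.5;
        const Vector col1   = Vector(0.0);
        const Vector col2   = Vector(1.0);

        Vector pos(DC);
        Real   fuzz = 0.001;                      // minimum edge softness
        if (cd->vd)
        {
            pos   = cd->vd->p * cd->vd->tex->im;  // sample position in local texture space
            fuzz += (cd->d.x + cd->d.y) * 0.5;    // widen the edge with the MIP sample radius
        }
        else
        {
            pos = cd->p;                          // UV space for the 2D preview
        }

        pos.x *= tileX;
        pos.y *= tileY;
        pos.x -= (LONG)Floor(pos.x);
        pos.y -= (LONG)Floor(pos.y);

        // Soft steps instead of hard comparisons
        Real a = Smoothstep(offset - fuzz, offset + fuzz, pos.x);
        Real b = Smoothstep(offset - fuzz, offset + fuzz, pos.y);

        // Soft XOR: 1.0 where exactly one of the two steps is 1.0
        Real check = a + b - 2.0 * a * b;
        return Mix(col2, col1, check);            // col1 where the hard version returned col1
    }
With increasing distance the two fields blend towards the average of both colors, which is exactly the "distant points become gray" behaviour mentioned earlier in the thread.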
Originally posted by xxxxxxxx
But those Renderman variables are getting in my way from converting it to C4D. Du --> c4d? du --> c4d? dv --> c4d? s --> c4d? t --> c4d?
Then you should fire up Google and look those things up. That's another reason why I like to choose Renderman shader examples: That is one of the most documented shader languages out there. There is tons and tons of material about how it works.
Here is something I found with my very first Google search about the topic. Check page 17.
http://nccastaff.bournemouth.ac.uk/jmacey/Renderman/slides/RendermanShaders1.pdf
Please, you have to understand a very basic fact: The world of shader programming is vast and infinite.
There is almost no ready-to-use solution for anything. There is probably no ready-to-use code for the API of your choice that does exactly what you want.
It requires a lot of understanding of theoretical concepts, as well as the ability to transfer those concepts to your case and your API. A lot of mathematics. A lot of patience to read papers and experiment. And you have to look stuff up in SDK documentation, references and on Google.
It's nothing you couldn't learn. In fact, I am sure you will learn it if you try. But it's not a topic that can be explained to you just like that. You have to sit down and learn.
I know how you feel. I felt the same way when I started programming, and then again when I started with 3D. And then again when I started with shaders. That's the life of a programmer, you never stop learning. The day you stop learning will be the day you should stop programming
-
On 26/04/2013 at 08:48, xxxxxxxx wrote:
OK.
I'll continue to try to figure it out. Thanks a lot for the help.
I still really wish that Maxon could give us at least one shader like the pavement shader as a guide to learn from.
Trying to learn it from Renderman examples is painful.
Thanks again Frank,
-ScottA
-
On 27/04/2013 at 01:05, xxxxxxxx wrote:
The Pavement shader wouldn't help you, as it's Noise based
-
On 27/04/2013 at 07:52, xxxxxxxx wrote:
Well. The pavement shader (with a different shape) is the end goal I'm trying to reach.
So it really would be a tremendous help to me to have it to learn from.
But since I can't have it. I have to start from scratch, and try to learn how to make the simpler ones first. Like lines and checkerboards. And hope I can get there in the end.
I'm making some progress though.
I've managed to figure out how to make lines and smooth them as little or as much as I want.
//This code creates black lines tiled in X&Y, giving the appearance of tiled squares
//The edges of the colors can be blurred as desired
Vector colors;
Real px = cd->p.x;
Real py = cd->p.y;
px *= 10;   //The number of repeating lines (tiles)
py *= 10;
px -= (LONG)Floor(px);
py -= (LONG)Floor(py);
Vector linecolor = Vector(0,0,0);   //The color of the line (the grout or gap color)
Vector mixcolor = Vector(1,1,1);    //The color of the areas surrounding the line (the squares)
Real LineWidth = 25.0 / 100;        //The width of the lines (size of gaps between squares)
Real fuzz = .050;                   //Amount to blur... fixes aliasing of sharp color transitions
Real distX = Abs(px - .5);          //Sharpness for the sides of the lines
Real distY = Abs(py - .5);          //Sharpness for the top & bottom of the lines
Real blendedX = 1 - Smoothstep(LineWidth - fuzz, LineWidth + fuzz, distX);
Real blendedY = 1 - Smoothstep(LineWidth - fuzz, LineWidth + fuzz, distY);
colors = Mix(mixcolor, linecolor, blendedX) * Mix(mixcolor, linecolor, blendedY);
return colors;
-
On 27/04/2013 at 08:14, xxxxxxxx wrote:
Originally posted by xxxxxxxx
But since I can't have it. I have to start from scratch, and try to learn how to make the simpler ones first. Like lines and checkerboards. And hope I can get there in the end.
Which is the better learning approach, imo.
-
On 27/04/2013 at 14:54, xxxxxxxx wrote:
Well done! A grid shader with soft lines! That's the spirit
From now on, you will have lots of fun.