Sample a shader in 3D space
-
Hm,
I might be wrong about this, but I always thought that
VolumeData.p
is a sample position in object space and ChannelData.p
is the sample position in UVW space. The latter would lie in the half-open interval UV(W) coordinates are normally defined in, while the former would lie in an arbitrary interval depending on the bounding box of the sampled geometry (i.e. 0 <= c <= 1
is not guaranteed to hold for the components of VolumeData.p
). So a geometry-agnostic shader should not care about VolumeData.p.
Also note that I deliberately sampled only on the z-axis in my example and still get a varying output. If your premise were true, I should get the same vector for all ten calls.
Cheers,
zipit -
Hi, sorry for not answering you earlier. I just wanted to let you know that I reached out to the development team about it, especially since you use multi-threading, and sampling a shader from multiple threads is different (for example, in my case your code does not work even on a 2D gradient).
I have a working approach, but it doesn't support all shaders (especially the Layer shader). So I will keep you posted when I have more news.
Cheers,
Maxime. -
Sorry for the delay, this took me a bit more time than expected.
I would say @zipit is completely right here, but since you are using multiple threads, you have to attach a fake VolumeData to each thread, since some shaders really need it and compute their output according to this VolumeData.
Here is a complete example.
// Class that holds per-thread sampling data.
class ShaderSampler
{
public:
    BaseContainer _rdc;
    BaseObject* _camera = nullptr;
    ChannelData _channelData;
    ::Ray _ray;
    VolumeData* _volumeData = nullptr;

    ShaderSampler() {};
    ~ShaderSampler()
    {
        if (_camera)
            BaseObject::Free(_camera);
        if (_volumeData)
            VolumeData::Free(_volumeData);
    };

    void SetSamplingContext(const Vector& uv, const Vector& delta)
    {
        VolumeData& volumeData = *_volumeData;
        const Vector d = Vector(delta.x, delta.y, delta.z);
        _channelData.d = d;
        volumeData.delta = d;
        const Vector p = Vector(uv.x, uv.y, uv.z);
        _channelData.p = p;
        volumeData.uvw = p;
        volumeData.p = p;
        volumeData.pp[0] = p;
        volumeData.pp[1] = p;
        volumeData.pp[2] = p;
        _ray.p = p;
        _ray.pp[0] = p;
        _ray.pp[1] = p;
        _ray.pp[2] = p;
    }
};

static maxon::Result<void> SampleMaterialBis(BaseDocument* doc)
{
    iferr_scope;

    // Get the BaseShader linked in the color channel of the first material.
    BaseShader* shader;
    MAXON_SCOPE
    {
        // Get the first material and check that it is a Cinema 4D standard material.
        BaseMaterial* firstMat = doc->GetFirstMaterial();
        CheckState(firstMat != nullptr, "Failed to get the first material");
        CheckState(firstMat->IsInstanceOf(Mmaterial), "Material is not a default C4D material");

        // Look up the BaseList2D linked in the color parameter of the material's BaseContainer.
        BaseContainer* matBC = firstMat->GetDataInstance();
        CheckState(matBC != nullptr, "Failed to get the BaseContainer");
        BaseList2D* matColorBL = matBC->GetLink(MATERIAL_COLOR_SHADER, doc);
        CheckState(matColorBL != nullptr, "Failed to get the color shader");

        // Cast to BaseShader.
        shader = (BaseShader*)matColorBL;
    }

    // Allocate and initialize the InitRenderStruct.
    // It is important to initialize it with a valid fake VolumeData, since some shaders need it.
    InitRenderStruct irs;
    VolumeData* volumeData = VolumeData::Alloc();
    BaseObject* camera = BaseObject::Alloc(Ocamera);
    CheckState(volumeData != nullptr, "Failed to initialize volume data");
    CheckState(camera != nullptr, "Failed to initialize camera");
    MAXON_SCOPE
    {
        BaseContainer renderData;
        const Int threadIndex = maxon::JobRef::GetCurrentWorkerThreadIndex();
        renderData.SetInt32(RDATA_VDFAKE_CURRENTTHREAD, threadIndex);
        renderData.SetInt32(RDATA_VDFAKE_THREADCOUNT, (Int32)GeGetCurrentThreadCount());
        const Bool attached = C4DOS.Sh->AttachVolumeDataFake(volumeData, camera, renderData, nullptr);
        CheckState(attached == true, "Failed to attach volume data");

        irs.vd = volumeData;
        irs.flags = INITRENDERFLAG::TEXTURES;
        irs.time = doc->GetTime();
        irs.docpath = doc->GetDocumentPath();
        irs.thread = GeGetCurrentThread();

        COLORSPACETRANSFORMATION transform = COLORSPACETRANSFORMATION::NONE;
        // Check if linear workflow is enabled.
        if (irs.linear_workflow)
            transform = COLORSPACETRANSFORMATION::LINEAR_TO_SRGB;

        const INITRENDERRESULT initResult = shader->InitRender(irs);
        CheckState(initResult == INITRENDERRESULT::OK, "Shader init failed");
    }

    // Allocate a list of ShaderSampler to store VolumeData and ChannelData per thread.
    maxon::BaseArray<ShaderSampler*> samplers;
    MAXON_SCOPE
    {
        for (Int32 i = 0; i < GeGetCurrentThreadCount(); i++)
        {
            ShaderSampler* sampler = NewObjClear(ShaderSampler);
            const Int threadIndex = maxon::JobRef::GetCurrentWorkerThreadIndex();
            sampler->_rdc.SetInt32(RDATA_VDFAKE_CURRENTTHREAD, threadIndex);
            sampler->_rdc.SetInt32(RDATA_VDFAKE_THREADCOUNT, GeGetCurrentThreadCount());
            sampler->_camera = BaseObject::Alloc(Ocamera);
            CheckState(sampler->_camera != nullptr, "Failed to initialize camera");
            sampler->_volumeData = VolumeData::Alloc();
            CheckState(sampler->_volumeData != nullptr, "Failed to initialize volume data");
            const Bool attached = C4DOS.Sh->AttachVolumeDataFake(sampler->_volumeData, sampler->_camera, sampler->_rdc, nullptr);
            CheckState(attached == true, "Failed to attach volume data");
            samplers.Append(sampler) iferr_return;
        }
    }

    // Create BaseArrays of matrices and colors to store the results of our sampling
    // and visualize them with an InstanceObject. The grid holds gridSize^3 samples.
    Int gridSize = 64;
    maxon::BaseArray<Matrix> matrices;
    maxon::BaseArray<maxon::Color64> colors;
    matrices.Resize(gridSize * gridSize * gridSize) iferr_return;
    colors.Resize(gridSize * gridSize * gridSize) iferr_return;

    // Worker lambda that samples one y-slice of the grid.
    auto worker = [&samplers, &shader, &matrices, &colors, &gridSize](UInt y)
    {
        // Retrieve the sampler data for the current thread.
        const Int threadIndex = maxon::JobRef::GetCurrentWorkerThreadIndex();
        ShaderSampler* sampler = samplers[threadIndex];
        if (sampler == nullptr)
            return;

        for (Int32 z = 0; z < gridSize; ++z)
        {
            for (Int32 x = 0; x < gridSize; ++x)
            {
                // Map the 3D grid cell to an index in the 1D array.
                Int32 arrayId = (z * gridSize * gridSize) + (y * gridSize) + x;
                matrices[arrayId] = MatrixMove(Vector(x, y, z));

                // Define where we want to sample.
                Vector uvw = Vector(double(x) / double(gridSize), double(y) / double(gridSize), double(z) / double(gridSize));
                Vector delta = Vector(0.01, 0.01, 0.01);
                sampler->SetSamplingContext(uvw, delta);

                // Sample and store the color result of the sampling operation.
                const Vector sampledValue = shader->Sample(&sampler->_channelData);
                colors[arrayId] = maxon::Color64(sampledValue.x, sampledValue.y, sampledValue.z);
            }
        }
    };

    // Execute the worker in parallel.
    maxon::ParallelFor::Dynamic(0, gridSize, worker, GeGetCurrentThreadCount());

    // Free the samplers.
    MAXON_SCOPE
    {
        for (Int32 i = 0; i < GeGetCurrentThreadCount(); i++)
        {
            ShaderSampler* sampler = samplers[i];
            if (sampler != nullptr)
            {
                DeleteObj(sampler);
                samplers[i] = nullptr;
            }
        }
    }

    // Call FreeRender to release the memory allocated for sampling.
    shader->FreeRender();
    VolumeData::Free(volumeData);
    BaseObject::Free(camera);

    // Create a multi-instance object with the matrix and color information to visualize them.
    InstanceObject* const instance = InstanceObject::Alloc();
    CheckState(instance != nullptr, "Failed to allocate an InstanceObject");
    MAXON_SCOPE
    {
        doc->InsertObject(instance, nullptr, nullptr);

        // Set to multi-instance and points draw mode.
        if (!instance->SetParameter(INSTANCEOBJECT_RENDERINSTANCE_MODE, INSTANCEOBJECT_RENDERINSTANCE_MODE_MULTIINSTANCE, DESCFLAGS_SET::NONE))
            return maxon::UnexpectedError(MAXON_SOURCE_LOCATION);
        if (!instance->SetParameter(INSTANCEOBJECT_DRAW_MODE, INSTANCEOBJECT_DRAW_MODE_POINTS, DESCFLAGS_SET::NONE))
            return maxon::UnexpectedError(MAXON_SOURCE_LOCATION);

        // Set the data in the instance object.
        instance->SetInstanceMatrices(matrices) iferr_return;
        instance->SetInstanceColors(colors) iferr_return;
    }
    EventAdd(EVENT::NONE);
    return maxon::OK;
}
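As a side note, the `arrayId` arithmetic in the worker is just the standard flattening of a cubic grid into a 1D array. The helpers below are hypothetical (the example inlines the arithmetic), but they show the mapping and its inverse, which is handy when post-processing the flat result arrays:

```cpp
// Flatten an (x, y, z) cell of a cubic grid into one index of a
// gridSize^3 array -- the same arithmetic used in the sampling loop.
inline int FlattenIndex(int x, int y, int z, int gridSize)
{
    return (z * gridSize * gridSize) + (y * gridSize) + x;
}

// Inverse mapping: recover the (x, y, z) cell from the flat index.
inline void UnflattenIndex(int id, int gridSize, int& x, int& y, int& z)
{
    x = id % gridSize;
    y = (id / gridSize) % gridSize;
    z = id / (gridSize * gridSize);
}
```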
Cheers,
Maxime. -
Thank you very much! I'll try that out ASAP!
Cheers,
Frank -
Almost perfect, thanks. But I noticed it only works halfway.
The thing is, all shaders are always sampled in UV space. I tried adding the 3D position data for sampling, too, by setting
volumeData.p
to the global position of the point to sample in SetSamplingContext(), but it is being ignored. For example, if the shader is a Noise shader, I can set it to whatever space I want, the result is always the same.
With my old code, that worked. It just couldn't sample certain shaders (Layer shader, Earth shader, etc.). Now those shaders work, but sampling seems restricted to UV space. How can I sample those shaders in texture space, object space, and world space (camera, screen, and the others can be neglected)?
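To illustrate the kind of conversion in question: since Sample() only looks at the UVW you hand it, one workaround is to convert a world-space position into texture space yourself before sampling. The sketch below uses plain stand-in structs and a deliberately simplified axis-aligned placement (offset + scale only; a real texture tag also carries rotation and a projection type), so it is an assumption-laden illustration, not SDK code:

```cpp
// Stand-in for a 3D vector (the real SDK type would be Vector).
struct Vec3 { double x, y, z; };

// Stripped-down texture placement: offset + scale per axis.
struct TexturePlacement { Vec3 offset; Vec3 scale; };

// World position -> texture-space UVW, assuming a simple axis-aligned
// "cubic" style projection. Purely illustrative.
Vec3 WorldToUVW(const Vec3& worldP, const TexturePlacement& tex)
{
    return Vec3{
        (worldP.x - tex.offset.x) / tex.scale.x,
        (worldP.y - tex.offset.y) / tex.scale.y,
        (worldP.z - tex.offset.z) / tex.scale.z
    };
}
```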
Thanks in advance,
Frank -
Any news on this?
Cheers,
Frank -
Hi, this completely slipped my mind, sorry. I will work on your topic in the upcoming days.
However, I'm not sure this is possible, since you would probably need to instantiate some VolumeData, which is not possible. But I will have a look.
Cheers,
No worries. I forgot about it, too, until I noticed the issue again recently.
I have sent an email to sdk_support(at)maxon(dot)net with some example code and more detailed info.
Let's continue the discussion there.
Cheers,
Frank -
- You can sample a channel; it works perfectly for things like the Layer shader.
- If you really have to sample a shader specifically, what I do is make a clone of the shader for each context.
I.e., if you have 16 cores, you make 16 clones, insert them into the material, and init them (being sure to clean up afterwards).
The Layer shader will then work as expected, right off the bat, as long as you use the thread index to reference your list of shader clones:
clones.shader_list[context.GetLocalThreadIndex()]
You also have to make sure you project your coordinate according to the texture tag; it won't do that for you.
p is always assumed to be a UV coordinate projected as per the texture tag. You send p as a world coordinate via vd, and if the shader needs that, that is where it would look. Think of it from the perspective of a shader: where would it get a world position from? If you write a shader and you need a world position for a hit, you look in vd for that. If vd is null, or the info has not been filled in, how would it know where p is?
I found that in most cases, sampling a channel from a material rather than a specific shader works best and is less problematic.
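The clone-per-thread pattern generalizes beyond shaders. A minimal sketch in plain C++ (stand-in FakeShader type, std::thread instead of the SDK's threading; in the real case each entry would be a BaseShader clone inserted into the material and initialized with InitRender):

```cpp
#include <thread>
#include <vector>

// Stand-in for a stateful, non-thread-safe resource such as a shader.
struct FakeShader
{
    int sampleCount = 0; // mutable per-clone state
    int Sample() { return ++sampleCount; }
};

// One clone per worker; each worker only ever touches clones[i],
// so no two threads share mutable state and no locking is needed.
inline void SampleWithClones(std::vector<FakeShader>& clones, int samplesPerWorker)
{
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < clones.size(); ++i)
    {
        workers.emplace_back([&clones, i, samplesPerWorker]() {
            for (int s = 0; s < samplesPerWorker; ++s)
                clones[i].Sample();
        });
    }
    for (auto& w : workers)
        w.join();
}
```

After `SampleWithClones`, every clone has been sampled exactly `samplesPerWorker` times, with no data races, which is the property Paul relies on when indexing his shader clones by thread index.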
best
Paul -
Thanks, Paul! I'll try that out
Cheers,
Frank -