Tracking 2D screen coordinates of objects for each frame (converting from 3D to 2D)
-
Hello,
I'm new to C4D development, so please point me in the right direction.
I'm not a motion designer but a professional Python developer, so some of my wording may be inaccurate.
I have a simple project with one camera and two static plane objects; the camera is flying over them.
The project is rendered to a PNG sequence. Each PNG may show one or both plane objects, or none of them.
I need to develop a Python script that iterates over each frame and gets the 2D X and Y coordinates of the planes' vertices.
The 3D world uses inches and centimeters, while the resulting PNGs are in pixels (1920x1080), so I'm looking for a method to map 3D coordinates to 2D.
I've learned how to iterate over frames and get the object and camera position and rotation at each frame, but when I try to map from C4D world space to screen space (maybe I need something else?), I get identical results for each frame.
Here is my code:
import json
import c4d
from c4d import documents, plugins, storage, gui
from pprint import pprint

obj = doc.SearchObject('holder0')  # object I need to track
fps = doc.GetFps()
minFrame = doc.GetMinTime().GetFrame(fps)
maxFrame = doc.GetMaxTime().GetFrame(fps)

for frame in range(minFrame, maxFrame + 1):
    time = c4d.BaseTime(frame, fps)
    doc.SetTime(time)
    doc.ExecutePasses(None, True, True, True, c4d.BUILDFLAGS_INTERNALRENDERER)
    c4d.EventAdd()
    view = doc.GetActiveBaseDraw()
    pprint(view.WC_V(obj.GetAbsPos()))
    pprint(view.WC(obj.GetAbsPos()))
    pprint(view.CS(view.WC(obj.GetAbsPos()), False))
The output for every frame is the same:
Vector(14.111, 11.025, 18.698)
Vector(39.784, 12.714, 44.158)
Vector(1720.364, -2.065, 44.158)
Vector(14.111, 11.025, 18.698)
Vector(39.784, 12.714, 44.158)
Vector(1720.364, -2.065, 44.158)
Vector(14.111, 11.025, 18.698)
Vector(39.784, 12.714, 44.158)
Vector(1720.364, -2.065, 44.158)
Thanx!
-
Hi,
that is a fun problem which basically boils down to a transform question. Your major 'mistake' is that you are asking for the projected vertices, but never even attempt to access them (and therefore also cannot project them). Here is a pruned solution for the problem:
"""Example for projecting points into the render frame. I did cut out most of the fluff in your script, for example the whole animation part, and did focus on the core problem, the projection. Your major mistake was, that you did not really access or attempted to access the vertices of your object and therefore did also never project them. But there are a few other hoops to jump through. To run this script, you have to select the object you want to project. Start reading in the main() function. """ import c4d def get_projected_points(rdata, view, points): """Projects a list of points in the world frame into the render frame. Args: rdata (c4d.documents.RenderData): The render data for the render frame to evaluate. view (c4d.BaseView): The view for the view frame to evaluate. points (list[c4d.Vector]): A list of points in the global frame to convert. Returns: list[c4d.Vector]: A list of integer value points representing pixel coordinates in the rendering for each input point in the input order, i.e. mapping n::n. """ # Okay, we have two problems here. # 1. A view does not have necessarily the same ratio as the rendered # image. We can compensate for that by taking its safe frame into # account. # 2. Even with the same ratio, the view still might be of a different # scale. We can take care of that with the RenderData of the document. # The frame and the safe frame of the the view. frame, safe_frame = view.GetFrame(), view.GetSafeFrame() # The two frame sizes frame_size = (frame["cr"] - frame["cl"], frame["cb"] - frame["ct"]) safe_frame_size = (safe_frame["cr"] - safe_frame["cl"], safe_frame["cb"] - safe_frame["ct"]) # We calculate the scaling factor between both frames sfx = safe_frame_size[0] / frame_size[0] sfy = safe_frame_size[1] / frame_size[1] # But this is not the finale scaling yet, we also need to take into # account the uniform scaling difference between the view port safe # frame area and the final rendered image size. # Get the render resolution. xres, yres = rdata[c4d.RDATA_XRES], rdata[c4d.RDATA_YRES] # Scale our scalings with the ratio between the actual render frame # and the safe frame in the view. sfx *= (xres / safe_frame_size[0]) sfy *= (yres / safe_frame_size[1]) # Now we build a transform for all that. i = c4d.Vector(sfx, 0, 0) j = c4d.Vector(0, sfy, 0) k = c4d.Vector(0, 0, 1) off = c4d.Vector(-safe_frame["cl"], -safe_frame["ct"], 0) correction = c4d.Matrix(off, i, j, k) # Converting the points first into the view frame and then transforming # them with our correction transform. result = [view.WS(p) * correction for p in points] # You might realize that these still are floating point values and they # have a z-component. The latter is because the world to view frame # conversion of BaseView.WS stores there the depth of the point. Also # just casting the values to integer values might result in jitter # for animations due to floating point precision. There is also the # whole rendering process with tis myriads of interpolations which make # it very unlikely that this will ever be pixel perfect. result = [c4d.Vector(int(p.x), int(p.y), 0) for p in result] return result def main(): """ """ # Cinema pre-populates a script manager module with the attribute 'op'. # This is the currently selected object. This would be your "holder0" # object. if op is None: msg = "Please select an object." raise ValueError(msg) # We get all the vertices of the node in the world frame. I did not deal # with cached point objects, i.e. anything parametric, you would have to # do this on your own. 
if not isinstance(op, c4d.PointObject): raise TypeError("Please select a point object.") mg = op.GetMg() # the global transform of the node points = [p * mg for p in op.GetAllPoints()] # I did change the view to the render BaseDraw, since this does make more # sense in your case, because you are after the coordinates in the # rendering after all. view = doc.GetRenderBaseDraw() rdata = doc.GetActiveRenderData() # Project and transform the points. projected_points = get_projected_points(rdata, view, points) # Print the data. for i, p in enumerate(projected_points): print(i, p) if __name__ == "__main__": main()
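If you also need this for every frame of your animation, you can wrap the projection into the time-stepping loop you already had. Below is an untested minimal sketch, assuming the get_projected_points() function from above and a selected point object; the frame stepping with SetTime() and ExecutePasses() is taken straight from your original script.

import c4d


def main():
    """Projects the selected point object's vertices for every frame."""
    if not isinstance(op, c4d.PointObject):
        raise TypeError("Please select a point object.")

    # The render view and render data, as in the example above.
    view = doc.GetRenderBaseDraw()
    rdata = doc.GetActiveRenderData()

    fps = doc.GetFps()
    min_frame = doc.GetMinTime().GetFrame(fps)
    max_frame = doc.GetMaxTime().GetFrame(fps)

    for frame in range(min_frame, max_frame + 1):
        # Step the document to the frame and let Cinema evaluate the scene,
        # just like in your original loop.
        doc.SetTime(c4d.BaseTime(frame, fps))
        doc.ExecutePasses(None, True, True, True,
                          c4d.BUILDFLAGS_INTERNALRENDERER)

        # Re-read the global matrix after each pass, since the object or
        # camera might be animated, then project the vertices.
        mg = op.GetMg()
        points = [p * mg for p in op.GetAllPoints()]
        print(frame, get_projected_points(rdata, view, points))


if __name__ == "__main__":
    main()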
Cheers,
zipit -
Hi,
without further feedback, we will consider this thread as solved by Monday and flag it accordingly.
Cheers,
Ferdinand