@karthikbp
As far as I can see at a glance, this seems to be correct. You already did the most important thing by using RDATA_RENDERREGION rather than some camera-cropping trick, so each tile renders with the full scene data.
There is, however, one issue, and how it is handled depends on the render engine. Concretely, kernels applied to your rendering (e.g., a box filter) will be slightly wrong at the seams of a tile, because data is missing there (the seam pixels of the neighboring tile) which would exist in a full rendering. E.g., when you have this,
---------------  ----------------
             x    y
   Tile A    x    y    Tile B
             x    y
---------------  ----------------
a rendering split into two tiles, Tile A and Tile B, where the x column marks the 'seam pixels' of Tile A and the y column marks the seam pixels of Tile B. If you had rendered the whole image in one go, processing one of the x pixels with a kernel would have included its neighboring pixels, i.e., also what are now the seam pixels y of the other tile. So, at the borders of a tile, any filter kernel you apply will not produce quite the same result as in a full rendering.
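To make the effect concrete, here is a minimal sketch (plain Python, not C4D API code) that filters one row of pixels as a whole and then as two tiles; the pixel values and tile split are made up for illustration:

```python
def box_filter(pixels):
    """3-tap box filter; edge pixels average over the taps that exist."""
    out = []
    for i in range(len(pixels)):
        taps = pixels[max(0, i - 1):i + 2]
        out.append(sum(taps) / len(taps))
    return out

row = [10, 20, 30, 40, 50, 60]

full = box_filter(row)                              # filtered in one go
tiled = box_filter(row[:3]) + box_filter(row[3:])   # filtered per tile

# Interior pixels match, but the seam pixels (indices 2 and 3) differ,
# because each tile is missing the neighboring tile's seam pixel.
print(full[2], tiled[2])   # 30.0 25.0
```

The interior pixels come out identical either way; only the pixels at the tile border diverge, which is exactly the seam error described above.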
To prevent that issue, Redshift renders render-region tiles with an extra border the size of the largest kernel used for that rendering. But the standard renderer does not do that, and other render engines which might support our render region setting might not do it either. In such cases, you would have to either live with the small error or render with an extra border yourself, sized for the kernels that will be used, and then crop the final result back.
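The manual workaround amounts to a bit of rectangle arithmetic; here is a hedged sketch (the function name and rect representation are my own, only RDATA_RENDERREGION itself is part of the C4D API): expand each tile's region by the kernel radius, clamped to the image bounds, render the padded region, and crop the padding off again.

```python
def pad_region(x1, y1, x2, y2, radius, width, height):
    """Expand a tile rect by `radius` pixels, clamped to the image bounds.

    Returns the padded rect plus the crop offsets needed to recover the
    original tile from the padded rendering.
    """
    px1 = max(0, x1 - radius)
    py1 = max(0, y1 - radius)
    px2 = min(width - 1, x2 + radius)
    py2 = min(height - 1, y2 + radius)
    # Offsets of the original tile inside the padded result.
    crop_x = x1 - px1
    crop_y = y1 - py1
    return (px1, py1, px2, py2), (crop_x, crop_y)

# A 100x100 tile at the top-left corner of an 800x600 image, 4 px radius:
padded, crop = pad_region(0, 0, 99, 99, 4, 800, 600)
print(padded, crop)  # (0, 0, 103, 103) (0, 0) - no room to pad up/left
# An interior tile gets padded on all four sides:
padded, crop = pad_region(100, 100, 199, 199, 4, 800, 600)
print(padded, crop)  # (96, 96, 203, 203) (4, 4)
```

Note that tiles touching the image border get no padding on those sides, which is fine: those edges would see the same missing-data behavior in a full rendering anyway.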
Cheers,
Ferdinand