I noticed some issues with motion blur in Indigo.
First thing: we don't have geometric deformation.
Example:
Here you can see one second of a rotation that covers 4000° over 4 seconds.
Exposure duration: 1/4.
The animation is based on 25 fps for the C4D project timebase and render timebase, with an output frame step of 1.
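Just to put numbers on this setup (a sketch, assuming the "1/4" exposure means 1/4 of a second): the object sweeps a very large arc during a single exposure, which is why the missing sub-frame sampling of the geometry is so obvious.

```python
# Back-of-the-envelope numbers for the example above
# (assumption: the 1/4 exposure duration is 1/4 second).
rotation_deg = 4000.0          # total rotation over the animation
duration_s = 4.0               # animation length in seconds
fps = 25
exposure_s = 1.0 / 4.0         # exposure duration per frame

deg_per_second = rotation_deg / duration_s        # 1000 deg/s
deg_per_frame = deg_per_second / fps              # 40 deg between consecutive frames
deg_per_exposure = deg_per_second * exposure_s    # 250 deg swept during one exposure

print(deg_per_frame, deg_per_exposure)            # 40.0 250.0
```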
You can easily notice that there is no frame subsampling anywhere, which does not correspond to a real camera. So how can we fake geometric deformation? Let's think about some sort of subsampling.
If we crank the project timebase up to 500 fps (20 times higher), we get a subsampling factor of 20. But of course, nobody wants to render an animation at 500 fps.
For still images, this trick works fine, but not for animations.
So what we have to do is set 500 fps everywhere, but only save every 20th frame (remember the subsampling factor of 20).
Take a look at the "Bildschritt" (frame step) value in the screenshot: with this technique we can add some sort of real geometric deformation to Indigo's motion blur.
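For hosts or exporters that don't offer a frame-step setting like C4D's "Bildschritt", the same trick could be approximated after rendering: output the full 500 fps sequence and keep only every 20th image (this means rendering frames you throw away, so a native frame-step option is preferable). Just a sketch with hypothetical file names and folders, not an Indigo/C4D feature:

```python
import shutil
from pathlib import Path

# Hypothetical layout: a 500 fps render saved as frame_0000.png, frame_0001.png, ...
SRC = Path("render_500fps")
DST = Path("render_25fps")
SUBSAMPLING = 20  # 500 fps / 25 fps

DST.mkdir(exist_ok=True)
frames = sorted(SRC.glob("frame_*.png"))

# Keep every 20th frame and renumber, so the result plays back as a 25 fps sequence.
for out_index, frame in enumerate(frames[::SUBSAMPLING]):
    shutil.copy(frame, DST / f"frame_{out_index:04d}.png")
```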
Of course I don't know if this is possible in Blender or SketchUp.
So maybe the Indigo developers can think of a workaround, or implement real geometric deformation.
But motion blur has some other issues, imho.
I can't really understand why I have to add motion blur tags to moving objects to get them blurred, and then still get no camera motion blur unless I also add a tag to the camera.
Indigo should work without any tags.
Motion blur should always be "activated" and depend only on my exposure duration.
This brings me to the point that Indigo doesn't seem to have a fully physically based camera model. If it had one, there would simply be no option for motion blur; it would just be there.
Can we expect improvements here in the future? Indigo is such a fantastic renderer; you could make it even simpler by making it more physically realistic.
Thank you in advance!
