How feasible is the notion of adding ray tracing to Nomad? Some of the latest bleeding-edge SoCs now have hardware-accelerated real-time ray tracing, like the Qualcomm Snapdragon 8 Gen 2 and the MediaTek Dimensity 9200.
Well I know it’s not the same but wouldn’t ray-tracing ultimately be applied to models that need it, like those rigged and used in Unity?
Of course you would do that, but it would be nice to be able to apply ray tracing whenever you want, right on your mobile device, without the need for a computer.
Zbrush has ray-traced ambient occlusion.
Already added in a test version. From the lag in the video it seems to be too heavy for the device, so it won’t be making it into the release version. But it’s possible.
What device was used for testing? Was the testing only utilizing the CPU or was it using a cutting-edge GPU with ray tracing hardware?
I presume iPad but may have been on Stephane’s desktop version.
If it was an iPad then no ray tracing hardware until later this year maybe.
If it is indeed possible for Nomad to utilize these new GPUs for ray tracing/path tracing calculations, then I’m pretty sure the performance improvement would be night and day. Cool stuff!
Path tracing is already on the iPad M1 / M2 in the new Octane X renderer. Stephane’s version uses a GitHub repo called YoctoGL.
Oh okay cool. I see here that Octane X just launched on iPad Pro M1/M2 six days ago. Huh. I guess I picked exactly the right (or wrong?) time to ask about this. Well I suppose this needs time to develop like anything else. It currently seems to be using GPGPU which is all well and good. However, harnessing the dedicated ray tracing logic in these new cutting-edge GPUs would really speed things up, methinks. Everything in its proper time, though.
I really want to see ray tracing in Nomad. I hope @stephomi adds this (maybe in a TestFlight version? I’d really like to test it).
Octane is cool, but painful to use on iPad (it crashes every time, and the UI is so… unfriendly).
If we have ray tracing in Nomad, we won’t need Octane on iPad anymore.
One app - one way!
“Ray Tracing” is a DirectX 12 feature (Windows), and the term was heavily used by NVIDIA to get a marketing edge instead of using “Real-time Lighting and Reflections” or simply “RTLR”. The feature is designed for use in games and requires drivers from NVIDIA. Unreal Engine 5 uses Lumen (software RT), which runs on the main shader cores, and I’m sure other engines will create their own versions of it too instead of relying only on hardware RT that uses dedicated cores. Also, hardware ray tracing is API-level.
Nomad Sculpt is Android- and iOS-based, so for real-time hardware RT it would need to be API-level and processed through dedicated cores like NVIDIA’s RTX or shader cores like AMD RDNA 2’s.
NS needs nothing more than a simple path tracer for export only, with options for shadows, reflections, and GPU acceleration. It’s a sculpting app, not a game or a 3D modeler like Blender.
Apple would need to develop silicon with cores allocated specifically for real-time ray tracing instructions. NVIDIA uses dedicated cores that are separate from the main silicon, so drivers are required to get them working in parallel efficiently. This hardware RT method is annoying because it adds driver overhead and headaches for developers. That’s why software RT like Lumen is being heavily favored: it’s all handled by the main silicon, thus natively supported, and requires little to no driver work.
Apple would be moving backwards by stepping away from the all-in-one benefits of the unified M-series silicon, so don’t count on hardware RT that requires dedicated cores. Apple is referring to software real-time lighting and reflections.
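To make the “simple path tracer for export only” idea concrete, here is a minimal sketch in plain Python (every name and the scene representation are illustrative; this is not Nomad’s or Apple’s API) of ray-traced ambient occlusion, the kind of offline effect ZBrush was mentioned to have earlier in the thread: cast random hemisphere rays from a surface point and count how many escape the scene.

```python
import math
import random

def ray_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest forward hit, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                      # ray line misses the sphere entirely
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None       # small epsilon avoids self-intersection

def ambient_occlusion(point, normal, spheres, samples=64):
    """Estimate AO at a surface point by casting random hemisphere rays."""
    unoccluded = 0
    for _ in range(samples):
        # random direction on the unit sphere, flipped into the hemisphere
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [x / norm for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-x for x in d]
        if not any(ray_sphere(point, d, c, r) for c, r in spheres):
            unoccluded += 1              # ray escaped to the "sky"
    return unoccluded / samples
```

Each sample is just intersection tests plus arithmetic, which is exactly the work a software RT path runs on ordinary shader cores; hardware RT only changes which unit executes those intersection tests.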
Alright, that’s interesting. I was under the distinct impression that ray tracing calculations can be executed much faster on bounding-volume-hierarchy processors like those in Nvidia’s RT cores (and Qualcomm’s Adreno 740 GPU), than on conventional shader processors.
What you’ve said now has me thinking that the ideal GPU design would integrate logic circuitry for bounding-volume-hierarchy operations directly into the shader ALUs themselves, thereby achieving the best and most efficient performance. The main API pipeline could integrate a stage for ray tracing together with the other stages like pixel shaders, vertex shaders, mesh shaders, etc., thereby eliminating the extra driver overhead.
I’m just a well-read layman, so if I’m wrong, please be gentle.
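For what it’s worth, the bounding-volume-hierarchy operations in question are easy to sketch. Below is a minimal, illustrative BVH traversal in plain Python (dict-based nodes; nothing here is from any real graphics API): a slab test against each node’s axis-aligned box, pruning whole subtrees the ray misses. The takeaway is that this workload is branchy, recursive pointer-chasing, which is what dedicated RT units accelerate and what tends to run as divergent, comparatively slow code on ordinary shader ALUs.

```python
def hit_aabb(origin, inv_dir, lo, hi):
    """Slab test: does a forward ray intersect the axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * inv, (h - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0              # slab entered from the far side
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax                  # overlap of all three slab intervals

def traverse(node, origin, inv_dir, hits):
    """Depth-first BVH walk: skip any subtree whose bounding box the ray misses."""
    if not hit_aabb(origin, inv_dir, node["lo"], node["hi"]):
        return                           # whole subtree pruned in one test
    if "leaf" in node:                   # reached a primitive: record it
        hits.append(node["leaf"])
        return
    traverse(node["left"], origin, inv_dir, hits)
    traverse(node["right"], origin, inv_dir, hits)
```

Building fixed-function units for exactly this box-test-and-descend loop is the idea behind RT cores; the integrated-into-the-ALUs design you describe would be doing the same work, just sharing silicon with the general shader datapath.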
Found this about ray tracing on iOS from Apple: