Enhanced reproject - guided reprojection by vertices or face groups

@stephomi I wonder if you can enhance the reproject option.

Currently, I reproject a textured low poly mesh onto my high poly sculpt - see screenshot.

I know it’s not done yet… only a speed sculpt! (White: only 0.5 million vertices - I plan to improve it - heck, it took “me” 2.5 hours… it’s only a hobby, OK?!?) Texture and retopo are from my youth (14.6k - that was a lot back then), based on the good old Daz3D topology, subdivided. And I know the painting is crap - I was 16…

However, I reprojected and calculated the normal map within NOMAD. The whole procedure is quite time consuming: at first you have to fit and squeeze the low poly mesh into almost the same position as its destination, and thereafter (via the move brush and others) correct and reproject the low poly mesh again and again. Actually, the process is quite intuitive - I saw some NOMAD tutorials doing this with a simple cube, but not a textured one… So… why not reuse this youngster approach in NOMAD? …I still like it :wink:

However, this could be improved, e.g. by assigning vertices from the low poly, textured mesh to their intended destinations on the high poly mesh. A workflow like ZWrap or Softwrap (a Blender add-on) would be appreciated (see their video from 00:11 to 00:44): simply click a source vertex on the low poly mesh and thereafter the target position on the high poly model. You might also consider a separate UI: first select target and source mesh, then display both meshes side by side (or top and bottom). First orient both meshes, then start assigning: click a source vertex on the low poly mesh, then the target position on the high poly mesh. Of course, both meshes rotate, zoom, pan, etc. in sync in their respective areas.
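To make the idea more concrete (purely my speculation about the mechanics, not how NOMAD works internally): a handful of clicked (source vertex, target point) pairs is enough to define a smooth space warp, e.g. a radial basis interpolation, that pulls the whole low poly mesh toward the high poly before the usual reproject pass. A minimal Python sketch assuming numpy; all names are made up:

```python
import numpy as np

def rbf_warp(vertices, src_pts, dst_pts, eps=1e-8):
    """Warp 'vertices' so each clicked source point lands on its target.

    Interpolates the displacement field defined by the (src, dst)
    correspondence pairs with a simple radial basis kernel.
    vertices: (N, 3); src_pts, dst_pts: (K, 3) clicked pairs.
    """
    disp = dst_pts - src_pts                                   # (K, 3)
    # Pairwise distances between the clicked source points.
    d = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    # Linear kernel phi(r) = r; solve for weights so the warp is
    # exact at the clicked points (eps regularizes the solve).
    w = np.linalg.solve(d + eps * np.eye(len(src_pts)), disp)  # (K, 3)
    # Evaluate the displacement field at every mesh vertex.
    dv = np.linalg.norm(vertices[:, None] - src_pts[None, :], axis=-1)
    return vertices + dv @ w
```

After such a warp, the existing vertex-wise reprojection would only have to bridge small distances instead of whole-mesh misalignments.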

I could also imagine face groups doing the job - e.g. marking the inner lid area of the eye on both meshes and (boom) reprojecting the assigned area more or less directly onto the corresponding area on the high poly sculpt (although, I guess, the ZWrap approach is more intuitive). Of course, the pose feature shown later in their video is also cool…
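A crude sketch of how the face group variant could behave (again just a guess at the mechanics; function and index names are hypothetical): snap every low poly vertex of the marked group onto the nearest high poly vertex of the corresponding group, then let the normal reproject/smooth pass refine the result.

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_group(low_verts, low_group, high_verts, high_group):
    """Snap marked low poly vertices onto the marked high poly region.

    low_group / high_group: vertex indices of the matching face groups
    (e.g. the inner eyelid marked on both meshes).
    """
    tree = cKDTree(high_verts[high_group])         # search target region only
    _, nearest = tree.query(low_verts[low_group])  # closest target vertex
    snapped = low_verts.copy()
    snapped[low_group] = high_verts[high_group][nearest]
    return snapped
```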

Btw: you might consider including some base meshes (there are so many CC-licensed ones out there: → Blender.org). Or even provide some paid, fully textured ones - or better: a store (you’re also in Europe) where others could sell their models, brushes… poseable, but with a pose brush (like Blender :wink: ).

BTW: your “Misc” section (see second screenshot) is getting very clunky/overcrowded… you might consider collapsing some areas (collapsible groups?) or adding another top menu entry. Just a hint, I don’t want to sound offensive…

You are not really supposed to waste time moving an object.
There are 2 main workflows:

  1. You already have a clean low poly object
  • Clone it (and hide the low poly)
  • Subdivide/voxel/dynTopo the clone and build the high poly asset
  • Then, when it’s ready, do the baking
  2. You don’t have any clean low poly
  • Do your high poly asset
  • Clone it (and hide the high poly)
  • Decimate/quadRemesh/retopo + UV unwrap and bake the high poly

If you really need to see the objects side by side, you should never move the object itself. Simply hide it and create an instance that you can move anywhere for reference.


To my understanding, ZWrap is mostly meant to match scan data to an existing low poly, but Nomad reprojection isn’t really meant to do that.
The idea in Nomad is that you do the asset inside Nomad (the low or the high, or both).
It’s mostly about transferring vertex data to texture (or the opposite, except for the normal map, which currently cannot displace vertices; technically possible but not trivial).
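For readers wondering what that transfer boils down to: bakers typically shoot a ray from each low poly sample along its normal, read whatever the high poly carries at the hit point (color, normal), and store it in the texture. A toy Python sketch of just the ray step, assuming trimesh is available; Nomad’s actual implementation is not public, so this is only illustrative - a real baker shoots one ray per texel via the UVs and encodes normals in tangent space:

```python
import trimesh

def sample_high_poly_normals(low, high, offset=1e-3):
    """For each low poly vertex, ray cast along its normal and grab
    the high poly face normal at the first hit.
    """
    # Start slightly behind the surface so the ray cannot miss
    # high poly detail sitting just below the low poly cage.
    origins = low.vertices - low.vertex_normals * offset
    hits, ray_idx, tri_idx = high.ray.intersects_location(
        ray_origins=origins,
        ray_directions=low.vertex_normals,
        multiple_hits=False)
    normals = low.vertex_normals.copy()  # fallback: keep low poly normal
    normals[ray_idx] = high.face_normals[tri_idx]
    return normals
```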

I know


Thanks for your explanation. I knew both already.
However, as there are NOMAD tutorials out there employing a cube for reprojecting, these workflows don’t seem to be well known (does anyone even use multiresolution?…). We need tutorials for them! And I’m resisting the urge to post on social media… And we need standard, high quality (textured) base meshes to start from. Maybe start a contest?

However, what I especially like about my suggestion is that you could employ two different target meshes. I could imagine using one layer like a shape key (Blender: morphing) to morph the character’s shape. I can imagine this would be cool. But we would need a split display.
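To make the shape key analogy concrete (a trivial sketch; the two inputs would be the same low poly mesh reprojected onto two different high poly targets):

```python
def blend_shapes(base_verts, target_verts, weight):
    """Shape-key style morph between two vertex arrays (e.g. numpy
    (N, 3)) of identical topology; weight 0.0 = base, 1.0 = target."""
    return (1.0 - weight) * base_verts + weight * target_verts
```

A layer slider driving 'weight' would then morph the character between the two reprojected shapes.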

And by the way, this split display could be used for so many different things: separate camera positions, render settings, the source mesh vs. its instances on the first display, etc.

OK, this has turned into more of a plea for split screen options (with synchronized rotation). And an asset library, and tutorials now…