Add Convex Decomposition and Minkowski Sum #663
Conversation
For the voro dependency, maybe you can use a submodule with https://github.com/chr1shr/voro? I think to simplify things here, parallelism (and the uncertain TODO part) can be left for a follow-up PR.
I’d be happy to do that, but wouldn’t it prevent us from adjusting the library for the needs of Manifold (i.e., with type aliasing or templates) in the future? Would Quickhull or Clipper be candidates for submoduling too?
They are already in submodules.
What's the performance like? Any chance we can throw together an offset to see how it compares to your simple one?
glm::dvec4 Manifold::Impl::Circumcircle(Vec<glm::dvec3> verts, int face) const {
  glm::dvec3 va = verts[this->halfedge_[(face * 3) + 0].startVert];
  glm::dvec3 vb = verts[this->halfedge_[(face * 3) + 1].startVert];
  glm::dvec3 vc = verts[this->halfedge_[(face * 3) + 2].startVert];
How about a mat3 and a for loop?
  double a_length = glm::length(a);
  double b_length = glm::length(b);
  glm::dvec3 numerator =
      glm::cross((((a_length * a_length) * b) - ((b_length * b_length) * a)),
Can we remove a few parens?
}
std::vector<Manifold> Manifold::Fracture(
    const std::vector<glm::vec3>& pts,
    const std::vector<float>& weights) const {
I'd like to avoid having two APIs for different precisions.
Is it okay to expect everything to converge to double only with #659?
IMO yes.
yes
src/manifold/src/manifold.cpp (Outdated)
// Step 4. Recalculate the circumcenters
for (size_t i = 0; i < uniqueFaces.size(); i++) {
  glm::dvec4 circumcircle = impl.Circumcircle(joggledVerts, uniqueFaces[i]);
Can we joggle the circumcircles instead of the verts? Overriding vertPos_ in an Impl function is pretty strange.
My ultimate consideration is to record which verts were joggled before computing the voronoi diagram, and then to unjoggle them afterward so that they occupy their original exact coordinates again...
I'm considering alternative strategies, like applying an invertible spatial distortion to all the input vertices and then its inverse to all of the output voronoi diagram vertices... I just have to find the right strategy that preserves precision...
/** @name Voronoi Functions
 */
///@{
std::vector<Manifold> Fracture(const std::vector<glm::dvec3>& pts,
Is fracture really meant to be a public function, or is it just a means to ConvexDecomposition?
I think it's worth making it public because it's a non-trivial operation that generalizes splitting by a single cutting plane, limiting the influence of each half-space. This can help make modelling cuts and edits more "local". Like, if you wanted to take a complicated, winding assembly and break it apart into multiple pieces, it's nice to be able to make cuts with a localized area of effect to avoid mangling the model far from the cut.
Also, it's useful for people looking to implement their own approximate decompositions based on their particular speed and accuracy needs.
And it's good for modelling lattices and packings, which is one of the primary things Voro++ is used for... it could even help replicate NTopology lattice generation functionality if each cell is contracted and then subtracted from a solid region.
There's a universe with a dual version of this function, where the user directly specifies splitting planes and their relative area of influence. This could be implemented by inserting two voronoi cells right next to each other with the plane normal as the offset between them. Perhaps this would be more intuitive?
/**
 * Compute the voronoi fracturing of this Manifold into convex chunks.
 *
 * @param pts A vector of points over which to fracture the manifold.
Can we have some detail on what these inputs do and the purpose of this function?
I think I'd like to get parallelism into this PR for a proper performance comparison... I think for shapes with nearly as many concave edges as faces, it's probably going to be slower, but for shapes with a "normal" amount of concavity, it should hopefully be comparable or faster... but even if it isn't, I'd consider having a fast reference implementation for convex decomposition (outside of CGAL and other GPL licensing) to be valuable in its own right (like for preparing models for use with collision detection systems). Also, I'm setting the PR to "ready for review" to see if I can get the other checks to run.
In general I would expect meshes with n edges to have O(n) concave edges - about half. Likewise I expect about half of polygon verts to be reflex. For things made of curves that are not specifically convex, this feels right, particularly as n increases.
@zalo you probably need a dummy commit; we should fix this CI issue later.
Codecov Report

@@            Coverage Diff             @@
##           master     #663      +/-   ##
==========================================
+ Coverage   91.66%   91.71%   +0.05%
==========================================
  Files          37       37
  Lines        4737     4828      +91
==========================================
+ Hits         4342     4428      +86
- Misses        395      400       +5

View full report in Codecov by Sentry.
I left and came back; it's still going 💀

EDIT: Ah, with the simpler model the decomposition is 60x slower, and the Minkowski sum is about 7500x slower… wow! And I was worried that my stuff wasn’t as cool as I thought it was 😅 There’s also the fact that I can’t find any other exact convex decomposition or Minkowski sum implementations online… maybe it is time to email Jyh-Ming Lien for his code… The alternative is to turn the exactness settings on VHACD and CoACD all the way up to see what they net…
I appreciate all the work here - it's cool stuff. But stepping back - we're talking about adding a whole new dependency and several new APIs in order to get... not much of a speedup over a simpler naive method. It's an interesting investigation, but are we convinced this is going in a direction we might merge?
As I have said, it is probably not fair to judge by the specific sphere-in-cube model because it is specifically crafted to show convex decomposition does not scale (you can have nearly as many convex parts as triangles). We should get some general benchmarks; perhaps I can help run it with openscad, iirc there are some models with quite a lot of minkowski sums.

That said, I think it can also be a good idea to split this into a separate package, as other parts of manifold do not depend on this. The same can be said for sdf as well (if we want to make complex changes to improve performance). Maybe we can set up an organization to host all the repos, which makes coordination and discovery easier. For example, the openscad interpreter I am writing will be GPLv2 because openscad uses GPLv2, so it cannot be used within the manifold repo, and the manifoldcad website will need to be a separate project to use it (and licensed under GPLv2).
By the way, there’s a story I’ve heard about this… OpenSCAD is GPLv2 because CGAL is GPLv2… and OpenSCAD still has CGAL as a dependency because… manifold doesn’t have an implementation for convex decomposition and minkowski()? Relicensing OpenSCAD will probably take more than eliminating CGAL, but I do see it as the first step on the road to change 🤔 (Also, an interesting thread about CGAL’s convex decomposition’s funny behaviors:) As a side note, @kintel, it seems like convex decomposition is only needed for non-convex + non-convex Minkowski sums; have you ever seen a good use-case for it in OpenSCAD?
FYI openscad/openscad#4825 lists the things that manifold still does not support, and I think we probably never want to support some of them. I think you need convex decomposition even for a convex + non-convex minkowski sum, so each convex + convex pair can be summed by pointwise addition and convex hull. I personally think that for an engine it is better to use a more permissive license (at most LGPL; GPL is probably a bit too restrictive) to allow commercial tools to use it and probably boost adoption, but for something that is not meant to be used as a library I feel that GPL is great. Probably sidetracked too much here.
Distributed OpenSCAD binaries are GPLv3, because CGAL is GPLv3. If we don't use CGAL, we can go back to GPLv2. A license change for the entire project is a completely different topic and a question of copyright holder preference, and not currently on the radar.
I did some tests with openscad; it is quite a bit faster in some cases (e.g. 1.82s vs 8.85s). Anyway, here is the detail of the segfault:
Problematic code: https://gist.github.com/pca006132/265586af5d7e4a31db899d2d9722c674
and the patch to openscad (note: you also need to point the submodule to your fork):

diff --git a/src/geometry/manifold/manifold-applyops-minkowski.cc b/src/geometry/manifold/manifold-applyops-minkowski.cc
index de17f279c..31f2d9b0b 100644
--- a/src/geometry/manifold/manifold-applyops-minkowski.cc
+++ b/src/geometry/manifold/manifold-applyops-minkowski.cc
@@ -66,145 +66,10 @@ std::shared_ptr<const Geometry> applyMinkowskiManifold(const Geometry::Geometrie
// but this could use substantially more memory.
while (++it != children.end()) {
operands[1] = it->second;
-
- std::vector<std::list<Hull_Points>> part_points(2);
-
- parallelizable_transform(operands.begin(), operands.begin() + 2, part_points.begin(), [&](const auto &operand) {
- std::list<Hull_Points> part_points;
-
- bool is_convex;
- auto poly = polyhedronFromGeometry(operand, &is_convex);
- if (!poly) throw 0;
- if (poly->empty()) {
- throw 0;
- }
-
- if (is_convex) {
- part_points.emplace_back(getHullPoints(*poly));
- } else {
- Nef decomposed_nef(*poly);
-
- t.start();
- CGAL::convex_decomposition_3(decomposed_nef);
-
- // the first volume is the outer volume, which ignored in the decomposition
- Nef::Volume_const_iterator ci = ++decomposed_nef.volumes_begin();
- for (; ci != decomposed_nef.volumes_end(); ++ci) {
- if (ci->mark()) {
- Polyhedron poly;
- decomposed_nef.convert_inner_shell_to_polyhedron(ci->shells_begin(), poly);
- part_points.emplace_back(getHullPoints(poly));
- }
- }
-
- PRINTDB("Minkowski: decomposed into %d convex parts", part_points.size());
- t.stop();
- PRINTDB("Minkowski: decomposition took %f s", t.time());
- }
- return std::move(part_points);
- });
-
- std::vector<Hull_kernel::Point_3> minkowski_points;
-
- auto combineParts = [&](const Hull_Points &points0, const Hull_Points &points1) -> std::shared_ptr<const ManifoldGeometry> {
- CGAL::Timer t;
-
- t.start();
- std::vector<Hull_kernel::Point_3> minkowski_points;
-
- minkowski_points.reserve(points0.size() * points1.size());
- for (size_t i = 0; i < points0.size(); ++i) {
- for (size_t j = 0; j < points1.size(); ++j) {
- minkowski_points.push_back(points0[i] + (points1[j] - CGAL::ORIGIN));
- }
- }
-
- if (minkowski_points.size() <= 3) {
- t.stop();
- return std::make_shared<const ManifoldGeometry>();
- }
-
- t.stop();
- PRINTDB("Minkowski: Point cloud creation (%d ⨉ %d -> %d) took %f ms", points0.size() % points1.size() % minkowski_points.size() % (t.time() * 1000));
- t.reset();
-
- t.start();
-
- Hull_Mesh mesh;
- CGAL::convex_hull_3(minkowski_points.begin(), minkowski_points.end(), mesh);
-
- std::vector<Hull_kernel::Point_3> strict_points;
- strict_points.reserve(minkowski_points.size());
-
- for (auto v : mesh.vertices()) {
- auto &p = mesh.point(v);
-
- auto h = mesh.halfedge(v);
- auto e = h;
- bool collinear = false;
- bool coplanar = true;
-
- do {
- auto &q = mesh.point(mesh.target(mesh.opposite(h)));
- if (coplanar && !CGAL::coplanar(p, q,
- mesh.point(mesh.target(mesh.next(h))),
- mesh.point(mesh.target(mesh.next(mesh.opposite(mesh.next(h))))))) {
- coplanar = false;
- }
-
-
- for (auto j = mesh.opposite(mesh.next(h));
- j != h && !collinear && !coplanar;
- j = mesh.opposite(mesh.next(j))) {
-
- auto& r = mesh.point(mesh.target(mesh.opposite(j)));
- if (CGAL::collinear(p, q, r)) {
- collinear = true;
- }
- }
-
- h = mesh.opposite(mesh.next(h));
- } while (h != e && !collinear);
-
- if (!collinear && !coplanar) strict_points.push_back(p);
- }
-
- mesh.clear();
- CGAL::convex_hull_3(strict_points.begin(), strict_points.end(), mesh);
-
- t.stop();
- PRINTDB("Minkowski: Computing convex hull took %f s", t.time());
- t.reset();
-
- CGALUtils::triangulateFaces(mesh);
- return ManifoldUtils::createMutableManifoldFromSurfaceMesh(mesh);
- };
-
- std::vector<std::shared_ptr<const ManifoldGeometry>> result_parts(part_points[0].size() * part_points[1].size());
- parallelizable_cross_product_transform(
- part_points[0], part_points[1],
- result_parts.begin(),
- combineParts);
-
- if (it != std::next(children.begin())) operands[0].reset();
-
- t.start();
- PRINTDB("Minkowski: Computing union of %d parts", result_parts.size());
- Geometry::Geometries fake_children;
- for (const auto& part : result_parts) {
- fake_children.push_back(std::make_pair(std::shared_ptr<const AbstractNode>(),
- part));
- }
- auto N = ManifoldUtils::applyOperator3DManifold(fake_children, OpenSCADOperator::UNION);
-
- // FIXME: This should really never throw.
- // Assert once we figured out what went wrong with issue #1069?
- if (!N) throw 0;
- t.stop();
- PRINTDB("Minkowski: Union done: %f s", t.time());
- t.reset();
-
- operands[0] = N;
+ auto m1 = ManifoldUtils::createMutableManifoldFromGeometry(operands[0]);
+ auto m2 = ManifoldUtils::createMutableManifoldFromGeometry(operands[1]);
+ auto result = manifold::Manifold::Minkowski(m1->getManifold(), m2->getManifold());
+ operands[0] = std::make_shared<ManifoldGeometry>(std::make_shared<manifold::Manifold>(result));
}
   t_tot.stop();

For the other benchmarks I am running those from https://gist.github.com/ochafik/70a6b15e982b7ccd5a79ff9afd99dbcf#results.
And I feel that there are a lot of opportunities to tune the performance for minkowski, e.g. duplicating the lhs or rhs parts, or deciding when to use the naive method. So the current result is definitely not the best we can get.
Thank you Kintel for the clarification, and thank you pca for setting up this comparison! I’ll have to read into how OpenSCAD’s fast method works (and likewise add the automatic switch to the naive method to see if it helps). That segfault looks like it’s happening in a pretty innocuous place, before the multithreading even starts (I’d assumed it came from not pre-allocating enough memory before multithreading)… I’m probably doing something stupid and I’ll look into it soon 💪

Also, from the thread of missing features, my PR does implement a fast “isConvex” check (which is just whether the number of found reflex edges is 0).
One additional optimization for convex-convex minkowski: you can probably filter out many points before doing the convex hull. E.g., if you copy part a to a vertex v of part b, you can probably quickly rule out a bunch of vertices inside the convex hull formed by the neighbors of v and any vertex u in a with a halfspace test. Not sure if this is correct; it just popped into my mind while I was lying in bed.
btw you can try tracy for profiling; it gives pretty good insight into how long each part runs and where the bottleneck is.
There’s a great paper by @jmlien about filtering out Minkowski sum points by heuristics like their normal, etc. I think the primary trick is ensuring consistent topology afterward; the paper chose to forgo that and produce just Minkowski point clouds in this instance…
Good news and bad news... The "good" news is that I messed up the timing, and the convex decomposition-based minkowski sum is pretty much always faster than the naive minkowski sum. This happened because I forgot that CSG operations (like unions) are deferred until something requiring the mesh vertices requests them... The naive union is usually significantly larger/slower than the smaller union that comes out of the decomposition.

The bad news is that I have a bug with the way I'm using Voro++ right now, which causes it to trip over itself and drop cells whenever Voro needs to allocate more memory during the Voronoi diagram baking process. Working on figuring out why this is now...

EDIT: And it's in my Python implementation?! Perhaps this is a deeper Voro++ issue...

EDIT2: Ahhh, it seems like it's degenerate triangles with impossibly large circumradii crowding out the other triangles...
Agreed; I'm still dubious of the usefulness of general Minkowski sums, and I have no idea how
@pca006132 We probably don't need to add your openscad interpreter to ManifoldCAD.org - it can't even load files anyway. How about I make a new repo for a Manifold VSCode extension and put your openscad interpreter there? The extension should need even less code than ManifoldCAD.org does, and have more power for loading files; I think GPL will be fine for that.
Actually it can load files, just not in the usual sense: either from local storage or imported from a URL, and github provides APIs to fetch files from a git repo iirc. It will be quite complicated though. I am fine with having another repo. I am already writing the code in my own repo, but I want my repo to be just the interpreter and nothing else, so it can be added back to openscad when it matures.
Alrighty; I’ll close this in favor of taking the small win of the naive Minkowski and potentially starting a fork project at some point.
Oh, also, on the subject of sharing in ManifoldCAD: I’ve had great success encoding scripts/programs into hash parameters in the URL with inflate/deflate -> base64 in CascadeStudio and sketch.paperjs.org. It makes it trivial to share programs with people on forums and chatrooms, and it’s easy to store them for later use. Chromium also recently added an API for direct filesystem access, which works pretty well for transparently saving and loading files after the user grants permission.
It should not be a fork, just add manifold as a dependency. Please ping me when you need help to test things out :)
See #192 (comment) for more details
This PR adds fast Voronoi Fracture and Convex Decomposition functions, based on the Voronoi Diagram functionality of Voro++. Convex Decomposition will enable a host of additional functionality (Minkowski Sums, Offsets, Sweeps, Physics Engine Collision, etc.)
The Convex Decomposition function is based on a novel exact decomposition formulation placing voronoi cells at the circumcenters of triangles with adjacent concave edges. This means that there will always be twice as many convex parts as reflex edges, which is better than CGAL's worst-case performance of reflex edges^2!
This allows it to be very fast!
And there's more speed to be gained... 80% of the current cost comes from intersecting the voronoi regions against the original mesh, and all of the voronoi cells can be computed in parallel via multithreading (thanks to Voro++'s unique per-cell architecture).

There are a few todo-list items around exactness, speed, and bindings before it should be merged in. One snag around exactness is the propensity for certain mesh inputs to create identical circumcenters for different triangles, leading to the need(?) to "joggle" (or apply small random offsets to) the mesh vertices before running the voronoi computation. This can be compensated for in a number of ways, but more work will need to be done to find the method with the fewest tradeoffs.
Todos for this PR:
Future and Uncertain Todos: