Lessons from my port of JSCAD to C Sharp. #1050
-
wow, that is a massive undertaking. looking forward to seeing performance comparisons. Maybe there will be lessons there that point to some perf improvements of the jscad JS code.
-
Also make sure to warm up the JS engine, as I have seen jscad operations go 2-4x faster when run subsequently.
-
@briansturgill any chance of playing with the C# version? I don’t see the repository anywhere.
-
@briansturgill you may also be interested in #1063
-
Over the last month and a half I ported 28,000 lines of JSCAD JavaScript to C# (26,000 from modeling, 2,000 from io).
To put it in perspective, JSCAD modeling is 35,000 lines in total. Oddly, C# seems slightly less verbose than JS... I didn't expect that.
First of all, let me say I'm very impressed with the code base. Yes, as in any large project, some code is better than other code, but I only found two (sets of) bugs during the porting process. Lots of unit tests greatly helped!
Running benchmarks, I'm very impressed with the speed you've managed to get from JavaScript. I'll have firmer results later, but initially it looks like C# is about 1/3 faster for code that is a straight translation from JavaScript. Where I've rewritten portions the C# way, it runs about 2x faster. I've also been benchmarking some special C# classes (System.Numerics) that use Intel/ARM SIMD instructions. At least for standard Vec3/Mat4 transforms, they run 6x faster than the Vec3/Mat4 code I translated to C#. Overall I'm guessing I'll be able to achieve a 2x to 3x speedup, but with the drawback of needing to switch to single-precision floating point.
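
For illustration only (this is a sketch, not the ported code), here is the shape of the comparison I mean: a hand-rolled, double-precision mat4 × point transform next to the single call into System.Numerics, whose `Vector3`/`Matrix4x4` types are single precision and get SIMD code generation from the JIT.

```csharp
using System.Numerics; // float-based Vector3/Matrix4x4; the JIT emits SIMD for these

// Illustrative sketch only (not the ported JSCAD code): a hand-rolled,
// column-major mat4 * point transform over doubles, next to the one-call
// System.Numerics version, which is single precision but SIMD-accelerated.
static class TransformSketch
{
    public static double[] TransformManual(double[] m, double[] v) => new[]
    {
        m[0] * v[0] + m[4] * v[1] + m[8]  * v[2] + m[12],
        m[1] * v[0] + m[5] * v[1] + m[9]  * v[2] + m[13],
        m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14]
    };

    public static Vector3 TransformSimd(Matrix4x4 m, Vector3 v) => Vector3.Transform(v, m);
}
```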
While your code base many times stretched beyond my JavaScript knowledge, I've found a few cases where there seems to be confusion. Some of these are specifically failures to use modern JS, and I'll file bug reports for them.
But one thing that is very confusing to me is the use of index vectors in some of the algorithms (the Graham hull scan, earcut, and maybe one other). The purpose of the parallel index vector in earlier times was that it saved moving the much larger floating point numbers around. That made sense when you were routinely using 16-bit ints... but it's crazy in JavaScript: your "ints" are floating point doubles. Further, due to the way you structure your data (vec2, vec3), you are never really moving 2 or 3 doubles, but rather a pointer to the object containing those doubles. The index vector just greatly increases memory usage, causes unnecessary indirection arithmetic and additional allocation/deallocation, and generally hurts your CPU cache performance. It's just a bad idea.
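
To make the indirection point concrete, here is a minimal C# sketch (the names are hypothetical) of the two access patterns; the JavaScript code has the same shape, just with arrays of vec2/vec3 references.

```csharp
// Hypothetical sketch contrasting the two access patterns.
static class IndexVectorSketch
{
    public static double SumXIndexed(double[][] points, int[] indices)
    {
        var sum = 0.0;
        for (var i = 0; i < indices.Length; i++)
            sum += points[indices[i]][0]; // extra lookup, plus `indices` must be allocated and kept in sync
        return sum;
    }

    public static double SumXDirect(double[][] points)
    {
        var sum = 0.0;
        for (var i = 0; i < points.Length; i++)
            sum += points[i][0]; // one lookup, friendlier to the CPU cache
        return sum;
    }
}
```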
Here's my rewrite from scratch of the Graham scan (convex hull) algorithm. It is a replacement for hullGeom2.js/hullPoints2.js. It gets rid of the index vector and the use of Atan2, and it is generally easier to understand... translation back to JS should be straightforward.
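
As a sketch of the approach (not the actual replacement code; this uses Andrew's monotone-chain variant rather than an angle-sorted scan, and a stand-in `Vec2` type), the core idea is: sort the points once, replace Atan2 with cross-product orientation tests, and build the hull directly from the point array with no parallel index vector.

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal sketch, not the replacement for hullGeom2.js/hullPoints2.js:
// Andrew's monotone-chain variant of the Graham scan. No index vector,
// no Atan2; orientation is decided by a cross product.
static class Hull2
{
    // Cross product of (b - a) and (c - a); > 0 means a counter-clockwise turn.
    static double Cross(Vec2 a, Vec2 b, Vec2 c) =>
        (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);

    public static List<Vec2> ConvexHull(IEnumerable<Vec2> input)
    {
        var pts = input.Distinct().OrderBy(p => p.X).ThenBy(p => p.Y).ToList();
        if (pts.Count < 3) return pts;

        var hull = new List<Vec2>();

        // Lower hull: pop points that would create a clockwise (or collinear) turn.
        foreach (var p in pts)
        {
            while (hull.Count >= 2 && Cross(hull[^2], hull[^1], p) <= 0)
                hull.RemoveAt(hull.Count - 1);
            hull.Add(p);
        }

        // Upper hull: same sweep over the points in reverse order.
        var lowerCount = hull.Count + 1;
        for (var i = pts.Count - 2; i >= 0; i--)
        {
            var p = pts[i];
            while (hull.Count >= lowerCount && Cross(hull[^2], hull[^1], p) <= 0)
                hull.RemoveAt(hull.Count - 1);
            hull.Add(p);
        }

        hull.RemoveAt(hull.Count - 1); // last point equals the first
        return hull;
    }
}

// Hypothetical stand-in for JSCAD's vec2, just for this sketch.
readonly record struct Vec2(double X, double Y);
```

Calling `Hull2.ConvexHull(points)` returns the hull vertices in counter-clockwise order, operating directly on the point array throughout.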