Performance may be impacted both by the PostCSS compilation itself and by the client-server setup (which launches a `node` process for the client on each compiled file). The following are ideas on how to make things faster.
## Synchronous Promise Resolution
The client-server model could be removed entirely if the PostCSS compilation could be done synchronously (i.e., if a promise could be resolved synchronously).
In order to accomplish this, a bit of native code would need to be written to run the event loop. For reference, node's event loop is basically just a `uv_run(env.event_loop(), UV_RUN_ONCE);`. A small bit of native code (similar to `deasync`… or even using `deasync` itself) and a bit of JS code to invoke it could be sufficient. The JS would be something like:
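A sketch of what that JS wrapper might look like. The `loopWhile` parameter here is a stand-in for the native event-loop pump (`deasync.loopWhile` has this shape), and the name `resolveSync` is hypothetical:

```javascript
// Hypothetical sketch: block until a promise settles by pumping the
// event loop. loopWhile(pred) is assumed to be native code that keeps
// running the loop (roughly uv_run(..., UV_RUN_ONCE)) while pred()
// stays true -- deasync.loopWhile is one real implementation.
function resolveSync(promise, loopWhile) {
  let done = false;
  let result;
  let error;
  promise.then(
    (value) => { result = value; done = true; },
    (err) => { error = err; done = true; }
  );
  loopWhile(() => !done); // blocks here while pending I/O and timers run
  if (error) throw error;
  return result;
}
```

With something like this in place, the plugin could call `resolveSync(...)` around the PostCSS promise and drop the client-server round trip entirely.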
Aside from introducing a bit of native code (making things a little harder to maintain), this actually simplifies the code significantly.
## Caching PostCSS Compilation
Since the server remains active while the main process continues, it would be beneficial for various situations to cache the results of a compile. It's important to realize that the cache would likely be in memory, so it would only apply when babel & the plugin remain active in memory while the CSS file is encountered multiple times. This would happen for:
- Watched builds
- Builds that include the same CSS file from multiple different JS files
## Re-write the `postcss-client` in C++ using `node-gyp`
The client that we launch is written in Node and is as minimal as possible, but launching a minimal Node process is still nearly 30x slower (on my machine) than launching a native executable.
Creating a `binding.gyp` with the `type` set to `executable` seems to work:
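A `binding.gyp` along those lines might look like the following; the `target_name` and source filename are placeholders:

```python
# Hypothetical binding.gyp: build the client as a standalone native
# executable rather than the default loadable_module.
{
  "targets": [
    {
      "target_name": "postcss-client",
      "type": "executable",
      "sources": [ "src/client.cc" ]
    }
  ]
}
```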
There's overhead in the client-server communication too, so this may or may not provide significant performance gains; the best way to find out may be to try it and measure. That said, the synchronous promise resolution seems to be a much easier way to avoid the launch cost entirely.