
Performance #12

Closed
jameslawson opened this issue Mar 23, 2017 · 5 comments

Comments

@jameslawson
Contributor

jameslawson commented Mar 23, 2017

The conformance tests can apparently be run in under 30 seconds using Go.
While JavaScript interpreted on Node/V8 is expected to be slower than Go, our tests take minutes to run (roughly 20 minutes), which is far longer than it should be.

To improve the running time:

  • look at the data structures and algorithms used and find any nasty time complexities
  • consider using multiple cores when running tests
  • improve input/output while the tests run (don't spam STDOUT, compare reading whole files up front vs. buffering, etc.)
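As a minimal sketch of the STDOUT point above (the `runTest` callback and test objects are hypothetical placeholders, not the project's actual runner): buffer the per-test report lines and write them in one go, instead of calling `console.log` inside the loop.

```javascript
// Sketch: batch test output instead of writing to STDOUT once per test case.
// `tests` and `runTest` are hypothetical stand-ins for the real runner.
function runAllTests(tests, runTest) {
  const lines = [];
  for (const t of tests) {
    const passed = runTest(t);
    // Buffer the report line rather than printing it immediately.
    lines.push(`${t.name}: ${passed ? 'PASS' : 'FAIL'}`);
  }
  // One write at the end instead of thousands of small ones.
  process.stdout.write(lines.join('\n') + '\n');
  return lines;
}

runAllTests([{ name: 'parse-basic' }], () => true); // prints "parse-basic: PASS"
```

Whether this helps in practice depends on how chatty the current runner is, so it's worth measuring before and after.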
@jameslawson
Contributor Author

Once we have good performance, we can add the conformance tests to continuous integration.
Right now, because they take so long, it's probably not a good idea to have Travis CI, etc. execute these long-running tests (it's against their policy).

@photopea

photopea commented Jul 3, 2017

Do you really think it is possible to improve the performance?

The problem I see is that you write one piece of code and test a different one. You transform your code before execution, and I am not sure the transformation process is transparent to you.

My guess is that a possible bottleneck is the use of too many anonymous functions that refer to variables outside themselves. I have never used that style in JS programming and I am not sure of the semantics of such code, so it is hard for me to improve it. I am not even sure whether it was you who wrote it, or whether it was generated by your code transformation tool.

I see that you use immutable.js. Do you know how these data structures are implemented inside? I just read that it creates a completely new array when adding an item to the end of an array. That is just horrible. What about using plain arrays [] and objects {} instead?
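To make the trade-off being discussed concrete, here is a small sketch (all names hypothetical) contrasting the two styles with plain arrays: a copy-on-write update that keeps immutable-style semantics, versus an in-place update that avoids the allocation entirely.

```javascript
// Immutable-style update with a plain array: O(n) shallow copy per change,
// but the original array is never modified.
function setCopy(arr, i, value) {
  const next = arr.slice(); // shallow copy
  next[i] = value;
  return next;
}

// Mutable update: O(1), no allocation, but every holder of `arr` sees the change.
function setInPlace(arr, i, value) {
  arr[i] = value;
  return arr;
}

const a = [1, 2, 3];
const b = setCopy(a, 0, 9);
console.log(a[0], b[0]); // → 1 9
```

Naive copy-on-write is what makes plain-array "immutability" expensive; immutable.js exists precisely to avoid full copies via structural sharing, which is part of why measurements matter here.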

@photopea

Hi, so I think the true performance bottleneck is the immutable.js library. Can you please tell me which methods of Immutable you use, and I will prepare an alternative library for you with the same interface? That should make everything something like 100 times faster!
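A same-interface replacement along these lines could start from a tiny shim like the following. This is a hypothetical sketch of a small Immutable.List-like surface (`get`/`set`/`push`/`size`) backed by a plain array with copy-on-write; it mirrors the interface being proposed, not Immutable.js's internals.

```javascript
// Hypothetical drop-in sketch: a minimal Immutable.List-like wrapper
// backed by a plain array. Every update returns a new instance.
class PlainList {
  constructor(items = []) { this._items = items; }
  get size() { return this._items.length; }
  get(i) { return this._items[i]; }
  set(i, value) {
    const next = this._items.slice(); // copy-on-write
    next[i] = value;
    return new PlainList(next);
  }
  push(value) {
    return new PlainList([...this._items, value]);
  }
  toArray() { return this._items.slice(); }
}
```

Note the caveat from above still applies: each update is an O(n) copy, so whether this beats Immutable.js's structurally shared trees depends on the sizes and update patterns involved.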

@jameslawson
Contributor Author

Hi @photopea, sorry for the very, very slow reply.

That would be interesting to see. If you're willing to put in the work, it would be great to see the time difference. I think you'd first need to use tools like jsperf to measure the performance difference. The main hurdle you'll face is that I have used immutable.js heavily; it's basically everywhere! So I'm afraid a rewrite to use another data structure library could be quite difficult. :(

This issue has been left open without much activity, so I'm going to close it. But feel free to continue working on this if you're still interested, and we can re-open the issue. :)

Additional notes:

  • immutable.js uses a different class of data structures called persistent data structures, which are used in functional programming languages and/or code written in a functional style: https://en.wikipedia.org/wiki/Persistent_data_structure#JavaScript
  • When I first read about persistent data structures I thought they sounded inefficient ... but my understanding now is that there are many optimization techniques (such as structural sharing) that make these data structures work efficiently inside the runtimes of languages like Scala, Haskell and Clojure.
  • I wanted to experiment with functional programming, which is why this library uses immutable.js. It made the code a lot easier to write and test. But I haven't closely looked into how well immutable.js scales, so as I said, it would be worth getting some ms measurements before any decisions are made.

@photopea

photopea commented Apr 8, 2019

As far as I understand, when you have this "immutable" array with 1000 values and change one value, it copies the whole array and changes that value only in the new copy. That is definitely not efficient; I feel like 99% of people use it just because of hype, not because it really helps. If you want to experiment with functional programming, you can try Lisp or Scheme, or other functional languages.

I am planning to switch from this library to a WASM port of Fribidi, which would work faster (I hope) and would be smaller, too.
