
Is this project dead? #220

Open
Aditya94A opened this issue Nov 24, 2017 · 13 comments


@Aditya94A

Almost 2 years since any updates, issues and PRs piling up 😕

Back to gson I suppose :(

@trevjonez
Contributor

https://github.com/square/moshi not gson :/

@andreiverdes

Same question here!

@marc-christian-schulze

And here!

@Aditya94A
Author

Well, I guess I don't have this question anymore.

But the readme should say (in big bold letters) that this project has been discontinued and everyone should move to moshi.

@agrosner

if you need similar performance, use https://github.com/ansman/kotshi

@trevjonez
Contributor

Even that is arguably not necessary. The way retrofit/moshi performs deserialization while the data is still streaming over the socket, combined with the way moshi caches its adapters, makes annotation processing mostly moot. If you are doing it to get away from the kotlin-reflect package when dealing with data classes, then maybe it is worth it. But show me the performance tests where the gains are worth the overhead of another annotation processor in your build. IMO the biggest win of moshi is the memory profile.
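To illustrate the point about deserialization overlapping with the network transfer, here is a minimal stdlib-only sketch (no Moshi or Retrofit involved; `producer` and `countFields` are hypothetical stand-ins): a producer thread trickles JSON over a pipe the way a slow socket would, while the consumer "parses" the bytes as they arrive instead of waiting for the full payload.

```java
import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;
import java.io.Reader;
import java.io.Writer;

public class StreamingParseSketch {
    // Hypothetical payload producer: writes JSON in small chunks,
    // simulating bytes trickling in over a socket.
    static Thread producer(Writer out, String json) {
        return new Thread(() -> {
            try {
                for (int i = 0; i < json.length(); i += 8) {
                    out.write(json, i, Math.min(8, json.length() - i));
                    out.flush();
                    Thread.sleep(5); // simulated network pacing
                }
                out.close();
            } catch (Exception e) { throw new RuntimeException(e); }
        });
    }

    // A deliberately trivial "streaming parser": counts top-level commas
    // as characters arrive, standing in for a real tokenizer that does
    // its work while the transfer is still in progress.
    static int countFields(Reader in) throws IOException {
        int c, commas = 0;
        while ((c = in.read()) != -1) if (c == ',') commas++;
        return commas + 1;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"id\":1,\"name\":\"a\",\"tags\":\"x\"}";
        PipedWriter w = new PipedWriter();
        PipedReader r = new PipedReader(w);
        Thread t = producer(w, json);
        t.start();
        int fields = countFields(r); // runs concurrently with the producer
        t.join();
        System.out.println(fields + " fields");
    }
}
```

Because the consumer works while bytes are still arriving, total wall time is dominated by the transfer, not transfer plus a separate parse pass, which is the amortization argument above.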

@agrosner

Reflection will always be slower than direct method calls. But if the response JSON is small, the difference will be negligible.
Some positives:

  1. The Kotlin KAPT cache means incremental builds are much faster.
  2. Seeing the parsing code is very helpful when debugging.
  3. Compile-time checking of type adapters for custom types.
  4. ProGuard configuration is much easier.
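The two access paths being compared above can be shown side by side. This is a minimal stdlib-only sketch (the `User` class is hypothetical, and no timing is measured here): a reflection-driven binder pays for a string-based field lookup plus a reflective read per field, while generated code compiles down to a direct access.

```java
import java.lang.reflect.Field;

public class ReflectionVsDirect {
    static class User {
        final String name = "alice";
    }

    public static void main(String[] args) throws Exception {
        User u = new User();

        // Direct access: resolved at compile time, trivially inlinable.
        // This is what annotation-processor-generated adapters emit.
        String direct = u.name;

        // Reflective access: a string-keyed lookup plus a reflective read,
        // the kind of per-field work a reflection-based JSON binder does
        // (the Field lookup is usually cached after the first request).
        Field f = User.class.getDeclaredField("name");
        f.setAccessible(true);
        String reflective = (String) f.get(u);

        System.out.println(direct.equals(reflective)); // same value, different cost
    }
}
```

Both paths produce the same value; the dispute in this thread is only about whether the cost difference matters once network time is in the picture.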

@trevjonez
Contributor

The point being that the reflection cost is amortized across the network transfer, so your bottleneck remains the network and trying to speed up parsing is moot. The ProGuard config for JSON is usually moot as well: obfuscation is pointless there, since the string constants used to match the JSON structure are retained, which makes reversal almost trivial.

I can agree with seeing the parsing code, though. Static call stacks are always much nicer.

@Alexander--

@trevjonez

reflection is amortized across the network transfer

You mean throughput is amortized, right? Request latency still remains: if a request takes 10 seconds to download the JSON, and 2 more seconds to initialize the reflection machinery, you are penalizing yourself with roughly 17% additional latency. If your application downloads a JSON file to show the user some data, your users will enjoy an additional ~2 s delay. If your messenger parses a JSON file from the server to show a notification, that notification will be late by ~2 s, etc.
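The arithmetic behind that latency share, using the hypothetical 10 s / 2 s figures from the comment above, works out as follows:

```java
public class LatencyShare {
    public static void main(String[] args) {
        double transferSec = 10.0; // time to download the JSON (example figure)
        double reflectSec  = 2.0;  // one-off reflection machinery setup (example figure)
        double total = transferSec + reflectSec;

        // Fraction of the total request latency contributed by reflection setup.
        double addedShare = reflectSec / total;
        System.out.printf("%.1f%%%n", addedShare * 100); // 2 / 12 of the total
    }
}
```

Measured against the total 12 s the share is about 16.7%; measured against the original 10 s transfer it is a 20% slowdown.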

the proguard config on json is usually moot as well since obfuscation is pointless due to string constants being retained for matching the json structure.

That does not matter unless you are trying to use ProGuard for obfuscation. Personally, I use it as an optimizing Java compiler.

Having APT-based adapters for other JSON libraries is cool, but a sparsely supported third-party plugin is still strictly worse than a library like LoganSquare that uses APT as its primary mode of operation. Until Moshi/Gson/whatever switch to annotation processing by default, those libraries aren't interesting to me.

@trevjonez
Contributor

It comes down to "let's see the test data", where the test is an actual real-world test of a slow stream of JSON over an IO socket, not some blob of JSON string already in memory. The way you deal with the JSON is only a tiny piece of the whole picture.

I know LoganSquare has touted huge speed gains, but the tests I've looked at have all been untrue to real-life consumption.

@re-thc

re-thc commented May 30, 2018

According to https://github.com/fabienrenaud/java-json-benchmark, LoganSquare is still one of the fastest JSON processors. It would definitely be sad to see it go. It would be great if we could contribute and maintain it further. It's useful not just for Android, but for Java and server land too.

@Alexander--

@hc-codersatlas I am actually surprised that some processors are so much faster compared to LoganSquare. For example, dsljson seems to have a huge upper hand... except that its benchmark does not use streaming mode; it just parses from a single byte[] array:

https://github.com/fabienrenaud/java-json-benchmark/blob/79b0b465b440f8a2f6f7f22ee4905e3c7328c3f4/src/main/java/com/github/fabienrenaud/jjb/databind/Deserialization.java#L83-L87

The same is true for most of the library benchmarks: some accept Strings, others byte[] arrays or Okio sources...

There are also a few things those benchmarks don't account for; for example, they don't measure the initial hiccup associated with reflective lookups (for libraries that use them).
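The input-shape concern can be made concrete with a stdlib-only sketch (no JSON library involved; the pacing numbers are arbitrary): the exact same scan is timed once over an in-memory array, the shape many benchmarks use, and once over a paced pipe that stands in for a real socket.

```java
import java.io.CharArrayReader;
import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;
import java.io.Reader;
import java.util.Arrays;

public class BenchInputShape {
    // The same work over two input shapes; a stand-in for a parser.
    static int scan(Reader in) throws IOException {
        int c, n = 0;
        while ((c = in.read()) != -1) n++;
        return n;
    }

    public static void main(String[] args) throws Exception {
        char[] payload = new char[4096];
        Arrays.fill(payload, 'x');

        long t0 = System.nanoTime();
        int a = scan(new CharArrayReader(payload));   // pure CPU cost
        long inMemoryNs = System.nanoTime() - t0;

        PipedWriter w = new PipedWriter();
        PipedReader r = new PipedReader(w);
        new Thread(() -> {
            try {
                for (int i = 0; i < payload.length; i += 512) {
                    w.write(payload, i, 512);
                    Thread.sleep(5); // simulated network pacing, 8 chunks
                }
                w.close();
            } catch (Exception e) { throw new RuntimeException(e); }
        }).start();

        t0 = System.nanoTime();
        int b = scan(r);                              // CPU plus wait time
        long streamedNs = System.nanoTime() - t0;

        System.out.println(a == b);               // same characters either way
        System.out.println(streamedNs > inMemoryNs); // but very different timings
    }
}
```

A benchmark that only ever feeds the in-memory shape measures the first number, while production code on a socket lives with the second, which is why the thread keeps asking for tests against a slow stream.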

@re-thc

re-thc commented Jun 2, 2018

@Alexander-- the only one that's really faster, as you mentioned, is dsljson. It definitely does things differently.


7 participants