Failed to add /home/vcap/app/ to Spark environment #106
Comments
codefromthecrypt pushed a commit that referenced this issue on Jun 27, 2018:

> Prevents this in Cloud Foundry, which unzips jars:
>
> ```
> 2018-06-12T15:56:13.65+0200 [APP/PROC/WEB/0] OUT JVM Memory Configuration: -Xmx1406944K -Xss1M -XX:ReservedCodeCacheSize=240M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=178207K
> 2018-06-12T15:56:14.02+0200 [APP/PROC/WEB/0] ERR 18/06/12 13:56:14 INFO ElasticsearchDependenciesJob: Processing spans from zipkin-2018-06-12/span
> 2018-06-12T15:56:14.25+0200 [APP/PROC/WEB/0] ERR 18/06/12 13:56:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 18/06/12 13:56:14 ERROR SparkContext: Failed to add /home/vcap/app/ to Spark environment
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR java.lang.IllegalArgumentException: Directory /home/vcap/app is not allowed for addJar
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at org.apache.spark.SparkContext.addJarFile$1(SparkContext.scala:1817)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at org.apache.spark.SparkContext.addJar(SparkContext.scala:1840)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at org.apache.spark.SparkContext$$anonfun$12.apply(SparkContext.scala:466)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at org.apache.spark.SparkContext$$anonfun$12.apply(SparkContext.scala:466)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at scala.collection.immutable.List.foreach(List.scala:381)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:466)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at zipkin.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:179)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at zipkin.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:165)
> 2018-06-12T15:56:14.84+0200 [APP/PROC/WEB/0] ERR 	at zipkin.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:72)
> ```
>
> Fixes #106
#107 should do it
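The failure mode is that Cloud Foundry runs the application from an unpacked directory (`/home/vcap/app`) rather than a jar, and `SparkContext.addJar` rejects directories. As a rough illustration of the kind of guard that prevents this (a hypothetical sketch, not the actual #107 patch; the class and method names are made up), assuming the job builds its `spark.jars` list from classpath entries:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: filter classpath entries before handing them to Spark,
// so an unpacked application directory is never passed to addJar.
class SparkJarGuard {
  /** Keeps only entries that look like jar files and are not unpacked directories. */
  static List<String> jarsOnly(List<String> classpathEntries) {
    List<String> jars = new ArrayList<>();
    for (String entry : classpathEntries) {
      // A directory such as /home/vcap/app would trigger
      // "Directory ... is not allowed for addJar", so skip it.
      if (entry.endsWith(".jar") && !new File(entry).isDirectory()) {
        jars.add(entry);
      }
    }
    return jars;
  }
}
```

With a filter like this, Spark only ever sees real jar paths, and the job degrades gracefully when the platform has exploded the jar.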
Hi Adrian
I'm trying to store Zipkin's tracing information in an Elasticsearch database. The Zipkin server and ES are running in Cloud Foundry. I've already figured out that I need an additional app running zipkin-dependencies in order to see any dependencies between applications. Unfortunately, the required environment variables are not read, but I guess that's already part of #45. Therefore, I set the connection details manually during deployment:
When I now push the application, I see the following exception during startup:
In case this has anything to do with Java versions (as I read Java 7), note that the app was pushed with Cloud Foundry's Java Buildpack, which ships OpenJDK JRE 1.8.0_172. Could you please point me in a direction for solving this issue? As a workaround I simply use Zipkin's in-memory DB, but this causes Zipkin to crash periodically when all memory is used up ¯\_(ツ)_/¯
Thanks and best regards
EDIT: Looks like there's something like a ring buffer in place that takes care of memory usage.