This repository has been archived by the owner on Jan 17, 2019. It is now read-only.

port 8080 already in use #50

Open
WangYongNingDA opened this issue Dec 13, 2017 · 5 comments

Comments

@WangYongNingDA

I have deployed Griffin and run it locally, but when I run java -jar service/target/service.jar, it says port 8080 is already in use. How can I change the port to another one?
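For reference, the Griffin service module is a Spring Boot application, so the port can typically be overridden at launch; a minimal sketch, assuming the standard Spring Boot server.port property (8081 here is just an arbitrary free port):

    java -jar service/target/service.jar --server.port=8081

The same property can also be set as server.port=8081 in the service module's application.properties before packaging.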

@bhlx3lyx7
Contributor

bhlx3lyx7 commented Dec 13, 2017 via email

@WangYongNingDA
Copy link
Author

Hi @bhlx3lyx7,
Thank you, I have resolved this problem. But when I use Livy to submit the Spark job, the errors are as below:

Warning: Skip remote jar hdfs://wecloud-cluster/project/pgxl/griffin/griffin-measure.jar.
Warning: Skip remote jar hdfs://wecloud-cluster/project/pgxl/griffin/datanucleus-api-jdo-3.2.6.jar.
Warning: Skip remote jar hdfs://wecloud-cluster/project/pgxl/griffin/datanucleus-core-3.2.10.jar.
Warning: Skip remote jar hdfs://wecloud-cluster/project/pgxl/griffin/datanucleus-rdbms-3.2.9.jar.
java.lang.ClassNotFoundException: org.apache.griffin.measure.Application
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:270)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:175)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

@bhlx3lyx7
Contributor

It seems to be a version issue: griffin-measure.jar uses Scala 2.10 by default, which means your Spark build should be matched with Scala 2.10.
Griffin measure works on Spark 1.6, but the Scala version can be changed. You can package your own jar for Scala 2.11: just modify scala.version and scala.binary.version in the pom.xml of the measure module, then run mvn clean package in the measure directory.
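For illustration, the change in the measure module's pom.xml would look roughly like this (the 2.11 patch version below is an assumption; pick whichever 2.11.x matches your Spark build):

    <properties>
      <scala.version>2.11.8</scala.version>
      <scala.binary.version>2.11</scala.binary.version>
    </properties>

Then, from the measure directory:

    mvn clean package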

@WangYongNingDA
Author

WangYongNingDA commented Dec 18, 2017

Thanks for your response, I have resolved the problem by changing the config. Now the job can be submitted to the YARN cluster, but other problems have come up.
According to the configs in sparkJob.properties, sparkJob.spark.jars.packages is "com.databricks:spark-avro_2.10:2.0.1".
When I submit jobs using "--packages com.databricks:spark-avro_2.10:2.0.1", it works well. But when I use Livy to submit jobs, these errors come up:

Application application_1512628890181_92576 failed 2 times due to AM Container for appattempt_1512628890181_92576_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://xy180-wecloud-198:8088/proxy/application_1512628890181_92576/Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://wecloud-cluster/user/pgxl/.sparkStaging/application_1512628890181_92576/com.databricks_spark-avro_2.10-2.0.1.jar
java.io.FileNotFoundException: File does not exist: hdfs://wecloud-cluster/user/pgxl/.sparkStaging/application_1512628890181_92576/com.databricks_spark-avro_2.10-2.0.1.jar

I have deleted the code about spark.jars.packages in the class SparkSubmitJob, but it still doesn't work.
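For comparison, the direct submission that works is plain spark-submit with the package flag; a rough sketch, with the master option assumed from the YARN setup described above:

    spark-submit \
      --master yarn \
      --packages com.databricks:spark-avro_2.10:2.0.1 \
      --class org.apache.griffin.measure.Application \
      hdfs://wecloud-cluster/project/pgxl/griffin/griffin-measure.jar

Here spark-submit resolves the package on the client side before staging, while in the Livy path the resolved jar apparently never lands in the .sparkStaging directory, hence the FileNotFoundException above.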

@bhlx3lyx7
Contributor

It's a tricky issue. The measure module supports Avro files through this package, but we don't want to bundle it into measure.jar, so we need to tell Spark where it is.
sparkJob.spark.jars.packages=com.databricks:spark-avro_2.10:2.0.1 makes the Spark application download this package when it starts, and it exits if the download fails.
You can download this package manually, put it in HDFS, and set the config in sparkJob.properties, like sparkJob.jars_4=hdfs:///path/to/spark-avro_2.10.jar, then modify some code in the service module to enable this property when submitting to Livy; that will resolve this issue.
In the next version, we'll consider modifying sparkJob.properties to fit Livy parameters more cleanly.
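A minimal sketch of that workaround (the download URL follows the standard Maven Central layout; the HDFS path and the jars_4 index are illustrative, as in the suggestion above):

    # stage the avro package in HDFS once
    wget https://repo1.maven.org/maven2/com/databricks/spark-avro_2.10/2.0.1/spark-avro_2.10-2.0.1.jar
    hdfs dfs -put spark-avro_2.10-2.0.1.jar /path/to/spark-avro_2.10.jar

    # sparkJob.properties: reference the staged jar instead of spark.jars.packages
    sparkJob.jars_4=hdfs:///path/to/spark-avro_2.10.jar

With this in place, the jar is shipped to the containers as an ordinary dependency and no download is attempted at application start.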
