Target Java 17 for main branch #5139
I can revisit this and see what I come up with. It would be nice if we just needed to add a new Java module option or something similar, but I'll try to find out if there's a bigger issue preventing JDK 17 from being used.
I tested this out with main (4.0.0-SNAPSHOT), Uno, and accumulo-testing, and it seems to be working. Based on the Hadoop Jira ticket, there are some incompatibilities with libraries such as Guice 4.x, but running with the extra Java arguments worked in my testing. I didn't see any MapReduce failures as noted in Slack. I made the following changes to get things to work:
I had a bunch of weird issues at first while testing, until I realized I also had to update accumulo-testing to target JDK 17 and depend on Accumulo 4.0. Once I did that (and removed some of the deprecated features that were removed in 4.0), things worked without issue.
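The accumulo-testing change described above amounts to bumping the compiler target. A minimal sketch of the kind of `pom.xml` edit involved, using the standard Maven property name (the actual accumulo-testing pom may structure this differently):

```xml
<!-- Illustrative sketch only: standard Maven convention for targeting
     JDK 17; the real accumulo-testing pom may use a different layout. -->
<properties>
  <maven.compiler.release>17</maven.compiler.release>
</properties>
```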
Awesome! Thanks, @cshannon. I think that unblocks us from targeting Java 17, but to help with development, we need some changes in the accumulo-testing repo and in the fluo-uno repo.
Hadoop is using some older dependencies that still require access to internal JDK features, so this enables that by adding JVM args. See apache/accumulo#5139.
fluo-uno should now correctly start up Yarn when running with JDK 17 and execute MapReduce correctly after merging apache/fluo-uno#305. I think we need to wait on the accumulo-testing changes until the changes to Accumulo's main branch land first. The biggest change in the testing repo is bumping the target JDK to 17, to stay in sync with Accumulo's target of JDK 17, when it's time.
Java 17 has some nice new features, and earlier LTS versions of Java are EOL. For new development branches, we should have the option of using newer Java features. Java 17 has records, for example, which are of particular interest. Also of personal interest to me is the ability to use the more robust switch expressions.
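To make the motivation concrete, here is a small sketch of the two features mentioned above, a record and a switch expression. The class and method names are illustrative, not from the Accumulo codebase:

```java
// Illustrative only: shows a Java record and a switch expression,
// two of the Java 17 features mentioned in this issue.
public record KeyValue(String key, long value) {

    // Switch expressions return a value directly, with no fall-through.
    static String describe(long v) {
        return switch (Long.signum(v)) {
            case -1 -> "negative";
            case 0 -> "zero";
            default -> "positive";
        };
    }

    public static void main(String[] args) {
        KeyValue kv = new KeyValue("rows", 42L);
        // Records generate accessors, equals, hashCode, and toString.
        System.out.println(kv.key() + " -> " + describe(kv.value()));
        // prints: rows -> positive
    }
}
```

Records remove a lot of the boilerplate that immutable value classes in pre-17 Accumulo code currently need by hand.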
The main blocker for this is that there is an unknown issue with starting Hadoop Yarn for MapReduce on a Java 17 VM. So, we can't put Accumulo byte-code into anything going through MapReduce until that's figured out. We do know that HDFS itself works on JDK 17, and the HDFS client does as well, because we've been running that in our ITs for a while now. So, it's just whatever the issue is with Yarn, as far as I can tell.
For testing, apache/fluo-uno#297 circumvented the problem by skipping starting Yarn when running with Java newer than version 11, but that's not a suitable solution for any production deployment of Accumulo (it's not even a suitable solution for testing, if you want to test MapReduce).
The issue may be quite trivial... I'm not sure.
Some initial investigation in Slack hints that some Java module options may need to be added to the yarn env script, or something similar, but there may be more to it than that.
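The kind of module options being discussed would look roughly like the following yarn env additions. This is a sketch only: the thread does not confirm which Yarn daemons or which JDK packages are involved, so the variable names and `--add-opens` targets below are assumptions:

```shell
# Illustrative sketch only: the specific daemons and JDK packages that
# need to be opened are not confirmed in this issue. In
# etc/hadoop/yarn-env.sh, args like these re-open internal JDK packages
# that older dependencies (e.g. Guice 4.x) reflect into on JDK 17:
export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS \
  --add-opens java.base/java.lang=ALL-UNNAMED \
  --add-opens java.base/java.util=ALL-UNNAMED"
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS \
  --add-opens java.base/java.lang=ALL-UNNAMED \
  --add-opens java.base/java.util=ALL-UNNAMED"
```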
For reference, the Hadoop tracking ticket for Java 17 support is here.