Add instance name to HDFS #5458
Conversation
Copilot reviewed 14 out of 14 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (2)
test/src/main/java/org/apache/accumulo/test/VolumeIT.java:183
- [nitpick] Consider adding a clarifying comment here explaining that exactly two files are expected (one for the instance id and one for the instance name). This will help maintainers understand the test intent.
assertEquals(2, list.size());
server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManager.java:227
- [nitpick] The error message here could be more specific by differentiating between duplicate instance id files and duplicate instance name files for clearer debugging.
throw new IllegalStateException("Accumulo found multiple instance ids in " + instanceDirectory);
    "Accumulo not initialized, there is no instance id at " + instanceDirectory);
} else if (files.length != 1) {
  log.error("multiple potential instances in {}", instanceDirectory);
InstanceId instanceId = null;
I think this changed code could be greatly simplified if the instance id and name were serialized as JSON into one file with a name we are expecting. I think InstanceInfo can just be serialized and deserialized using Gson in this case.
Having one file instead of a prefix negates the need to handle multiple possible files for the name and id. Also, with one file we are only reading one file from HDFS instead of N.
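A minimal sketch of the suggested single-file approach. The field names and JSON shape are illustrative (real code would use Gson's `toJson`/`fromJson` and Accumulo's `InstanceId` type; the hand-rolled serialization here is just a stdlib stand-in):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: serialize instance name and id together into one JSON
// document, as the review comment suggests. Stands in for Gson usage.
public class InstanceInfoJsonSketch {
  final String name;
  final String id;

  InstanceInfoJsonSketch(String name, String id) {
    this.name = name;
    this.id = id;
  }

  // Stand-in for Gson's toJson(this)
  String toJson() {
    return String.format("{\"name\":\"%s\",\"id\":\"%s\"}", name, id);
  }

  // Stand-in for Gson's fromJson(json, InstanceInfoJsonSketch.class);
  // assumes exactly the shape produced by toJson above
  static InstanceInfoJsonSketch fromJson(String json) {
    Matcher m = Pattern.compile("\\{\"name\":\"([^\"]*)\",\"id\":\"([^\"]*)\"\\}").matcher(json);
    if (!m.matches()) {
      throw new IllegalArgumentException("unexpected json: " + json);
    }
    return new InstanceInfoJsonSketch(m.group(1), m.group(2));
  }
}
```

With this layout, a single read of one known file recovers both values, at the cost of one file-read RPC to a DataNode.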
That's what I originally was going to do. However, the current design is to rely only on the filename/directory contents, rather than read any files. This reduces the number of round-trip RPC calls, because there is no need to talk to any HDFS DataNode, and is therefore much more robust against whole classes of cluster failure scenarios. That's the current design, so I kept the same goal.
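The filename-based design described above can be sketched as follows. The `instance_id-` and `instance_name-` prefixes are hypothetical, not the PR's actual layout; the point is that both values are recovered from a directory listing alone (one NameNode call, no DataNode reads):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: decode the instance id and name from file names in a
// directory listing, so only the HDFS NameNode is ever contacted.
public class InstanceDirSketch {

  // Parse a listing into {"id": ..., "name": ...}; fails if either entry
  // is missing or duplicated, mirroring the checks in the changed code.
  static Map<String,String> parse(List<String> fileNames) {
    Map<String,String> result = new HashMap<>();
    for (String f : fileNames) {
      String key;
      if (f.startsWith("instance_id-")) {          // hypothetical prefix
        key = "id";
      } else if (f.startsWith("instance_name-")) { // hypothetical prefix
        key = "name";
      } else {
        continue; // ignore unrelated files
      }
      String value = f.substring(f.indexOf('-') + 1);
      if (result.put(key, value) != null) {
        throw new IllegalStateException("multiple instance " + key + " files");
      }
    }
    if (!result.containsKey("id") || !result.containsKey("name")) {
      throw new IllegalStateException("Accumulo not initialized");
    }
    return result;
  }
}
```

In the real code the listing would come from Hadoop's `FileSystem.listStatus` on the instance directory rather than an in-memory list.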
Ok, I didn't realize that was the goal. Thanks for the insight.
public class InstanceInfo {

  private final String name;
  private final InstanceId id;
I made a comment elsewhere about just serializing and deserializing this object using a single file in HDFS in a known location. If we do that, then I think we can collapse the instance version into this object as well and get rid of that file. It may also make sense to add a version number to this object to handle changes over time, much like a serialVersionUID.
Because of the goal to minimize the amount of cluster RPC requests needed to read the contents of the file, and only require talking to the HDFS NameNode, the design is to keep the info in the file names. However, I do think we could move the version into this instance directory with a different prefix, and also add it to this object.
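The follow-up idea of keeping the version in the same directory under its own prefix could look like the sketch below. The `version-` prefix is illustrative only; as with the id and name, the value is recovered from the directory listing without reading file contents:

```java
import java.util.List;

// Hypothetical sketch: a "version-" prefixed file name (illustrative) in the
// instance directory records the instance version, so it can be recovered
// from a NameNode listing and carried on the InstanceInfo object.
public class InstanceVersionSketch {

  static int parseVersion(List<String> fileNames) {
    Integer version = null;
    for (String f : fileNames) {
      if (f.startsWith("version-")) { // hypothetical prefix
        if (version != null) {
          throw new IllegalStateException("multiple version files");
        }
        version = Integer.parseInt(f.substring("version-".length()));
      }
    }
    if (version == null) {
      throw new IllegalStateException("no version file found");
    }
    return version;
  }
}
```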
Add instance name to HDFS volumes, and move the instance ID to the same directory.
This is necessary as the first step to bootstrapping a chrooted ZooSession for ServerContext without a second ZooKeeper connection to look up the instance name from the ID.
TODO: