docs: fix automountServiceAccountToken nesting in Pod examples #55663
Rekt-Dev wants to merge 1 commit into kubernetes:main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has not yet been approved; it needs approval from an approver in each of the affected files. Approvers can indicate their approval by writing `/approve` in a comment.
Welcome @Rekt-Dev!
This is effectively the same, since order doesn't matter in YAML. Is there a reason for making this change? Please note the CLA requirement: that must be signed before any contribution can be accepted.
Thanks.
Can you share a direct copy/paste of what you have that is not working for you? My guess would be you have some stray indentation. As long as it is not indented, the original content and the updated version you are proposing are identical other than the order of properties.
sure.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2026-05-04T12:22:50Z"
  name: bot-sa
  namespace: automated
  resourceVersion: "2647"
  uid: 96a1ccf5-930c-4e0f-9b40-92766717a77f
```

```shell
root@controlplane ~ ➜ k edit sa -n automated bot-sa
root@controlplane ~ ✖ k replace --force -f /tmp/kubectl-edit-3412595445.yaml
```
That's very difficult to read without markdown formatting, but please just share the yaml you are using that corresponds to what you are proposing to change here. That is where there is an issue, so without seeing exactly what you are using it's very hard to help correct it.
Completely guessing from all of that output, but I think the contents of …
yep.

```yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  creationTimestamp: "2026-05-04T12:22:50Z"
  name: bot-sa
  namespace: automated
  resourceVersion: "2647"
  uid: 96a1ccf5-930c-4e0f-9b40-92766717a77f
```
That needs to be formatted to be able to see the actual content with indentation. Please use code block formatting so it isn't displayed as plain text.
This looks like your corrected version. Now that you've reformatted the earlier description, I think I can see what was happening. You had:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  automountServiceAccountToken: false
  creationTimestamp: "2026-05-04T12:22:50Z"
  name: bot-sa
  namespace: automated
  resourceVersion: "2647"
  uid: 96a1ccf5-930c-4e0f-9b40-92766717a77f
```

In this version you'll notice that you have:

```yaml
metadata:
  automountServiceAccountToken: false
  creationTimestamp: ...
```

The `automountServiceAccountToken` line is indented inside the `metadata` block, where it is not a valid field. It belongs at the top level of the object, at the same indentation as `kind` and `metadata`.
First of all, thanks a lot for your help. I've tried again on a different cluster now, my private 3-node cluster. I've moved the automount field everywhere with all kinds of indentations, 4-5 different locations; the only one that `replace --force` agrees to replace is where I posted at the beginning of the PR, beneath `kind`. Let me know if you want more yamls. Thanks again!
and just for reference - this is the documentation:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
```
Without seeing your edits and exactly how you are doing that, I can't say. But I'm pretty sure you're adding indentation or similar issues when doing it, based on the error you are seeing. The yaml content of:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
```

and

```yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  name: build-robot
```

are semantically equivalent and would cause no changes in k8s behavior when parsing the yaml.
Hi. After retesting with a clean declarative manifest, `automountServiceAccountToken: false` behaves correctly as expected. The issue only appears during a workflow involving `kubectl edit` followed by `kubectl replace --force` when modifying live objects. In this case, the failure appears to stem from YAML reserialization during the round-trip editing process rather than any Kubernetes API validation issue.

For clarity and to avoid ambiguity in the reproduction steps, I also tested the same configuration using a fresh ServiceAccount manifest. This was only done to isolate the behavior and confirm that the field itself is not the source of the issue; the original observation was based strictly on modifying a live object via `kubectl edit`, as described above.

It may be worth noting for documentation or kubectl guidance that live object editing followed by force replacement is not a reliable mechanism for structural YAML modifications, as the intermediate serialized output can introduce parsing issues even when the final intended manifest is valid. Recommended practice remains declarative updates or patch-based changes for this type of modification. Thanks!
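For reference, a patch-based change along the lines mentioned above might look like the following sketch (using the `bot-sa` name and `automated` namespace from earlier in this thread; adjust to your own cluster):

```shell
# A merge patch sets the root-level field directly, without
# round-tripping the whole object through an editor.
kubectl patch serviceaccount bot-sa -n automated \
  --type=merge -p '{"automountServiceAccountToken": false}'
```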
If the edit failed, then the attempt to force apply the change would also be expected to fail unless you edit the file. Did you make changes to it? There is something you are modifying in that edit that is incorrect. If you can cat the temporary file and paste its contents, in a code block and exactly as it is in the file, then we might be able to tell what is happening. In that original edit it looks like the line was added in the wrong place. It is either that, or it's being added with the wrong indentation. Another option would be to use something like peek or another screen recorder to capture exactly what/how you are doing the edit.
sa_replace_issue_k8s.mp4 |
You're putting the line right in the middle of the indented `metadata` block. The edit should work fine if you put it before the `metadata` line, at the root level of the object.
I appreciate the technical context. I see your point that this structure works for a brand-new declarative manifest. However, I want to clarify that throughout this entire PR and thread, my focus, and all the data I've shared, has been strictly regarding the `kubectl edit` and `replace --force` workflow on live objects.

If you look at the documentation I have open in sa_replace_issue_k8s_2.mp4, I am copying the example verbatim. When that exact layout is used during a live edit, it triggers a conversion error (seen at 0:39). This cost me an hour of my time because the documentation suggests a layout that fails in a live-cluster context.

Since we want to maintain the current layout for new files, we could instead maybe add a remark or warning to this section. Something like:

> Note: When using kubectl edit to modify a live ServiceAccount, ensure automountServiceAccountToken is placed at the root level (the same level as kind or metadata). Placing it nested within the metadata block as shown in some examples may cause reserialization errors during a replace --force operation.

As shown at 0:58 in the video, moving it to the root level is the only way to get the API server to return `serviceaccount/service edited`. Adding this small note ensures the documentation serves both declarative and imperative workflows without being a 'blind spot' for others.
Well, the problem is you edited it incorrectly. Then you tried to force apply the temporary file without editing that file to correct your error. So there aren't any changes that need to be made: if you give it an invalid change, it is going to fail.
I put it exactly where the documentation shows; it's open in my video side by side to prove it.
Indentation matters in yaml. The docs show the line after the indented `metadata` block, back at the root level. Documenting how yaml works is likely outside the scope of the k8s docs.
Thank you for the additional references. I believe there may be a misunderstanding about what this PR is addressing. This is not about YAML key ordering. I agree that YAML ordering does not affect semantics. The issue demonstrated in the video is related to API field location and how `kubectl edit` handles updates to an existing ServiceAccount.

In the video and screenshots, you can see that automountServiceAccountToken was placed exactly where the current documentation shows it. It was not inserted arbitrarily or mid-metadata; it was copied verbatim from the documented example. That placement works correctly when creating a new resource from scratch. However, when editing an existing ServiceAccount using `kubectl edit`, placing the field in that documented location does not result in the expected behavior. This appears to be related to how the API server processes updates to existing objects, not YAML syntax.

The goal of this PR is not to change YAML structure arbitrarily, but to clarify this distinction so users editing existing resources understand where the field must be placed for the change to take effect. YAML ordering versus API field location is a common source of confusion, and this PR aims to reduce that confusion in the documentation.
No, the issue demonstrated in the video is that you pasted the line in the wrong location, and didn't understand the yaml structure well enough to see why that was wrong. It doesn't matter how the example is updated in the documentation, since it won't always 100% match what a user sees on their local cluster. If the user doesn't understand that putting a root-level property in the middle of a nested block in the yaml is wrong, moving things around is not going to help.
Nothing in this is specific to how kubectl handles edits. It properly gave an error, with about as good an error message as it could, to let you know the yaml structure you tried to give it was invalid. Again, this was ignored or misunderstood, so rearranging the order of the example yaml in the docs is not going to help if that is not understood. I get this can be confusing. But as I'm trying to point out, the example is correct and the behavior is correct: you tried to make an invalid edit to the object and were prevented from doing that.
Thanks for the feedback. To clarify the intent and the behavior shown in the video: the manifest itself is valid and works as expected when using `kubectl apply` for a new object, as mentioned earlier, where the full ServiceAccount object is created and the API server accepts and normalizes the field.

The issue demonstrated is specifically in the `kubectl edit` workflow on an existing ServiceAccount. In this case, Kubernetes is no longer processing a full object, but a strategic merge patch against the existing stored state. What I observed is that automountServiceAccountToken behaves differently in this patch-based update path compared to initial creation. Depending on how the field is represented in the live object during `kubectl edit`, the update may be rejected or not persisted, even though the same field placement works correctly during creation via `kubectl apply`.

This is not about YAML ordering or indentation. The field placement in the example is correct as shown in the documentation. The difference appears to come from how Kubernetes handles patch application vs full object creation, and how server-side reconciliation processes that field on existing resources. The goal of the PR is to reflect this distinction so users understand that behavior during `kubectl edit` on existing ServiceAccounts may not match the behavior when creating a new object from the same manifest.
This is entirely about YAML ordering and indentation. You are placing the property in an invalid location. That is not a documentation issue. That is a misunderstanding of YAML structuring. |
Thanks for taking the time to review this and for the feedback. |





Closes #55467