Coding Guidelines & Conventions
- Always be clear about what clients or client configs target. Never use an unqualified
  `client`. Instead, always qualify. For example:
  - `rootClient`
  - `orgClient`
  - `pclusterClient`
  - `rootKcpClient`
  - `orgKubeClient`
- Configs intended for `NewForConfig` (i.e. today often called "admin workspace config")
  should uniformly be called `clusterConfig`.
  - Note: with org workspaces, `kcp` will no longer default clients to the "root"
    ("admin") logical cluster.
  - Note 2: sometimes we use clients for the same purpose, but this can be harder to read.
- Cluster-aware clients should follow similar naming conventions:
  - `crdClusterClient`
  - `kcpClusterClient`
  - `kubeClusterClient`
- `clusterName` is a kcp term. It is NOT the name of a physical cluster. If we mean the
  latter, use `pclusterName` or similar.
- Qualify "namespace"s in code that handles both up- and downstream, e.g.
  `upstreamNamespace`, `downstreamNamespace`, and also `upstreamObj`, `downstreamObj`.
- Logging:
  - Use the `fmt.Sprintf("%s|%s/%s", clusterName, namespace, name)` syntax (a combined
    sketch follows this list).
  - The default log level is 2.
  - Controllers should generally log (a) one line (not more) of non-error progress per
    item with `klog.V(2)`, (b) actions like create/update/delete via `klog.V(3)`, and
    (c) skipped actions, i.e. what was not done and why, via `klog.V(4)`.
- When orgs land: `clusterName` or `fooClusterName` is always the fully qualified value
  that you can stick into `obj.ObjectMeta.ClusterName`. It's not necessarily the
  `(Cluster)Workspace.Name` from the object. For the latter, use `workspaceName` or
  `orgName`.
- Generally do `klog.Error` or `return err`, but not both together. If you need to make
  it clear where an error came from, you can wrap it.
- New features start under a feature gate (`--feature-gate GateName=true`). At some point
  in the future, new feature gates will be off by default at least until the APIs are
  promoted to beta (we are not there before we have reached MVP).
- Feature-gated code can be incomplete, and its e2e coverage can be incomplete, too. We
  do not compromise on unit tests: feature-gated code needs full unit tests like every
  other code path.
- Go Proverbs are good guidelines for style: https://go-proverbs.github.io/ – watch https://www.youtube.com/watch?v=PAAkCSZUG1c.
- We use Testify's `require` a lot in tests, and avoid `assert`.

  Note this subtle distinction of nested `require` statements:

  ```go
  require.Eventually(t, func() bool {
      foos, err := client.List(...)
      require.NoError(t, err) // fail fast, including failing require.Eventually immediately
      return someCondition(foos)
  }, ...)
  ```

  versus:

  ```go
  require.Eventually(t, func() bool {
      foos, err := client.List(...)
      if err != nil {
          return false // keep trying
      }
      return someCondition(foos)
  }, ...)
  ```

  The first aborts the whole test as soon as `client.List` returns an error; the second
  treats an error as "condition not met yet" and keeps polling until the timeout.
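To make several of these conventions concrete, here is a minimal sketch of a controller
sync function. All identifiers (`controller`, `orgKubeClient`, the ConfigMap lookup) are
hypothetical, not taken from the kcp codebase; the sketch only illustrates a qualified
client, the `clusterName|namespace/name` logging key, the V-level conventions, and
wrapping-and-returning an error instead of also logging it:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// controller is a hypothetical controller with a qualified client field.
type controller struct {
	orgKubeClient kubernetes.Interface // targets the org workspace; never a bare "client"
}

func (c *controller) sync(ctx context.Context, clusterName, namespace, name string) error {
	// The conventional logging key: clusterName|namespace/name.
	key := fmt.Sprintf("%s|%s/%s", clusterName, namespace, name)

	// (a) one non-error progress line per item at V(2).
	klog.V(2).Infof("reconciling %s", key)

	cm, err := c.orgKubeClient.CoreV1().ConfigMaps(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// Wrap and return; do not also klog.Error the same error.
		return fmt.Errorf("failed to get ConfigMap %s: %w", key, err)
	}

	// (b) actions like create/update/delete at V(3); (c) skipped actions would go at V(4).
	klog.V(3).Infof("updating %s", key)
	_ = cm
	return nil
}
```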
Using Kubebuilder CRD Validation Annotations
All of the built-in types for kcp are CustomResourceDefinitions, and we generate the YAML spec for them from our Go types using kubebuilder.
When adding a field that requires validation, custom annotations are used to translate this logic into the generated OpenAPI spec. This doc gives an overview of possible validations. These annotations map directly to concepts in the OpenAPI spec; for instance, the format of strings is defined there, not in kubebuilder. Furthermore, Kubernetes has forked the OpenAPI project here and extends it with more formats in the extensions-apiserver here.
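For instance, a hypothetical spec type (the type and fields are made up, but the markers
are standard kubebuilder validation markers) could constrain its fields like this:

```go
// WidgetSpec is a hypothetical spec type demonstrating common validation markers.
type WidgetSpec struct {
	// name must be a DNS-label-style string between 1 and 63 characters.
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:MinLength=1
	// +kubebuilder:validation:MaxLength=63
	// +kubebuilder:validation:Pattern=`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`
	Name string `json:"name"`

	// replicas is bounded and defaulted by the apiserver, not by controller code.
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=10
	// +kubebuilder:default=3
	Replicas int32 `json:"replicas,omitempty"`

	// size only admits one of the listed values.
	// +kubebuilder:validation:Enum=Small;Medium;Large
	Size string `json:"size,omitempty"`
}
```

Running controller-gen over such a type emits the corresponding `minLength`, `maxLength`,
`pattern`, `enum`, etc. into the CRD's OpenAPI schema.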
Replicated Data Types
Some objects are replicated and cached amongst shards when kcp is run in a sharded configuration. When writing code that lists or gets these objects, be sure to reference both the shard-local and the cache informers. To make this more convenient, wrap the lookup in a function. For example:
```go
func NewController(
	ctx context.Context,
	localAPIExportInformer, cacheAPIExportInformer apisinformers.APIExportClusterInformer,
) (*controller, error) {
	...
	return &controller{
		listAPIExports: func(clusterName logicalcluster.Name) ([]*apisv1alpha1.APIExport, error) {
			// Prefer the shard-local informer; on error, fall back to the cache informer.
			exports, err := localAPIExportInformer.Cluster(clusterName).Lister().List(labels.Everything())
			if err != nil {
				return cacheAPIExportInformer.Cluster(clusterName).Lister().List(labels.Everything())
			}
			return exports, nil
		},
		...
	}, nil
}
```
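A reconciler can then call the wrapper without caring whether the result came from the
shard-local informer or the cache informer; a hypothetical usage sketch:

```go
exports, err := c.listAPIExports(clusterName)
if err != nil {
	return fmt.Errorf("failed to list APIExports for %s: %w", clusterName, err)
}
for _, export := range exports {
	reconcileExport(export) // hypothetical per-object handler
}
```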
A full list of replicated resources is currently outlined in the replication controller.