In a highly available (HA) Kubernetes control plane using an external etcd topology, you typically run three control plane nodes and three separate etcd nodes, totaling six hosts, making D correct. HA design relies on quorum-based consensus: etcd uses Raft and requires a majority of members to be available to make progress. Three etcd members is the common minimum for HA because the cluster tolerates one member failure while still holding quorum (2 of 3).
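To make the quorum arithmetic concrete, here is a small illustrative Go sketch (not part of any Kubernetes tooling) that prints the majority size and failure tolerance for different member counts; it shows why two members tolerate zero failures while three tolerate one.

```go
// quorum.go: a minimal sketch of etcd-style majority-quorum math.
// For a cluster of n members, quorum = floor(n/2) + 1, and the cluster
// tolerates n - quorum member failures while still making progress.
package main

import "fmt"

func main() {
	for n := 1; n <= 7; n++ {
		quorum := n/2 + 1
		tolerated := n - quorum
		fmt.Printf("members=%d  quorum=%d  failures tolerated=%d\n", n, quorum, tolerated)
	}
}
```

Running it shows members=2 yields zero tolerated failures, while members=3 yields one, which is the core reason three is the usual minimum.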
In the external etcd topology, etcd is decoupled from the control plane nodes. This separation improves fault isolation: if a control plane node fails or is replaced, etcd remains stable and independent; likewise, etcd maintenance can be handled separately. Kubernetes API servers (often multiple instances behind a load balancer) talk to the external etcd cluster for storage of cluster state.
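As a rough illustration of the separation described above, the sketch below polls each external etcd member's /health endpoint from outside the control plane. The hostnames are placeholders, and a real deployment would use HTTPS with client certificates rather than plain HTTP.

```go
// etcd_health.go: a hedged sketch of checking each external etcd member.
// Endpoint addresses are hypothetical; production clusters serve the
// client port over TLS with certificate authentication.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	endpoints := []string{ // placeholder member addresses
		"http://etcd-1.example.internal:2379",
		"http://etcd-2.example.internal:2379",
		"http://etcd-3.example.internal:2379",
	}
	client := &http.Client{Timeout: 3 * time.Second}
	for _, ep := range endpoints {
		resp, err := client.Get(ep + "/health")
		if err != nil {
			fmt.Printf("%s: unreachable (%v)\n", ep, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s: %s %s\n", ep, resp.Status, body)
	}
}
```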
Options A and B propose four hosts, but they break common HA/quorum best practices. Two etcd nodes do not form a robust quorum configuration (a two-member etcd cluster cannot tolerate a single failure without losing quorum), and one control plane node is not HA for the API server, scheduler, and controller-manager components. Option C describes a stacked etcd topology (control plane and etcd on the same hosts), which can be HA with three hosts, but the question explicitly says external etcd, not stacked. In a stacked topology, you typically run three control plane nodes, each hosting an etcd member; in an external topology, you run three control plane nodes plus three dedicated etcd nodes, as the sketch below spells out.
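The host-count difference between the two topologies reduces to simple addition; the replica counts below just restate the canonical 3+3 pattern discussed here, not values mandated by Kubernetes.

```go
// topology_hosts.go: a trivial sketch of total host counts for the
// stacked and external etcd HA topologies.
package main

import "fmt"

func main() {
	controlPlane, etcdMembers := 3, 3

	stacked := controlPlane                 // etcd members colocated on control plane nodes
	external := controlPlane + etcdMembers  // dedicated etcd hosts

	fmt.Printf("stacked etcd:  %d hosts\n", stacked)  // 3
	fmt.Printf("external etcd: %d hosts\n", external) // 6
}
```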
Operationally, an external etcd topology is often chosen when you want dedicated resources, separate lifecycle management, or stronger isolation for the datastore. It can reduce blast radius but increases infrastructure footprint and operational complexity (TLS, backup/restore, networking). Still, for the canonical HA external-etcd pattern, the expected answer is six hosts: three control plane nodes plus three etcd nodes.
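On the backup/restore point, one common approach is to wrap etcdctl's snapshot command in a small script. The sketch below does this from Go; the endpoint, certificate paths, and output location are illustrative placeholders, not values from a real cluster.

```go
// etcd_backup.go: a hedged sketch of automating an etcd snapshot with
// etcdctl. All paths and the endpoint are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	out := fmt.Sprintf("/var/backups/etcd-snapshot-%s.db", time.Now().Format("20060102-150405"))

	cmd := exec.Command("etcdctl",
		"--endpoints=https://etcd-1.example.internal:2379", // placeholder endpoint
		"--cacert=/etc/etcd/pki/ca.crt",                    // placeholder cert paths
		"--cert=/etc/etcd/pki/client.crt",
		"--key=/etc/etcd/pki/client.key",
		"snapshot", "save", out,
	)
	cmd.Env = append(os.Environ(), "ETCDCTL_API=3") // v3 API is the default on recent etcdctl versions

	if b, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("snapshot failed: %v\n%s", err, b)
	} else {
		fmt.Printf("snapshot saved to %s\n%s", out, b)
	}
}
```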
=========