The IT team’s action—capping how many requests a single user can make per second and then delaying or dropping aggressive connections—is the defining behavior of rate limiting. In DDoS conditions, especially when the portal is under a surge of automated or abusive traffic, rate limiting enforces a policy that restricts request frequency from a source (such as an IP address, session, API key, or user identifier). This helps preserve availability by preventing any one client (or a small set of clients) from consuming a disproportionate share of application and infrastructure resources.
The key wording in the scenario is that “aggressive connections are delayed or dropped while most legitimate customers continue to use the service.” Rate limiting is designed for precisely this outcome: it introduces friction for abusive traffic patterns while letting typical user behavior through. Depending on the implementation, controls can respond with delays (throttling), temporary blocks, connection resets, or an HTTP 429 (Too Many Requests) error when limits are exceeded. Rate limiting is commonly enforced at the edge (reverse proxy/CDN), load balancer, WAF, or application gateway to reduce pressure on backend services.
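The behavior described above can be sketched with a classic token-bucket limiter: each source earns tokens at a steady rate, spends one token per request, and is throttled or dropped once its bucket is empty. This is a minimal illustration, not any specific product's implementation; the class and method names (`TokenBucketLimiter`, `allow`) are hypothetical.

```python
import time
from typing import Dict, Optional, Tuple


class TokenBucketLimiter:
    """Per-client token bucket (hypothetical sketch).

    Each client may average `rate` requests per second, with short
    bursts up to `burst` requests. Requests over the limit are denied,
    which a real deployment might map to a delay or an HTTP 429.
    """

    def __init__(self, rate: float, burst: float) -> None:
        self.rate = rate    # tokens replenished per second
        self.burst = burst  # maximum bucket size
        # client_id -> (remaining tokens, timestamp of last refill)
        self.buckets: Dict[str, Tuple[float, float]] = {}

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(client_id, (self.burst, now))
        # Refill in proportion to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[client_id] = (tokens - 1.0, now)
            return True   # request admitted
        self.buckets[client_id] = (tokens, now)
        return False      # over limit: delay, drop, or respond 429
```

Because each client has its own bucket, an aggressive source exhausts only its own tokens and gets denied, while other customers' requests continue to be admitted, which is exactly the outcome the scenario describes.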
Why the other options are not the best match:
Shutting Down Services (B) is an extreme measure that sacrifices availability to stop an attack; the scenario explicitly states service largely continues.
Absorb the Attack (C) refers to scaling capacity or using scrubbing centers/CDNs to handle volume without necessarily restricting individual requester behavior; the described control is specifically per-user request caps.
Degrading Services (D) generally means intentionally reducing functionality or quality (e.g., disabling non-essential features) to keep core services alive; here, the main technique is enforcing request-rate thresholds.
Thus, the countermeasure strategy being used is A. Rate Limiting.