Amazon SQS standard queues provide at-least-once delivery, which means a message can be delivered more than once. When Lambda is triggered by an SQS standard queue, duplicate delivery can occur because of retries, visibility timeouts expiring before a message is deleted, or transient errors. Therefore, "processed multiple times" is expected behavior unless the system implements deduplication or idempotency.
Among the provided options, the most cost-effective way to reduce duplicate processing is to use an SQS FIFO queue with deduplication. FIFO queues support exactly-once processing semantics within the constraints of the service: they prevent duplicate message delivery within the 5-minute deduplication interval when a MessageDeduplicationId is supplied or content-based deduplication is enabled. This directly mitigates duplicate processing without requiring large architectural changes.
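To make the FIFO behavior concrete, here is a minimal local sketch of how the deduplication interval works. The class name, method names, and the in-memory store are illustrative assumptions, not the SQS API; with content-based deduplication, SQS derives the deduplication ID from a SHA-256 hash of the message body, and a resend within the 5-minute interval is accepted but not delivered again.

```python
import hashlib

DEDUP_INTERVAL_SECONDS = 300  # SQS FIFO deduplication window (5 minutes)

class FifoQueueSketch:
    """Illustrative model of FIFO deduplication; not the real SQS client."""

    def __init__(self):
        self._first_seen = {}  # dedup_id -> time the ID was first seen
        self.delivered = []    # messages that would actually reach consumers

    def send(self, body, now, dedup_id=None):
        # Content-based deduplication: derive the ID from the body hash
        if dedup_id is None:
            dedup_id = hashlib.sha256(body.encode()).hexdigest()
        seen_at = self._first_seen.get(dedup_id)
        if seen_at is not None and now - seen_at < DEDUP_INTERVAL_SECONDS:
            return False  # duplicate within the interval: silently dropped
        self._first_seen[dedup_id] = now
        self.delivered.append(body)
        return True

q = FifoQueueSketch()
q.send("order-123", now=0)    # delivered
q.send("order-123", now=60)   # duplicate within 5 minutes: dropped
q.send("order-123", now=400)  # outside the window: delivered again
```

After the three sends above, `q.delivered` contains two copies of the message: the duplicate inside the interval was suppressed, while the resend after the window was treated as a new message, mirroring the documented FIFO interval semantics.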
Option B (DLQ) is important for handling poison messages, but it does not prevent duplicates in normal processing; it only captures messages that fail repeatedly.
Option C (concurrency = 1) reduces parallelism but does not eliminate duplicates; the same message can still be delivered again if the visibility timeout expires or retries occur.
Option D is a major redesign and not cost-effective for simply addressing duplicate SQS message processing.
Note: In real production systems, the best practice is to make processing idempotent (so duplicates do no harm). But among the choices, FIFO + deduplication is the most direct and cost-effective fix.
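The idempotency approach mentioned above can be sketched as follows. This is a hedged illustration, not a production pattern: the handler name, record shape, and the in-memory set are assumptions (in production, the processed-ID check is typically a conditional write to a durable store such as DynamoDB).

```python
# Track which SQS message IDs have already been handled.
# In production this would be a durable store, not process memory.
processed_ids = set()
side_effects = []  # stands in for the real work (DB writes, charges, etc.)

def handle_record(record):
    """Idempotent handler: the side effect runs at most once per message ID."""
    msg_id = record["messageId"]
    if msg_id in processed_ids:
        return "skipped-duplicate"
    processed_ids.add(msg_id)
    side_effects.append(record["body"])  # the real work happens exactly once
    return "processed"

handle_record({"messageId": "abc", "body": "charge order 42"})
handle_record({"messageId": "abc", "body": "charge order 42"})  # duplicate: no-op
```

With this in place, a duplicate delivery from a standard queue is harmless, which is why idempotency is the general best practice even when FIFO deduplication is also used.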
Therefore, switch to an SQS FIFO queue and use deduplication.