by imron 7 hours ago

I like to use uuid5 for this. It produces IDs that are unique within a given namespace (the namespace itself is identified by a UUID), and it is deterministic: the same input key always produces the same output ID.
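
A minimal sketch in Python (the namespace value and key names are just illustrative):

    import uuid

    # A fixed namespace UUID for this kind of event (illustrative value).
    ORDERS_NS = uuid.UUID("a3bb189e-8bf9-3888-9912-ace4e6543002")

    # Same namespace + same input key => same output ID, on any machine, at any time.
    a = uuid.uuid5(ORDERS_NS, "order-42")
    b = uuid.uuid5(ORDERS_NS, "order-42")
    assert a == b

    # A different input key (or a different namespace) gives a different ID.
    assert uuid.uuid5(ORDERS_NS, "order-43") != a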

This has a number of nice properties:

1. You don’t need to store keys in any special way. Just make them a unique column in your db and the db will detect duplicates for you (and you can add handling logic as required, e.g. ignoring the duplicate if the other input fields are the same, or raising an error if a message has the same idempotency key but different fields).

2. You can reliably generate new downstream keys from an incoming key without any coordination between consumers: every consumer derives an identical output key for a given input key (see the sketch after this list).

3. In the event of a replayed message, it’s fine to republish downstream events: the system is deterministic for a given input, so identical input yields identical output (including the generated messages), and the resulting duplicate outputs are detected and ignored by downstream consumers.

4. This parallelises well because consumers are deterministic and don’t require any coordination except by db transaction.
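
A rough sketch of how 2 and 3 play out, assuming a consumer that derives its downstream event ID from the incoming idempotency key. The namespace value, table and column names, and the DB-API-style connection are all made up, and the ON CONFLICT syntax is Postgres-specific:

    import json
    import uuid

    # Fixed namespace owned by this consumer (illustrative value).
    CONSUMER_NS = uuid.UUID("0f8fad5b-d9cb-469f-a165-70867728950e")

    def handle(event_id: uuid.UUID, payload: dict, db) -> None:
        # Any replica that sees the same incoming event_id derives the same
        # downstream_id, with no coordination between consumers.
        downstream_id = uuid.uuid5(CONSUMER_NS, str(event_id))

        # A UNIQUE / PRIMARY KEY column makes replays a no-op.
        db.execute(
            "INSERT INTO outbox (id, payload) VALUES (%s, %s) "
            "ON CONFLICT (id) DO NOTHING",
            (str(downstream_id), json.dumps(payload)),
        )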

cortesoft 6 hours ago

How is this different from or better than something like using a SHA256 of the input key?

Edit: Just looked it up... looks like that's basically what a uuid5 is: a SHA-1 hash of the namespace UUID plus the input string, truncated to 128 bits and formatted as a UUID.
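
A quick sanity check in Python (the helper name is made up):

    import hashlib
    import uuid

    def manual_uuid5(namespace: uuid.UUID, name: str) -> uuid.UUID:
        digest = hashlib.sha1(namespace.bytes + name.encode("utf-8")).digest()
        raw = bytearray(digest[:16])      # keep the first 128 bits
        raw[6] = (raw[6] & 0x0F) | 0x50   # set version to 5
        raw[8] = (raw[8] & 0x3F) | 0x80   # set the RFC 4122 variant
        return uuid.UUID(bytes=bytes(raw))

    assert manual_uuid5(uuid.NAMESPACE_URL, "example") == uuid.uuid5(uuid.NAMESPACE_URL, "example")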

dmurray 5 hours ago

This doesn't sound good at all. It's quite reasonable in many applications to want to send the same message twice: e.g. "Customer A buys N units of Product X".

If you try to disambiguate those messages using, say, a timestamp or a unique transaction ID, you're back where you started: how do you avoid collisions of those fields? Better if you used a random UUIDv4 in the first place.

imron 3 hours ago

You don’t generate the key based on the message contents; rather, you use the incoming idempotency id.

Customer A can buy N units of product X as many times as they want.

Each unique purchase you process will have its own globally unique id.

Each duplicated source event you process (due to “at least once” delivery guarantees) will generate the same id as the other duplicates, without needing to coordinate between consumers.
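
A toy sketch of the distinction (the namespace value is made up):

    import uuid

    # Fixed namespace for ids derived by this consumer (illustrative value).
    NS = uuid.UUID("7c9e6679-7425-40de-944b-e07fc1f90ae7")

    purchase_1 = uuid.uuid4()  # idempotency id minted when the first purchase was accepted
    purchase_2 = uuid.uuid4()  # a second, distinct purchase with identical contents

    # The same source event delivered twice derives the same downstream id...
    assert uuid.uuid5(NS, str(purchase_1)) == uuid.uuid5(NS, str(purchase_1))

    # ...while distinct purchases keep distinct ids, even with identical contents.
    assert uuid.uuid5(NS, str(purchase_1)) != uuid.uuid5(NS, str(purchase_2))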

bknight1983 6 hours ago

I recently started using uuidv5 for ID generation based on a composite key. This allows a more diverse key set for partitioning by UUID.
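
Roughly along these lines (the field names and namespace value are just illustrative):

    import uuid

    NS = uuid.UUID("9f2c1f7e-3d1a-4b8e-9a6d-2f5d8e7c4b3a")  # fixed namespace for this table

    def record_id(tenant: str, region: str, order_no: int) -> uuid.UUID:
        # Join the composite key into one stable string before hashing.
        return uuid.uuid5(NS, f"{tenant}:{region}:{order_no}")

    # The hash spreads ids across the keyspace, so partitioning on the UUID
    # (e.g. by its first byte) stays balanced even when the raw keys are skewed.
    rid = record_id("acme", "eu-west", 1001)
    partition = rid.bytes[0] % 16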