I'm the author of jdupes. The comparison that I think u/Freeky tried to link to is here. I'd like to point out that rdfind's performance seems to come from assuming checksums (hashes) are correct (see rdfind's algorithm explanation on GitHub), so there is never a byte-for-byte comparison of file contents. You'd get the same speed boost in jdupes with the -Q option, but trusting hashes as substitutes for full-file comparisons has the potential to cause data loss if you happen to "discover" a hash collision.
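To illustrate the distinction (this is just a sketch, not jdupes's or rdfind's actual code): hashes are fine for cheaply ruling files *out*, but only a byte-for-byte comparison can safely rule them *in*. The helper names here are hypothetical.

```python
import hashlib


def quick_hash(path, chunk=65536):
    """Hash a file's contents. Two different files *can* share a hash
    (a collision), which is exactly the data-loss risk described above."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()


def files_identical(a, b, chunk=65536):
    """Byte-for-byte comparison: the only check a collision cannot fool."""
    with open(a, "rb") as fa, open(b, "rb") as fb:
        while True:
            ba, bb = fa.read(chunk), fb.read(chunk)
            if ba != bb:
                return False
            if not ba:  # both files ended at the same offset with equal bytes
                return True


def safe_duplicate(a, b):
    # Hashes prune non-matches cheaply; the final verdict comes from real bytes.
    # Dropping the second check is what a hash-trusting mode (like -Q) does.
    return quick_hash(a) == quick_hash(b) and files_identical(a, b)
```

Skipping the `files_identical` step is where the speed comes from, and also where the (tiny but nonzero) risk lives.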
jdupes also supports a lot of file exclusion and behavior features that rdfind does not have. I just added -X newer/older today, which allows excluding files newer or older than a certain date/time, and also recently added APFS CoW (reflink/clonefile) support on top of the BTRFS/XFS dedupe support.
One more thing to note about the dupd benchmarks: dupd builds a file hash database first, then deduplicates using that database. That's why it is massively faster. It's kinda "cheating," kinda not "cheating." It's not accurate to compare dupd against most other duplicate finders because of its split methodology: the high speed ranking in the benchmark assumes the database already exists, so it omits the lengthy initial scan that builds it. jdupes, fdupes, rdfind, rmlint, etc. do not build a database and pay the full scan cost on every run.