After maintaining data for years, I guessed I had terabytes of duplicated files. I couldn't find a great solution, so I made my own using fdupes.
First, use your eyeballs to make sure nothing will go wrong. Here -r recurses into subdirectories, -n ignores zero-length files, and -m prints a summary of the duplicates found:

fdupes -rnm .
If all seems well, remove the duplicates. Adding -d deletes duplicates, and -N skips the prompts by preserving the first file in each set:

fdupes -rndN .
I'm sure this can be improved on. The fundamental concept here is to determine whether there are going to be any problems before automatically deleting a bunch of files.
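One way to build that confidence is to rehearse on a throwaway directory first. The sketch below is my own illustration, not part of the original workflow: it creates a sandbox with one known duplicate pair and counts duplicate candidates by hashing files with md5sum (standing in for fdupes so the check runs anywhere; assumes GNU coreutils for uniq -w).

```shell
#!/bin/sh
set -e

# Build a throwaway sandbox with a known duplicate pair and one unique file.
dir=$(mktemp -d)
echo "same content" > "$dir/a.txt"
echo "same content" > "$dir/b.txt"
echo "unique"       > "$dir/c.txt"

# Hash every file, sort by hash, then print all lines whose first 32
# characters (the md5 hash) repeat -- those are duplicate candidates.
dupes=$(md5sum "$dir"/*.txt | sort | uniq -w32 -D | wc -l)
echo "duplicate files found: $dupes"

# Clean up the sandbox.
rm -rf "$dir"
```

If the count matches what you planted (here, 2 duplicate files), you can trust the detection step before pointing any automatic deletion at real data.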