Basically, I tested a number of points in each image, few enough to save in a database.

More points towards the center of the image, fewer at the edges.

Before testing, I rescaled the image to a fixed format (to catch rescales and some crops/borders). I also saved some extra data (average R, average G, average B and so on) to have something database-searchable to do most of the filtering before comparing points. Once the rough filtering was done, I compared the points using the square of the differences. This means the measure is tolerant of small differences, but big differences hit hard. Then it was just a matter of setting a threshold for what was considered a dupe.

I also looked at the wanted image sizes first and selected the images that matched those, if any. It worked really well: a crappy, heavily compressed JPG with a watermark checked out as a dupe of the original, while two consecutive frames of a slow video didn't, and there was a pretty wide margin between their difference values.
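The approach above can be sketched in pure Python. This is a minimal illustration under my own assumptions, not the author's actual code: the function names, the Gaussian center-weighted sampling, the 64x64 rescale size, and nearest-neighbour resizing are all choices made for the example.

```python
import random

def sample_points(n=64, seed=0):
    # More points towards the center, fewer at the edges:
    # draw from a Gaussian around (0.5, 0.5), rejecting out-of-bounds draws.
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y = rng.gauss(0.5, 0.2), rng.gauss(0.5, 0.2)
        if 0.0 <= x < 1.0 and 0.0 <= y < 1.0:
            pts.append((x, y))
    return pts

def resize_nearest(img, size=64):
    # img is a list of rows of (r, g, b) tuples.
    # Rescale to a fixed size x size format (nearest neighbour),
    # so rescaled copies map onto the same sample grid.
    h, w = len(img), len(img[0])
    return [[img[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

def fingerprint(img, pts, size=64):
    # Sample the chosen points; this small list is what goes in the database.
    small = resize_nearest(img, size)
    return [small[int(y * size)][int(x * size)] for (x, y) in pts]

def channel_means(img):
    # Extra coarse data (average R, G, B) for database-searchable
    # rough filtering before any point comparison.
    n = len(img) * len(img[0])
    sums = [0, 0, 0]
    for row in img:
        for px in row:
            for i in range(3):
                sums[i] += px[i]
    return tuple(s / n for s in sums)

def distance(fp1, fp2):
    # Sum of squared channel differences: tolerant of small
    # differences, but big differences hit hard.
    return sum((a - b) ** 2
               for p1, p2 in zip(fp1, fp2)
               for a, b in zip(p1, p2))
```

A pair of images would then be flagged as dupes when `distance` falls below a tuned threshold; the wide margin the post describes between "recompressed copy" and "adjacent video frames" is what makes that threshold easy to pick.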