
Verifying imagery of every donation

By Guido, published on January 9, 2024 · 5 min read

Our platform allows you to donate to projects that are restoring nature, and follow the progress first-hand. Restoration practitioners around the world use our infrastructure to provide you with 1:1 feedback on the positive impact they've made with your donations. When it comes to the stream of pictures coming directly from the partner, our platform runs a ton of analyses to ensure that the proof you're receiving is real, unique and wouldn't have happened without your support! 
To optimize for accountability and efficiency, our Ops team has developed its own AI models and a tech-driven verification dashboard.
We think it's pretty cool, and we'd love to tell you all about it! 

A picture says more than a thousand words

Every image tells a story, not just visually but also through its data. Beyond the visible, images carry metadata (EXIF) with details like camera specs, size, orientation and, perhaps more importantly, where and when the picture was taken. Leveraging this data, we've developed several models to check and validate each image.
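
For the technically curious: this kind of metadata can be read straight from the file. Here's a minimal sketch using Python's Pillow library (illustrative only; which fields are present varies by camera and upload path):

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_metadata(path):
    """Return a dict of human-readable EXIF tags, including GPS details."""
    exif = Image.open(path).getexif()
    data = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # GPS data lives in a nested directory with its own tag names.
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 = the GPSInfo tag
    data["GPSInfo"] = {GPSTAGS.get(t, t): v for t, v in gps_ifd.items()}
    return data
```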


1. Check if the metadata matches our agreements

Our analysis starts with checking each image's metadata: the date, time, longitude, latitude and altitude at which it was taken, cross-referenced against the impact contract we have with the local project. Are the pictures all unique in terms of metadata, and were they taken after we placed the order? Are they in a place we know the partner has rightful access to? Are the intervals between pictures consistent with the time it takes a human being to actually do the work? This is our first step to confirm each photo's authenticity and that it isn't missing crucial details like location data.

Apparently one image is missing its metadata, so let's skip it!
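
To give a flavour of these checks, here's a hedged sketch in Python. The ImpactContract fields and thresholds are hypothetical, not our actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ImpactContract:
    order_placed: datetime   # pictures must be taken after this moment
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float
    min_interval: timedelta  # plausible minimum time between two plantings

def passes_contract(photos, contract):
    """photos: list of (timestamp, lat, lon) tuples, sorted by timestamp."""
    seen = set()
    previous_ts = None
    for ts, lat, lon in photos:
        if (ts, lat, lon) in seen:            # every picture must be unique
            return False
        seen.add((ts, lat, lon))
        if ts < contract.order_placed:        # taken after we placed the order?
            return False
        if not (contract.lat_min <= lat <= contract.lat_max
                and contract.lon_min <= lon <= contract.lon_max):
            return False                      # inside the project area?
        if previous_ts is not None and ts - previous_ts < contract.min_interval:
            return False                      # humanly plausible pacing?
        previous_ts = ts
    return True
```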

2. Deduplicate double data

To prevent accidental re-uploads, we compare file names and image hash codes. A hash code is a unique digital fingerprint, generated from the image's content, used to identify it and differentiate it from others. Comparing both names and hash codes ensures every image in our dataset is unique, so we avoid any double counting on our platform.

The image hash helps to take out any accidental doubles! 
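
A minimal sketch of the idea: a cryptographic hash such as SHA-256 catches byte-identical re-uploads, while a perceptual hash (here via the open-source imagehash library, an illustrative choice) also catches re-encoded copies of the same picture:

```python
import hashlib
from PIL import Image
import imagehash  # open-source perceptual hashing library

def fingerprints(path):
    with open(path, "rb") as f:
        exact = hashlib.sha256(f.read()).hexdigest()  # byte-identical re-uploads
    perceptual = imagehash.phash(Image.open(path))    # robust to re-encoding
    return exact, perceptual

def is_duplicate(path, seen_exact, seen_perceptual, max_distance=4):
    exact, perceptual = fingerprints(path)
    if exact in seen_exact:
        return True
    # Perceptual hashes of near-identical images differ by only a few bits.
    return any(perceptual - other <= max_distance for other in seen_perceptual)
```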

3. Align the orientation

Seeing your donation come to life is great, but if you have to tilt your head to see it in the right orientation, that's a bit annoying. The other AI checks we have lined up also work better when the pictures all share the same orientation. So we run an algorithm to check each image's orientation; if a rotation is detected, the image is presented to our Ops team to adjust where needed.

So, let’s set this one straight!
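
In practice, a rotation can be detected from the EXIF orientation tag. A sketch with Pillow (in our pipeline a detected rotation goes to Ops for review; this sketch simply applies the correction):

```python
from PIL import Image, ImageOps

def normalize_orientation(path):
    img = Image.open(path)
    orientation = img.getexif().get(0x0112)  # 0x0112 = the EXIF Orientation tag
    if orientation and orientation != 1:     # 1 means "already upright"
        # exif_transpose applies the rotation/flip that the tag describes
        img = ImageOps.exif_transpose(img)
    return img, orientation
```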

4. Check if the picture is clear

The next AI-powered verification checks the images for sharpness. A blurry or shaky image doesn't do justice to your contribution and skews the results of our further analysis, so it is rejected.

Would you be happy with this as proof? Neither would we. So this one is rejected!
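
One common way to score sharpness (an illustrative choice, not necessarily the exact method we use) is the variance of the Laplacian: sharp images have strong edges and therefore high variance. A sketch with OpenCV:

```python
import cv2

def is_sharp(path, threshold=100.0):
    """threshold is dataset-dependent and purely illustrative."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of edge response
    return score >= threshold, score
```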

5. Filter out any pictures that aren't proof

What if one of our practitioners accidentally uploads an image that isn't proof of the implementation of your donation, like an accidental selfie instead of a picture of a tree being planted? Our outlier AI model identifies images that differ strongly from the rest we've received from that partner, ensuring we keep only those that are genuine proofs of your donation.

The middle one is a nice image, but it's very different and not one we can link to a donation!
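
One way to picture this (an illustrative sketch, not our exact model): embed each image as a feature vector, for instance with a pretrained CNN, and flag the ones that sit far away from the partner's usual imagery:

```python
import numpy as np

def flag_outliers(embeddings, z_threshold=3.0):
    """embeddings: (n_images, dim) array of image feature vectors,
    e.g. from a pretrained CNN. Returns a boolean mask of outliers."""
    centroid = embeddings.mean(axis=0)
    distances = np.linalg.norm(embeddings - centroid, axis=1)
    # Flag images whose distance to the centroid is extreme for this batch.
    z_scores = (distances - distances.mean()) / (distances.std() + 1e-9)
    return z_scores > z_threshold
```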

6. Checking pictures for similarity

Now, let's explore our most exciting model. Imagine having two images of the same tree, each taken from a different angle; counting both would overstate the impact. To address this, we've developed an AI model, trained on our own dataset, to detect such similarities and prevent double counting.

With a human eye we can see immediately: this is the same tree.
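
At its core, such a model compares feature vectors of images. A minimal, purely illustrative sketch using cosine similarity (the real model is trained on our own dataset and is considerably more involved):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_object(emb_a, emb_b, threshold=0.9):
    """threshold is illustrative; in practice it is tuned on labeled pairs."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```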

Keeping a human touch

Automation is cool and efficient, but it still takes human judgement to make the final call. So, just to be sure, we keep a human eye on the outcomes of the models, evaluating the highest-risk imagery. With every proof uploaded to the platform, the models learn from this human feedback and get better!

Additionally, we conduct a bit of detective work, tracing the paths taken by the photographers. This ensures that only genuinely duplicate photos are flagged as invalid, for example when someone inadvertently captures the same tree twice while walking in a circle. Our models might not hit the bullseye every time, but with a human in the loop, and with third-party sources like satellite and drone imagery, we're confident the results are quite accurate.
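
As a flavour of that detective work, here's a sketch that reconstructs the walked path from timestamps and GPS points and flags non-consecutive shots taken from almost the same spot (the radius and pairing logic are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # Earth radius ~ 6,371 km

def suspicious_revisits(photos, radius_m=5.0):
    """photos: list of (timestamp, lat, lon), sorted by timestamp.
    Returns index pairs of non-consecutive shots from almost the same spot."""
    pairs = []
    for i in range(len(photos)):
        for j in range(i + 2, len(photos)):  # skip direct neighbours
            if haversine_m(photos[i][1], photos[i][2],
                           photos[j][1], photos[j][2]) <= radius_m:
                pairs.append((i, j))
    return pairs
```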

Curious about the inner workings of these models or eager to help improve them? Send us an email! ([email protected]).
