
Over several months of lockdown, my CCTV camera accumulated photographs whenever it sensed motion. The resulting 85 images were then fed into three different object detection programmes (IBM's Cloud Pak for Data, Google's Cloud Vision API and Amazon's Rekognition Console) to find out how a machine would label photographs taken by another machine.
Fed the CCTV images, the three programmes generated 190 unique labels between them. I attempted to photograph 74 of these, carrying lists with several labels written on them at a time; 30 of the labels are showcased within the project.
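For readers curious about the counting: the 190-label figure comes from pooling the three services' outputs and removing duplicates, since the same object is often named by more than one service. A minimal sketch of that deduplication step, using invented example labels rather than the project's actual outputs:

```python
# Hypothetical label outputs from each service (illustrative only;
# these are not the project's real labels)
ibm_labels = ["person", "dog", "vehicle", "sky"]
google_labels = ["person", "tree", "sky"]
amazon_labels = ["dog", "car", "sky"]

# Pool all three outputs and deduplicate with a set union
unique_labels = sorted(set(ibm_labels) | set(google_labels) | set(amazon_labels))

print(unique_labels)       # the pooled label vocabulary
print(len(unique_labels))  # the count of unique labels
```

Run over the real CCTV images, the same union across the three services' outputs is what yields the 190 unique labels described above.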
My response images were then put into CVAT (Computer Vision Annotation Tool) to be hand labelled, with each box representing the label that prompted me to take the photograph, mimicking the process by which algorithms are taught to detect objects.
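For context, a hand-drawn box in CVAT is stored as a labelled rectangle with pixel coordinates; its XML export looks roughly like the fragment below. The file name, label and coordinates here are invented for illustration, not taken from the project:

```xml
<annotations>
  <image id="0" name="response_photo_01.jpg" width="6000" height="6000">
    <!-- one box per hand-drawn annotation; the label is the word
         that prompted the photograph -->
    <box label="bench" xtl="1204" ytl="980" xbr="3410" ybr="2875" occluded="0"/>
  </image>
</annotations>
```

Annotations in this form are exactly what object detection models are trained on, which is the process the hand labelling mimics.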
The final images shown were all taken on a Yashica Mat 124G and developed at home by myself, as a way of adding the human touch and human vision to the narrative of the project.


Example of IBM's Cloud Pak for Data labels

Example of Google's Cloud Vision API labels

Example of Amazon's Rekognition Console labels