Triple Chaser

Automated pipeline for rendering realistic image datasets to train computer vision ML classifiers.
For our contribution to the 2019 Whitney Biennial at New York’s Whitney Museum of American Art, we developed a machine learning and computer vision workflow to identify tear gas grenades in digital images. We focused on a specific brand of tear gas grenade: the Triple-Chaser CS grenade from the catalogue of Defense Technology, a leading manufacturer of ‘less-lethal’ munitions.

Building upon previous research, we used ‘synthetic’ images generated from 3D software to train machine learning classifiers. The result is a pragmatic end-to-end workflow that we hope will also be useful for open source human rights monitoring and research more broadly.
Unreal Engine
Substance Designer
Autodesk Maya
Labels that appear on the Triple Chaser around the world are recreated in Photoshop.
Weathering and wear effects are parameterised in Substance Designer.



Referencing this model alongside found photographs, we used Adobe’s Substance Designer to create parametric (that is, variable along certain axes) photorealistic textures for the canister.

In the scenes we render, each of the following aspects can be either continuously or discretely modified at render time, with the texture parameters driven through the Unreal Substance plugin (a sampling sketch follows the list):

Weathering and wear effects: using mask generators in Substance, we simulated full-material weathering effects such as dust, dirt, and grime, as well as physical deformations such as scratches and bends.

Environment properties: HDRI, time of day, lighting, weather effects, and surface textures.

Camera settings: positioning, object framing, focal length, depth of field, and exposure.

Image post-effects: a curated set of LUTs for colour grading, film grain, and image compression artifacts.
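To illustrate how such randomisation might be driven, here is a minimal Python sketch of a seeded parameter sampler. All names (RenderParams, sample_params) and value ranges are hypothetical stand-ins for the actual Unreal/Substance-side configuration, not the project's real interface:

```python
import random
from dataclasses import dataclass


@dataclass
class RenderParams:
    """One randomised configuration for a single synthetic render.
    All fields are illustrative stand-ins for the pipeline's real knobs."""
    wear_amount: float     # continuous: 0 = factory-new, 1 = heavily weathered
    dirt_amount: float     # continuous: grime/dust level on the canister texture
    hdri_name: str         # discrete: which HDRI environment lights the scene
    time_of_day: float     # hour in [0, 24), driving sun position
    focal_length_mm: float # camera focal length
    f_stop: float          # aperture, controls depth of field
    lut_name: str          # discrete: colour-grading LUT applied in post
    film_grain: float      # strength of the film-grain post effect


def sample_params(seed: int) -> RenderParams:
    """Deterministically sample one render configuration from a seed,
    so any image in the dataset can be re-rendered exactly."""
    rng = random.Random(seed)
    return RenderParams(
        wear_amount=rng.uniform(0.0, 1.0),
        dirt_amount=rng.uniform(0.0, 1.0),
        hdri_name=rng.choice(["street", "desert", "overcast", "studio"]),
        time_of_day=rng.uniform(0.0, 24.0),
        focal_length_mm=rng.choice([24.0, 35.0, 50.0, 85.0]),
        f_stop=rng.uniform(1.8, 11.0),
        lut_name=rng.choice(["kodak_2383", "neutral", "bleach_bypass"]),
        film_grain=rng.uniform(0.0, 0.5),
    )
```

Seeding each sample deterministically means a single integer is enough to reconstruct every image in the dataset, which makes debugging and auditing individual training examples straightforward.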
Coloured 'masks' tell the classifier where in the image the Triple-Chaser grenade exists.



A major benefit of using synthetic training sets in a machine learning workflow is that we essentially get image annotations for free. When images are rendered from a 3D scene, the pixel mask of the object is information that Unreal already holds, so we can easily write out an extra render artefact that contains everything needed for annotation.
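For instance, a per-pixel colour mask can be converted into a bounding-box annotation with a few lines of NumPy. The mask colour convention and function name below are assumptions for illustration, not the project's actual format:

```python
import numpy as np
from PIL import Image

# Assumed convention: the grenade is rendered into the mask pass
# as pure red; everything else is black.
OBJECT_COLOUR = (255, 0, 0)


def mask_to_bbox(mask_path: str):
    """Return (x_min, y_min, x_max, y_max) of the object in a colour mask,
    or None if the object is not visible in this render."""
    mask = np.asarray(Image.open(mask_path).convert("RGB"))
    hits = np.all(mask == OBJECT_COLOUR, axis=-1)  # (H, W) boolean match
    ys, xs = np.nonzero(hits)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```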
Using this pipeline, which begins with an accurate 3D model and renders it via seeded randomisations in texture, environment, and lighting, we were able to produce thousands of synthetic images depicting the Triple-Chaser. However, the pipeline was developed with generalisation in mind and can render machine-learning-ready, annotated datasets for any given 3D model.
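A sketch of what the outer generation loop could look like, assuming a hypothetical render_frame hook into the engine (the helper, directory layout, and naming scheme are all illustrative):

```python
from pathlib import Path


def render_frame(seed: int, image_path: Path, mask_path: Path) -> None:
    """Hypothetical hook into the Unreal pipeline: samples all texture,
    environment, camera, and post-effect parameters from `seed`, then
    writes the beauty render and its colour mask to the given paths."""
    raise NotImplementedError("driven by the Unreal/Substance side")


def generate_dataset(out_dir: str, n_images: int, base_seed: int = 0) -> None:
    """Render n_images image/mask pairs; the per-frame seed makes
    every frame individually reproducible."""
    out = Path(out_dir)
    (out / "images").mkdir(parents=True, exist_ok=True)
    (out / "masks").mkdir(parents=True, exist_ok=True)
    for i in range(n_images):
        seed = base_seed + i
        render_frame(
            seed,
            out / "images" / f"{seed:08d}.png",
            out / "masks" / f"{seed:08d}.png",
        )
```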

This development is part of a larger project, which aims to train effective machine learning classifiers for a range of objects whose tracking is of interest to human rights and OSINT investigators. These include objects such as chemical weapons, tear gas canisters, illegal arms, and particular kinds of munitions.
Our research paper, 'Objects of Violence: Synthetic data for practical ML in human rights investigations', was selected and presented at NeurIPS 2019 under the category AI for Social Good.
In partnership with Praxis Films, we presented the story of this research project as a video investigation, which premiered at the 2019 Whitney Biennial.