• pix2pix Research

    This website gathers the experiments I conducted with the pix2pix model in Torch for the “Data and Machine Learning for Artistic Practice” (IS71074A) course taught by Dr Rebecca Fiebrink at Goldsmiths, University of London, in 2019.
    On the next slide, you’ll find a short video demonstrating how to train the pix2pix Conditional Adversarial Network on paired data in Torch and generate images from it. Below, each model has a gallery showing outputs produced when the training data from the other models is used as test data. All 81 outputs are also visible in a table on the overview page. (opens in new tab)
    You can scroll with your keyboard arrows.
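    For reference, training and testing in the original Torch implementation of pix2pix are driven by environment variables passed to `train.lua` and `test.lua`. A minimal sketch, assuming the standard phillipi/pix2pix repository layout; the dataset name and paths here are placeholders standing in for one of the models below:

```shell
# Train on paired images stored under datasets/<name>/train.
DATA_ROOT=./datasets/arendt_facetracker name=arendt_facetracker \
which_direction=AtoB th train.lua

# Generate outputs from the images in datasets/<name>/val.
DATA_ROOT=./datasets/arendt_facetracker name=arendt_facetracker \
which_direction=AtoB phase=val th test.lua
```

`which_direction` controls whether the network maps the left half of each pair to the right half (`AtoB`) or the reverse.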
  • arendt_facetracker

    This is a gallery of output data made with the arendt_facetracker model with pix2pix. The model was trained on extracts of an interview with Hannah Arendt.
    training data: 403 pairs made with ofxFaceTracker2 on openFrameworks.
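    Whatever tool produces the pairs (ofxFaceTracker2 overlays here, edge maps in other models), the Torch pix2pix loader reads each training pair as a single image with input and target placed side by side. A minimal sketch of that layout, using nested lists as stand-in grayscale images (`combine_pair` is a hypothetical helper, not part of the pix2pix code):

```python
def combine_pair(a_rows, b_rows):
    """Join two images of equal height into one side-by-side 'AB' image.

    a_rows, b_rows: images as lists of pixel rows.
    Returns an image whose each row is A's row followed by B's row.
    """
    if len(a_rows) != len(b_rows):
        raise ValueError("A and B must have the same height")
    return [ra + rb for ra, rb in zip(a_rows, b_rows)]

# Tiny 2x2 example: the result is 2x4, input on the left, target on the right.
a = [[0, 1], [2, 3]]
b = [[9, 8], [7, 6]]
ab = combine_pair(a, b)  # → [[0, 1, 9, 8], [2, 3, 7, 6]]
```

In practice the same concatenation is done on real image files (e.g. with ImageMagick or a small script) before training.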
  • beauty_beast_hed

    This is a gallery of output data made with the beauty_beast_hed model with pix2pix. The model was trained on extracts of Disney’s Beauty and the Beast.
    training data: 315 pairs made with holistically-nested edge detection on Google Colaboratory.
  • cinderella_facetracker

    This is a gallery of output data made with the cinderella_facetracker model with pix2pix. The model was trained on extracts of Disney’s Cinderella.
    training data: 108 pairs made in Illustrator with a tablet.
  • julien_facetracker

    This is a gallery of output data made with the julien_facetracker model with pix2pix. The model was trained on a short video of myself shot by my partner with my phone.
    training data: 206 pairs made with ofxFaceTracker2 on openFrameworks.
  • julien_hed

    This is a gallery of output data made with the julien_hed model with pix2pix. The model was trained on a short video of myself shot by my partner with my phone.
    training data: 232 pairs made with holistically-nested edge detection on Google Colaboratory.
  • macron_hed

    This is a gallery of output data made with the macron_hed model with pix2pix. The model was trained on Emmanuel Macron’s speech in the aftermath of the Notre-Dame cathedral fire.
    training data: 343 pairs made with ofxFaceTracker2 on openFrameworks.
  • mathilde_facetracker

    This is a gallery of output data made with the mathilde_facetracker model with pix2pix. The model was trained on a short video of my partner shot by me on my phone.
    training data: 134 pairs made with ofxFaceTracker2 on openFrameworks.
  • mathilde_hed

    This is a gallery of output data made with the mathilde_hed model with pix2pix. The model was trained on a short video of my partner shot by me on my phone.
    training data: 302 pairs made with holistically-nested edge detection on Google Colaboratory.
  • travolta_facetracker

    This is a gallery of output data made with the travolta_facetracker model with pix2pix. The model was trained on extracts of Tarantino’s Pulp Fiction.
    training data: 322 pairs made with ofxFaceTracker2 on openFrameworks.