How It Generates

A portrait is generated once the dataset has been collected from the participant. The collected data, in the form of JSON files, is sent via email and becomes the texture of the portrait. Part of the information from the data is used to form the head of the portrait, following a set of rules. Using a data analysis script, the gathered data is examined and converted into the portrait.
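As a rough illustration of that analysis step, the script might begin by loading every JSON export into a single dictionary. This is only a sketch: the folder name, layout, and file names below are assumptions, not part of the project.

```python
import json
from pathlib import Path

def load_exports(export_dir: str) -> dict:
    """Load every JSON export found in the participant's data folder.

    The directory layout is a hypothetical example; the exports a
    participant sends by email may be organised differently.
    """
    data = {}
    for path in Path(export_dir).glob("**/*.json"):
        with open(path, encoding="utf-8") as f:
            data[path.stem] = json.load(f)
    return data

# Gather all exports before applying the portrait rules described below.
exports = load_exports("participant_data")
```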

01 BASE SHAPE OF THE PORTRAIT
  • The base shape of the portrait is created from a selfie found on the participant's drive. If there is no selfie file, the default head shape from Blender is used instead.
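A minimal sketch of this rule, assuming the selfie is identified by its file name (the project does not specify how the selfie is detected):

```python
import glob

def find_selfie(drive_dir: str):
    """Return the first image whose name suggests a selfie, if any.

    The "*selfie*" naming pattern is an assumption made for illustration.
    """
    matches = glob.glob(f"{drive_dir}/**/*selfie*.jpg", recursive=True)
    return matches[0] if matches else None

selfie = find_selfie("participant_data/drive")
if selfie is not None:
    base_shape = ("selfie", selfie)            # build the head from the selfie
else:
    base_shape = ("default", "blender_head")   # fall back to Blender's default head shape
```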
02 THE NUMBER OF HEADS
  • The number of heads is based on the number of languages found in Google Translate, YouTube history, and Google search history. English is the default language for this portrait, so a head is positioned upright if the user uses English. If the found languages do not share a similar structure, they are placed upside down in relation to each other.
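A sketch of the language count under assumed export names and field names (real Google Takeout schemas differ):

```python
def count_languages(exports: dict) -> set:
    """Collect the distinct languages that appear across the three sources."""
    languages = set()
    for source in ("translate_history", "youtube_history", "search_history"):
        for record in exports.get(source, []):
            lang = record.get("language")
            if lang:
                languages.add(lang)
    return languages

# Hypothetical example input; the keys and fields are assumptions.
exports = {
    "translate_history": [{"language": "en"}, {"language": "ko"}],
    "youtube_history": [],
    "search_history": [{"language": "en"}],
}
languages = count_languages(exports)
num_heads = max(1, len(languages))   # one head per detected language
upright = "en" in languages          # the English head is placed upright
```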
03 SKIN TEXTURE
  • The skin texture is composed of images from the search history data. Recent search keywords from the data are converted into images to produce the texture.
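The project does not say how a keyword becomes an image; as a stand-in, the sketch below simply renders each recent keyword onto a tile with Pillow and stitches the tiles into one texture.

```python
from PIL import Image, ImageDraw

def keywords_to_texture(keywords, tile=256, columns=4):
    """Render each recent search keyword onto a tile and stitch the tiles
    into a single texture image. Drawing the keyword as text is only a
    placeholder for whatever keyword-to-image conversion the project uses."""
    rows = (len(keywords) + columns - 1) // columns
    texture = Image.new("RGB", (columns * tile, max(rows, 1) * tile), "white")
    draw = ImageDraw.Draw(texture)
    for i, word in enumerate(keywords):
        x = (i % columns) * tile
        y = (i // columns) * tile
        draw.text((x + 10, y + tile // 2), word, fill="black")
    return texture

texture = keywords_to_texture(["weather", "blender tutorial", "recipes"])
texture.save("skin_texture.png")
```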
04 OVERALL COLOR
  • The overall color is based on the number of files in the drive; for example, if there are more than 400 files, the overall tone of the portrait is red.
    Fewer than 100: blue; 100 - 400: yellow; more than 400: red.
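These thresholds reduce to a small mapping; the sketch below simply follows the stated ranges.

```python
def overall_color(file_count: int) -> str:
    """Map the number of files on the drive to the portrait's overall tone."""
    if file_count < 100:
        return "blue"
    if file_count <= 400:
        return "yellow"
    return "red"

assert overall_color(42) == "blue"
assert overall_color(250) == "yellow"
assert overall_color(700) == "red"
```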
05 OBJECTS
  • Objects around the head are taken from the icons of the most visited sites.
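A sketch of how the most visited sites could be ranked from browsing history, assuming one URL per record; the favicon lookup and the placement of the icons around the head are left out.

```python
from collections import Counter
from urllib.parse import urlparse

def most_visited_domains(browsing_history, top_n=5):
    """Rank domains by visit count. The record layout ("url" per entry)
    is an assumption about the export format."""
    counts = Counter(urlparse(entry["url"]).netloc for entry in browsing_history)
    return [domain for domain, _ in counts.most_common(top_n)]

# Hypothetical history records for illustration.
history = [{"url": "https://www.youtube.com/watch"},
           {"url": "https://www.youtube.com/feed"},
           {"url": "https://en.wikipedia.org/wiki/Portrait"}]
for domain in most_visited_domains(history):
    print(domain)   # each site's icon would then be placed around the head
```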