myStreet

Harvard GSD Machine Aesthetics, spring 2023
Instructor: Panagiotis Michalatos

Bias exists within all machine learning models, and the goal is usually to reduce and counteract it as much as possible. But what if we worked with that limitation instead? Starting from that premise, this project explores how the pedestrian experience of a street can be read and enhanced through personal bias, using readily available public imagery from Google Street View.



Outline:

Neural networks are usually trained to generalize knowledge and reduce bias, but what would a deliberately biased network look like, and what could it be used for?

As an urban designer, I'm very interested in streets. I judge them quickly by their function, aesthetics, and general feeling, based on factors such as the number of trees, the texture and scale of buildings, the presence of pedestrians, and the width of the pavement.

Can a computer be trained to pick up on these characteristics?

Dataset

Exploration of Outputs

To explore the possible ways that trained models could organize and understand data, a few tests were carried out, including autoencoders, optimization tests, and image segmentation.

Model: VGG16 autoencoder
Visualization: PCA, 2D
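
As a rough sketch of this step (not the project's actual training code), the snippet below embeds the street-view images with a pretrained VGG16 backbone and projects the resulting vectors to 2D with PCA. The folder path is hypothetical, and pretrained ImageNet features stand in for the trained autoencoder's latent code.

```python
# Sketch: embed Street View images with a VGG16 backbone, then map them to 2D with PCA.
# "streetview/*.jpg" is a hypothetical folder; pretrained features replace the project's
# learned autoencoder latent code.
import glob
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

encoder = VGG16(weights="imagenet", include_top=False, pooling="avg")  # 512-d vector per image

def embed(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return encoder.predict(x, verbose=0)[0]

paths = sorted(glob.glob("streetview/*.jpg"))
features = np.stack([embed(p) for p in paths])
coords = PCA(n_components=2).fit_transform(features)   # 2D layout of the street images
```

Plotting `coords` as a scatter of thumbnails then gives a rough map of which streets the network sees as similar.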

To check that the model reads the images in a meaningful way, this test starts from random shapes and then modifies their form, color, and size at each step until their combination most closely approximates the original image.
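
A minimal sketch of such an optimization test, assuming translucent rectangles as the random shapes and a plain pixel error as the objective; the project's actual shape types and update rule are not documented here.

```python
# Sketch of a shape-optimization test: greedily add random translucent rectangles,
# keeping each one only if it lowers the pixel error against the target photo.
# "street.jpg", the rectangle primitive, and the step count are assumptions.
import numpy as np
from PIL import Image

target = np.asarray(Image.open("street.jpg").convert("RGB"), dtype=np.float32)
h, w, _ = target.shape
canvas = np.full_like(target, target.mean())              # start from a flat average color

def error(img):
    return np.mean((img - target) ** 2)

rng = np.random.default_rng(0)
for _ in range(2000):
    x0, y0 = rng.integers(0, w), rng.integers(0, h)
    x1, y1 = rng.integers(x0, w), rng.integers(y0, h)
    color = target[y0:y1 + 1, x0:x1 + 1].mean(axis=(0, 1))   # sample color from the photo
    trial = canvas.copy()
    trial[y0:y1 + 1, x0:x1 + 1] = 0.5 * trial[y0:y1 + 1, x0:x1 + 1] + 0.5 * color
    if error(trial) < error(canvas):                       # keep the shape only if it helps
        canvas = trial

Image.fromarray(canvas.astype(np.uint8)).save("approximation.png")
```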

Project Classification

To add further qualities to the trained model, I manually classified the images based on my own biases.
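
One plausible way to implement this step, shown below as a hedged sketch: fine-tune a small classification head on a frozen VGG16 backbone using the manually labeled images. The folder layout ("labeled/<category>/") and the training settings are assumptions.

```python
# Sketch: fine-tune a classification head on a frozen VGG16 backbone using the
# manually labeled street images. Folder names and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

train_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled", image_size=(224, 224), batch_size=16)       # one subfolder per bias label
num_classes = len(train_ds.class_names)

base = VGG16(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                                      # keep the pretrained features fixed

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(preprocess_input(inputs))
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```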

Classification Categories

With the images classified, I could explore what adding random shapes to an image does to the way it is read, for instance to inject something like “banality” into a place.
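
A small sketch of how such a probe might be set up: stamp random flat patches onto a street photo and measure how far its VGG16 feature vector drifts from the original, i.e. how much the edit changes what the network “sees”. The file name, patch size, and patch count are assumptions, and ImageNet features stand in for the trained model.

```python
# Sketch: add random grey patches to a street photo and track how far its VGG16
# feature vector drifts from the original. File name and patch parameters are assumptions.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

encoder = VGG16(weights="imagenet", include_top=False, pooling="avg")

def features(arr):
    return encoder.predict(preprocess_input(arr[None, ...].copy()), verbose=0)[0]

img = image.img_to_array(image.load_img("street.jpg", target_size=(224, 224)))
baseline = features(img)

rng = np.random.default_rng(1)
edited = img.copy()
for step in range(10):                                      # add ten flat grey patches
    x0, y0 = rng.integers(0, 200, size=2)
    edited[y0:y0 + 24, x0:x0 + 24] = 128.0
    drift = np.linalg.norm(features(edited) - baseline)
    print(f"patches={step + 1}  feature drift={drift:.1f}")
```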

Activation Test on Trained Photos

Activation Test on Untrained Photos
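
A sketch of one way to run these activation tests: feed a photo through the network and average the responses of the last convolutional layer into a heatmap, once for a photo the model was trained on and once for an unseen one. The layer name is the standard VGG16 one; whether the project rendered its activations exactly this way is not stated.

```python
# Sketch of an activation test: average the last convolutional layer's responses
# into a heatmap. File names are placeholders for a trained-on and an unseen photo.
import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

backbone = VGG16(weights="imagenet", include_top=False)
probe = tf.keras.Model(backbone.input, backbone.get_layer("block5_conv3").output)

def activation_map(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    act = probe.predict(x, verbose=0)[0]                    # (14, 14, 512) feature maps
    heat = act.mean(axis=-1)                                # average response per location
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    return Image.fromarray((heat * 255).astype(np.uint8)).resize((224, 224))

activation_map("trained_photo.jpg").save("trained_activation.png")      # seen in training
activation_map("untrained_photo.jpg").save("untrained_activation.png")  # unseen photo
```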

Live Camera Activation

VGG model (preferred)
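
A minimal sketch of what a live version of the activation test could look like, assuming an OpenCV webcam loop that overlays the averaged last-layer activation map on each frame; the window name, colormap, and untuned ImageNet weights are assumptions rather than the project's exact setup.

```python
# Sketch of the live-camera test: grab webcam frames, run them through the VGG backbone,
# and overlay the averaged activation map in real time. Press q to quit.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

backbone = VGG16(weights="imagenet", include_top=False)
probe = tf.keras.Model(backbone.input, backbone.get_layer("block5_conv3").output)

cap = cv2.VideoCapture(0)                                   # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB).astype(np.float32)
    act = probe.predict(preprocess_input(rgb[None, ...]), verbose=0)[0]
    heat = act.mean(axis=-1)
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    heat = cv2.resize((heat * 255).astype(np.uint8), (frame.shape[1], frame.shape[0]))
    overlay = cv2.addWeighted(frame, 0.6, cv2.applyColorMap(heat, cv2.COLORMAP_JET), 0.4, 0)
    cv2.imshow("live activation", overlay)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```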

More information available upon request: