A summer student project by students at the TEFT Lab for Kystverket. We utilized the YOLO algorithm and Google Maps API to classify the colors of buildings in images. The project aims to create a database of building colors to be used in Kystverkets' digital twin application for more realistic representation.

Kystverket/summer23ai-building-colors

Building detection/color classification model

Table of Contents

  1. About the project
  2. Overview
  3. Setup
  4. Usage
  5. Prototyping
  6. API reference
  7. Documentation
  8. Contact

About the project

A summer student project by students at the TEFT Lab in Ålesund for Kystverket. The project aims to automatically classify building colors for use in Kystverket's digital twin application, giving it a more realistic representation. We used the YOLO algorithm and the Google Maps API to detect buildings and classify their colors from image data. The project is still a work in progress, but we currently have a semi-automatic system for classifying buildings based on address location. It can be used for testing on a small geographical area and for further prototyping. More information about the project, and our thoughts on the way forward, can be found in the documentation. The project report should be read before starting.

DISCLAIMER: This is only a proof of concept, and another image source will most likely be needed so as not to break paragraph 3.2.3 c) of the Google Maps API Terms of Service. Google does, however, encourage creative use of their services, so this project might be in a gray area. As for other sources, 1881.no has the best images we have found so far, with "skråfoto" (oblique aerial photos) of many towns in Norway. However, they do not have a freely available API.

Overview

Important contents

  • index.html is used for fetching images to feed into the model. The functionality lives in js/automatic-screenshot.js.
  • The yolo_implementation folder contains Jupyter notebooks for using the model.
  • The tools folder contains code for fetching images to construct a dataset to train the model on.
  • training_data contains some unlabeled images which can be used to build a new dataset for training your own model. Please note that these images have already been used to train our model.
  • prototyping contains some tests we did with other image sources.

Setup

  1. Clone or fork the repository
  2. Obtain a Google Maps API key from https://developers.google.com/maps/documentation/javascript/get-api-key
  3. Enter your API key in the config.js.template file inside the config folder, then save the file as config.js in the same folder:

```js
////////////////////////////
// This is a template for the config.js file that must be added to run
// the different scripts. Add your personal Google API key to this file.
// When this is done, rename the file to "config.js".
API = {"API_KEY": "your-API-key"}
```

This should be enough to run all the "frontend" code except csv-reader.js inside prototyping/google_45degree. More information about the prototypes can be found in the separate section on prototyping. Setup of the model itself is described in the README.md file inside yolo_implementation/.

Usage

Building a dataset

In this project we used Roboflow to create our own dataset for building the model. Please refer to the documentation for further details on how to construct a dataset. Our project, with its different dataset versions, can be found here: building datasets. v12 is the version used for training our latest models. v16 contains the same images, but with images tagged "shadow" or "bad" excluded. v13 joins all color classes into one superclass called buildings. The older datasets are mostly redundant.

To fetch the images we used a tool inside tools which allows the user to quickly grab screenshots of specific buildings. To use it, open the tool.html file. This renders four different Google Maps views. Center the building you want by clicking the topmost map; this should automatically center all the maps. Sometimes you will need to manually adjust a map to get a good image, so double-check before downloading. Download the images by clicking the 'Take screenshot' button.


Model training

Please refer to our guides for how to train a YOLO model:

  • Object detection 101 - a general guide on how to train and export your own YOLO model, with excavator detection as an example.
  • Object_Detection_Buildings - a guide on how to train and export your own YOLO model, with building color classification as an example.

When using the guides, you will need a code snippet to import the dataset you want to train the model on from Roboflow. From the dataset link, press the "Download dataset" button.


Then press the highlighted button to copy the code snippet.
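The copied snippet uses Roboflow's Python package and typically has the shape sketched below. The workspace and project slugs here are placeholders, not the real ones; copy the actual values from your own "Download dataset" dialog.

```python
# pip install roboflow
def roboflow_download(api_key: str, workspace: str, project: str,
                      version: int = 12, fmt: str = "yolov8"):
    """Download a Roboflow dataset version (v12 is the version used to
    train our latest models). The import is deferred so this sketch can
    be defined even without the package installed."""
    from roboflow import Roboflow
    rf = Roboflow(api_key=api_key)
    return rf.workspace(workspace).project(project).version(version).download(fmt)

# Example (placeholder slugs -- substitute the ones from your Roboflow dialog):
# dataset = roboflow_download("YOUR_API_KEY", "your-workspace", "building-datasets")
```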


Model use

The process of classifying building colors is currently split into two steps: first we download images of the address locations we are interested in, then we run the model on the downloaded images. This is not ideal, but it should be a solvable issue. It arises from the fact that the images we get are screenshots of Google Maps views, not images fetched directly from a URL. Our thoughts on how to address this going forward can be found in the project report. The steps to use the model are:

Downloading Images:

  1. Open the index.html file

  2. Enter the address or postal code you want to get images from. This sends an API request to the Open Address API from Kartverket.

    Screenshot

  3. That should yield the following prompt. Input the range of addresses you want images of and press 'last ned' ('download'). This iterates through the list of addresses and saves images facing south, west, north and east from each address to your browser's default Downloads folder. Most browsers let you set a different default downloads folder, which can be useful for immediately getting the images to the right place for running the model.

    Screenshot

  4. If using Google Chrome, you have to press "Allow" to download the images. When fetching many addresses, this popup must be confirmed again after some time for screenshotting to continue. A workaround is to use another browser, such as Mozilla Firefox. Also note that if you do not press "Allow", the script will still iterate through the addresses and display the map views for each address, which wastes API requests. To stop the capturing of screenshots at any point, press "Stans nedlasting" ('stop download'), or refresh or exit the page.
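Step 2 above queries Kartverket's open address API. As a rough sketch of what that request looks like outside the browser (the ws.geonorge.no endpoint and the 'adresser'/'representasjonspunkt' field names reflect our understanding of the public API, so verify them against Kartverket's documentation):

```python
import json
import urllib.request
from urllib.parse import urlencode

# Assumed endpoint of Kartverket's open address API
BASE_URL = "https://ws.geonorge.no/adresser/v1/sok"

def build_search_url(query: str, hits_per_page: int = 10) -> str:
    """Build a search URL for an address or postal-code query."""
    return f"{BASE_URL}?{urlencode({'sok': query, 'treffPerSide': hits_per_page})}"

def fetch_addresses(query: str) -> list:
    """Fetch matching addresses; each hit carries a representation point
    with the coordinates used to center the map views."""
    with urllib.request.urlopen(build_search_url(query)) as resp:
        return json.load(resp)["adresser"]
```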


Running the Model

  • Read the README.md file inside yolo_implementation/ for setting up and running the model.

Prototyping

We have also experimented with some different image sources from Google, namely Street View and the static satellite map. prototyping/google_streetview and prototyping/google_staticmap respectively contain code for fetching images from these sources using Python.
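As an illustration of the static-satellite approach, a request URL can be built as below. The parameter names follow the public Static Maps API; the zoom and size values are just example choices, and a valid API key is assumed.

```python
from urllib.parse import urlencode

def static_map_url(lat: float, lon: float, api_key: str,
                   zoom: int = 19, size: str = "640x640") -> str:
    """Build a Google Static Maps request URL for a satellite image
    centered on the given coordinate."""
    params = {
        "center": f"{lat},{lon}",
        "zoom": zoom,            # example value: roughly building scale
        "size": size,            # width x height in pixels
        "maptype": "satellite",
        "key": api_key,
    }
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
```

The resulting URL can then be fetched with e.g. urllib.request.urlretrieve(url, "building.png").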

The prototyping/google_45degree/screenshot_old.js file is our first successful attempt at taking screenshots of Google Maps views automatically. It may be simpler to read and build on than the current program. To run it, open the screenshot.html file. The same folder also contains address_getter.py, which, like the main program, fetches address locations from Kartverket, but writes them to a .csv or .json file. In addition there is csv_reader.js. This file uses the built-in file system module 'fs' from Node.js, so make sure you have Node.js installed, or use something like Code Runner in VS Code, to try it.

API Reference

Documentation

Contact
