You know about Planet (formerly Planet Labs), right? Their aim is to image every part of the Earth's surface every day using their Dove satellites. A pixel for every area, every day. Temporal data at its best!
If you get an image every day and you want to monitor it (change detection being the obvious reason), a human is only going to be able to do so much. Filling your organisation with Earth Observation specialists might not be that viable, and besides, people have weekends off and go on holidays. The data will keep coming, though!
Planet kindly allowed me access to their API and data over California and the United Kingdom for 30 days. Obviously this is for non-commercial use, so it almost goes without saying that everything I show and describe below is not being used commercially, and all screenshots and images are reproduced with kind permission from Planet.
Planet provides a nice web interface
The areas I had access to are highlighted on the map. These things are relatively intuitive.
I can define bounding boxes, set a temporal coverage (at the bottom of the screen), adjust the cloud cover, ground sample distance and off-nadir angle sliders, and change the data source (Landsat, Planet or RapidEye). To be honest it works nicely and has a very smooth interface as well.
The Planet API was what I was most interested in using, though. I like setting up a query, running it and getting the data back without using the GUI. Where to start? Luckily, easy_install planet worked for me (look… I have the latest version already!).
If it doesn't work for you, try using Python 2.7; have a read here.
Next stop: documentation. Your preferred entry point is up to you – command line or API. I am using the API here.
All references are provided
And an example
To be honest all I am going to do here is adapt the example to provide, ahem, an example!
All I am doing here is searching for the 20 most recent images in the Planet archive for my areas of interest (i.e. California and the UK). Then I check how cloudy each one is: if it has more than 0.5% cloud cover I throw it away, otherwise I append it to a list. Finally I slice the first 10 from the list and download them to my Data_out folder.
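The filter-and-slice step above can be sketched in plain Python. Note that the scene IDs, the metadata dicts and the `cloud_cover` field name here are my own illustrative assumptions (real Planet API responses are richer), but the 0.5% threshold and the slice of 10 mirror what I described:

```python
# Hypothetical scene metadata -- field names and values are assumed
# for illustration, not the exact Planet API response format.
scenes = [
    {"id": "scene_a", "cloud_cover": 0.001},
    {"id": "scene_b", "cloud_cover": 0.12},
    {"id": "scene_c", "cloud_cover": 0.004},
]

# Keep scenes with less than 0.5% cloud cover (0.005 as a fraction)
clear = [s for s in scenes if s["cloud_cover"] < 0.005]

# Slice the first 10 of whatever survived the filter
to_download = clear[:10]

for s in to_download:
    print(s["id"])
```

In the real script each item in `to_download` is then fetched to the Data_out folder via the Planet client.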
If you load it into a viewer / GIS you get, for example, something like this:
Yes, it's a GeoTIFF (this one is in Cornwall, in UTM Zone 30N). It is worth checking that the image is correctly referenced, of course. Each image is about 50 MB in size, covering approx. 550 sq km.
That is pretty nice stuff. Obviously you can change the search parameters – I was just taking the first available images in the archive.
Automatic Edge Detect Please!
I like the idea of being able to do something to the images before even touching them, and edge detection is a pretty simple process. OpenCV is such a smart computer vision library (and it includes machine learning!). Install the Python libraries.
Check that you can run import cv2 with no errors and you are set.
You will not believe how easy it is to get the edges.
It is effectively 3 lines of code – Boom!
Define the frame
frame = cv2.imread("D:/Planet_labs/API/data_out/20161021_102722_0e19_visual.tif")
Run the edges
edges = cv2.Canny(frame, 100, 200)
Save the image (the output filename here is my own choice)
cv2.imwrite("D:/Planet_labs/API/data_out/20161021_102722_0e19_edges.tif", edges)
I get the edges!
Created images don’t have any georeferencing!
That can be solved with a bit of code as well. Stack Overflow has the answers: convert the georeferencing of the GeoTIFFs into .tfw world files, then apply the same world file to the new edges file (a .jgw if you are using JPEGs). I get a nice result.
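As a minimal sketch of the world file trick: a .tfw is just six numbers in a text file, so once you know the geotransform of the source GeoTIFF you can write one for the edges image yourself. The function name and the example values below are assumptions for illustration; in practice you would read the real values from the source GeoTIFF (which is what the Stack Overflow answers do):

```python
# Minimal sketch: write an ESRI world file (.tfw) so a GIS can place
# the edges image. Values here are invented for illustration -- in
# practice read them from the source GeoTIFF's geotransform.
def write_worldfile(path, pixel_w, rot_x, rot_y, pixel_h, origin_x, origin_y):
    """A world file is six lines: x pixel size, two rotation terms,
    y pixel size (negative for north-up imagery), then the x and y
    of the centre of the upper-left pixel."""
    with open(path, "w") as f:
        for value in (pixel_w, rot_x, rot_y, pixel_h, origin_x, origin_y):
            f.write("%.10f\n" % value)

# Example: 3 m pixels, north-up, UTM-style origin (all values assumed)
write_worldfile("edges.tfw", 3.0, 0.0, 0.0, -3.0, 429000.0, 5621000.0)
```

Because the edges image has exactly the same dimensions and extent as the source GeoTIFF, reusing the source's world file values places it correctly.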
So how to do this at the same time as downloading?
Create a function called edges and, once all the images you need are downloaded, search the directory.
directory = "D:/Planet_labs/API/Data_out"
for filename in os.listdir(directory):
    frame1 = os.path.join(directory, filename)
I am also looking at machine learning, but I am not quite there yet!
Many thanks to Planet for permission to show the images in this blog, and especially to Alex Bakir.
All the code is available from my GitHub page.