Science Officer…Scan For Elephants! | Hackaday

If you watch a lot of espionage or terrorism movies these days, there is usually a scene in which a government official enhances a satellite image to show a clear picture of the main villain’s face. Do modern spy satellites have that kind of resolution? We don’t know, and if we did, we couldn’t tell you. But we do know that even at unclassified resolutions, scientists are using satellite imagery and machine learning to count things like elephant populations.

When you think about it, counting populations of wild animals in their natural habitat is a difficult problem. First, if you go in person, you disturb the very animals you are trying to count. Even a drone is likely to spook timid wildlife. Then there is the problem of covering a large area and working out whether the elephant you see today is the same one you saw yesterday. Guess wrong, and you either undercount or double count.

The Oxford scientists counting elephants used the WorldView-3 satellite. It collects up to 680,000 square kilometers of imagery every day. You don’t disturb any of the animals being observed, and since each shot covers a huge area, the double-counting problem largely disappears.

Not unique

Granted, counting animals from space is nothing new. The brute-force approach is to have a student count them from a photo. But automated methods have only worked under certain circumstances. Everything from whales to penguins has been counted from orbit, but usually against a background of water or ice.

There are even efforts to derive animal populations from secondary data. For example, penguin numbers can be estimated from the stains they leave on the ice. Yes, those stains.

However, for counting elephants in Addo Elephant National Park in South Africa, there was no such clean background. The area is wooded, and it often rains. The other challenge is that elephants don’t always look the same. For example, they cover themselves with mud to cool off. Can a machine learn to recognize individual elephants in high-resolution images taken from space?

How high?

Image copyright DigitalGlobe / Lockheed Martin

WorldView satellites have the highest resolution available to commercial users: up to 31 cm. For Americans, that is enough to resolve something about a foot long. That may not sound very impressive until you realize the satellite is about 383 miles above the Earth’s surface. It’s roughly like taking a picture from New York and making out foot-long objects in Newport News, Virginia.
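As a sanity check on that analogy, you can work out the angle a single 31 cm pixel subtends from orbit and then see what that same angle resolves over the New York to Newport News distance. A quick back-of-the-envelope sketch (the 617 km figure is just the 383 miles from above converted to meters):

```python
import math

# WorldView-3 figures from the article (approximate)
ground_resolution_m = 0.31     # 31 cm per pixel on the ground
orbit_altitude_m = 617_000     # ~383 miles above the surface

# Angle subtended by one pixel, seen from orbit
angle_rad = ground_resolution_m / orbit_altitude_m
angle_arcsec = math.degrees(angle_rad) * 3600
print(f"Angular resolution: {angle_arcsec:.3f} arcseconds")

# What that same angle resolves at the New York -> Newport News distance
nyc_to_newport_news_m = 383 * 1609.34   # ~383 miles, in meters
resolved_m = angle_rad * nyc_to_newport_news_m
print(f"Resolves about {resolved_m:.2f} m at that distance")
```

Since the two distances are essentially the same, the resolved size comes out to about the same 31 cm, which is why the comparison works.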

The researchers did not specifically task the satellite to survey the park. Instead, they pulled historical images from passes over the park. You can browse what data the satellite has collected, although you may not get the best or most up-to-date data without a subscription. But even the data you can get is quite impressive.

According to the paper, the archival images they used cost $17.50 per square kilometer. Tasking new photos pushes the price to $27.50, and you have to buy at least 100 square kilometers, so satellite data is not cheap.
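Putting those numbers together shows why archival imagery was the sensible choice. A minimal sketch of the pricing, assuming the 100 km² minimum applies to newly tasked orders as the article states:

```python
ARCHIVE_PRICE = 17.50   # USD per square kilometer, archival imagery
TASKED_PRICE = 27.50    # USD per square kilometer, newly tasked imagery
MIN_ORDER_KM2 = 100     # minimum purchase quoted for new tasking

def order_cost(area_km2, price_per_km2, minimum_km2=0):
    """Cost of an image order, honoring any minimum-area requirement."""
    billable = max(area_km2, minimum_km2)
    return billable * price_per_km2

# A modest 50 km^2 study area
print(order_cost(50, ARCHIVE_PRICE))                # archival: $875.00
print(order_cost(50, TASKED_PRICE, MIN_ORDER_KM2))  # tasked: billed for 100 km^2, $2750.00
```

Even a small study area runs to thousands of dollars if you want fresh passes over it.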

Training

Of course, a necessary part of machine learning is the learning. The test dataset contained 164 elephants across seven different satellite images. Humans did the counting to provide the presumed correct answers for training. By the paper’s scoring metric, humans averaged about 78% and the machine learning algorithm about 75% — not a big difference. Just like humans, the algorithm did better in some situations than others and could sometimes reach 80% for certain kinds of scenes.
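The article doesn’t spell out the scoring metric, but detection tasks like this are typically scored with precision, recall, and an F-score computed against the human-labeled ground truth. A minimal sketch, with entirely made-up detection counts for illustration:

```python
def detection_scores(true_pos, false_pos, false_neg, beta=2.0):
    """Precision, recall, and F-beta for a detection task.
    beta > 1 weights recall (missed elephants) above precision."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f_beta

# Hypothetical: 128 of 164 labeled elephants found, with 20 false alarms
p, r, f2 = detection_scores(true_pos=128, false_pos=20, false_neg=36)
print(f"precision={p:.2f} recall={r:.2f} F2={f2:.2f}")
```

With numbers in that range, the combined score lands near the high-70s percentages the article quotes, though the actual counts in the paper will differ.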

Free data

Want to experiment with your own eye in the sky? Not all satellite data costs money, although the resolution may not suit you. Obviously, Google Earth and Google Maps will show you some satellite imagery. The USGS also has about 40 years of data online, and NASA and NOAA have quite a bit as well, including NASA’s high-resolution Worldview. Landviewer gives you some free images, although you will have to pay for the highest-resolution data. ESA operates Copernicus, which offers several types of imagery from the Sentinel satellites, and you can get Sentinel data from its browser tools, too. If you don’t mind Portuguese, the Brazilians have a nice portal for imagery of the southern hemisphere. JAXA, the Japanese counterpart of NASA, has its own site with 30-meter-resolution data. Then there is the Indian equivalent, ISRO.

A V-2 took this picture from 65 miles up in 1946.

If you don’t want to log in, [Vincent Sarago’s] Remote Pixel site lets you access data from Landsat 8, Sentinel-2, and CBERS-4 without registration. There are others, too: UNAVCO, UoM, and VITO among them. Of course, some of these images have fairly low resolution (up to 1 km/pixel), so depending on what you want to do with the data, you may need to look to paid sources.

There is a wide range of resolutions as well as data types, such as visible light, IR, or radar. Still, all of it beats the state of the art in 1946, when a V-2 took pictures from 65 miles up. Things have come a long way.

High sky

Imagine these same techniques applied to aerial photography from a drone, or even a camera on a pole. That could be more cost-effective than buying satellite imagery. It got us thinking about what other computer vision projects have yet to burst onto the scene.

Maybe one day our 3D printers will compare their output in real time with the input model to detect printing problems. That would be the ultimate “out of filament” sensor, and it could also detect loss of bed adhesion and other anomalies.
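At its simplest, such a monitor could be plain frame differencing between a render of the expected print and the camera image. A toy sketch with NumPy arrays standing in for real frames — the function name, thresholds, and test frames are all invented for illustration:

```python
import numpy as np

def print_looks_wrong(expected, observed, pixel_tol=30, bad_fraction=0.05):
    """Flag a print if too many pixels deviate from the rendered model.
    expected/observed: equally sized grayscale frames as uint8 arrays."""
    diff = np.abs(expected.astype(int) - observed.astype(int))
    fraction_bad = np.mean(diff > pixel_tol)
    return bool(fraction_bad > bad_fraction)

# Synthetic 64x64 frames: identical except for a blob of stray filament
expected = np.full((64, 64), 120, dtype=np.uint8)
observed = expected.copy()
observed[10:26, 10:26] = 255   # simulated spaghetti, 16x16 pixels

print(print_looks_wrong(expected, expected))  # False: print matches the model
print(print_looks_wrong(expected, observed))  # True: 6.25% of pixels are off
```

A real system would need camera registration, lighting compensation, and a rendered view of the partial print at each layer, but the core comparison is no more complicated than this.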

Data from electronic instruments, such as a thermal imager or an electron-beam stroboscope, could deliver pseudo-images as input to an algorithm like this. Imagine training a computer on what a good board looks like and then having it flag bad boards.

Of course, you can always take your own satellite images. We have seen that done more than once.