Artificial intelligence is a powerful tool, but until recently it has required substantial technical expertise even for a proof of concept. Drone technology, likewise, has been either cheap but unreliable, or very expensive and demanding of significant training.
In this video, watch as cofounder Christopher Penn integrates a DJI Tello drone, Google Photos, and IBM Watson Studio's Visual Recognition to demonstrate a proof-of-concept use case for AI: identifying solar panels on a house.
Using IBM Watson Studio™, you’ll see how to get started creating a machine learning model from photos on your smartphone (or straight from your drone) with absolutely no coding at all, just dragging and dropping photos inside Watson Studio.
The commercial applications of affordable drone technology and AI for small, midsize, and enterprise businesses are legion, such as:
- Arborists identifying sick or healthy trees from the air
- Automotive companies identifying cars in various states of disrepair
- Insurance companies identifying home risks
- City planners managing traffic flow
- Town managers identifying property and variance needs
- Chimney cleaners identifying homes in need of maintenance/repointing
Disclosures: Trust Insights is an IBM Registered Business Partner, and any purchases made with IBM may benefit the company financially. IBM provided a free DJI Tello drone to create this project’s video assets.
Can’t see anything? Watch it on YouTube here.
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
In today's episode, we're going to go over how easy it is to use drone technology and the visual recognition technology in IBM Watson Studio to make it easy for a small business, midsize business, or enterprise to leverage the power of artificial intelligence.
So here's the experiment. Full disclosure: I'm an IBM Champion, which means that I recommend IBM products and services, and my company, Trust Insights, is an IBM Registered Business Partner. So should you buy anything from IBM through us, we do receive financial compensation.
So IBM sent out these drones, the DJI Tello drones. You can see a short clip of me flying one here, and a sample of some of the things that I was taking photos of. I'm using Google Photos to offload the drone photos from my smartphone, because it's instant: as long as you're on Wi-Fi, everything goes right into the cloud. And then you can pull data from Google Photos via the API. So you can see I've got a variety of photos here, some with the solar panels on my house, and others that are just general neighborhood photos. In Visual Recognition, once you get your data, you can decide what kind of model you want to create. I went for a custom model, because there are general models that detect things like what's in an image, or faces, or food, but I wanted one that was very specifically about: if I give Watson something to recognize, what will it come up with? What will it say it has or has not found?
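Although the video uses Watson Studio's drag-and-drop interface, the same custom classifier can be built programmatically: Watson Visual Recognition accepts training examples as ZIP archives of images, one archive per class, plus an optional archive of negative examples. A minimal sketch of bundling a folder of photos into such an archive (the folder names are hypothetical, for illustration only):

```python
import zipfile
from pathlib import Path

def bundle_examples(photo_dir, archive_path):
    """Zip every .jpg in photo_dir into one archive -- the format
    Watson Visual Recognition expects for a class's training examples."""
    photo_dir = Path(photo_dir)
    with zipfile.ZipFile(archive_path, "w") as zf:
        for photo in sorted(photo_dir.glob("*.jpg")):
            zf.write(photo, arcname=photo.name)
    return archive_path

# Hypothetical folder layout: one folder per class, plus negatives.
# bundle_examples("photos/solar_panels", "solar_panels_positive.zip")
# bundle_examples("photos/not_solar_panels", "negatives.zip")
```

The resulting archives are what you would upload as the positive and negative example sets when creating the custom model.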
So you can see here, I've got a bunch of different photos. Here, I've got some negative photos, meaning that these are photos that do not contain the target outcome. Let's go ahead and take a look at what the model looks like; I'll click on Edit and retrain.
And what we see is: here are all of the photos that I have chosen that show what solar panels look like; these are the ones on the top of my house. Then here's a whole bunch of house photos that are similar houses, but definitely not solar panels. And then there's a bunch of other photos in here, random things like me in a Halloween costume, to give Watson a sense of, hey, these are all the things that are not solar panels. The model has already been trained in the past, so if
I go click Test, now we have an opportunity to drag and drop photos in to see if Watson can tell the difference. Let's start by putting in a picture of my house. Now, this is a very different picture than was in the training library; this is the very back of the house. You can still see, as humans we can tell, that there are solar panels here, but the vast majority of the roof does not have solar panels on it. So that's clearly a photo that's slightly different. Let's put in a picture of something that might cause a mistake, something that's rectangular; you can see there's a thing that could be interpreted as a solar panel. Let's go ahead and see what other things we could put in. We could put in this picture of slides from a conference presentation.
I can see this one detected solar panels here; clearly not solar panels here; not solar panels here. This one is an example of an error: it looks enough like a roof that we would need to add it to the negative library and say, nope, that's not what we're looking for. Here's the power of Watson Studio and the Visual Recognition service: when we have something like that, where we know this is an example of something I don't want, I go ahead and upload the image, add it to our model, go into the unclassified images, take this image, and classify it into the negative class: this is not solar panels. And then we hit Retrain.
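That manual loop — classify, spot a false positive, add it to the negatives, retrain — amounts to a simple decision rule over Watson's classify response. The JSON shape below matches the Visual Recognition v3 API response; the class name `solar_panels`, the classifier ID, and the 0.6 threshold are assumptions for illustration:

```python
def detect_solar_panels(response, threshold=0.6):
    """Return the top score for the 'solar_panels' class in a
    Watson Visual Recognition v3 classify response, or 0.0 if no
    score clears the threshold. Low-scoring or wrong detections are
    candidates for the negative training set and a retrain."""
    best = 0.0
    for image in response.get("images", []):
        for classifier in image.get("classifiers", []):
            for cls in classifier.get("classes", []):
                if cls["class"] == "solar_panels":
                    best = max(best, cls["score"])
    return best if best >= threshold else 0.0

# Example response in the v3 shape (values are illustrative).
sample = {
    "images": [{
        "classifiers": [{
            "classifier_id": "solarpanels_123",  # hypothetical ID
            "classes": [{"class": "solar_panels", "score": 0.83}],
        }]
    }]
}
```

With `sample` above, `detect_solar_panels(sample)` reports 0.83; a photo whose best score falls under the threshold reports 0.0, flagging it for human review.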
Now, you can see I've written no code for this. I have not had to use my keyboard even once; I've just dragged and dropped the things that I want to train this model on. The applications for this are legion if you are doing any kind of visual inspection. If you are, for example, a business that deals with auto insurance claims, you can show a whole bunch of front impact crashes, or rear bumper impact crashes, or things that were spurious, like false claims, and build a library of photos taken with smartphones, taken with drones, taken with anything, and use it to help classify whether or not something is real or fake. If you're using a drone of some kind,
you could fly over and take photos of trees: is this tree healthy, is this tree not healthy? Once you've applied your domain expertise, you could, as an arborist, fly over an entire neighborhood, identify those properties or buildings where there is unhealthy growth, and make recommendations: okay, we can see from above that there are some issues here. You can use a drone for traffic recognition: is this traffic pattern a good one or a bad one? You could obviously recognize people's cars from the air if you want to.
But the power of Watson Studio's artificial intelligence is that it is so easy to get started. It is so easy to train a model: you drag and drop your pieces together and let Watson do the heavy lifting. So give it a try.
You should enter the IBM competition, open through December 2018, to win one of 1,500 DJI Tello drones and try this out yourself. As you can see, we're not using anything super technical or super complex. We have the drone, which
connects to the smartphone; the smartphone connects to Google Photos; we load the photos up, bring them into our model, train the model, and then have it do its recognition. So it is easy for you to get started and try out the service. All you need is an IBM Cloud account, and you can set one up for a free 30-day trial at cloud.ibm.com.
Thanks for watching. Please subscribe to our YouTube channel and newsletter, and if you want help solving your company's data analytics and digital marketing problems, visit trustinsights.ai and let us know how we can help you.
Need help with your marketing data and analytics?
You might also enjoy:
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, Data in the Headlights. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new 10-minute or less episodes every week.