Analyzing #OnCallSelfies

PagerDuty is a popular service software teams use to receive automated pages from their applications. It's super handy. Within the app, it sometimes prompts users to "Post an #OnCallSelfie". This is a fun idea, but it got me thinking: are people generally happy, or are they upset, when they post these selfies?

I used the following APIs and Services for this:

  • Python-Twitter : a Python wrapper for the Twitter API, used to fetch #OnCallSelfie photos (see the sketch after this list).
  • Google Vision API : processes the image files and reads their facial sentiment.
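
Here's roughly what the Twitter side looks like. This is a sketch rather than my exact script: the credentials are placeholders, and it uses python-twitter's GetSearch call and the media list attached to each status.

```python
# Sketch: fetch photo URLs for #OnCallSelfie tweets with python-twitter.
# Credentials below are placeholders for your own Twitter app keys.
import twitter

api = twitter.Api(
    consumer_key="...",
    consumer_secret="...",
    access_token_key="...",
    access_token_secret="...",
)

# Search recent tweets for the hashtag (100 is the per-request max).
statuses = api.GetSearch(term="#OnCallSelfie", count=100)

# Collect the URLs of any attached photos.
photo_urls = [
    m.media_url
    for status in statuses
    if status.media
    for m in status.media
    if m.type == "photo"
]
```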

One really cool API recently offered by Google is the Google Vision API. It takes image data and can do a lot with it, such as parsing face boundaries, tagging possible locations in the image, and reading facial sentiment. That last part was what interested me most. Facial sentiment analysis tries to read what a face is conveying in a photo. It's really amazing!

There are several emotions the Vision API can attempt to read: Anger, Joy, Sorrow, and Surprise. The ones I was interested in were Anger and Joy.
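
For each face it finds, the API reports a likelihood for each of those emotions. A minimal sketch with the google-cloud-vision client library, assuming credentials are configured via GOOGLE_APPLICATION_CREDENTIALS and "selfie.jpg" stands in for a downloaded photo:

```python
# Sketch: detect faces in one image and print the emotion likelihoods.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("selfie.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Each field is a Likelihood enum: VERY_UNLIKELY (1) .. VERY_LIKELY (5).
    print("joy:", face.joy_likelihood.name)
    print("anger:", face.anger_likelihood.name)
    print("sorrow:", face.sorrow_likelihood.name)
    print("surprise:", face.surprise_likelihood.name)
```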

I ended up grabbing a little more than 100 images uploaded to Twitter with the #OnCallSelfie hashtag. After running all of them through the Vision API, I got the following average sentiment values:

joy: 2.23  
anger: 1.15  
sorrow: 1.31  
surprise: 1.32  

These values are on a 5-point scale, where 1 means Very Unlikely and 5 means Very Likely.
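
The averaging itself is straightforward, since the likelihood enums already map onto those numbers. A hypothetical helper along these lines, taking the face annotations collected across all the images:

```python
# Sketch: average the numeric likelihood values per emotion across faces.
# UNKNOWN (0) results are skipped rather than dragging the average down.
def average_sentiment(faces):
    scores = {"joy": [], "anger": [], "sorrow": [], "surprise": []}
    for face in faces:
        for emotion, value in (
            ("joy", face.joy_likelihood),
            ("anger", face.anger_likelihood),
            ("sorrow", face.sorrow_likelihood),
            ("surprise", face.surprise_likelihood),
        ):
            if value:  # skip UNKNOWN (0)
                scores[emotion].append(int(value))
    return {e: sum(v) / len(v) for e, v in scores.items() if v}
```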

Honestly, I wasn't sure what to expect from this exercise. I assumed no single emotion would stand out from the others, but I was wrong: joy was the most likely emotion, and anger the least. "Most" and "least" likely are relative here, though, since according to the API's scale none of the averages even reached "Possible" (3 out of 5).

All told, the Vision API is very interesting, and I'm sure I'll keep playing with it. I'll report back with more cool uses of it soon!