WHY IS THERE A NEED FOR A CHANGE?
Imagine the investment of time, money, and effort it takes to create a feature-length film. Now consider what this prized creation is reduced to... a single genre label and a few adjectives. We can do better, and we must. A story's emotional structure is part of the identity that makes a movie unique, and to understand and analyze it at scale we need AI and deep learning techniques that grasp storytelling at its core. Two well-known "war" movies demonstrate the point: according to traditional metadata they are exactly the same, yet as the comparison below shows, they have two very different emotional storylines and leave us in two very different emotional states. Why is this important? Because this understanding leads to a world-class customer viewing experience, and the alternative is recommending just another war movie... of which there are over 1,300 and growing.
[Signals shown in the comparison: scene detection, color palette, music, stress levels, sound effects, scene mood]
As the graph shows, Saving Private Ryan has a very intense beginning as the soldiers storm the beach at Normandy, followed by a slower mid-section of character-building, and ends with a long, intense final battle in which we feel stress, suspense, and fear for our favorite characters. Are they going to make it or not?
With Black Hawk Down, Ridley Scott wanted viewers to feel the stress a soldier feels on the battlefield. As you can see in the data, the movie is stressful all the way through to the end. Rarely can we as viewers relax during a movie like this, and afterwards we feel completely worn out.
HOW IT WORKS
Everything you need to know about a piece of content can be found within the content itself. The colors, objects, sounds, action levels, and much more are there to make you feel exactly the way the director intended. By combining the latest techniques in AI with extensive expertise in film creation, Vionlabs is able to extract all the relevant information needed to determine the emotional impact of content at scale. We call this extraction a "Fingerprint", and it represents the highest-quality description of content available in the industry today. This new approach to describing content solves the often expensive and inaccurate video-metadata problem. Why overpay for data that is disappointing, woefully inadequate, and counterintuitive? If a picture is worth a thousand words, how much is a movie worth?
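To make the "Fingerprint" idea concrete, here is a minimal sketch: per-scene signal scores flattened into one comparable feature vector. The class, the two signals, and all numbers are illustrative assumptions, not Vionlabs' actual model or API.

```python
# A minimal sketch of the "fingerprint" idea: per-scene signal scores
# flattened into one comparable feature vector. All class names, signals,
# and numbers here are illustrative assumptions, not Vionlabs' actual model.
import math
from dataclasses import dataclass

@dataclass
class Scene:
    stress: float  # 0..1, e.g. derived from audio energy and cut rate
    mood: float    # -1 (dark) .. 1 (light), e.g. derived from color palette

def fingerprint(scenes):
    """Flatten per-scene signals into a single feature vector."""
    vec = []
    for s in scenes:
        vec.extend([s.stress, s.mood])
    return vec

def distance(a, b):
    """Euclidean distance between two equal-length fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy arcs echoing the war-movie example above:
# an intense-calm-intense arc vs. constant intensity.
arc_movie      = [Scene(0.90, -0.6), Scene(0.30, 0.2), Scene(0.95, -0.8)]
constant_movie = [Scene(0.90, -0.7), Scene(0.85, -0.7), Scene(0.90, -0.8)]

print(distance(fingerprint(arc_movie), fingerprint(constant_movie)))
```

Two movies sharing a genre label can still sit far apart in this space, which is exactly what the war-movie comparison above illustrates.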
WE CARE ABOUT RELEVANCE AND DIVERSITY
Have you ever felt that current recommendation engines keep you in a bubble you can't get out of? Don't blame the recommendation engine; blame the data. The problem with manually added metadata is that it tends to be very one-dimensional: it leads you down a single path, and as you consume more, that path is reinforced with more of the same. We have therefore constructed our multi-dimensional fingerprints with a focus on relevance + diversity. The example below illustrates the difference. Instead of just highlighting "Sports", we also find connections to "underdog", "oppression", and "racial", allowing the recommendation engine to serve you a much more diverse and relevant selection of connections:
[Comparison for Woodlawn (2015): recommendations based on metadata leave the user stuck in a sports bubble; recommendations based on fingerprints give a relevant and diverse selection.]
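One common way to balance relevance against diversity is maximal marginal relevance (MMR) re-ranking over multi-dimensional vectors; the sketch below applies it to hypothetical fingerprints. The titles, tag dimensions, and vectors are illustrative assumptions, not Vionlabs' data or algorithm.

```python
# A sketch of relevance + diversity re-ranking over multi-dimensional
# fingerprints, using maximal marginal relevance (MMR). Titles, tag
# dimensions, and vectors are illustrative assumptions only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr(query, candidates, k=3, lam=0.7):
    """Pick k titles, trading relevance to the query (weight lam)
    against redundancy with already-picked titles (weight 1 - lam)."""
    picked = []
    pool = dict(candidates)
    while pool and len(picked) < k:
        best, best_score = None, -math.inf
        for title, vec in pool.items():
            rel = cosine(query, vec)
            red = max((cosine(vec, candidates[p]) for p in picked), default=0.0)
            score = lam * rel - (1 - lam) * red
            if score > best_score:
                best, best_score = title, score
        picked.append(best)
        del pool[best]
    return picked

# Dimensions: [sports, underdog, oppression, racial] (hypothetical tags).
woodlawn = [0.9, 0.8, 0.7, 0.9]
candidates = {
    "Generic Football Film A": [0.95, 0.10, 0.00, 0.00],
    "Generic Football Film B": [0.90, 0.15, 0.00, 0.05],
    "Underdog Drama":          [0.20, 0.90, 0.30, 0.10],
    "Civil Rights Story":      [0.10, 0.40, 0.90, 0.95],
}

print(mmr(woodlawn, candidates))
```

With a purely one-dimensional "Sports" tag, the two football films would dominate; with multi-dimensional vectors, the redundancy penalty keeps near-duplicates from filling the list.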
ACCESS TO VIDEO FILES → EXTRACT AND PROCESS DATA → PRODUCT & API ACCESS
HOW TO GET STARTED
UPLOADING YOUR OWN CONTENT
Our fully automated video analysis system supports:
Processing of 8,000+ assets a day
Deployment in your AWS or GCP environment
Deployment on your premises
Fully API-based for both upload and results retrieval
All information can be found at docs.vionlabs.com
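Since both upload and results retrieval are API-based, the flow might look like the sketch below. The base URL, endpoint paths, header, and field names are hypothetical placeholders for illustration only; the real API is documented at docs.vionlabs.com.

```python
# A sketch of assembling API calls for the upload/retrieval flow.
# The base URL, endpoint paths, header, and field names below are
# hypothetical placeholders; the real API is documented at docs.vionlabs.com.
import json

API_BASE = "https://api.example.com/v1"  # placeholder, not the real host

def build_upload_request(asset_url, api_key):
    """Describe a hypothetical POST that registers an asset for analysis."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/assets",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "body": json.dumps({"source": asset_url}),
    }

def build_result_request(asset_id, api_key):
    """Describe a hypothetical GET that fetches the finished fingerprint."""
    return {
        "method": "GET",
        "url": f"{API_BASE}/assets/{asset_id}/fingerprint",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

req = build_upload_request("s3://my-bucket/movie.mp4", "MY_API_KEY")
print(req["method"], req["url"])
```

The same request shapes apply whether the system runs in your AWS/GCP environment or on premises; only the base URL changes.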