Take a Peek Under the Hood at the Technological Approach Driving SocialVoice.ai

SocialVoice.ai is the first AI solution that can analyse everything an influencer has ever said across every video, every frame, word by spoken word.
With our solution, brands and agencies can now find out exactly what is being said ‘inside video’ about their market and brands. More importantly, the AI technology in our solution enables you to identify red-flag content such as toxicity, and to assess tone quality, sentiment, topic specialism and much, much more.

SocialVoice.ai not only provides unique video and voice-based insights to agencies and brands, but now also supplies vital missing data to top social listening platforms. The feedback on this solution, the Influencer Integrity Report, has been amazing. It has been described as:

“Ground-breaking” and “Like Air. Without this, social listening platforms can’t survive”.

Today’s blog is dedicated to the formalities – the technology behind SocialVoice.ai and our cutting-edge innovation.

Our Tech:

Our product emerged from a background of deep engineering expertise, advanced algorithms for automatic information extraction, transformer-based technology for decoding, and graph neural networks for enhanced pattern recognition.
In addition to these cutting-edge machine learning approaches, we combine their output with insights gained from traditional data science and analytics tools.
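To make one of those ingredients concrete, the sketch below shows what a single graph neural network layer does: each node in a graph updates its features by mixing in an average of its neighbours’ features. This is a toy, pure-PyTorch illustration of the general technique, not our production architecture, and all dimensions are invented for the example.

```python
import torch
import torch.nn as nn

class ToyGraphLayer(nn.Module):
    """One round of message passing: each node averages its neighbours'
    features and mixes them with its own via a learned projection."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) adjacency matrix
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        neighbour_mean = (adj @ x) / deg  # average over each node's neighbours
        return torch.relu(self.proj(torch.cat([x, neighbour_mean], dim=-1)))

# Six nodes with 16-dimensional features on a random graph.
x = torch.randn(6, 16)
adj = (torch.rand(6, 6) > 0.5).float()
out = ToyGraphLayer(16)(x, adj)
```

Stacking several such layers lets information flow across multi-hop paths, which is what makes graph networks useful for spotting relational patterns.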

Our technology is unique. It is based on many years of deep research, and delivers the first commercially available solution that can analyse video content at a very fine-grained level, at scale and speed. While others have made progress in this area, such solutions are generally far too resource-intensive to operate at scale: they generate interesting research artefacts, but offer little commercial value or societal benefit.

Our contribution is backed by numerous patents, and is focused on the problem of search space reduction. This allows our technology to be operated and deployed at global scale, reducing both operational costs and environmental impact. Our technology takes a robust and systematic approach to the problem.
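The specifics of our approach are patented, but the general idea of search space reduction can be illustrated with a deliberately simple example: use a cheap test to discard near-duplicate frames so that expensive models only run on frames that actually changed. The helper and threshold below are invented for illustration and are not our production method.

```python
import numpy as np

def select_candidate_frames(frames: list, threshold: float = 12.0) -> list:
    """Keep only frames that differ noticeably from the last kept frame,
    so expensive models (detection, scene classification) see a fraction
    of the input. A crude stand-in for real search space reduction."""
    kept = [0]  # always analyse the first frame
    last = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        current = frame.astype(np.float32)
        # mean absolute pixel difference as a cheap change signal
        if np.abs(current - last).mean() > threshold:
            kept.append(i)
            last = current
    return kept

# Example: 100 random 'frames'; in practice these come from a decoded video.
frames = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(100)]
print(len(select_candidate_frames(frames)))
```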

We ingest video at high volume, separate the audio and video streams, then distribute the processing of each individual analysis over multiple clusters of containerised services controlled by Kubernetes schedulers. Each service has a single responsibility, but similar services are grouped together to maximise resource allocation; for example, vision scene identification and object detection are deployed in the same Kubernetes pod to reduce data transfer when the same resource is being analysed.
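As a minimal sketch of the first step, the snippet below demuxes a file into separate audio and video outputs using ffmpeg (which must be installed); the paths and naming scheme are illustrative, and our real pipeline does this inside containerised services rather than a local script.

```python
import subprocess
from pathlib import Path

def split_streams(source: Path, out_dir: Path) -> tuple:
    """Demux a video file into separate audio and video files.
    Streams are copied, not re-encoded, so the split stays cheap."""
    out_dir.mkdir(parents=True, exist_ok=True)
    audio = out_dir / f"{source.stem}.audio.m4a"  # assumes AAC audio
    video = out_dir / f"{source.stem}.video.mp4"
    # -vn drops video, -an drops audio; -c copy avoids re-encoding
    subprocess.run(["ffmpeg", "-y", "-i", str(source), "-vn", "-c", "copy", str(audio)], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", str(source), "-an", "-c", "copy", str(video)], check=True)
    return audio, video
```

Each output can then be handed to the relevant analysis services, with co-located services (such as scene identification and object detection) sharing the same pod and the same downloaded asset.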

Our Tools:

We use a combination of tools to achieve our objectives. Most of our machine learning work is carried out and deployed using Python, but in some areas we use Rust for extreme performance improvement. We leverage the PyTorch framework extensively, and combine this with custom in-house predictive models and open-source models we have fine-tuned for our specific domain requirements.
For processing at speed we use Apache Spark, and deploy using a combination of cloud vendors including Azure and Google Cloud. In addition to deep analysis using machine learning based tools, we also lean heavily on statistical approaches to identify pattern correlation across time-based events.
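To show how these layers can combine, here is a hedged sketch of scoring transcripts at scale with Spark: a pandas UDF wraps a text classifier (here the public unitary/toxic-bert model as a stand-in for our fine-tuned, in-domain models) and applies it across a distributed DataFrame. The paths and column names are invented for the example.

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, pandas_udf
from transformers import pipeline

spark = SparkSession.builder.appName("transcript-scoring").getOrCreate()

@pandas_udf("double")
def toxicity_score(texts: pd.Series) -> pd.Series:
    # Loaded per batch for brevity; a production job would cache the
    # model once per executor. The model name is a public stand-in.
    clf = pipeline("text-classification", model="unitary/toxic-bert")
    results = clf(texts.tolist(), truncation=True)
    return pd.Series([r["score"] for r in results])

# Illustrative input: one row per transcribed video segment.
transcripts = spark.read.parquet("s3://example-bucket/transcripts")
scored = transcripts.withColumn("toxicity", toxicity_score(col("text")))
scored.write.parquet("s3://example-bucket/transcripts-scored")
```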

The key message regarding the tools and techniques we use is that, in order to achieve our goal and deliver the product to our customers, we have brought together a carefully coordinated team of cross-discipline specialists and technologies in a way that has not been done before. Our combination of cutting-edge and traditional approaches, together with deep big-data engineering expertise (covering CI/CD, DevOps and MLOps), enables us to operate innovation and product delivery at scale.

We also use tools that allow us to monitor our compliance with both legal and ethical obligations while using advanced AI and machine learning, including internal tooling as well as open-source resources such as SHAP and AI Fairness 360.
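As a small illustration of the explainability side, SHAP can attribute a model’s predictions to its input features, which makes it possible to audit what drives a decision. The classifier below is a synthetic stand-in, not one of our production models.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in classifier on synthetic data for the demonstration.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# surfacing which signals the model actually relies on.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:20])
shap.summary_plot(shap_values, X[:20], show=False)
```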

Our Innovation:

SocialVoice.ai’s innovation extends beyond the simple analysis of voice and video image frames. We harness multimodal technologies to understand the relationship between background scenes, the emotion of the spoken word, vocal tonality, and the speaker’s moving image.
Combined, this gives us a deeper and far more holistic understanding of the meaning of what is being said than ever before. In addition, using network-based graph analysis, we go deeper still, providing evidence-based insight for brands and marketers relating to sponsorships, promotions and reviews that would otherwise stay hidden from view inside a veritable ‘video black box’.
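The multimodal idea can be sketched with a simple late-fusion head: embeddings from each modality are concatenated and classified jointly, so the model can learn interactions between what is said, how it is said, and what is on screen. Everything below, including the dimensions and class count, is invented for illustration and is not our production architecture.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Toy late fusion: concatenate per-modality embeddings
    (scene, speech text, vocal tone, face) and classify jointly."""
    def __init__(self, dims=(512, 768, 128, 256), n_classes=5):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(sum(dims), 512),
            nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, scene, text, tone, face):
        fused = torch.cat([scene, text, tone, face], dim=-1)
        return self.classifier(fused)

# Example: a batch of 4 clips, one embedding vector per modality.
head = LateFusionHead()
logits = head(torch.randn(4, 512), torch.randn(4, 768),
              torch.randn(4, 128), torch.randn(4, 256))
```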

Our technology is based on advanced research carried out by our CEO Allen O’Neill as part of his PhD. He is a recognised expert in the field, and has been awarded ‘Most Valuable Professional’ in AI and Cloud technologies by Microsoft for 7 years running. In addition, he is a Fellow of the Computing Society, a Chartered Engineer and a Microsoft Regional Director (one of only 200 globally).

We have a growing portfolio of patents (both granted and pending) around our technology. We are adding to the state of the art not only through commercial innovation, but also by publishing academic and general technical papers about the advances made, as well as making valuable curated models and datasets available to the community. 

Conclusion: 

Here at SocialVoice.ai we have made our technology the basis of a large-scale video analysis platform that we know can, and will, change the way video analysis is carried out globally.

SocialVoice.ai is highly significant right now, as the written word becomes less dominant and mobile, video-first content quickly becomes the norm.
Current Influencer Recruitment solutions are full of dangerous gaps that risk campaign damage and brand safety. We are excited to see the impact SocialVoice.ai will have on the current Influencer Recruitment landscape, and look forward to sharing more behind-the-scenes updates in the future!
