Bizzlogic's One-Day Challenge #2

by

Fatih Inan

February 7, 2024

How We're Making Avatars Smile, Wink, and Win Hearts!

I'm Fatih Inan, the CTO at Bizzlogic, usually found tinkering behind the scenes. Today, I'm excited to pull back the curtain and show you what happens when we ignore all the complex details of shipping a feature and just have fun! Our findings should appeal to enthusiasts and professionals interested in avatar technology, motion capture, ARKit, and Metahumans. This post offers a glimpse into the intricate process of bringing more life and realism into the virtual world, a journey that is not only technical but also creative, and hopefully insightful.


What’s This One Day Challenge?

Once a month, we at Bizzlogic hit pause on our regular tasks and dive headfirst into the ocean of new tech. It's our very own One Day Challenge (we'll call it the ODC from here on), and it's as cool as it sounds! Our goal is to innovate, experiment, and sprinkle some extra awesomeness on our 'Meadow' platform.

Our Latest Tech Adventure

One of the obvious criticisms of our avatars was their lack of expression. Our vision is all about giving our avatars the power to express: a smile, a thoughtful glance, a bigger emotional spectrum. We aimed for the stars and decided to create a selfie mode for our virtual avatars. Think "hey, look where I am" types of selfie videos, but for our digital twins, using ARKit and Metahuman technology. We found this idea exciting for many reasons. Firstly, we could finally move towards a more profound experience of sharing content from within Meadow. More importantly, we could evaluate a possible new future for our avatars. All this while having the chance to explore the Metahuman Live Link functionality that has been out there for quite a while!

Aha Moments and Lessons

Thanks to virtual production's growth, we've got more gadgets and gizmos than a tech store on Black Friday. Epic Games and many individual developers keep building unique solutions that are just waiting to be combined into proper use cases. Stepping out of the cloud streaming maze helped us see a bigger, clearer picture. In our daily work, we are always faced with limitations around how to actually integrate new features into the product, and all the questions that surround that. Our ODC allows us to ignore all of this and just go for it.

I want to point out one element that was crucial for this: Live Link. It's a great technology, paired with an iPhone app, that you can connect to Unreal Engine or use to record directly on your device. It provides an effortless and fast way to record or livestream facial capture data. Better yet, it's versatile and a lot of fun! Create your own avatar with Metahuman, record a face-acting performance for a virtual avatar, or livestream it entirely! Thanks to plenty of examples, we were able to get it running quite fast.
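For the technically curious, here is a minimal sketch of the core idea. It assumes (purely for illustration, this is not our production code) a skeletal mesh whose morph targets follow the ARKit naming convention, with the 52 blendshape weights arriving per frame as a simple map:

```cpp
// Minimal sketch: apply incoming ARKit-style blendshape weights to a
// skeletal mesh. Assumes morph targets named after the 52 ARKit
// blendshapes (e.g. "jawOpen", "eyeBlinkLeft", "mouthSmileRight").
#include "Components/SkeletalMeshComponent.h"

void ApplyFacialCurves(USkeletalMeshComponent* FaceMesh,
                       const TMap<FName, float>& CurveWeights)
{
    for (const TPair<FName, float>& Curve : CurveWeights)
    {
        // ARKit delivers normalized weights; clamp defensively to [0, 1].
        FaceMesh->SetMorphTarget(Curve.Key, FMath::Clamp(Curve.Value, 0.0f, 1.0f));
    }
}
```

In a real project, the weights would typically flow through a Live Link Pose node in the animation blueprint rather than direct morph target calls, but the principle stays the same: a small set of per-frame floats drives the entire face.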

So, after trying and testing existing avatars and the first prototype, it was clear that we were onto something. The first significant lesson was the value of more expressive virtual avatars. They bring a new dimension to user engagement and interaction within virtual spaces. Our experiments and results made it clear that we will need to rework our avatar system in the near future to keep pace with these advancements. I believe that by creating avatars that can express emotions and interact more naturally, we can bridge the gap between virtual and real-world experiences, making the content more relatable and engaging. It's also an obvious step that will be required as soon as our interactions happen within Virtual Reality.

Produced face-tracking footage can be utilized in different ways.

Outlook and Roadmap

Looking ahead, we are excited to add these innovations to our roadmap in two parts. Firstly, we will introduce a new selfie mode that lets users share content more expressively within the Meadow world. Secondly, we plan to implement face tracking for avatars using your device's webcam, enhancing the platform's expressive capabilities. We are also exploring extensions towards full-body motion capture and VR headset capture technologies like the Quest Pro and Apple Vision Pro.

So how do we turn this into a feature? Here you can see why innovation like this might fail without free space: we need to crack an ARKit-free solution that works with any webcam, handle the communication between the cloud server and the user, secure the connection, and stay reliable across every individual's webcam quality and lighting situation. You also need to account for different resolutions, internet connections, and so much more. In the meantime, our current solution simulates facial expressions by analyzing your voice input.
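To make that last idea concrete, here is a deliberately crude, hypothetical sketch of voice-driven expression: estimate how loud the current audio frame is and map that loudness onto a jaw-open weight. The function name and thresholds are illustrative, not our shipped implementation.

```cpp
// Hypothetical sketch: derive a jaw-open blendshape weight from the
// loudness (RMS) of one frame of mono PCM audio, as a stand-in for
// real face tracking when no camera data is available.
#include <algorithm>
#include <cmath>
#include <cstddef>

float JawOpenFromAudioFrame(const float* Samples, std::size_t Count)
{
    if (Count == 0) return 0.0f;

    double SumOfSquares = 0.0;
    for (std::size_t i = 0; i < Count; ++i)
        SumOfSquares += static_cast<double>(Samples[i]) * Samples[i];
    const float Rms = static_cast<float>(std::sqrt(SumOfSquares / Count));

    // Map an assumed speech loudness range (0.01..0.2 RMS) linearly
    // onto a 0..1 morph target weight.
    const float Weight = (Rms - 0.01f) / (0.2f - 0.01f);
    return std::clamp(Weight, 0.0f, 1.0f);
}
```

A real solution smooths this over time and drives far more curves than just the jaw, which is exactly where trained models come in.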

I recommend reading about Oculus LipSync if you are keen to know more. In essence, an AI model is trained to predict which mouth shapes are being made from the audio alone, using syllables, volume, and expressiveness.
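The output of such a model is typically a set of viseme weights (mouth shapes such as "aa" or "PP") rather than expressions directly, and those still have to be translated into blendshape targets. The pairing below is our rough, illustrative simplification; the viseme names follow OVRLipSync and the target names follow ARKit:

```cpp
// Illustrative only: a tiny lookup from a few OVRLipSync-style viseme
// names to ARKit blendshape names. A production mapping covers all 15
// visemes and usually blends several targets per viseme.
#include <string>
#include <unordered_map>

const std::unordered_map<std::string, std::string> kVisemeToBlendshape = {
    {"aa", "jawOpen"},        // open vowel
    {"oh", "mouthFunnel"},    // rounded vowel
    {"ou", "mouthPucker"},    // rounded back vowel
    {"PP", "mouthClose"},     // bilabial plosive (p, b, m)
    {"FF", "mouthRollLower"}, // labiodental (f, v); crude approximation
};
```

The resulting per-frame weights are then applied to the avatar exactly like the camera-tracked curves shown earlier.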

Thankfully, there are countless individuals and companies who may already have the building blocks for these issues ready. Meanwhile, there is merit in simply expanding the recording functionality within Meadow, even without the webcam element.

We will share more here as soon as it materializes into another prototype.

Live tracking is much more complex, but also more immersive and authentic.

The Tools Behind the Innovation

This journey was fueled by a combination of advanced tools and technologies. We relied heavily on Unreal Engine 5, Live Link, Metahumans, ARKit, iPhones, and our Meadow Avatars. These tools were instrumental in bringing our vision to life, allowing us to experiment with and refine our ideas into tangible features.

As we continue to innovate and explore the limitless possibilities of the metaverse, stay connected with us at Bizzlogic. Our journey into the virtual realm is an ongoing adventure, full of discoveries and breakthroughs, and we are excited to share our big moments with you.

Fatih Inan, Chief Technology Officer, Bizzlogic  
