How Roblox Avatar Tech Is Evolving

We currently support two different tech stacks for avatars: a legacy tech stack (R6) that supports older avatars and experiences, and a newer tech stack (R15) that supports all avatar styles and capabilities.
To ensure that any avatar style will work in any experience and everyone can access the latest features, we’re working to unify these into a single tech stack. 
This presents technical challenges, so we’re working closely with our developer community to release tools that will ease the migration onto the new unified tech stack.

Avatars are increasingly becoming a part of our identity. At Roblox, we want each of our more than 65 million daily users to have an avatar that they feel truly represents them—not only how they look, but also how they express themselves to others in real time. This becomes even more important as we release immersive communication tools like Connect, which is a new way for anyone 13 and older to call friends on Roblox as their avatar. For people to feel truly connected as their avatars, they need to be able to react and show emotion in the moment. We need avatars capable of more complex facial expressions, lip syncing to voice, and nonverbal cues, such as shrugging or nodding. 

To ensure that everyone can see themselves reflected in these immersive worlds, we’ll need a greater variety of elements that people can mix and match to make avatars that represent them. That means more body and head types to choose from, as well as more clothing, makeup, and accessory types, and more hair and skin colors, textures, and styles. To rapidly expand the choices for these items, we are working to make it much easier to create new avatars and empower more people to bring their ideas to life. We’ve come a long way since our first blocky yellow avatar, and we aren’t finished yet. 

As avatars evolve and improve, we also want to ensure that the latest advancements, including layered clothing, facial animation, chat with voice, animation packs, and emotes, are available for every avatar, in every experience. Today, only avatars built on our most modern tech stack—called R15—have access to the latest mobility and expression capabilities. That’s because we currently support two distinct avatar tech stacks. The R6 tech stack was designed for the classic blocky-style avatars, which have only six body parts, and the experiences built for those avatars. The R15 tech stack was designed to support avatars with up to 15 body parts, so it supports all avatar styles—blocky, humanoid, and fantasy—and experiences built for all avatars. Supporting dual tech stacks has created limitations and frustrations for developers and creators. 
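
To make the structural difference concrete, here is a minimal Luau sketch (not code from this post) that logs which rig a spawning character uses and roughly how many body segments it has. Humanoid.RigType, Enum.HumanoidRigType, and the character part names are standard Roblox APIs; the logging logic itself is purely illustrative.

```lua
-- Server Script: report which rig a spawning character uses and how many
-- body segments make it up (6 for R6, 15 for R15, plus HumanoidRootPart).
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
    player.CharacterAdded:Connect(function(character)
        local humanoid = character:WaitForChild("Humanoid")

        -- Count the BasePart instances that form the rig itself.
        local partCount = 0
        for _, child in ipairs(character:GetChildren()) do
            if child:IsA("BasePart") and child.Name ~= "HumanoidRootPart" then
                partCount += 1
            end
        end

        if humanoid.RigType == Enum.HumanoidRigType.R15 then
            print(player.Name .. " spawned as R15 with " .. partCount .. " body parts")
        else
            print(player.Name .. " spawned as R6 with " .. partCount .. " body parts")
        end
    end)
end)
```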

We currently support more than 15 years of experiences, many of which were designed for R6 technology and are not working as seamlessly with the newest, most expressive avatars as we’d like. For example, if someone with an avatar built on R15 enters an experience built on R6, their avatar may look and move differently than usual—their avatar would no longer be able to make facial expressions. If they had layered clothing, such as a jacket over a shirt, their avatar would revert to simpler clothing. In addition, some experiences, like obstacle courses, are built around specific avatar sizes. We know this isn’t ideal for those who use or create for Roblox. 
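
To illustrate the root cause with a hedged example: many R6-era scripts look up body parts by their R6 names, which simply do not exist on an R15 rig. The Luau snippet below shows that pattern along with one possible fallback; it is an illustration, not a recommendation from the post.

```lua
-- Illustrative only: a pattern common in older R6-era experiences.
-- The script assumes every character has a part literally named "Torso",
-- which exists on R6 rigs but not on R15 rigs (they use UpperTorso/LowerTorso).
local function getTorso(character)
    return character:FindFirstChild("Torso")
        -- Defensive fallback so the same code tolerates R15 rigs.
        or character:FindFirstChild("UpperTorso")
        or character:FindFirstChild("HumanoidRootPart")
end
```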

We want everyone on Roblox to have access to our most advanced avatar technology so they can fully embody their digital identities and create amazing experiences and visuals. We also want to be backward compatible with existing avatars and experiences. Given all of this, we’re being very thoughtful about how we approach this unified tech stack, to avoid creating further disparities and to create a path forward that minimizes the amount of manual work required. We will provide the developers building these worlds with the tools and support to keep their experiences vibrant and engaging while maintaining the feel they want for their experience. 

Moving to a unified tech stack

Our avatars—blocky, humanoid, or completely fantastical—should just work in any experience, with any accessory. We want to remove any friction creators and users have felt to date. We also want creators to retain control over the look and feel of their experiences, whether they support R15 tech or R6. To support all of these new features and capabilities—now and as we continue to innovate—we’re unifying the technical architecture that supports all avatars.

We’ve heard from our developer community that they want to keep the look and feel of the classic blocky avatar style, but they also need us to enforce consistent avatar sizes and proportions. We also heard that they want tools to make it easy to load avatars built on R15 tech into R6 experiences now—and the ability to automate the process of converting R6 experiences to R15 standards. Our longer-term goal is to build a layer that will enable R6 experiences to work with the R15 stack, while minimizing any specialized code we would need to maintain.

Earlier this year, we shared the R6 to R15 adapter. The adapter works as an emulation layer, allowing R6 scripts to run on R15 bodies, without requiring any action on the part of the avatar’s creator. When an R15 avatar joins an R6 experience, the adapter enables it to move in the same way as an R6 avatar. This allows developers to immediately try out R15 avatars with just one click and see how well they work before making any updates to their experiences. With this new adapter, R15 avatars retain features like layered clothing and facial expressions, but can still join an R6 experience and move as the developer originally intended.
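
The post does not describe the adapter's internals, but one way to picture an emulation layer like this is a translation table that redirects legacy R6 part lookups to their closest R15 equivalents. The Luau sketch below is conceptual only and is not the actual adapter implementation.

```lua
-- Conceptual sketch only: not the actual R6-to-R15 adapter.
-- An emulation layer can translate legacy R6 part names into their
-- closest R15 equivalents so old lookups keep resolving on new rigs.
local R6_TO_R15 = {
    ["Torso"] = "UpperTorso",
    ["Left Arm"] = "LeftUpperArm",
    ["Right Arm"] = "RightUpperArm",
    ["Left Leg"] = "LeftUpperLeg",
    ["Right Leg"] = "RightUpperLeg",
}

-- Resolve a part the way an R6 script would ask for it, falling back
-- to the mapped R15 name when the legacy name is missing.
local function findLegacyPart(character, legacyName)
    return character:FindFirstChild(legacyName)
        or character:FindFirstChild(R6_TO_R15[legacyName] or legacyName)
end
```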

Our next step will be a suite of conversion tools to allow developers to easily migrate their R6 experiences to the R15 tech stack. These tools will help developers convert an experience’s script, character, and animations and help them test the conversion as they go. The conversion tools will use the R6 to R15 adapter so developers can publish their experiences in the middle of conversion without breaking. Finally, we plan to give developers the ability to adjust avatar scale to any desired setting, including mirroring the classic Rthro avatar style. This gives developers consistency for obstacle courses and unlocks the potential for building new types of Roblox experiences. 
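
Until those tools ship, a developer who needs predictable avatar sizes can already standardize scale on spawn by editing each character's HumanoidDescription. The sketch below assumes an R15 experience; HumanoidDescription, GetAppliedDescription, and ApplyDescription are public Roblox APIs, and the specific scale values are arbitrary examples rather than recommended settings.

```lua
-- Sketch: force every spawning R15 character to a uniform scale so that
-- experiences tuned for a specific avatar size behave predictably.
local Players = game:GetService("Players")

local function standardizeScale(humanoid)
    local description = humanoid:GetAppliedDescription()
    description.HeightScale = 1
    description.WidthScale = 1
    description.DepthScale = 1
    description.HeadScale = 1
    description.BodyTypeScale = 0   -- 0 leans toward the classic body shape
    description.ProportionScale = 0
    humanoid:ApplyDescription(description)
end

Players.PlayerAdded:Connect(function(player)
    -- Wait until the avatar's appearance has loaded so the override
    -- is not clobbered by the default avatar load.
    player.CharacterAppearanceLoaded:Connect(function(character)
        local humanoid = character:WaitForChild("Humanoid")
        if humanoid.RigType == Enum.HumanoidRigType.R15 then
            standardizeScale(humanoid)
        end
    end)
end)
```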

Beyond the unified avatar tech stack

Migrating to a unified tech stack is a necessary step for us to support developers and users as we improve avatar technology and introduce new features and tools. But it’s just the beginning. Unifying all avatars on one tech stack will make it easier for developers to take advantage of new real-time communication tools, such as Connect. For these calls to feel like a natural conversation, we’ll all need access to newer avatar capabilities like facial expressions, emotes, and voice syncing. We also want to enable a much broader variety of avatars, so we recently opened the doors to avatar creation by any of our UGC members. And we announced that we’re working on a generative AI tool to enable anyone on Roblox to easily create an avatar from an image and a text prompt. 

Our goal is always to be a platform that connects people with safety and civility in mind, so we’re thoughtful about how we’ll moderate the creations and interactions with these new avatars. As tools like generative AI democratize and accelerate creation, our moderation efforts need to keep pace, leveraging a combination of AI and human moderators. Some of the challenges that we’re currently addressing are directly related to the combinatorial nature of avatar creation and the vast number of social interactions on the platform. We’ll share more details about our moderation tools as we release them. 

Ultimately, we intend to enable anyone to create and customize avatars from scratch—even from within an experience. This will unlock unlimited ways for people to express their individuality. From a technical and creator standpoint, these capabilities also present a number of interesting challenges to solve:

How does a creator design items for a vast array of avatars with no restrictions to body symmetry, number of limbs, or facial features, while also supporting features like layered clothing or the ability to animate the avatar’s facial features?
How can we enable more people to create avatars without having to use professional 3D graphics software?
How can someone’s personalized avatar fit seamlessly into any experience they find on Roblox? 
With the rapid proliferation of UGC avatars and powerful generative AI techniques, how can our teams optimize our grid and cloud for maximum stability, low latency, and efficiency?

We are working to solve these challenges with new tools for creators, new infrastructure to make the platform even more reliable, and continued transparent communication with our creator community. Once everyone is on one unified tech stack, and the tools that make all of this easier are in their hands, our creators will be able to do what they do best: blow our minds by creating things we never could have imagined. 
