
The Metaverse: Helping Platforms Keep Us Safe in New Digital Territory

With the rise of the metaverse, users are constantly at risk of exposure to harmful content. How is contextual AI helping to detect and curb it?

The concept of the metaverse is not a novel one; people have been seeking to redefine reality on online platforms since the earliest days of the internet. Yet the current iteration of the metaverse has become something entirely new, garnering mainstream interest and billions of dollars in investment in recent years. While this presents a unique opportunity to foster community and drive innovation online, growing popularity and a more complex online reality also open the door to a greater risk of harm – one that is more difficult to detect and proactively combat.

Legacy Web 2.0 platforms hold decades of experience in confronting online harm and have evolved from basic content moderation toward more specialized tools that keep harmful content from growing in step with their user bases.

As we transition to Web 3.0, not only does risk in the metaverse evolve far faster, but the data is also much richer – giving harmful content more ways to evade detection.

As the metaverse continues to develop, there’s much to deliberate over – from regulation to how businesses will leverage these platforms and, of course, the potential for abuse. Fortunately, one of the best tools we have for keeping the metaverse safe already exists: contextual AI.

Context is key for understanding the meaning of what we read, see, and hear in everyday life. We don’t process the world in isolation; we analyze content along with its surroundings. The same is true when it comes to determining what is harmful and what is not in virtual reality.

Keywords and phrases can be considered harmful in certain situations but not in others. For example, the word “queer” has historically been used as a pejorative but has more recently been reclaimed by the LGBTQ community as a way to take back power and self-identify in a celebratory manner. Depending on how “queer” is used in a conversation online, it may be threatening – and should be identified and taken down – or acceptable to remain up. Visually, images of a breastfeeding mother or a newborn baby’s first bath can be flagged for nudity, though in context we understand that these are neither pornographic nor child sexual abuse materials.

On a Web 2.0 platform, contextual AI takes into account as much data as possible – such as the content within a video itself along with its title, description, user thumbnail, and comments – when calculating the probability of harmfulness. In the data-rich metaverse, there is even less reason to analyze content in isolation. We will need to understand how and where content exists in order to fully understand its usage. Just as in real life, who is saying something to whom, and in what space they are interacting, determines the meaning behind the interaction and whether users are in harm’s way.
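
As a rough illustration of what that can look like in practice, the sketch below combines scores from several surrounding signals into one probability of harm. It is a minimal, hypothetical example: the signal names, weights, and combination logic are assumptions made for illustration, not a description of any production moderation system.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    """Hypothetical bundle of signals available alongside a piece of content."""
    media_score: float        # model score on the video or image itself
    title_score: float        # model score on the title text
    description_score: float  # model score on the description text
    comment_scores: list      # model scores on individual user comments

def harm_probability(signals: ContentSignals) -> float:
    """Toy combination of per-signal scores into a single probability of harm.

    A real system would learn how to weigh these signals; the fixed weights
    here exist only to show that surrounding context shifts the final score.
    """
    comment_avg = (
        sum(signals.comment_scores) / len(signals.comment_scores)
        if signals.comment_scores else 0.0
    )
    weighted = (
        0.5 * signals.media_score
        + 0.2 * signals.title_score
        + 0.2 * signals.description_score
        + 0.1 * comment_avg
    )
    return min(max(weighted, 0.0), 1.0)

# The same borderline video reads very differently once its surrounding context is hostile.
benign_context = ContentSignals(0.55, 0.10, 0.05, [0.05, 0.10])
hostile_context = ContentSignals(0.55, 0.90, 0.85, [0.95, 0.80])
print(harm_probability(benign_context))   # ~0.31
print(harm_probability(hostile_context))  # ~0.71
```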

The rich complexity and unstructured nature of the metaverse, especially in comparison to Web 2.0 platforms, give users seemingly endless opportunities to build new worlds and engage with others. At the same time, this makes it easier than ever to embed violative items deeply, where they are harder for platforms to detect.

A harmful symbol, like a swastika, for example, would be quickly picked up by existing image recognition algorithms. But in the metaverse, it could be hidden in the grain pattern of a chair and visible only upon zooming in. Or it could be formed by the arrangement of chairs in a room, visible only upon zooming out and looking at how the chairs are placed in relation to one another.
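
One way a detection pipeline could act on this is to score the same scene at several zoom levels and viewpoints instead of a single rendering. The sketch below is purely illustrative; render_view, detect_symbol, and the chosen zoom levels are hypothetical placeholders for whatever rendering and image models a platform actually uses.

```python
from typing import Any, Callable, Iterable

def scan_scene(render_view: Callable[[float, tuple], Any],
               detect_symbol: Callable[[Any], float],
               zoom_levels: Iterable[float] = (0.25, 1.0, 4.0),
               positions: Iterable[tuple] = ((0.5, 0.5),),
               threshold: float = 0.8) -> list:
    """Run an image detector over several zoom levels and camera positions.

    Zoomed-out views can catch symbols formed by the arrangement of objects
    (e.g. chairs laid out in a pattern); zoomed-in views can catch symbols
    hidden inside textures such as a chair's grain.
    """
    hits = []
    for zoom in zoom_levels:
        for pos in positions:
            image = render_view(zoom, pos)   # render the scene at this view
            score = detect_symbol(image)     # score the rendered image for the symbol
            if score >= threshold:
                hits.append({"zoom": zoom, "position": pos, "score": score})
    return hits

# Toy usage: a fake renderer and a fake detector that only "sees" the symbol when zoomed in.
fake_render = lambda zoom, pos: {"zoom": zoom}
fake_detect = lambda image: 0.95 if image["zoom"] >= 4.0 else 0.1
print(scan_scene(fake_render, fake_detect))
# [{'zoom': 4.0, 'position': (0.5, 0.5), 'score': 0.95}]
```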

If malicious content like this stays hidden from all but those in the know, then platforms and their content detection tools must be in the know as well. Critically, context can tell us not only where to look but also how to look.

Of course, to accurately predict risk, AI models must be kept up to date. The world is constantly changing, and the defining properties of risk, or the features that are predictive, change over time. Given the countless ways to evade detection in the metaverse, AI leads must ensure that their definition of context, or where and how they look for signals, evolves so that they continue to search in the right places and feed those signals into their models.

Furthermore, models need to be retrained and redeployed on an ongoing basis to ensure they accurately reflect the world and incorporate changing policies. Platform policies, which reflect each platform’s willingness to accept risk, are another layer of context that needs to be incorporated into any automated content moderation solution.
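
To make that concrete, the same model score can map to different actions depending on a platform’s risk tolerance. The sketch below is an assumed illustration: the platform names, categories, and thresholds are hypothetical and stand in for whatever policies a given platform actually sets.

```python
# Hypothetical per-platform policies: each maps a harm category to the minimum
# model scores that trigger removal and human review. Looser or stricter
# thresholds encode how much risk a platform is willing to accept.
POLICIES = {
    "family_platform": {"hate_symbol": (0.60, 0.40), "nudity": (0.50, 0.30)},
    "adult_platform":  {"hate_symbol": (0.60, 0.40), "nudity": (0.95, 0.85)},
}

def decide(platform: str, category: str, score: float) -> str:
    """Map a model score to a moderation action under the platform's own policy."""
    remove_at, review_at = POLICIES[platform][category]
    if score >= remove_at:
        return "remove"
    if score >= review_at:
        return "send_to_human_review"
    return "allow"

# The same nudity score leads to removal on one platform and is allowed on another.
print(decide("family_platform", "nudity", 0.60))  # remove
print(decide("adult_platform", "nudity", 0.60))   # allow
```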

The metaverse presents an unparalleled opportunity for individuals and businesses alike, which is why safety must be prioritized in this new digital frontier. Well-intentioned users will always play a role in upholding the social contract and looking out for the safety and well-being of others in the metaverse. However, contextual AI adds another necessary layer of protection from nefarious actors seeking to take advantage of the freedom and opportunity the metaverse holds.

Matar Haller is the Director of Data Science at ActiveFence, the world’s leading trust and safety company.

