

It’s a cool summer evening in 2030, and my 16-year-old daughter and I are walking down our street, stargazing with augmented reality glasses. Above us the majestic night sky is clear, superimposed with information about distant stars arranged into constellations like Pegasus, whose legend I use to teach lessons about life. It’s a beautiful moment.

Further along, we pass a wooden fence tagged with a litany of swear words and slurs. I see the graffiti through my glasses, but my daughter, whose glasses are set to filter out inappropriate content, does not. Nor does she understand the cause of agitation written on the faces of those nearby.

I’m excited about the first possibility, but I worry about the second. While I appreciate the ability to shield my teenager from unseemly content, I also understand the importance of having meaningful conversations about why certain words and actions can harm others. That can’t happen if children never experience them in the first place.

We continue walking and see a young homeless man panhandling in front of the store. Here, the role of parental controls is murkier. Inadvertently or by design, an algorithm classifies his prone posture on the sidewalk, his ragged attire, and his note asking for money as inappropriate for children, and renders his appearance and surroundings as something more benign. Altering and architecting our experience of the world this way may seem far-fetched, but we have spent years learning how similar algorithmic biases redact what we are shown online.

What would prompt my daughter to ask questions about important societal concerns like homelessness, or to empathize with those experiencing it, if in her world she never sees it? What if others, who prefer an “idealized” world, also choose these settings in their AR glasses? How can we have meaningful conversations about how to address these challenges if large portions of the population are oblivious to the true conditions of our community?

We’re closer to dealing with these kinds of moral issues than you might think. Facebook now plans to pursue Mark Zuckerberg’s vision of transforming from a social media company into a “metaverse company,” and we’ve already seen a glimpse of what that could look like in Workrooms, which brings a sense of presence and a handful of gestures to virtual meetings. Media fragmentation and echo chambers have already shattered our common reality. If left unchecked, the metaverse may only make things worse. It won’t be long until each of us can live in an entire world tailored to our own personality, interests, and tastes, which may further erode our shared experiences and make it harder for us to meaningfully connect.

Collective experiences are essential to our ability to bond and cooperate. Much of the division we see today is a product of our splintering digital realities. When we’re not encountering the same problems, it’s hard to come together to develop solutions and empathize with others. Filter bubbles are ultimately an empathy problem.

The reality is, we’re already living in a near-infinite number of realities online. Moments after we begin browsing, our web experiences diverge. We each see wildly different things based on who we are, where we live, and what content we consume. The things we like get resurfaced again and again in different forms, each new iteration more enticing than the last. Eventually, our life online is entirely our own, which can lead to selective and self-reinforcing worldviews, and thereby to alternate realities.

Not only do many (if not most) of us still struggle to differentiate between what’s real and what’s fake, but we often don’t realize that these experiences are heavily influenced by outside actors with an agenda, whether it’s as banal as selling a new product or as sinister as shaping political beliefs and sowing discord. The metaverse will apply that dynamic to real-world interactions.

Time and again, companies develop new technologies without considering the possibility of adversaries. We’ve seen this with baby monitors, AI, and of course social media platforms. The metaverse isn’t immune. It’s not difficult to imagine nefarious actors injecting extremist or toxic content directly into metaverse experiences.


