

Project Overview
An immersive multimedia installation combining AI-powered sentiment analysis with a real-time 3D video simulation that visualizes emotional responses to online news content.
Problem: Online media content is increasingly designed to trigger emotional responses and influence human behavior—often subconsciously. As users scroll, swipe, and consume content, they may remain unaware of the cumulative impact on their mental state, decision-making, and interactions with others.
Challenge: How might we make the emotional effects of online media consumption more visible, tangible, and consciously understood—so that individuals can better recognize, reflect on, and respond to the influence it has on their daily lives and communities?


Solution & Outcome
In 2017, when AI tools for language were still in their infancy, I set out to explore how they could expose the hidden emotional influence of online media. Partnering with a software developer, I built a system that scraped news articles in English, Ukrainian, and Russian and used a custom AI model to detect the dominant emotion in each.
These predicted emotions—anger, fear, joy, sadness, and more—were visualized in a real-time 3D video simulation of a stylized Ukrainian family reacting to the news through gestures, text, sound, and expression.
Commissioned by Pinchuk Art Centre, the installation attracted wide attention, sparked public discourse on media manipulation, and was featured in both local and international press, including Vogue and L’Officiel.
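The scrape-translate-classify idea behind the system can be sketched roughly as follows. This is a toy illustration only: the dictionary entries and emotion weights below are invented, while the real installation used full StarDict dictionaries and a trained word-weighted model.

```python
from collections import Counter

# Toy stand-ins for the real components: the installation used StarDict
# dictionaries and a trained word-weighted model; these entries are invented.
UK_EN_DICT = {"страх": "fear", "радість": "joy", "гнів": "anger", "війна": "war"}
EMOTION_WEIGHTS = {
    "fear":  {"fear": 1.0},
    "joy":   {"joy": 1.0},
    "anger": {"anger": 1.0},
    "war":   {"fear": 0.8, "anger": 0.6},
}

def translate_word_by_word(words):
    """StarDict-style lookup: each word is translated independently, no grammar."""
    return [UK_EN_DICT.get(w, w) for w in words]

def dominant_emotion(text):
    """Sum per-word emotion weights and return the top-scoring emotion."""
    scores = Counter()
    for word in translate_word_by_word(text.lower().split()):
        for emotion, weight in EMOTION_WEIGHTS.get(word, {}).items():
            scores[emotion] += weight
    return scores.most_common(1)[0][0] if scores else "neutral"

print(dominant_emotion("страх і війна"))  # fear: 1.8, anger: 0.6 -> "fear"
```

Because both the dictionary lookup and the model score each word independently and ignore word order, losing grammar in translation costs little, which is why the word-by-word StarDict pivot described below turned out to work.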

Challenges & How We Overcame Them
Bringing this project to life in 2017 meant navigating multiple technical, linguistic, and conceptual challenges. Here's how we tackled them:
Challenge: Visualizing complex human emotions in a relatable way.
→ Solution: Designed a 3D simulation of a hyperbolized Ukrainian family reacting in real time with animation, sound, and text to AI-detected emotions.

Challenge: Limited AI capabilities in 2017 for real-time sentiment analysis, especially in speech-to-text.
→ Solution: Switched from live TV broadcasts to scraping text from news websites for analysis.

Challenge: AI model was trained only on English-language content.
→ Solution: Integrated automated translation (via Google Translate) to convert Ukrainian and Russian articles into English before analysis.

Challenge: Google blocked our translation service mid-project.
→ Solution: Pivoted to StarDict, a local Linux-based word-by-word translation tool, which surprisingly worked with our word-weighted AI model.

Challenge: Generating real-time, relevant emotional commentary was technically unrealistic.
→ Solution: Pre-generated a library of emotionally tagged commentary based on previously analyzed articles and keyword structures.

What I Learned & Potential Use Cases
In 2017, AI tools were limited—no transformers, no LLMs, no Whisper for speech-to-text. Building emotional intelligence into machines required creative workarounds.
This project taught me how to design empathetic systems, simplify technical complexity, and communicate emotional nuance through multimedia.
Most importantly, I realized that recognizing how online content affects our emotions takes conscious effort. This sentiment analysis algorithm has the potential to evolve into a powerful tool across several use cases:
Fake News Detection: Flag emotionally charged, potentially viral content for fact-checking before it spreads.
Emotional Content Filtering: Let users filter news or social media based on emotional tone—choosing what kind of emotional content they want to engage with.
Marketing Optimization: Use emotion prediction for A/B testing to identify which campaigns are more likely to resonate or go viral.
Emotion-Driven Content Creation: Reverse-engineer content based on desired emotional impact—helping creators, marketers, and founders craft more emotionally compelling stories.
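The pre-generated commentary workaround described in the challenges above can be sketched as a simple tagged lookup. All reaction lines and tags here are invented for illustration; the real library was built from previously analyzed articles and keyword structures.

```python
import random

# Hypothetical pre-generated commentary library: reaction lines are written
# and tagged with emotions ahead of time, then selected at runtime, so the
# simulated family can "respond" instantly without real-time text generation.
COMMENTARY = {
    "anger":   ["This is outrageous!", "How could they do this?"],
    "fear":    ["I don't feel safe anymore.", "What happens to us now?"],
    "joy":     ["Finally, some good news!", "We should celebrate tonight."],
    "sadness": ["That breaks my heart.", "I can't read any more of this."],
}

def pick_commentary(emotion: str, rng=random) -> str:
    """Return a canned reaction line matching the detected emotion."""
    lines = COMMENTARY.get(emotion)
    return rng.choice(lines) if lines else "..."
```

Trading real-time generation for a pre-tagged library is what made the installation feasible in 2017: the hard problem (emotion detection) ran live, while the expressive output was authored offline.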