Alexa Meets Reddit
When I do not go to the office, I spend about 25% of my total web time on Reddit. I spend a lot more than that when I do go to the office. (I only browse r/programming, I swear!) I also come from a long line of people who can barely see without glasses. It was high time I did something about it. And so, an idea was formed.
The idea started out as a way to access Reddit without looking at a screen. I explored various options. However, we live in the golden era of Voice User Interfaces, or VUIs. With the explosion of Google Homes and Amazon Echos, VUIs are inescapable. After a month of crazy, here we are with Alien Browser for Reddit.
The Core Concept
The basic idea is to enable Reddit browsing over voice. I had a vague notion of starting with subreddits like AskReddit, ELI5, TIFU, etc. Such “textual” subreddits are great candidates for a VUI. Alexa is an excellent storyteller.
Comments are easy enough to handle. For the most part, comments are just… text. Add the commenter’s name to the speech and you have a near-complete comment-browsing experience.
I had a minimum viable product going based on textual subreddits and comments.
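The skill’s internals aren’t published here, but the gist is simple enough to sketch. Reddit exposes every subreddit as JSON; the endpoint and fields below are real, while the function and user agent string are illustrative:

```python
import requests

def fetch_hot_posts(subreddit, limit=5):
    """Fetch the hot posts of a subreddit via Reddit's public JSON API."""
    resp = requests.get(
        f"https://www.reddit.com/r/{subreddit}/hot.json",
        params={"limit": limit},
        headers={"User-Agent": "alien-browser-sketch/0.1"},  # Reddit wants a custom UA
        timeout=10,
    )
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    # For "textual" subreddits, the whole story lives in title + selftext.
    return [(p["data"]["title"], p["data"]["selftext"]) for p in posts]

for title, body in fetch_hot_posts("AskReddit"):
    print(title)
```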
A Dash of Personality
Reddit users are (mostly) people, and people have personality. There is no reason every commenter should sound the same. Moreover, it is important to distinguish a comment’s content from Alexa’s narration.
Amazon Polly to the rescue. Amazon Polly is a service that synthesizes speech from text in a bunch of voices, each with its own accent, pitch, and timbre. Alien Browser randomly assigns a voice to each comment and narrates the content in that voice. This gives the impression that the commenter is actually speaking to the listener.
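Here is roughly what that assignment could look like with boto3. The synthesize_speech call is Polly’s real API; the voice pool and file handling are illustrative, not the skill’s actual code:

```python
import random
import boto3

polly = boto3.client("polly")

# A hypothetical pool of English voices; Polly offers many more.
VOICES = ["Joanna", "Matthew", "Brian", "Amy", "Kendra", "Joey"]

def narrate_comment(author, body):
    """Synthesize a comment in a randomly assigned Polly voice."""
    voice = random.choice(VOICES)
    response = polly.synthesize_speech(
        Text=f"{author} says: {body}",
        VoiceId=voice,
        OutputFormat="mp3",
    )
    # AudioStream is a streaming body holding the MP3 bytes.
    with open(f"{author}.mp3", "wb") as f:
        f.write(response["AudioStream"].read())
```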
The playlist above has two tracks: a Question and an Answer. Notice how different the two voices sound.
The Meme Challenge
Memes are love. Memes are life. What is Reddit without its memes? Heck, what is life without memetics? Alas, memes are images, and Alien Browser could only narrate textual information. This presents a challenge.
Fortunately, we also have the power of AI to extract information out of images. With sufficient training, we can extract text out of a meme — on the fly! Theoretically, it is also possible to extract enough information to match a meme with one of the known templates. Lucky for us, AWS provides all of this in a handy API via a service called Amazon Rekognition. It is recognition, but with a k — get it?
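A minimal sketch of the text extraction: detect_text is Rekognition’s real call, while the helper around it is illustrative:

```python
import boto3

rekognition = boto3.client("rekognition")

def meme_text(image_bytes):
    """Pull the caption text out of a meme image with Amazon Rekognition."""
    response = rekognition.detect_text(Image={"Bytes": image_bytes})
    # Detections come back as WORDs and LINEs; LINEs read more naturally.
    lines = [
        d["DetectedText"]
        for d in response["TextDetections"]
        if d["Type"] == "LINE"
    ]
    return " ".join(lines)

with open("meme.jpg", "rb") as f:
    print(meme_text(f.read()))
```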
Now, Alien Browser can not only narrate text, but also tell the listener what a meme says. As a pleasant side effect, other subreddits like r/GetMotivated work with extra awesomeness. (What kind of a pretentious snob chooses motivation over depression memes though?)
What’s more, we also get a card inside the Alexa app with the meme image embedded in it.
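For the curious, an Alexa response carrying a Standard card looks roughly like this. This is a hand-rolled sketch of the documented response format, not the skill’s code; the function and its parameters are illustrative:

```python
def build_meme_response(speech_ssml, title, caption, image_url):
    """A bare-bones Alexa response: spoken output plus a Standard card
    that embeds the meme image in the companion app."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": speech_ssml},
            "card": {
                "type": "Standard",
                "title": title,
                "text": caption,
                "image": {
                    # Alexa requires HTTPS image URLs.
                    "smallImageUrl": image_url,
                    "largeImageUrl": image_url,
                },
            },
            "shouldEndSession": False,
        },
    }
```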
The News Problem
Used judiciously, Reddit can be an excellent source of news. Reddit also provides a great platform for general reactions and discussions around a news item. Unfortunately, the media is biased towards bad news, and that is not good for our health. It is known. There is a lot of positive news that barely gets reported, and even when it is reported, it is lost in the avalanche of media.
What if we could focus only on happy (or even neutral) news? With the power of AI-infused sentiment analysis, this is a reality. You can tell Alien Browser to “only tell the happy news” and it will filter out all the negative news for that session. Again, the sentiment analysis is powered by Amazon Comprehend — another AWS offering.
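A sketch of what that filter could look like with boto3. detect_sentiment is Comprehend’s real call; the helper and the choice to keep NEUTRAL headlines are illustrative:

```python
import boto3

comprehend = boto3.client("comprehend")

def happy_only(headlines):
    """Keep only the headlines Comprehend judges positive (or neutral)."""
    kept = []
    for headline in headlines:
        result = comprehend.detect_sentiment(Text=headline, LanguageCode="en")
        # Sentiment is one of POSITIVE, NEGATIVE, NEUTRAL, MIXED.
        if result["Sentiment"] in ("POSITIVE", "NEUTRAL"):
            kept.append(headline)
    return kept
```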
Looking Ahead
It has been a great experience making something accessibility-oriented. It presented challenges I hadn’t faced before. There are minor kinks to be ironed out and bugs to be squashed. There are a few UX improvements in the pipeline too.
In the meantime, go ahead and give it a try! Here’s a handy link.