Excitement about Artificial Intelligence reached a fever pitch in the design and tech community this year. Most teams I talk to now are actively exploring ways to leverage A.I. in their work — whether it’s to enhance an existing product or to create all new “A.I.-native” types of services. Taking it a step further, many are now also looking at how A.I. will change the creative and design process itself.
Generative Models and Design
Using Artificial Neural Networks (ANNs) trained on very large datasets through a process known as Supervised Learning, computers can now understand the content of images and language. You probably see this technology everywhere, from Facebook auto-tagging people in pictures to Siri/Alexa/Google understanding simple spoken utterances. But as a side effect, these same neural network models let computers go beyond understanding: they can actually generate new images and new language. If you've seen things like Google's Deep Dream, the @DeepDrumpf Twitter account or the popular Prisma app, these are early implementations of what are called Generative Models. Research in this space is moving very fast, with new breakthroughs appearing almost every day, and while rudimentary at this stage, it signals a potentially significant shift for authors, designers and content creators.
A Hackathon at R/GA
I had been working with Jason Toy, founder of Somatic.io, on projects in this space. With a shared interest in getting these ideas out of the theoretical and into prototyping and experimentation, we organized a hackathon-style event with R/GA and Somatic, focused on "A.I. for designers." We brought together a like-minded group of designers, data scientists and engineers from across our networks to build things over the course of a full day.
But training new machine learning models tends to take a long time (days to weeks), so we had to do things a bit differently than we would for other hackathons. To make it a productive session, we focused on a few key themes and projects, and for each one prepared tools and bootstrap code in advance.
Here’s what the teams came up with — in just one day.
Natural Language Generation
The first project was all about machine learning and language: language understanding, and the potential impact of generative models on writing. Inspired by work such as Andrej Karpathy's and by word-based RNN text generation approaches, we wanted to see what we could do in a short period of time.
This team definitely set a high bar. They built a Rap Bot where users could battle the greatest rappers of any generation by selecting their favorite rap era, which would then dictate the corpus of rap lyrics used to generate responses during the battle. The team was able to bang out a Facebook Messenger bot implementation using Reply.ai (one of the companies in R/GA’s Commerce Venture portfolio).
The TensorFlow word-rnn framework yielded some fun and impressive results, given the short training time we had. That said, we continue to work on a more complex approach using skip-thought vectors, but that was outside the scope of the hackathon, in part due to the long training time needed.
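To make the word-based RNN approach concrete: the model keeps a hidden state that it updates one word at a time, then samples the next word from a probability distribution over the vocabulary. Here is a minimal, pure-Python sketch of just the sampling loop, with a toy corpus and random untrained weights (everything here, including names and sizes, is illustrative; the actual hackathon used the TensorFlow word-rnn codebase, which learns these weights from a real lyrics corpus).

```python
import math, random

# Toy vocabulary; a real system trains on a large lyrics corpus.
corpus = "the mic is hot the beat is cold".split()
vocab = sorted(set(corpus))
V, H = len(vocab), 8                      # vocab size, hidden size
ix = {w: i for i, w in enumerate(vocab)}  # word -> index

random.seed(0)
def mat(rows, cols):  # small random weight matrix (stand-in for trained weights)
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

Wxh, Whh, Why = mat(H, V), mat(H, H), mat(V, H)

def step(h, word_ix):
    # h_new = tanh(Wxh·x + Whh·h); x is one-hot, so Wxh·x picks a column
    h_new = [math.tanh(Wxh[i][word_ix] + sum(Whh[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    logits = [sum(Why[k][j] * h_new[j] for j in range(H)) for k in range(V)]
    m = max(logits)                       # stable softmax over the vocabulary
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return h_new, [e / z for e in exps]

def sample(seed_word, n_words):
    h, out = [0.0] * H, [seed_word]
    for _ in range(n_words):
        h, probs = step(h, ix[out[-1]])
        out.append(random.choices(vocab, weights=probs)[0])
    return out

print(" ".join(sample("the", 6)))
```

With untrained weights the output is near-random; training adjusts Wxh, Whh and Why so the sampled words follow the statistics (and style) of the chosen corpus, which is what made the era-specific rap battles possible.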
Neural Style
The second project explored "Neural Style," an idea expressed in papers like this one, and Justin Johnson's open source implementation. Jason's company Somatic.io provides APIs for applying generative models, which was key to accelerating experimentation. Using their APIs, we were able to get the less technical folks up and running very quickly, applying styles to their images.
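For the curious, the core idea behind neural style transfer is that the "style" of an image can be captured by the Gram matrix (channel-to-channel correlations) of a CNN's feature maps, and an output image is optimized so its Gram matrices match the style image's. Here is a minimal sketch of that style loss, using tiny hand-made feature maps as stand-ins (a real pipeline extracts features from a pretrained CNN such as VGG; all values and names here are illustrative):

```python
def gram(features):
    # features: C channels, each a flat list of H*W activations
    C, N = len(features), len(features[0])
    return [[sum(features[i][k] * features[j][k] for k in range(N)) / N
             for j in range(C)] for i in range(C)]

def style_loss(feats_generated, feats_style):
    # Mean squared difference between the two Gram matrices
    Gg, Gs = gram(feats_generated), gram(feats_style)
    C = len(Gg)
    return sum((Gg[i][j] - Gs[i][j]) ** 2
               for i in range(C) for j in range(C)) / C ** 2

style_feats = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
gen_feats   = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
print(style_loss(gen_feats, style_feats))  # identical features -> 0.0
```

In practice this loss (summed over several CNN layers, plus a content loss) is minimized by gradient descent on the pixels of the output image, which is the expensive step that services like Somatic's APIs handle for you.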