By Mike O’Connor | 13 December 2019
Adobe MAX, billed as the world's largest creativity conference, was recently held in Los Angeles. We sat down for a Q&A with Stephen Nielson, Director of Product Management for Photoshop, to get the latest on Photoshop and its transition to mobile platforms, and to find out how machine learning can enhance, rather than detract from, the creative process.
Australian Photography: Can you tell me a little bit about your role? How long have you been involved with Photoshop?
Stephen Nielson: I’m Director of Product Management for Photoshop (PS), and have been on the team now for eight years, which still means that I’m kind of young on the team.
I first started using it on Version 5.5 back in the 90s. Mostly it was part of [putting together] yearbook stuff, probably like a lot of people when they get started with it. During school, I started doing a lot of photography and I did some wedding photography semi-professionally. I’ve always been really into photography and have always admired what you could do with Photoshop. Coming to Adobe to work on Photoshop was a dream.
AP: Photoshop has become a cultural icon. Is there a pressure that comes with working on something that means so much to so many people?
SN: On the one hand, to work on a program like Photoshop is very satisfying. It's very fun. But yes, it's also intimidating, especially when you're responsible for the legacy of a product that has such a powerful brand and meaning and association.
It's also complicated in that you can talk to any two PS users and the way they use the tool will be very different. And so it's challenging to choose what to develop next, because what will be phenomenal for one person might be 'meh' to another.
AP: It’s also now 30 years old. How do you keep it relevant?
SN: The early days of PS were wild and crazy because it was just so new. What's been really exciting to see is that PS has not yet reached its peak; it is still growing and expanding.
Really, there’s no shortage of ideas of what we can do. What’s accelerating our growth and making the future look so exciting are two things. The first is the idea that we can bring PS to these [mobile] devices.
We've attempted this as a company before, and many people have attempted it before, but it's always been a significant compromise. The idea that we could now run the actual PS code base on a mobile device is relatively new. The other is the machine learning technology in Adobe Sensei (Adobe's machine learning platform).
This is what we’ve been dreaming about for years. We’ve always had ideas for things we could do, capabilities that we would like to add to the product, but have never had the means or the technology to do it because it was just so complicated.
Machine learning has just blasted open the doors and opened up a lot of things that were previously not possible or not feasible, [and now it’s like] maybe we could actually do that. It’s still really hard work, and there’s still a lot of research and a tonne of work to do. But it’s really exciting to see how much opportunity there is because of machine learning.
But in terms of relevancy, we've also made a significant investment in rewriting the guts of PS. People kind of tease us sometimes like, 'oh, Photoshop is 30 years old, I bet that code is a mess!' And I won't say that, you know, it's perfect – no code is. Or people might say 'You should rewrite Photoshop.' And [in reality], we have. I mean, we've been rewriting it piece by piece for 30 years, and there are very few parts that haven't been touched.
In the latest release, a major part of it was completely rewritten. It was one of the last parts that had never been rewritten.
AP: What were the challenges in bringing PS to mobile platforms?
SN: The original code of PS was designed to run on a device with something like 4MB of memory, so bringing it to the iPad actually fits and scales well. That said, there were still an enormous number of challenges in bringing over the engine in a complete way, as well as completely redesigning and building an entirely new UI on top of it. But, you know, the core was pretty well suited to the task.
But [despite this], initially it seemed like it just could not be done because devices were too underpowered and it would be too much work. There’s a meeting we had, I think it was in December 2016, and it’s burned in everybody’s mind because the initial prototype of the architecture was shown at this meeting, and we realised it would be possible.
AP: What does it mean for Photoshop Fix and Mix now you have the standalone program on iPad?
SN: We'll have more to announce about that soon. There are some really interesting concepts in Fix and Mix, but they are both quite workflow-focused [programs]. PS on iPad and PS on desktop don't prescribe a specific workflow; they are incredibly open-ended, which is intimidating to some users but liberating to others. I still think there's a place for workflow-centric programs like Fix and Mix, but I think the open-ended format of PS will be popular.
AP: Let’s talk about Adobe Sensei – what are some of the implications of bringing this technology into Photoshop?
SN: We're really trying to leverage the advances in machine learning to automate the mundane tasks that nobody likes doing, or to speed them up, and to unlock totally new creative capabilities. We're not interested in removing a human or replacing a designer; we want to remove barriers in a workflow.
Creativity is a uniquely human trait. We can write an algorithm that will generate something, but it isn’t necessarily beautiful, or well designed, or thoughtful or meaningful. For the most part it’s the human input that makes it interesting and valuable and meaningful.
If we can speed up that creative process, not take away from it, and make it easier for people to get to the end result faster and in more places, that will free people to step back from the mundane, really menial tasks and step up to a purely creative role.
AP: What’s your favourite Photoshop feature?
SN: In terms of new developments, the Object Selection Tool is new, and it's just so cool. I like to call it the 'shrink wrap tool'. It's like you're putting a thin layer of plastic over the object and then taking the shrink wrap gun to it.
I really love the new Content-Aware Fill workspace. It's such a powerful example of what we were talking about before: this amazing algorithm for generating content that didn't exist previously.
But [of course] the computer isn't very smart at just doing it by itself – sometimes it gets it right and sometimes it doesn't. With the new Content-Aware Fill workspace, you can kind of collaborate with the computer, with the algorithm, and say, 'hey, don't use that part, use this part, or use this setting.' And you can get a real-time preview and kind of iterate on the result with the computer. You can get some amazing results that way.
Mike O’Connor travelled to Adobe MAX courtesy of Adobe.