Podcasts - Season 5, Episode 5
Upgrade your videos with Pixel Video Boost
Pixel 8 Pro’s Video Boost transforms grainy, shaky videos into studio quality, even in challenging low-light situations. In this episode, we reveal the secret sauce that makes it possible.

Everyday moments through a pro-level lens 

Video Boost on Pixel 8 Pro makes videos shine with better lighting, finer details, and richer colors, delivering stunning quality from dusk until dawn. On this episode of the Made by Google Podcast, we sit down with product managers Kevin Fu and Lillian Chen to learn how it’s possible to transform low-light footage into something sharp, smooth, and rich in color. 

Expert video processing 

Video Boost relies on multiple technologies to improve video quality. For better lighting, it uses HDR+ with Night Sight, a process that pulls in a large range of dark-to-bright lighting data, and machine learning techniques that stabilize each frame and reduce motion blur so the result is ultra sharp. Tune in to learn more about another key ingredient that makes this feature possible: cloud processing.
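To make the "dark-to-bright lighting data" idea concrete, here is a toy sketch (not Google's actual HDR+ pipeline; the function name and weighting scheme are illustrative assumptions) of merging two differently exposed frames into one estimate, weighting each pixel by how well-exposed it is:

```python
import numpy as np

def merge_exposures(frames, exposures):
    """Toy HDR-style merge: weight each pixel by how well-exposed it is,
    then combine frames into one brightness estimate. `frames` are float
    arrays in [0, 1]; `exposures` are relative exposure times."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposures):
        # Prefer mid-tone pixels: weight peaks at 0.5, falls to 0 at the extremes.
        w = 1.0 - np.abs(frame - 0.5) * 2.0
        num += w * (frame / t)  # divide out exposure time to estimate scene brightness
        den += w
    return num / np.maximum(den, 1e-8)

# Two toy "frames" of the same scene: one short exposure, one 4x longer.
dark = np.array([[0.05, 0.10], [0.20, 0.40]])
bright = np.array([[0.20, 0.40], [0.80, 0.99]])
radiance = merge_exposures([dark, bright], exposures=[1.0, 4.0])
print(radiance.shape)  # (2, 2)
```

Real pipelines are far more involved (alignment, denoising, tone mapping), but the core trade is the same: dark frames preserve highlights, bright frames preserve shadows, and a weighted merge keeps the best of each.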

Stop fretting, start filming 

Whether you’re grabbing footage of a downtown cityscape or recording a concert, Video Boost erases the dread of wondering if anything will turn out. Just open your Google Camera app, switch to video mode, and tap Settings to turn on Video Boost. 

And there’s more where that came from. Listen in as Kevin and Lillian drop additional tips and tricks for getting the most out of this feature. 

Transcript

Lillian 00:00:00 So what we're doing is we're combining technology we've had with the power of cloud processing, which is a really novel approach to smartphone photography. I don't think we've seen it elsewhere in the field yet. And by using the power of the cloud, we're able to make videos so much better.

Voiceover 00:00:14 Welcome to the Made by Google podcast, where we meet the people who work on the Google products you love. Here's your host Rachid Finge.

Rachid 00:00:22 Today we're talking to Kevin Fu, product manager for the Pixel camera, and Lillian Chen, a senior product manager for the Google Photos experience on Pixel. They know everything about Video Boost, a feature that improves the quality of videos with enhanced lighting and color, especially at night.

Voiceover 00:00:40 This is the Made by Google podcast.

Rachid 00:00:43 Empower everyone to easily and beautifully capture life's moments. Kevin, that's the mission statement you have on the internal Google directory. Can you tell us a little bit about why you picked that?

Kevin 00:00:54 Yeah, sure. I feel like we're all super busy working on different projects, you know, so many things going on at work, so I find it very important to always kind of remember why we're doing what we're doing. So like why do people take photos with their camera? People essentially use their camera as a tool to capture moments so that they can create, remember, and connect with others. So it's like a very personal experience. I like this mission statement because it reminds myself how do we make it easy for them to capture those moments? How do we make sure that the moments that capture are as good or even better than they saw in real life? I think most importantly, empowering everyone here just means that we're building a camera for anyone in the world regardless of their background, their age, their gender, their skin tone, uh, and et cetera. I like to use this mission statement to remind, you know why we're building camera.

Rachid 00:01:45 So you are on the Pixel camera team. Something I learned over the past few years is that there are both people who are like hardcore into photography, maybe as a hobby before joining the team and then some people, they're not so much into it, which equally is useful when creating a camera. But I'm just wondering like what is your background in that sense?

Kevin 00:02:02 Yeah, so I wouldn't say I'm like a hardcore photographer. I would say I'm a hobbyist. I take a lot of photos with my Pixel, I take photos with my mirrorless camera very, very occasionally. But I would say I'm mostly a hobbyist when it comes to photography.

Rachid 00:02:18 Amazing. Lillian, welcome back to the Made by Google podcast. You were here when we were talking about the Audio Magic Eraser and another amazing feature of Pixel. What have you been up to since then?

Lillian 00:02:30 So great to be back after working on Audio Magic Eraser that we launched last October. I've been working on other features. We also announced that in Made by Google and then I've also been busy of course working on new features and feature upgrades for Pixel 9. So it's definitely a busy season right now.

Rachid 00:02:43 Lillian, for people who don't know what Video Boost is, how would you explain what it does?

Lillian 00:02:48 I would say the goal of Video Boost is to unlock the best video quality you've ever seen on a smartphone, and we're just starting on Pixel 8 Pro to do that for our users. In terms of how it does that, it combines technology we've used for a long time in Pixel camera for still images, but that's really computationally expensive to do on a smartphone. So what we're doing is we're combining technology we've had with the power of cloud processing, which is a really novel approach to smartphone photography. I don't think we've seen it elsewhere in the field yet. And by using the power of the cloud we're able to make videos so much better.

Rachid 00:03:22 And one of those areas where it does make videos better is when it's really, really dark. So it reminds me of Night Sight, which we've had for photography, I think, since Pixel 3. I actually remember the first time I saw it in action and was like, is this even real? And it turned out, fortunately, it was real. So is the goal with Video Boost to have a similarly transformative impact on video capture?

Lillian 00:03:45 Yes, it is. When we first started out this project and were trying to talk to different users about where they were happy with their video quality and where they weren't, low-light video was an area people very quickly named as one where their videos could be better, and these are users who are using top-of-the-line phones, right? So fairly happy users, but we still saw a user need for improvement. So when we thought about what we would want Video Boost to make better for users, Night Sight was top of our list of things we wanted to try to apply to video.

Rachid 00:04:16 So Kevin videos are often like taking sort of pictures 30 times a second. So I guess you took Night Sight, you multiplied it by 30 and that was the end of the workday, right?

Kevin 00:04:26 It's not a copy and paste, unfortunately, that would've made our lives a lot easier. Video is just much more complex than photos, both in a computational sense and in a quality sense, so we need to solve for both. The first problem, fortunately, is a much easier one to solve in our case. Night Sight essentially works by aligning and merging multiple frames. You take multiple frames in order to get one really high quality frame, and you have to do that for like 30 frames per second depending on your setting. That's typically how Night Sight works. But thanks to cloud computing we can process them with a lot more freedom. We can process them in a less constrained fashion, so we can decide how many frames we want to align and merge and how we're gonna merge them. We can even take this to another level.
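The align-and-merge idea Kevin describes can be sketched in a few lines. This is a toy illustration, not the actual Night Sight algorithm: it assumes each handheld frame is just a known integer shift of the scene plus noise, whereas a real pipeline has to estimate alignment itself and handle motion within the scene.

```python
import numpy as np

def align_and_merge(frames, shifts):
    """Toy Night-Sight-style merge: undo each frame's known (dy, dx)
    shift, then average. Averaging N independent noisy frames reduces
    noise by roughly sqrt(N)."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
# Simulate 8 handheld frames: each is the scene, shifted and noisy.
shifts = [(i % 3, i % 2) for i in range(8)]
frames = [np.roll(scene, s, axis=(0, 1)) + rng.normal(0, 0.1, scene.shape)
          for s in shifts]
merged = align_and_merge(frames, shifts)

noise_single = np.abs(frames[0] - np.roll(scene, shifts[0], axis=(0, 1))).mean()
noise_merged = np.abs(merged - scene).mean()
print(noise_merged < noise_single)  # True: the merged frame is cleaner
```

The payoff is exactly what the interview describes: more frames merged means less noise, and cloud processing removes the on-device limit on how many frames you can afford to align and merge.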

Kevin 00:05:14 We can even do much better. We can run more complex algorithms simultaneously. For example, we can run deblurring, we can run stabilization, we can run color correction, so we can run a lot of very heavy computations. It's all possible because of cloud processing. So that's the first challenge that fortunately was easier to solve. The second challenge is an interesting one. It's a uniquely video problem that a lot of people don't realize. If you think about it, a photo is really just one single photo. If it looks good, it looks good. But for video it's like 30 photos a second, so it's important that we take consistency into account. We call it internally the temporal consistency. Imagine if you look at a video being played back and you see this weird flickering of brightness, or all of a sudden you see this color shift, it would be a very jarring experience. So that's what we spent a lot of time and effort to get right. We wanna make sure that when you look at a Video Boost video, you don't get this random flickering of brightness, you don't get these random changes of colors. So yes, I would say the biggest challenge with bringing a lot of these photo algorithms into video is temporal consistency. It's something that we had to get right.
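One simple way to picture temporal consistency is exposure gain: if each frame independently picks its own brightness gain, playback flickers. A toy fix (purely illustrative; the function and numbers below are assumptions, not what Video Boost actually does) is to smooth the per-frame gains over time:

```python
def smooth_gains(per_frame_gains, alpha=0.85):
    """Toy temporal-consistency pass: exponentially smooth the brightness
    gain chosen for each frame so playback has no visible flicker.
    Higher alpha means stronger smoothing."""
    smoothed = [per_frame_gains[0]]
    for g in per_frame_gains[1:]:
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * g)
    return smoothed

# Per-frame auto-exposure gains that jump around (would flicker on playback).
raw = [2.0, 2.6, 1.9, 2.5, 2.0, 2.7]
smooth = smooth_gains(raw)
raw_jump = max(abs(b - a) for a, b in zip(raw, raw[1:]))
smooth_jump = max(abs(b - a) for a, b in zip(smooth, smooth[1:]))
print(smooth_jump < raw_jump)  # True: frame-to-frame jumps shrink a lot
```

The same principle, applied per pixel and per color channel with far more sophisticated models, is what keeps brightness and color stable across the "30 photos a second" Kevin mentions.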

Rachid 00:06:24 I'm just wondering, what was the single biggest technical hurdle? Maybe it was that one, but maybe there's another one in bringing Video Boost to life. Lillian, what initially seemed impossible?

Lillian 00:06:34 Well, what Kevin just talked about seems really hard to me. Um, but something else that we had to get over in the early days of Video Boost is figuring out how do we even use the cloud for processing images? It's something Pixel camera has never done before. We've always been focused on really fast processing. So what seemed both a bit impossible, but also like a paradigm shift for our users in the beginning, was this question we asked ourselves: what would happen if we didn't constrain ourselves to on-device, real-time processing? What if we could unlock the best of Google's computational photography and AI by using the cloud? So to build Video Boost, we needed to bring together a new end-to-end imaging pipeline that bridged both the Pixel camera app on device and Google Photos servers, something that didn't exist before. We also needed to bridge, as I said before, the pipeline to the Photos cloud where we could then apply a set of algorithms. So to do this, we had to hire specific people for their skillset so that our engineering team had the right expertise. And we're also really lucky to have really smart people from camera, from Google research with AI expertise and knowledge about computational photography, as well as people who understood how to even do ML processing in the cloud. And we also got a lot of advice from people who had experience with video backup and video server infrastructure.

Rachid 00:07:49 I guess with a feature like Video Boost, you don't know how great it is until it's built, right? It's maybe a research paper, but you don't know how effective it is until you see it for the first time. So, question to both of you: when did you realize that this feature had true potential?

Kevin 00:08:04 Yeah, I think for me it was probably when I first saw the very first Night Sight video demo. We are all very familiar with how Night Sight photos look, right? It's not that surprising, but to see it in video for the first time, we've never seen anything like that before. I remember when we saw the before video, it was a very dark video of someone's neighborhood, right? It was very dark, you couldn't really see a lot of things. And then the team told us, hey, the cloud-processed result finally came back, and we were all very excited to see it, and we were just stunned by how bright and how clean it was. And I think what's interesting is that it even revealed things that we didn't realize were in the shot. Like there was this big red car parked in the street that we were only able to see because of the boosted video. So I think that was the moment when I thought, you know, we had something that's gonna be exciting for our users.

Rachid 00:08:58 Lillian, did you see that same video or did you see something else the first time from Video Boost?

Lillian 00:09:03 The early demos were really exciting, for sure. I think Kevin's example is great, but for me, Video Boost's potential for users really hit home, as you mentioned, Rachid, when we were later in the stages of our development process, when we started our internal dogfood. That's when we let Google employees start testing our features, and we ran a Video Boost Halloween contest to get people to use it and give us feedback. All the examples from that contest are what got me really excited, because Video Boost would take videos that were captured at dusk or nighttime with lots of Halloween lights and orange decorations and make them look really awesome, just really vibrant, really well lit. And then just the energy and participation we saw with that competition from our dogfooders also made me really excited for our launch.

Rachid 00:09:45 I think what stands out about Video Boost in the way it works, that it is a combination of on-device and cloud processing. So what is the benefit of this hybrid approach and how do you balance speed and quality? Kevin?

Kevin 00:09:58 I think, to put it simply, the hybrid approach is basically giving you the best of both worlds. We give you a great video and we give you an even better video. What do I mean by that? When you take a Video Boost video, we actually give you two videos, right? We give you an on-device processed video that's fast, that's ready to go. You can view it, play it, share it immediately if you want. We call it the initial video. But we also give you a second video that's cloud processed, that's gonna take a bit of a wait, but it's gonna give you a much better quality video. That's your boosted, cloud-processed video. I would say for most scenes, our initial video, our on-device processed video, is more than enough to give you a stunning looking video. It runs what we call Live HDR+, which is a machine learning based HDR+ algorithm that's single frame, and we actually build it right into our Tensor chip.

Kevin 00:10:54 So it's gonna be able to run very efficiently. It's gonna give you that same punch and vibrance that people love from our HDR+ photos right in your video, right? It does that in low light. We also have techniques that we use such as pixel binning, which is essentially combining four adjacent pixels into one giant pixel that gives it four times the light sensitivity. We also use dual exposure, you know, so we did a lot of things to make it look good even in low light. So I would say the on-device processed initial video is already pretty good, but like Lillian mentioned, those that come to Video Boost want the best of the best. So this is when the boosted video comes into play. In cloud processing, we take that already pretty great initial video and we build on top of it. We make it 10 times better by running our state-of-the-art video algorithms, like HDR+, like Night Sight, like video deblur and all that good stuff, and generate a video that's much, much better. So if you're someone that wants to share video on social media immediately, you can just take that initial video. Or if you're someone like me who's always looking for the best quality, you know, I don't really share videos on social immediately. I'm the type of guy that likes to, you know, before bed, curate my videos a little bit, edit my videos a little bit. And if you're someone like that, then Video Boost is for you.
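The pixel binning Kevin mentions, combining four adjacent pixels into one, is simple enough to show directly. A minimal sketch (the function name is made up for illustration; real sensor binning happens in hardware, often before readout):

```python
import numpy as np

def bin_2x2(raw):
    """Toy pixel binning: sum each 2x2 block of sensor pixels into one
    'giant' pixel, quadrupling the light gathered per output pixel
    while halving resolution in each dimension."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

sensor = np.ones((4, 4))  # 4x4 raw pixels, one unit of signal each
binned = bin_2x2(sensor)
print(binned.shape)   # (2, 2)
print(binned[0, 0])   # 4.0: four pixels' worth of signal in one
```

That 4x signal per output pixel is exactly the "four times the light sensitivity" trade-off: you give up spatial resolution to gain brightness and lower noise in the dark.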

Rachid 00:12:24 Now Kevin, just to be sure, we talked a lot about Video Boost at night, but does it have any benefit in daylight as well?

Kevin 00:12:30 Yeah, absolutely. Video Boost in daylight is gonna process HDR+, whose benefits I think a lot of people are familiar with. You know, it's gonna give your video this extra punch of color, it's gonna give your video more contrast, more details. And on top of that we also have a lot of improvements with stabilization. So if you shoot Video Boost videos in daylight, you're gonna find the video to be very pleasing looking.

Rachid 00:12:58 I think the beauty of this all is that with all the complexity that Kevin and his team are working on, me as a user, I don't need to know anything about it. I just, I guess, have to turn it on. So maybe tell us, where do I turn it on?

Lillian 00:13:10 Yeah, that's right. So to turn on Video Boost, first go to your Pixel camera app and enter video mode by tapping that little video icon on the bottom of your screen. Then visit video settings by tapping the gear icon on the bottom right-hand side of your screen. You'll see in the list of settings, Video Boost is the second one. So when you turn it on, um, Video Boost is on. You'll notice that if you're trying it for the very, very first time, it might default to the 4K setting, because we're trying to help maximize the quality of your Video Boost. Um, but you can always change it back to full HD if that's your preference. And then we also highly encourage people to try Video Boost with 10-bit HDR turned on as well. But you'll see that, yeah, certain settings can be on or off combined with Video Boost.

Rachid 00:13:53 We now can establish that AI can help improve video quality. Are there any other areas in smartphone videography where AI can potentially assist and improve what people are making?

Kevin 00:14:06 I would say beyond quality, we know that users want to express themselves creatively. Some users aspire to create these very cinematic looking videos like the professionals, but they're having a hard time doing that today, because in between just getting good looking video frames and a professionally made video production, there are a lot of steps along the way. So I think an area in AI for videography that I'm personally very excited about is how can we make that creative workflow simpler for everyday users? How can we democratize professional videography or cinematography so that anyone, regardless of their skill level, can just produce professional looking videos? That's an area where I'm excited to see AI push forward in videography.

Rachid 00:14:57 Now, this season on the Made by Google podcast, we give our Pixel Superfans the opportunity to ask a question to the guests of the podcast. And we got a question this time from Edgar Martinez, and he wants to know if Video Boost could work completely on the Pixel device itself. Great question, Edgar. So what do you think? Is it possible at some point in the future, perhaps?

Lillian 00:15:18 Yeah, really great question. I would say, you know, the quality that you see on Video Boost today is only possible with the help of the cloud right now. Um, and that's because, as we mentioned, there's all this heavy computational photography done in parallel on the cloud, done frame by frame, and it would require so much compute power. That type of compute power doesn't exist on smartphones today. But I think the good news is that we'll keep working to make our on-device video quality better and better as well. And this is where Kevin can talk about our plans for on-device video.

Kevin 00:15:50 Yeah. In terms of on-device video, you know, we're always going to be improving year over year. I think many years ago when we first introduced HDR+ for still photos, we didn't think it was possible to land it in video, but as the technology evolved, our research team was able to come up with a clever solution to bring that to on-device video with Live HDR+, which, you know, I just talked about. So in the same way, as the technologies evolve, as the on-device capabilities evolve, we're definitely gonna see more and more technologies and use cases landing in on-device video. So that's very top of mind for us.

Rachid 00:16:29 Now, we love to offer our listeners a top tip, something they should definitely try with the feature or product you created. So when we talk about Video Boost on Pixel 8 Pro, what is one specific thing they should definitely try with it?

Kevin 00:16:42 Yeah, bonus tip, I would say choose low-light scenes with a lot of colors. Try to take Video Boost videos in dark places with a lot of colors. Think cityscapes at night, night markets, art shows, or teamLab, which, for those of you who don't know, is an amazing live show in Japan, right? You will get really, really incredible shots with Video Boost there; you will see Video Boost magically amplify the colors in the dark. Uh, so really, really cool. I recommend people try that out.

Rachid 00:17:11 Thank you so much for creating Video Boost. And I also say that as a father of young kids, you know, creating videos of them just before they go to sleep in a dark bedroom, I think even a year ago it was impossible to sort of capture these moments. So thank you also as a dad for creating this feature. And thanks for joining the Made by Google podcast, Kevin, Lillian.

Lillian 00:17:30 Thank you so much. Great to be here.

Kevin 00:17:32 Yeah, thank you for having us.

Voiceover 00:17:34 Thank you for listening to The Made by Google podcast. Don't miss out on new episodes. Subscribe now wherever you get your podcasts to be the first to listen.
