Nebula Blog

How the streaming sausage is made.

Now Hiring: Junior Production Talent

We’re hiring junior video production people. If you’re excited and talented and looking for your first (or second) real job in creator economy video production, we’d love to hear from you.

Nebula Studios is a world-class team of exceptional people. Our team has led or had a hand in projects from nearly all of our creators, including Real Life Lore, Jet Lag, Modern Conflicts, LegalEagle, PolyMatter, and Lindsay Ellis; honestly, it would be faster to list the folks who haven’t used the Studios team yet. Video essays, stage plays, documentaries, feature films. I’m truly amazed by the range and diversity of talent.

Digging into our process over the last few weeks, categorizing things and getting a better handle on the differences between rapid-turnaround YouTube videos and prestige-style Nebula Originals, has left us with a much better understanding of how to build for the future. A large part of that comes down to an interesting problem we only now realize we face: we don’t have enough junior people.

This is good news from most angles. It means we’ve built such an effective talent development pipeline that we’ve very efficiently helped junior folks gain the experience to move up. However, it also means that we’re currently missing the next round of junior people. We want to keep our mentoring skills sharp, and some tasks are simply better suited to folks who are learning the ropes. As more and more of our senior people get assigned full-time to top creators, we have more need for enthusiastic fresh eyes.

Entry-level jobs are important. They’re how we learn when we’re just starting our careers, and in an industry that is still very much in the invention phase, it’s more important than ever that there be great places for people to gain experience and see how things are done in the real world. YouTube-style video production has distinct needs, and there’s no YouTuber University where one can go to learn the skills and processes specific to this kind of work. (Not even — humblebrag — my NYU class on being a professional YouTuber.)

We’re well aware that there’s a vast sea of hungry young editors, motion graphics designers, artists, thumbnail designers, and sound engineers out there who want to break into the industry and work with top creators. Many of them aspire to be creators themselves. We think our problems are complementary, so we’re adding a bunch of junior production positions to our jobs page.

These aren’t unpaid internships. These are paying jobs where you’ll be on a team of skilled, experienced professionals with the history, context, and scar tissue of working on many high-profile projects with a variety of personalities. Our goal is to help turn junior people into senior people over time, growing our talent pool and offering more diverse perspectives and services to the creators we represent and serve.

If you’d like to be a part of that future, take a look at our jobs listings. Our standards will remain ever high, but we’re looking for enthusiastic, raw talent, not extensive resumes or fancy schools. Starting now, we’re going to keep all junior and mid-level production positions open perpetually — we can always make room for more exceptional people.

The Prince: Special Edition

Now streaming: Abigail Thorn’s The Prince, fully remastered, with a full behind-the-scenes video and a Q&A with Jessie Gender.

Leading up to the original release, we recognized that there were areas we’d like to polish a bit more. This is a filmed live performance; not everything goes perfectly, and not every microphone stays where it’s supposed to. None of these issues were showstoppers, and in the spirit of theater, the show must go on.

However, we feel that The Prince is an important piece of art, and part of our responsibility as stewards is to ensure that it’s preserved in the best state possible, so immediately after release we set ourselves to the task of adding the polish we believe the play deserves.

In the original cut, some of the audio came out distorted or muffled, likely from a mic coming untaped and rubbing against a costume. Mike Wuerth and Graham Hearther, two of the producers on the Nebula Studios team (and both audio engineering masters in their own right), did an incredible job of cleaning up the tracks. The show was filmed over two nights, so Mike was able to use audio from the second night to fix some microphone problems, and Graham went through and precision-adjusted EQ to improve intelligibility in places. All of this with a careful eye toward not encroaching on any of the actors’ performances.

The lighting design of the stage isn’t always easy to capture on camera. Ryan Alva, our resident colorist, pulled in the original raw footage and gave every shot a more intentional color-grade. Not to change anything, but to make the experience more immersive and theatrical.

We also made a few small edits to the timing of camera changes, since a camera lingering too long on a stage actor can unintentionally weaken the performance by stripping away context.

But wait, there’s more! Also included now are a half-hour making-of documentary, taking you behind the scenes of the entire process, and a 42-minute Q&A with Abi, hosted by Jessie Gender.

This isn’t a George Lucas overhaul. We think the end result is a cleaner, more immersive and enjoyable experience. When I told Abi we were doing it, she insisted that it wasn’t necessary — that the original release was already great. I appreciate her indulging us and letting us finish what we started. It was worth it.

Lifetime Memberships

Update: We’ve ended the experiment for now. We hoped to land around 1,000 lifetime memberships sold in the month, but we blew past that in about four days. After exactly one week, we’re just over 1,500.

This was dramatically more successful than we anticipated, and in some interesting ways. Most of the people who went for lifetime were either new subscribers (56%) or folks upgrading from the bundle (32%). What does that mean? We have no idea yet.

The plan now is to pause, take a close look at the data and what it means, and think about how we’d want to go about doing another round. I think it’s safe to say we’ll do it again. Next time, we’ll clearly telegraph start and end dates for the promotion.

Thanks for indulging us, everyone. This was so much more successful than we could have hoped.

We’ve just launched a little experiment. For a limited time, you can sign up for a lifetime membership to Nebula. This isn’t a lesser tier of service. This gets you access to everything that the monthly and annual plans do. There’s no catch.

We’ve gotten a couple of comments on Twitter and YouTube about this being portentous of some kind of imminent implosion, so I thought I’d take a moment and explain why this experiment is interesting to us, and what we hope to get out of it.

The short version: raising cash makes sense for a streaming service to invest in producing bigger projects, and we prefer doing it via lifetime memberships versus giving up equity and taking on VC money.

The long version is also pretty interesting:

We knew at the beginning of the year that we would be taking over Nebula’s marketing spend ourselves. Until recently, all marketing spend was handled by our bundle partner. We’ve been running models for a long time in preparation for this change, and we were confident it would go reasonably well, but we aren’t a VC-backed startup or a trillion-dollar megacorporation; taking over hundreds of thousands of dollars in monthly marketing spend represents a significant risk for us. We had to plan for every scenario.

After a couple of months of spending nearly $1m total to sponsor our own creators, our team has learned a couple of things that seriously surprised us.

All of our models were based on user trend data we had — data from direct subscribers. Until recently, direct subscribers were in the minority, and we’ve never really promoted direct subscriptions aside from the launch of Nebula Classes. That data suggested that monthly subscribers would be the dominant group. The reality? Not even close. Here’s a screenshot from our analytics dashboard. Dark orange is monthly subscribers. Light orange, top, is annual.

Graph showing dramatic increase in annual subscribers versus moderate increase in monthly subscribers.

Our total count of annual subscribers has more than doubled in two months.

Why is this so exciting? Well, annual subscribers stay longer, so there’s reduced churn. That’s obviously good for the stats. But annual subscribers also pay up front. While we theoretically make more money in 12 months from a $5 per month customer than we do from a $50 per year customer, we can do more with $50 of today money than we can with $60 of next year money. Spending, for example, $25 on marketing to bring in a new customer is reasonable if you expect the lifetime value of the customer to be, say, $75. But at $5 per month it’ll be five months before you break even. For some period of time, you’re dipping into your savings account. If a bunch of those new subscribers pick the annual plan and pay $50 up front, you can reduce the amount you have to take from savings. And if enough people pick annual, you might come out ahead every month.
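To make the arithmetic concrete, here’s a minimal sketch. The $25 acquisition cost and $75 lifetime value are the illustrative numbers from the paragraph above, not real Nebula figures:

```python
# Hypothetical numbers from the example above -- not real Nebula data.
CAC = 25           # marketing dollars spent to acquire one subscriber
MONTHLY_PRICE = 5  # dollars per month
ANNUAL_PRICE = 50  # dollars per year, paid entirely up front

# A monthly subscriber pays back the acquisition cost gradually,
# so you dip into savings until the payback point:
months_to_break_even = CAC / MONTHLY_PRICE

# An annual subscriber pays back immediately, with cash left over
# on day one ("today money"):
day_one_surplus = ANNUAL_PRICE - CAC

print(months_to_break_even)  # -> 5.0 months underwater per monthly sub
print(day_one_surplus)       # -> 25 dollars ahead per annual sub
```

Same customer economics either way over a full year; the difference is entirely about when the cash arrives.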

This is where we landed. Since the change to direct promotion, a wide majority of new subscribers are choosing annual, and in under 30 days we generated more first-month revenue from new subscribers than we paid to bring those subscribers in. By the end of March nearly 90% of our first-month revenue was from annual subscribers.

This is the buried lede: when we took over our marketing spend to focus on direct subscribers, we instantly became one of the largest and most successful sponsors on YouTube, and if things stay on their current course, we can keep that up sustainably without having to go back to savings again.

Okay, so why lifetime?

We aren’t currently spending from savings to pay for marketing. That’s great. But that first month did come from savings. It wasn’t a huge dent and we’re filling it back up, but again, we’re not a VC-backed startup. We don’t have hundreds of millions of dollars we don’t know what to do with. Our strategy is sustainability, and our growth comes from making good choices with and for our creators, not from spending indiscriminately on casting the widest possible net.

The original plan, back at the beginning of the year, was to use lifetime memberships as a buffer. A way to raise a little bit of capital in case things were rougher than our models suggested, and to do so without having to give up equity. As our actual performance outpaced even our most optimistic models, we realized there was still an interesting opportunity in raising capital this way.

Cash is king in any business — that isn’t unique to us — but Nebula is in an especially interesting position as a streamer. We have marketing costs, operational costs, and content costs. Operational costs are long-term, so you don’t typically want to make hires out of short-term cash bursts. Assume marketing is a relatively stable system for the moment. A short-term cash infusion for content? Now that’s interesting.

Our three biggest projects last year were a sci-fi movie about an evil coconut, a trans coming-out theater performance take on Shakespeare, and a travel game show. Super fun experiments that went over incredibly well, and suggest that our audience would love more creator-led big-scope projects. We have ideas. Our creators are pitching really cool stuff. We want to start making those projects real today, not in several months when the bank account fills up.

Okay, so why “experiment”? Why only for a limited time? What does success look like for this experiment?

The internal codename for lifetime memberships was “Project McRib.” The idea was that we’d turn it on, see how many people were interested, take some notes, then turn it back off until we felt like we understood what we were looking at. Then, if it makes sense, occasionally bring it back when we want to duplicate the experiment and raise some more cash for big stuff again later.

Maybe the pricing changes later. Maybe nobody is interested. Maybe having a more expensive option above the annual subscription makes annual the more attractive price point, improving cash flow and retention overall even if zero people sign up for lifetime. If there’s enthusiasm and people are trusting us enough to spend the money on a lifetime membership, that tells us something about how the audience sees us overall. If nobody goes for it, that also tells us something. My goal for the experiment was 1,000 lifetime memberships sold. In the first 24 hours we had over 300 people sign up. I’d say that bodes well.

Another thing to consider in all of this is the creators themselves. We fund Nebula Originals in part based on how well we think they’ll do for bringing in and retaining customers. If, say, Jessie Gender has an exciting new project in the works and announces it to her audience and a bunch of them go for annual or lifetime memberships, we can increase the budget. It’s not quite as one-to-one as something like Kickstarter, but the cash flow could have a very real impact on how we build projects, and that knowledge might make creators more enthusiastic about promoting those projects, leading to a wonderful virtuous cycle where creators get to make dope shit and the audience gets to enjoy dope shit. And because creator payouts overall are based on signups and profit, changes in our cash flow have a very real impact on their businesses, too.

We knew that it would be the most engaged, highest-value-potential subscribers who went for this. We set the price based on what our current data suggests that lifetime relationship looks like. But — and this is absolutely crucial to understand — this isn’t about maximum value extraction on either side. It’s okay if we lose a few bucks over the life of those subscribers. It’s okay if we come out slightly ahead. The real goal here is to give the most enthusiastic and excited people in the audience a chance to make a significant up-front impact to the creators and to the system we’re building.

I’ve never been less worried about the future of Nebula. I’d just much rather answer to the creators and the audience than to venture capitalists.

Snow Leopard

2023 will be Nebula’s Snow Leopard.

For the last four years, we’ve raced from milestone to milestone, adding new features, supporting new content types, and putting apps on new platforms. We’re proud of the work we’ve done, but when you’re constantly rushing to get to the next thing, you accumulate debt. Design debt, technical debt, expectational debt. We have tons of it. For Nebula to be what we dream it to be, we need to take the time to pay that off.

We recently posted a Reddit thread for subscribers to share their thoughts on the parts of the experience they felt could use improvement. When the dust settled, I was grateful for two things. One, every single post on that thread was thoughtful, kind, respectful, and supportive. The feedback was genuinely very useful. But two, there were absolutely no surprises for us. Every complaint we saw, every suggestion, every bug was something we’ve encountered or documented or captured from support emails.

Using Nebula should be a premium experience. When you sit down to watch a Nebula video, you should get lost in what you’re watching. When you’re looking for something to watch, it should be easy to find a great video or a great new creator. When you finish watching episode two of Jet Lag, Nebula should make it easy to get to episode three. Nebula should reward curiosity and exploration. This service costs you money, and our job is to create a low-friction system for you to enjoy the work you want to see from the creators you want to support.

“No features” doesn’t actually mean no features in the strictest sense. Recommendations, playlists, better player controls, and lots of other improvements are on the immediate roadmap. We consider these improvements to be “quality of life” features. Things that enhance the experience and remove friction. There are hundreds of little details, little pieces of the overall experience, that aren’t currently as refined as they could be. But they will be.

As we’ve discussed this plan with creators, staff, and audience, it has been truly galvanizing. I don’t think we’ve been as unified as a team on all fronts since the inception of Nebula itself. Sharing the plan publicly is partially about telegraphing our plan to the world, but also largely about keeping ourselves accountable. This is the mission: to polish the rough edges and make Nebula the best place to enjoy the videos and podcasts our creators produce. Every step we take this year will be in service of that mission. If you’d like to join us, keep reporting bugs and making suggestions. We’re listening.

P.S. Take a look at Apple’s promotional graphic for Snow Leopard. It’s almost too perfect.


Jet Lag: The Traffic

Not only was this our biggest day ever for signups, it was also our biggest day ever for traffic. In this post I’d like to give you a peek behind the curtain, to see how Jet Lag has impacted Nebula’s backend services, and to show you one of the tools we have at our disposal when things become too much to handle.

What we’re dealing with

Here’s 30 days of combined traffic to all of our backend services, between October 25th and November 25th:

We see a clearly defined daily cycle, and then every Wednesday we have these large spikes. These are caused by Jet Lag episodes being released.

“How do you know they’re Jet Lag?”

The release of videos is something we keep a close eye on. If we zoom in on one of these spikes and turn on our push notification annotations, this is what we see:

It looks quite dramatic, doesn’t it? What we see here is a ~3x increase in traffic to our backend services over the course of about 40 minutes. This is substantial, but not horrifying. I’d like to focus on two things we have in place for handling these spikes: autoscaling and rate limiting.

Autoscaling

The backend services that make up Nebula run on Kubernetes in AWS. Kubernetes is a container orchestration platform; you tell it what you want to run and it figures out how to do it. Explaining Kubernetes in detail is outside the scope of this post, but if you’re interested, the official website has a great overview.

We make use of two different types of autoscaling: pod autoscaling and node autoscaling. You can think of pods as independent units of work. Each pod can have multiple containers working together inside it, and those containers have to run somewhere. This is where nodes come in. Nodes are computers that can run pods; in our case, EC2 instances. Pods additionally specify how much CPU, RAM, and disk they need, so you can also think of them as having different sizes depending on how much of each resource they require.

We have multiple types of pods for each of our services. The most important for this post are what we call “web” pods. These run the containers that handle web requests to our APIs, and each service has at least 3 of them. They contain the application code to serve video data, handle signups of new users, create and cancel subscriptions, and so on. It’s these pods that have to absorb the 3x increase in traffic we see when a new Jet Lag episode drops.

Pod autoscaling

Pod autoscaling kicks in when a pod hits a threshold of resource utilization. Our web pod groups are configured to add a new pod if the average CPU utilization across all pods is above 60%. We run a minimum of 3 web pods per service for redundancy, and we also configure a maximum to prevent excessive traffic from filling up our nodes and leaving other services unable to scale up.
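Kubernetes’ Horizontal Pod Autoscaler implements this with a simple proportional rule. Here’s a sketch of the calculation, using the 60% target and 3-pod minimum described above; the maximum of 20 is an illustrative number, not our actual configuration:

```python
import math

def desired_replicas(current_replicas: int, current_cpu_pct: float,
                     target_cpu_pct: float = 60.0,
                     min_replicas: int = 3, max_replicas: int = 20) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured minimum and maximum replica counts."""
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))

# A Jet Lag drop pushes average CPU across 3 web pods to 90%:
print(desired_replicas(3, 90.0))  # -> 5: two extra pods get added

# Quiet overnight traffic never drops below the 3-pod redundancy floor:
print(desired_replicas(3, 20.0))  # -> 3
```

The clamping is what the maximum mentioned above is for: one service having a traffic spike can never eat the whole cluster.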

Given that pods have a size, and a node can only run a finite number of pods, we also have…

Node autoscaling

Just like pods, nodes have a size. They have a finite amount of CPU, RAM, and disk space, and when they get full they won’t be able to run any more pods. For this reason, our Kubernetes cluster is also configured to add more nodes when it hits a threshold of utilization.
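At its core, this is a bin-packing problem: pods are items with sizes, nodes are bins, and a pod that fits on no existing node forces a scale-up. A toy first-fit sketch, with made-up node and pod sizes (not our real instance shapes):

```python
# Hypothetical node size -- illustrative only, not our real instances.
NODE_CPU, NODE_RAM = 4.0, 16.0  # cores, GiB per node

def nodes_needed(pods, node_cpu=NODE_CPU, node_ram=NODE_RAM):
    """First-fit bin packing: the essence of why full nodes force
    the cluster to add another one. `pods` is a list of (cpu, ram)
    resource requests."""
    nodes = []  # each entry tracks a node's remaining [cpu, ram]
    for cpu, ram in pods:
        for node in nodes:
            if node[0] >= cpu and node[1] >= ram:
                node[0] -= cpu
                node[1] -= ram
                break
        else:
            # No existing node has room: node autoscaling adds one.
            nodes.append([node_cpu - cpu, node_ram - ram])
    return len(nodes)

# Nine web pods requesting 1 core / 2 GiB each need 3 four-core nodes:
print(nodes_needed([(1.0, 2.0)] * 9))  # -> 3
```

Real Kubernetes scheduling is far more sophisticated than first-fit, but the consequence is the same: pod autoscaling creates demand, node autoscaling creates capacity.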

That was a lot of words. Let’s see how it works in practice.

This is the same 30-day period we used earlier, only this time showing the number of nodes and pods present across all of our backend services. We can see it follows a similar daily cycle, with spikes in the same places we see spikes in traffic. At peak, we’re using up to 37 nodes, double what we need at our quietest times. Autoscaling allows us to save money, as well as respond automatically to bursts of traffic.

Zoomed in on one of the spikes, you can more clearly see autoscaling reacting to a Jet Lag episode being released. Doing this automatically frees us from having to respond manually in most cases; we only have to periodically tweak the maximums to make sure we have plenty of headroom, accounting for organic growth. It’s a solid foundation upon which we can grow without being too wasteful.

Rate limiting

Sometimes autoscaling isn’t enough. We rely on relational databases, and each service has its own database. Unfortunately, one of the places where relational databases struggle is write scaling, and Nebula is a write-heavy workload.

“Wait, write-heavy? How is that the case?”

It’s a little surprising, isn’t it? Isn’t Nebula mostly about serving content to people? It is, but an important aspect of that is remembering where you got up to. All of our apps are periodically reporting your progress through our videos and podcasts, so that if you suddenly lose connection on one device you can seamlessly resume where you left off on another device. This traffic, at peak, makes up around 70% of all Nebula traffic.

One of the good things about this traffic is that, when push comes to shove, we don’t need to serve it. All of our apps are written such that progress is saved locally, and if requests to save that local progress don’t succeed, they are retried later. It’s not ideal to delay this progress syncing, but it’s a small price to pay to maintain stability of the rest of Nebula. This is a valuable pressure valve for us, and we have had to use it recently.
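A sketch of what that client-side pressure valve looks like. This is an assumption about the general shape of the mechanism, not Nebula’s actual app code: progress updates land in a local queue first, and a rate-limited response just means “try again later,” never lost progress:

```python
import queue

# Progress updates are always recorded locally before any network call.
pending = queue.Queue()

def save_progress(video_id: str, seconds: float) -> None:
    pending.put((video_id, seconds))

def flush(send) -> int:
    """Try to sync queued progress; re-queue anything the server rejects.
    `send` stands in for the real HTTP call and returns a status code."""
    synced = 0
    for _ in range(pending.qsize()):  # snapshot size: re-queues wait for next flush
        item = pending.get()
        if send(*item) == 429:  # rate limited: keep it for later
            pending.put(item)
        else:
            synced += 1
    return synced

save_progress("jet-lag-s1e5", 1234.5)
save_progress("jet-lag-s1e5", 1240.0)
print(flush(lambda vid, sec: 429))  # server shedding load -> 0 synced
print(flush(lambda vid, sec: 200))  # pressure released     -> 2 synced
```

The video ID and function names here are hypothetical; the point is that shedding this traffic server-side costs nothing but a delay in syncing.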

This graph shows the percentage of traffic we are rate limiting over the last 30 days. Our logs show that all of our rate limiting is done against progress reporting requests, and we have a fairly consistent background rate of 5-7% of requests being rate limited. We’re strict with the rate limit on progress reporting, because we know the apps handle being limited well, so we try to skirt close to the expected rate of requests at all times.

You can see a big section in the middle of this graph, though, where we’re rate limiting significantly more requests than normal. This was in response to Jet Lag episode 5. Here’s a close-up.

(Trivia: this was my birthday!) And to show why we responded the way we did, here’s a graph of our database CPU utilization and CPU credit balance over the same time period.

The falling credit balance triggered an automated alert, and our response was to use our ability to rate limit specific endpoints on the fly to start limiting video progress reporting. This relieves pressure on the database and allows our credit balance to start refilling. CPU credits are a mechanism AWS uses to allow bursty CPU usage: every second you’re above 50% CPU utilization, your credit balance goes down; every second you’re below 50%, it goes up. When you hit 0, AWS either throttles you (really bad for us) or charges you more money (bad, but not as bad as throttling).
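A toy simulation of that credit mechanism. The earn/burn rate of 1 credit per second and the 50% baseline are simplified illustrations of the idea described above; real AWS burstable (T-class) instances have per-size baselines and rates:

```python
def simulate_credits(utilization, start_balance=100.0,
                     baseline_pct=50.0, rate=1.0):
    """Walk a list of per-second CPU utilization percentages and return
    the credit balance over time: above the baseline burns credits,
    below it refills them (floored at zero, where throttling begins)."""
    balance, history = start_balance, []
    for pct in utilization:
        balance += rate if pct < baseline_pct else -rate
        balance = max(0.0, balance)
        history.append(balance)
    return history

# 60 seconds pegged at 95% CPU (a Jet Lag drop), then 30 quiet seconds
# after rate limiting kicks in and pressure comes off the database:
trace = simulate_credits([95.0] * 60 + [10.0] * 30)
print(trace[59])   # -> 40.0  (spent 60 credits during the spike)
print(trace[-1])   # -> 70.0  (refilling once load drops)
```

The numbers are made up, but the dynamic is exactly what the graph shows: a long enough spike drains the balance toward zero, which is why shedding progress-reporting writes buys the database room to recover.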

Since this happened, we’ve upgraded the size of our database instance for this service, and haven’t had to use this pressure valve since. The most recent episode of Jet Lag, episode 7, released and triggered no alerts, requiring no intervention from the backend team. Success!

The future

It’s at this point you may be thinking: “why write this directly to the database at all? Why not keep this in Redis or some other in-memory data store?”

This has crossed our minds. The long-term future of progress reporting on Nebula is likely something where writes are cheaper and easier to scale, but for now we get a lot of benefit from using only relational databases. Introducing something new is a serious decision, one that would have us considering monitoring, alerting, disaster recovery, the effect on onboarding new team members, and the added complexity of another moving part. For now, paying a bit more for a beefier database instance made the most sense.