This blog now has a drop-down category called Modularity. But many arteries of Modularity lead into a project called Factory 2.0. The two are, in fact, pretty much inseparable. In this post, we’ll talk about the six problems that need to be solved before Modularity can really come to life.

The origins of Factory 2.0 go back a few years, to when Matthew Miller started the conversation at Flock. The first suggested names were “Fedora Rings”, “Envs and Stacks”, and “Alephs”.

What problems did Factory 2.0 want to solve?

#1 Repetitive human intervention makes the pipeline slow.
#2 Unnecessary serialization makes the pipeline slow.
#3 The pipeline imposes a rigid and inflexible cadence on products.
#4 The pipeline makes assumptions about the content being shipped.
#5 The distro is defined by packages, not “features” (Modularity).
#6 There’s no easy way to trace deps from upstream to product.


The great news is… if we had problems before, they’re about to get a lot worse. Does the Lego analogy mean anything to you? This is what Modularity would look like without Factory 2.0.

What Factory 2.0 is not

Factory 2.0 is not a single web application.

Factory 2.0 is not a rewrite of our entire pipeline.

Factory 2.0 is not a silver bullet.

Factory 2.0 is not a silver platter.

Factory 2.0 is not just Modularity.

Factory 2.0 is not going to be easy.

Does Modularity mean anything without Factory 2?

Does Factory 2 mean anything without Modularity?

Problem Number 1: Automating Throughput

Repetitive human intervention makes the pipeline slow. This one covers a lot of ground: rebuild automation, compose automation, and release automation.

Rebuilds and Composes

Builds: for this, we’d like to build a workflow layer on top of koji called “the orchestrator” (or the build orchestrator). The concept was originally conflated with Modularity-specific considerations, but we’d like it to be more general.


Composes: take pungi and break it out into an ad hoc process running alongside the build system.

In the best scenario, compose artifacts are built before we ask for them.
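To make that concrete, here is a minimal sketch of the event-driven idea: when a build-complete event arrives, we already know which compose artifacts depend on it and can kick them off proactively. The mapping, the event shape, and the function names are all assumptions for illustration, not any real orchestrator API.

```python
# Hypothetical mapping from a finished package build to the compose
# artifacts that should be rebuilt proactively. Names are invented.
BUILD_TRIGGERS = {
    "kernel": ["Atomic Host", "Workstation Live"],
    "httpd": ["Server DVD"],
}

def composes_for(build_event):
    """Given a 'build complete' event, return composes to pre-build."""
    package = build_event["package"]
    return BUILD_TRIGGERS.get(package, [])

# When a kernel build lands, start its composes before anyone asks.
print(composes_for({"package": "kernel"}))
# → ['Atomic Host', 'Workstation Live']
```

The point of the sketch is the inversion: composes are triggered by events flowing out of the build system, rather than by a human requesting them later.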


We can do two-week Fedora Atomic Host releases now. Hooray!


Can we reconcile that with the mainline compose/QA/release process? The problem is much more intense for Red Hat, simply due to volume. There is still uncovered ground in Bodhi for automation: the karma system is a precursor, but it relies on humans. Can we fast-track some components based on Taskotron results?
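A fast-track gate based on automated results could be as simple as the sketch below. The check names and the result shape are assumptions for illustration; this is not the real Taskotron or Bodhi API.

```python
# Hypothetical gating policy: an update may skip the karma process
# only if every required automated check passed. Check names invented.
REQUIRED_CHECKS = {"dist.rpmdeplint", "dist.abicheck"}

def can_fast_track(results):
    """Return True if all required checks have a PASSED outcome."""
    passed = {r["testcase"] for r in results if r["outcome"] == "PASSED"}
    return REQUIRED_CHECKS <= passed
```

The interesting part is that the policy lives in data (`REQUIRED_CHECKS`), so different components could demand different sets of checks without code changes.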

How can we specify an automated policy for setting different release cadences, without hard-coding it?

Problem Number 2: Pipeline Serialization

Unnecessary serialization makes the pipeline slow. This is less a problem for Fedora’s infrastructure than it is for the Red Hat-internal PnT DevOps environment: things happen in serial, unnecessarily. One big piece we (will) share here is the OpenShift Build Service (OSBS) for building containers. We’re going to need to crack that nut to get around new problems (assuming we “go big” with containers).

Internally, we’re going to be using a special build key for this — which we’ll treat as semantically different from the gold key. Let’s consider doing the same in Fedora.

Problem Number 3: Flexible Cadence

The pipeline imposes a rigid and inflexible cadence on “products”.


This is related to the previous point about automating releases. In the first analysis, “the pipeline is as fast as the pipeline is.”


Think about the different EOL discussions for the different Editions. Beyond that – a major goal of modularity is “independent lifecycles”. What does that mean in practice?

Let’s talk about pkgdb2 and its collections model.
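One way to picture “independent lifecycles” is to contrast it with pkgdb2’s collections model, where EOL is a property of a whole distro release. Below is a hedged sketch of what per-module, per-stream lifecycle records might look like; the data and function are invented for illustration.

```python
from datetime import date

# Hypothetical per-stream lifecycle table. In pkgdb2's collections
# model, EOL belongs to a collection like "f24"; independent
# lifecycles would need records per (module, stream), roughly so:
LIFECYCLES = {
    ("httpd", "2.4"): date(2027, 6, 1),
    ("nodejs", "6"): date(2019, 4, 30),
}

def is_supported(module, stream, today):
    """Return True if the given module stream is still within its EOL."""
    eol = LIFECYCLES.get((module, stream))
    return eol is not None and today <= eol
```

Under a scheme like this, two streams of the same module can go EOL on entirely different dates, regardless of which distro releases ship them.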

Problem Number 4: Artifact Assumptions

The pipeline makes assumptions about the content being shipped. Remember we asked some Red Hat stakeholders what they wanted out of a next generation pipeline? There were some real gems in there. My favorite was: “I want to be able to build any content, in any format, without changing anything.”

This is fine


This one is an odd duck among the problem statements. Qualitative – not quantitative. Do we have to do gymnastics every time we add a new format? Or can we make that easier over time?

Autocloud and Two Week Atomic, OSBS, Flatpaks, snaps, rkt containers, etc… We can do anything. But how easily can we do it? Which leads us to….

The pernicious hobgoblin of technical debt: Microservices (consolidate around responsibility!), reactive services, idempotent services, infrastructure automation…

Problem Number 5: Modularity

All Roads Lead to Rome. The distro is defined by packages, not “features”. There are some specific things about modularity (module build service, BPO, etc…). Really, this is where we tie all the threads together. Each has a certain value on its own, but if we can’t “do modularity” it won’t have the same effect.

Building modules


See the Modularity Infrastructure page. Then, visit the dev instance of the build pipeline overview app.

Problem Number 6: Dependency Chain

There’s no easy way to trace deps from upstream to product (through all intermediaries).

We can model deps of RPMs today, kinda. We can model deps of docker containers in OSBS.

The productmd manifests produced by pungi contain the deps of all our images. So, that’s great. But there’s no easy way to traverse deps all the way from an upstream component to the end artifacts.

Let’s expand pdc-updater.


And then we can use that data for great justice. 
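The traversal itself is straightforward once the data exists in one place. Here is a toy sketch: the node names and graph shape are invented for illustration, standing in for dependency data that pdc-updater would harvest into PDC.

```python
# Toy dependency graph from an upstream component down to shipped
# artifacts. All node names are invented; real data would come from
# productmd manifests, OSBS, and friends, aggregated by pdc-updater.
DEPS = {
    "openssl": ["openssl-rpm"],
    "openssl-rpm": ["httpd-rpm", "base-container"],
    "httpd-rpm": ["server-dvd"],
    "base-container": ["atomic-host"],
}

def downstream(node, graph=DEPS):
    """Return everything ultimately built on top of `node` (DFS)."""
    seen, stack = set(), [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(sorted(downstream("openssl")))
# → ['atomic-host', 'base-container', 'httpd-rpm', 'openssl-rpm', 'server-dvd']
```

With a query like this, “which end artifacts does an openssl CVE touch?” becomes one graph traversal instead of an archaeology project.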


There’s an opportunity to do something very cool with how we make the distro. Please tell us where we’re wrong. Hop in #fedora-modularity and #fedora-admin to join the party.

The so-called “Factory 2.0”

Presented at Flock 2016 by @ralphbean.

Slides available at