Some time ago, the Modularity team in Fedora attempted to organize a proper hackfest on Modularity. The hackfest was intended to gather members of the Fedora community (both internal and external to Red Hat) in Ireland to work through some of the bigger UX and packaging concerns around Fedora Modularity. Unfortunately, the planning and funding for the hackfest fell through. However, we were able to pull together a less ambitious hackfest in the Red Hat Boston office over Monday and Tuesday, on effectively no notice. Attendance was a bit limited, but we got several people together in person, with several more joining by video conference.

Among the attendees from Red Hat were Petr Šabata, Langdon White, Adam Šamalík, Mohan Boddu and Matthew Miller. From outside of Red Hat, we were joined by Neal Gompa and Igor Gnatenko.

Much of this two-day hackfest was spent identifying and scoping the most urgent problems that we need to solve. We opened the session by inviting Neal Gompa to report on his experiences with attempting to consume and build modules for the projects he works on in his day job. In particular, his internal toolchain uses the Open Build Service (OBS) to build his tools on multiple operating systems. At present, OBS does not handle repositories with modular content appropriately. OBS relies directly on libsolvext to work with repodata, and libsolvext does not currently handle the module metadata. As a result, the Fedora Modular and Red Hat Enterprise Linux 8 Beta AppStream repositories look like a collection of conflicting data.

We spent quite a lot of time discussing this topic and eventually broke it down into two specific problems to solve. First, we need to work with libsolvext upstream to add support for reading the modulemd YAML format. Second, we need to convert the data read from this format into “solvables” that can be processed by libsolv. This will enable OBS to process modules appropriately (and may eventually replace the implementation in libdnf). The first issue is currently blocked on libsolv upstream’s unwillingness to include the libyaml parser, insisting instead that modulemd be provided as XML. However, Igor (himself a member of libsolv upstream) has become sufficiently convinced that such a switch won’t happen and is going to ask the rest of upstream to reconsider their stance.
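To make the scope of those two tasks a bit more concrete, here is a rough sketch, written in Python with PyYAML rather than libsolv’s C, of the kind of information a parser would need to pull out of a modulemd document in order to build module “solvables”: the name:stream:version:context:arch identity plus the binary RPM artifacts it gates. The module data embedded below is illustrative, not real Fedora metadata.

```python
# A rough illustration (not libsolv code) of the fields a modulemd parser would
# need in order to construct module "solvables". All values are made up.
import yaml  # PyYAML; modular repodata is a stream of YAML documents

MODULES_YAML = """\
---
document: modulemd
version: 2
data:
  name: nodejs
  stream: 10
  version: 20190301000000
  context: abcdef12
  arch: x86_64
  summary: Javascript runtime
  license:
    module: [MIT]
  profiles:
    default:
      rpms: [nodejs, npm]
  artifacts:
    rpms:
      - nodejs-1:10.15.0-1.module_f30+1234+abcdef12.x86_64
...
"""

for doc in yaml.safe_load_all(MODULES_YAML):
    if doc.get("document") != "modulemd":
        continue  # defaults documents and the like are handled separately
    data = doc["data"]
    # The NSVCA identity plus the artifact list is what would back a solvable.
    nsvca = "{name}:{stream}:{version}:{context}:{arch}".format(**data)
    artifacts = data.get("artifacts", {}).get("rpms", [])
    print(nsvca, "->", artifacts)
```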

The third primary issue we identified was that some content used when building Fedora and RHEL 8 Beta modules is not published outside of the build system, making those modules impossible to reproduce externally. In Fedora, we have modules (called “buildroot-only” modules) that are not shipped in any public repository. For the sake of the distribution’s reproducibility, we will probably need to publish this content somewhere. We discussed with Mohan (the Fedora release engineering lead) the possibility of providing a new repository for this content that we would not mirror widely (and would probably produce without deltarpms, etc., so as not to unnecessarily slow the compose process).

The last big issue we discussed was the difficulty currently faced by users who want to build their own modules locally. By “locally”, we mean that once the necessary dependencies are cached onto the local system, it should be possible to complete builds and rebuilds without any internet access. Today, the MBS has limited local-build functionality out of the box for building Fedora modules. However, it is not truly local, as the build process will reach out to Fedora’s infrastructure, including both Koji and PDC, at certain points. We agreed that the MBS needs to be updated so that it can either cache all content locally or be pointed at a specific site mirror of the repositories, allowing builds to proceed without access to the Fedora infrastructure. In the case of RHEL 8 modules, this becomes even more urgent, as anyone outside the Red Hat firewall will not have access to the Red Hat MBS and Koji instances.

Beyond the problems with network access, we also looked into what people will need to do in order to compose and release their third-party modules in their own repositories, either internal to their organization or publicly as a non-official Fedora or RHEL repository. Today, there are painfully difficult ways to accomplish this with the createrepo_c family of tools, but those present at the hackfest agreed that we need to improve this experience by making module metadata a first-class citizen in those tools. To that end, I opened several issues against the createrepo_c project:

  • Issue 131: Make module metadata a first-class citizen in mergerepo. This is necessary to ensure that tools that want to compose modular content will be able to update their final repos with the content produced by a local MBS build. In other words, it must be possible to take the module-specific repo created by an MBS local build and add it to a combined repository without a lot of manual steps to set the module metadata correctly (a rough sketch of this merge step appears after this list).
  • Issue 132: Add package checksums to the module metadata when creating a repository. This is needed for libsolvext, as it uses the checksums as the lookup key for each package.
  • Issue 133: When running createrepo_c or mergerepo_c, it needs to be possible to exclude a subset of modules from the resulting repository, similar to how packages can already be excluded.
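As a rough illustration of what issue 131 above is asking for, the following sketch performs the merge step by hand using libmodulemd’s existing ModuleIndexMerger from Python. The file paths and priorities are assumptions for the example; this is not the actual createrepo_c implementation, just a picture of the work mergerepo would take over.

```python
# A hedged sketch of the merge step mergerepo_c needs (issue 131), done by hand
# with libmodulemd's ModuleIndexMerger. Paths and priorities are illustrative.
import gi
gi.require_version("Modulemd", "2.0")
from gi.repository import Modulemd


def load_index(path):
    idx = Modulemd.ModuleIndex.new()
    ok, failures = idx.update_from_file(path, False)  # strict=False
    if not ok:
        raise RuntimeError("failed to parse %s: %s" % (path, failures))
    return idx


# e.g. the modules.yaml produced by an MBS local build and the one already in
# the combined repository (hypothetical paths)
local_build = load_index("local-build/repodata/modules.yaml")
combined = load_index("combined/repodata/modules.yaml")

merger = Modulemd.ModuleIndexMerger.new()
merger.associate_index(combined, 0)      # lower priority
merger.associate_index(local_build, 1)   # local build wins on conflicts
merged = merger.resolve()

# The merged document would then be fed back into the repository's metadata.
with open("combined/repodata/modules.yaml", "w") as out:
    out.write(merged.dump_to_string())
```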

As we were discussing these createrepo issues, we also identified several gaps in the libmodulemd API that will need addressing to support them.

  • Issue 208: In order to simplify things for libsolvext and createrepo_c, libmodulemd needs to be able to decompress modules.yaml.gz or modules.yaml.bz2 streams directly (a sketch covering this and issue 209 follows this list).
  • Issue 209: In support of the createrepo issue 133 above, we need to add a routine to make it simple to exclude an entire module (defaults object and all) from a ModuleIndex object.
  • Issue 212: During our discussions, we discovered a shortcoming in the libmodulemd 2.x API: it incorrectly assumes that separate CPU architectures will always be contained in separate repositories. Neal noted that there are some prominent repositories in the wild that combine multiple architectures in a single repository. We will need to evaluate whether we can fix this without breaking the API.
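The sketch below shows the caller’s side of issues 208 and 209. The gzip handling is what libmodulemd would take over once issue 208 lands, and remove_module() is the routine proposed in issue 209, which did not yet exist at the time of writing, so it is left commented out. The paths and module name are illustrative.

```python
# A sketch of the caller's view of libmodulemd issues 208 and 209.
import gzip

import gi
gi.require_version("Modulemd", "2.0")
from gi.repository import Modulemd

# Today, callers such as createrepo_c or libsolvext must decompress by hand
# before handing the YAML to libmodulemd (issue 208 would remove this step):
with gzip.open("repodata/modules.yaml.gz", "rt") as f:
    yaml_text = f.read()

idx = Modulemd.ModuleIndex.new()
ok, failures = idx.update_from_string(yaml_text, False)  # strict=False

# Issue 209: a hypothetical one-call removal of an entire module, defaults
# object and all, so createrepo_c could honor module excludes (issue 133):
# idx.remove_module("nodejs")

print(idx.get_module_names())
```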

We also discussed a number of issues around upgrades from one release to the next, including the current issues plaguing efforts around the Fedora 30 Beta. We brought in Adam Williamson of the Fedora QA team for this part of the discussion. We reached a clear agreement that the workaround currently proposed on the Fedora devel mailing list (requiring users to pass a special --setopt argument to the upgrade command) is not an acceptable solution for Fedora 30 Beta. We will ask the DNF team to find a way for the expected command, dnf --releasever=$NEXTVERSION system-upgrade download, to work without additional arguments.

Open Questions:

We had some discussions throughout the course of the hackfest that did not reach a clear consensus. The most contentious was what to do about empty profile data. There are several uncommon cases that need clear UX decisions made around them.

A module does not have default profiles specified for one or more of its streams

The module creator has not provided a defaults object in the YAML (in Fedora, this is done by asking release engineering to add one or by submitting a pull request to the fedora-module-defaults repository). This may be intentional or unintentional. Fedora QA has asked us whether they should treat the lack of a default-profile reference as a failure, since this is something that could be detected during post-compose testing and reported. My personal opinion is that for modules in Fedora itself, we should indeed treat this as a bug and require that all modules have a defaults object that properly references a set of default profiles for any stream included in that compose.
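For reference, the defaults object is a small YAML document of its own, kept separately from the module’s main metadata. Below is a minimal sketch, with illustrative module, stream, and profile names, of what such a document looks like and how a post-compose check could flag streams that carry no default profile.

```python
# A minimal sketch of a modulemd-defaults document of the kind carried in
# fedora-module-defaults; module, stream, and profile names are illustrative.
import yaml

DEFAULTS_YAML = """\
---
document: modulemd-defaults
version: 1
data:
  module: nodejs
  stream: 10            # the default stream for this module
  profiles:
    8: [default]        # default profile(s), listed per stream
    10: [default]
    11: []              # a stream with no default profile
...
"""

doc = next(yaml.safe_load_all(DEFAULTS_YAML))
for stream, profiles in doc["data"]["profiles"].items():
    if not profiles:
        print("stream %s has no default profile" % stream)
```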

The open question here is what the DNF experience should be if dnf module install modulename:modulestream is called. The current behavior is that DNF treats this as equivalent to calling dnf module enable modulename:modulestream (it makes the contents of this module explicitly available, but installs nothing at that time). Feedback from users of the RHEL 8 Beta has indicated that this is unexpected behavior for the install verb. They would prefer to see DNF report an error if install is called and results in no packages being installed, in part because the current behavior silently hides the possibility that the defaults are missing.

A module has explicitly set one or more of its streams to have no default profiles

This is a similar case to the above, except that a conscious choice was made by the module maintainer to say that this module has no reasonable default packages that could be selected. (For example, it could be a collection of popular libraries that extend a particular programming language, but there’s no obvious subset of them that makes sense to install. It may exist and have streams solely because it needs to be kept in sync with the interpreter version.)

The open question is the same as the previous one: how should dnf module install handle this case? In this particular example, it might be more acceptable to fall back to the enable behavior, since the maintainer explicitly chose not to provide a profile. However, context-sensitive differences like this can be difficult for users to reason about.

A module has a profile that contains zero RPMs

In this case, a profile definition exists in the module metadata and it explicitly contains zero RPMs. One example might be compatibility: the module previously provided a profile with that name that contained content, but it no longer does. The name may have been retained so that existing scripts do not break. If a profile contains zero packages, should it be an error to attempt to install it? If not, what should the UX look like?
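To make the shape of this case concrete, here is a small sketch of a profiles section in which one profile has been kept for compatibility but now lists zero RPMs; the module and profile names are purely illustrative.

```python
# A small sketch of the metadata shape in question: a profile retained for
# compatibility that now lists zero RPMs. Names are made up for illustration.
import yaml

PROFILES_SNIPPET = """\
profiles:
  client:              # historical profile name kept so existing scripts work
    rpms: []           # ...but it no longer installs anything
  default:
    rpms: [nodejs, npm]
"""

profiles = yaml.safe_load(PROFILES_SNIPPET)["profiles"]
empty = [name for name, body in profiles.items() if not body.get("rpms")]
print("profiles that would install nothing:", empty)
```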