This is the second part of a series I promised during my Nest With Fedora talk (also called “Exploring Our Bugs”). In this post, I’ll be analyzing the bug report resolutions from Fedora Linux 19 to Fedora Linux 32. If you want to do your own analysis, the Jupyter notebook and source data are available on Pagure. These posts are not written to advocate any specific changes or policies. In fact, they may ask more questions than they answer.

End of life

When a bug report is closed in Bugzilla, the closer sets a resolution—basically a description of why it is closed. Beginning with Fedora Linux 19, we started closing bugs with an “EOL” resolution. This signified that the release had reached end of life. (This is why the analysis starts at F19.) Every six months when I do the EOL closure, I see people complaining about how Fedora never fixes their bugs. So I wanted to know how often that’s the case.

For most releases, 40–50% of bug reports are closed EOL. That’s higher than anyone would like. But what stood out to me was the periodic nature of the percentages. I don’t have a good explanation for that. I thought perhaps it was related to the RHEL development schedule, but that doesn’t seem to be the case.
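The per-release percentages above can be computed with a short groupby. This is a minimal sketch, assuming a DataFrame with one row per closed bug and columns named `release` and `resolution`; both the column names and the sample rows are my own illustration, not the actual schema of the source data on Pagure.

```python
import pandas as pd

# Hypothetical sample of closed bug reports (illustrative data only).
bugs = pd.DataFrame({
    "release": [19, 19, 19, 19, 20, 20, 20, 20, 20],
    "resolution": ["EOL", "EOL", "ERRATA", "WONTFIX",
                   "EOL", "CURRENTRELEASE", "EOL", "EOL", "NOTABUG"],
})

# Share of each release's closed bugs that were closed EOL, as a percentage.
eol_pct = (
    bugs.groupby("release")["resolution"]
    .apply(lambda r: (r == "EOL").mean() * 100)
    .round(1)
)
print(eol_pct)
```

With the toy data above, release 19 comes out at 50% EOL closures and release 20 at 60%.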

Breaking the EOL closures down by component, I wanted to see which components have the lowest EOL closure rates.

[Table: Five components with the lowest EOL closure rates; columns: Component, EOL closures]

And the highest? 2,062 components have a 100% (non-duplicate) EOL closure rate.
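The same kind of groupby gives the per-component rates. This sketch assumes the same hypothetical `component`/`resolution` columns as before; the minimum-report threshold is my own addition to keep tiny components, which trivially land at 0% or 100%, out of the extremes (the post's 2,062 components at 100% suggests many components have only a handful of reports).

```python
import pandas as pd

# Hypothetical sample of closed bug reports (illustrative data only).
bugs = pd.DataFrame({
    "component": ["kernel", "kernel", "kernel", "bash", "bash", "tiny-pkg"],
    "resolution": ["EOL", "ERRATA", "DUPLICATE", "EOL", "EOL", "EOL"],
})

# Drop DUPLICATE closures, as in the post's analysis.
closed = bugs[bugs["resolution"] != "DUPLICATE"]

rates = (
    closed.groupby("component")["resolution"]
    .agg(total="size", eol_rate=lambda r: (r == "EOL").mean())
)

# Only rank components with enough non-duplicate reports (threshold is arbitrary).
lowest = rates[rates["total"] >= 2].nsmallest(5, "eol_rate")
print(lowest)
```

Here `kernel` ranks lowest at a 0.5 EOL rate, while the single-report `tiny-pkg` is filtered out before ranking.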

Happy or sad?

Beyond just the EOL closures, I wanted to look at how many bug reports were closed “successfully”. To do this, I took the (non-duplicate) resolutions and put them into the three categories below. We can debate the inclusion of certain resolutions in a category, but this arrangement seemed like a good starting point. I also considered including DUPLICATE in the “sad maintainer” category, but decided to exclude it from the analysis.
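The categorization step looks roughly like a dictionary lookup over the resolution values. The mapping below is my guess at a plausible assignment; the actual category membership used in the post lives in the notebook on Pagure, and the category names are taken from the chart.

```python
import pandas as pd

# Hypothetical resolution-to-category mapping (my assumption, not the
# post's actual assignment).
CATEGORY = {
    "CURRENTRELEASE": "happy", "ERRATA": "happy", "NEXTRELEASE": "happy",
    "RAWHIDE": "happy", "UPSTREAM": "happy",
    "WONTFIX": "sad user", "CANTFIX": "sad user", "EOL": "sad user",
    "NOTABUG": "sad maintainer", "WORKSFORME": "sad maintainer",
    "INSUFFICIENT_DATA": "sad maintainer",
}

resolutions = pd.Series(
    ["ERRATA", "EOL", "EOL", "WORKSFORME", "DUPLICATE", "WONTFIX"]
)

# DUPLICATE is absent from the mapping, so it maps to NaN and is
# dropped, matching its exclusion from the analysis.
categorized = resolutions.map(CATEGORY).dropna()
counts = categorized.value_counts()
print(counts)
```

Which resolution belongs in which bucket is exactly the debatable part the paragraph above mentions; moving a single resolution between categories can shift the totals noticeably.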

Bug report resolutions by category
Bug report resolution by category per release

As the chart above shows, the sad resolutions outnumber the happy resolutions in every release. The "sad user" resolutions also far exceed the "sad maintainer" resolutions each time.

What’s next?

It might be worth doing an analysis by component to see which have the highest and lowest happy resolution rates. I’m not sure what action we could take from that. I also wonder what other distributions and large upstreams look like in this regard. Are we worse at fixing bugs or are we just more honest about things that won’t get fixed?

In the meantime, there’s one more post planned. That last post will review our time-to-resolution stats. You can explore the data yourself, or look at my slides for more tables. If you have theories to explain anything you see in this post, let’s discuss in the comments.