<p><em>Myon's Debian Blog (Christoph Berg)</em></p>
<h1><a href="https://www.df7cb.de/blog/2024/vcswatch-git-filter.html">vcswatch and git --filter</a> (2024-03-18)</h1>
<p>Debian is running a "<a href="https://qa.debian.org/cgi-bin/vcswatch">vcswatch</a>"
service that keeps track of the status of all packaging repositories that have a
<a href="https://www.debian.org/doc/manuals/developers-reference/best-pkging-practices.de.html#vcs"><tt>Vcs-Git</tt></a>
(and other VCSes) header set and shows which repos might need a package upload to push pending changes out.</p>
<p>Naturally, this is a lot of data and the scratch partition on qa.debian.org
had to be expanded several times, up to 300 GB in the last iteration.
Attempts to reduce that size using shallow clones (<tt>git clone --depth=50</tt>)
saved no more than a few percent of space. Running <tt>git gc</tt> on
all repos helps a bit, but is tedious, and as Debian grows, the repos keep
growing both in size and number. I ended up blocking all repos with
checkouts larger than a gigabyte, and still the only cure was expanding the
disk or lowering the blocking threshold.</p>
<p>Since we only need a tiny bit of info from the repositories, namely the content
of <tt>debian/changelog</tt> and a few other files from <tt>debian/</tt>, plus
the number of commits since the last tag on the packaging branch, it made sense
to try to get the info without fetching a full repo clone. The question of
whether we could grab that solely via the GitLab API at salsa.debian.org was never
really answered. But then, in <a href="https://bugs.debian.org/1032623">#1032623</a>,
Gábor Németh suggested the use of
<a href="https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt"><tt>git clone --filter blob:none</tt></a>.
As things go, this sat unattended in the bug report for almost a year until the
next "disk full" event made me give it a try.</p>
<p>The <tt>blob:none</tt> filter makes git clone omit all files, fetching only commit and
tree information. Any blob (file content) needed at run time is
transparently fetched from the upstream repository and stored locally. This
turned out to be a game-changer: the (largish) repositories I tried it on
shrank to 1/100 of their original size.</p>
<p>Poking around, I figured we could do even better by using <tt>tree:0</tt> as
the filter. This additionally omits all trees from the git clone, again
fetching the information only at run time when needed. Some of the larger repos I
tried it on shrank to <em>1/1000</em> of their original size.</p>
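<p>Here is a self-contained sketch of the two filter levels; it builds a toy
repository first, since a partial clone needs nothing more than an ordinary
git URL (in vcswatch, that would be the package's <tt>Vcs-Git</tt> repository
instead; all names and contents below are made up):</p>

```shell
# Build a toy repository to clone from.
dir=$(mktemp -d); cd "$dir"
git init -q src && cd src
git config user.email you@example.com && git config user.name you
mkdir debian && echo "pkg (1.0-1) unstable; urgency=medium" > debian/changelog
git add . && git commit -qm "initial"
# Partial clone must be allowed on the serving side:
git config uploadpack.allowFilter true
cd "$dir"

# blob:none fetches commits and trees, but no file contents;
# tree:0 fetches commits only, pulling trees and blobs on demand.
git clone -q --filter=blob:none "file://$dir/src" noblobs
git clone -q --filter=tree:0   "file://$dir/src" notrees

# Missing objects are fetched transparently when actually needed:
git -C notrees show HEAD:debian/changelog
```

The on-demand fetching is what makes this workable for vcswatch: the few
<tt>debian/</tt> files it inspects are pulled in, everything else stays on the
server.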
<p>I deployed the new option on qa.debian.org and scheduled all repositories to
fetch a new clone on the next scan:</p>
<p><img src="https://www.df7cb.de/blog/2024/df-month.png"></p>
<p>The initial dip from 100% to 95% is my first "what happens if we block repos
> 500 MB" attempt. Over the following week, the filtered clones reduced the
overall disk consumption from almost 300 GB to 15 GB, a <em>20-fold</em> reduction. Some
repos shrank from gigabytes to below a megabyte.</p>
<p>Perhaps I should make all my git clones use one of the filters.</p>
<h1><a href="https://www.df7cb.de/blog/2023/popcon-postgresql.html">PostgreSQL Popularity Contest</a> (2023-08-26)</h1>
<p>Back in 2015, when PostgreSQL 9.5 alpha 1 was released, I had posted the
<a href="https://www.df7cb.de/blog/2015/PostgreSQL_9.5_in_Debian.html">PostgreSQL data from Debian's popularity contest</a>.</p>
<p>8 years and 8 PostgreSQL releases later, the graph now looks like this:</p>
<p><a href="https://qa.debian.org/popcon-graph.php?packages=postgresql+postgresql-7.4+postgresql-8.0+postgresql-8.1+postgresql-8.2+postgresql-8.3+postgresql-8.4+postgresql-9.0+postgresql-9.1+postgresql-9.2+postgresql-9.3+postgresql-9.4+postgresql-9.5+postgresql-9.6+postgresql-10+postgresql-11+postgresql-12+postgresql-13+postgresql-14+postgresql-15+postgresql-16&show_installed=on&want_legend=on&want_ticks=on&from_date=&to_date=&hlght_date=&date_fmt=%25Y-%25m&beenhere=1"><img src="https://www.df7cb.de/blog/2023/popcon-postgresql.png"></a></p>
<p>Currently, the most popular PostgreSQL version on Debian systems is still PostgreSQL 13 (shipped in Bullseye), followed by PostgreSQL 11 (Buster). At the time of writing,
PostgreSQL 9.6 (Stretch) and PostgreSQL 15 (Bookworm) share third place, with 15 rising quickly.</p>
<h1><a href="https://www.df7cb.de/blog/2021/postgresql-undelete.html">PostgreSQL and Undelete</a> (2021-11-17)</h1>
<h2>pg_dirtyread</h2>
<p>Earlier this week, I updated <a href="https://github.com/df7cb/pg_dirtyread">pg_dirtyread</a>
to work with <a href="https://www.postgresql.org/docs/14/index.html">PostgreSQL 14</a>.
pg_dirtyread is a PostgreSQL extension that allows reading "dead" rows from
tables, i.e. rows that have already been deleted or updated. Of course, that
only works if the table has not yet been cleaned up by a VACUUM command or by
autovacuum, PostgreSQL's garbage collection machinery.</p>
<p>Here's an example of pg_dirtyread in action:</p>
<pre><code># create table foo (id int, t text);
CREATE TABLE
# insert into foo values (1, 'Doc1');
INSERT 0 1
# insert into foo values (2, 'Doc2');
INSERT 0 1
# insert into foo values (3, 'Doc3');
INSERT 0 1
# select * from foo;
id │ t
────┼──────
1 │ Doc1
2 │ Doc2
3 │ Doc3
(3 rows)
# delete from foo where id < 3;
DELETE 2
# select * from foo;
id │ t
────┼──────
3 │ Doc3
(1 row)
</code></pre>
<p>Oops! The first two documents have disappeared.</p>
<p>Now let's use pg_dirtyread to look at the table:</p>
<pre><code># create extension pg_dirtyread;
CREATE EXTENSION
# select * from pg_dirtyread('foo') t(id int, t text);
id │ t
────┼──────
1 │ Doc1
2 │ Doc2
3 │ Doc3
</code></pre>
<p>All three documents are still there, but only one of them is visible.</p>
<p>pg_dirtyread can also show PostgreSQL's system columns with the row location and
visibility information. For the first two documents, xmax is set, which means
the row has been deleted:</p>
<pre><code># select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
ctid │ xmin │ xmax │ id │ t
───────┼──────┼──────┼────┼──────
(0,1) │ 1577 │ 1580 │ 1 │ Doc1
(0,2) │ 1578 │ 1580 │ 2 │ Doc2
(0,3) │ 1579 │ 0 │ 3 │ Doc3
(3 rows)
</code></pre>
<h2>Undelete</h2>
<p><strong>Caveat:</strong> <em>I'm not promising any of the ideas described below will actually work in
practice. There are a few caveats, and a good portion of intricate knowledge
about PostgreSQL internals may be required to succeed. Consider
consulting your favorite PostgreSQL support channel for advice if you need to
recover data on any production system.</em> <strong>Don't try this at work.</strong></p>
<p>I had always planned to extend pg_dirtyread with some "undelete" command to
make deleted rows reappear, but never got around to trying that. However, rows can already be
restored by using the output of pg_dirtyread itself:</p>
<pre><code># insert into foo select * from pg_dirtyread('foo') t(id int, t text) where id = 1;
</code></pre>
<p>This is not a true "undelete", though - it just inserts new rows from the data
read from the table.</p>
<h2>pg_surgery</h2>
<p>Enter <a href="https://www.postgresql.org/docs/current/pgsurgery.html">pg_surgery</a>,
a new extension shipped with PostgreSQL 14. It contains
two functions to "perform surgery on a damaged relation". As a side effect,
they can also make deleted tuples reappear.</p>
<p>As I have now discovered, one of the functions, heap_force_freeze(), works nicely
with pg_dirtyread. It takes a list of ctids (row locations) that it marks as
"frozen" and, at the same time, as "not deleted".</p>
<p>Let's apply it to our test table, using the ctids that pg_dirtyread can read:</p>
<pre><code># create extension pg_surgery;
CREATE EXTENSION
# select heap_force_freeze('foo', array_agg(ctid))
from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text) where id = 1;
heap_force_freeze
───────────────────

(1 row)
</code></pre>
<p>Et voilà, our deleted document is back:</p>
<pre><code># select * from foo;
id │ t
────┼──────
1 │ Doc1
3 │ Doc3
(2 rows)
# select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
ctid │ xmin │ xmax │ id │ t
───────┼──────┼──────┼────┼──────
(0,1) │ 2 │ 0 │ 1 │ Doc1
(0,2) │ 1578 │ 1580 │ 2 │ Doc2
(0,3) │ 1579 │ 0 │ 3 │ Doc3
(3 rows)
</code></pre>
<h2>Disclaimer</h2>
<p>Most importantly, none of the above methods will work if the data you just
deleted has already been purged by VACUUM or autovacuum. These actively zero
out reclaimed space. Restore from backup to get your data back.</p>
<p>Since both pg_dirtyread and pg_surgery operate outside the normal PostgreSQL
MVCC machinery, it's easy to create corrupt data using them. This includes
duplicated rows, duplicated primary key values, indexes being out of sync with
tables, broken foreign key constraints, and others. <em>You have been warned.</em></p>
<p>pg_dirtyread does not work (yet) if the deleted rows contain any
<a href="https://www.postgresql.org/docs/current/storage-toast.html">toasted</a>
values. Possible other approaches include using
<a href="https://www.postgresql.org/docs/current/pageinspect.html">pageinspect</a>
and <a href="https://wiki.postgresql.org/wiki/Pg_filedump">pg_filedump</a>
to retrieve the ctids of deleted rows.</p>
<p>Please make sure you have working backups and don't need any of the above.</p>
<h1><a href="https://www.df7cb.de/blog/2020/arm64-on-apt.postgresql.org.html">arm64 on apt.postgresql.org</a> (2020-05-04)</h1>
<p>The <a href="https://apt.postgresql.org/">apt.postgresql.org</a> repository has been extended
to cover the <em>arm64</em> architecture.</p>
<p>We had occasionally received user requests to add "arm" in the past, but it was
never really clear which kind of "arm" made sense to target for PostgreSQL. In
terms of Debian architectures, there's (at least) armel, armhf, and arm64.
Furthermore, Raspberry Pis are very popular (and indeed what most users seemed
to be asking about), but the Raspbian "armhf" port is incompatible with the
Debian "armhf" port.</p>
<p>Now that most hardware has moved to 64-bit, it was becoming clear that "arm64"
was the way to go. Amit Khandekar arranged for
<a href="https://intl.huaweicloud.com/en-us/">HUAWEI Cloud Services</a>
to donate an arm64 build host with enough resources to build the arm64 packages
at the same speed as the existing amd64, i386, and ppc64el architectures.
A few days later, all the build jobs were done, including passing all
test suites. Very few arm-specific issues were encountered, which makes me
confident that arm64 is a solid architecture to run PostgreSQL on.</p>
<p>We are targeting Debian buster (stable), bullseye (testing), and sid
(unstable), and Ubuntu bionic (18.04) and focal (20.04). To use the arm64
archive, just add the normal sources.list entry:</p>
<pre>
deb https://apt.postgresql.org/pub/repos/apt buster-pgdg main
</pre>
<h2>Ubuntu focal</h2>
<p>At the same time, I've added the next Ubuntu LTS release to apt.postgresql.org:
focal (20.04). It ships amd64, arm64, and ppc64el binaries.</p>
<pre>
deb https://apt.postgresql.org/pub/repos/apt focal-pgdg main
</pre>
<h2>Old PostgreSQL versions</h2>
<p>Many PostgreSQL extensions still support older server versions that are
EOL. For testing these extensions, server packages need to be available. I've
built packages for PostgreSQL 9.2+ on all Debian distributions and all Ubuntu
LTS distributions. 9.1 will follow shortly.</p>
<p>This means people can move to newer base distributions in their .travis.yml,
.gitlab-ci.yml, and other CI files.</p>
<h1><a href="https://www.df7cb.de/blog/2020/apt-archive.postgresql.org.html">Announcing apt-archive.postgresql.org</a> (2020-03-24)</h1>
<p>Users had often asked where they could find older versions of packages from
<a href="https://apt.postgresql.org/">apt.postgresql.org</a>. I had been
collecting these since about April 2013, and in July 2016, I made the packages
available via an ad-hoc URL on the repository master host, called "the morgue".
There was little repository structure, all files belonging to a source package
were stuffed into a single directory, no matter what distribution they belonged
to. Besides this not being particularly accessible for users, the main problem
was the ever-increasing need for more disk space on the repository host. We are
now at 175 GB for the archive, of which 152 GB is for the morgue.</p>
<p>Our friends from <a href="https://yum.postgresql.org/">yum.postgresql.org</a>
have had a proper archive host (yum-archive.postgresql.org) for some time
already, so it was about time to follow suit and implement a proper archive
for apt.postgresql.org as well, usable from apt.</p>
<p>So here it is:
<b><a href="https://apt-archive.postgresql.org/">apt-archive.postgresql.org</a></b></p>
<p>The archive covers all past and current Debian and Ubuntu distributions. The
apt sources.lists entries are similar to the main repository, just with "-archive"
appended to the host name and the
<a href="https://apt-archive.postgresql.org/pub/repos/apt/dists/index.html">distribution</a>:</p>
<pre>
deb https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
deb-src https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
</pre>
<p>The oldest PostgreSQL server versions covered there are 8.2.23, 8.3.23, 8.4.17,
9.0.13, 9.1.9, 9.2.4, 9.3beta1, and everything newer.</p>
<p>An example:</p>
<pre>
$ apt-cache policy postgresql-12
postgresql-12:
Installed: 12.2-2.pgdg+1+b1
Candidate: 12.2-2.pgdg+1+b1
Version table:
*** 12.2-2.pgdg+1+b1 900
500 http://apt.postgresql.org/pub/repos/apt sid-pgdg/main amd64 Packages
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
100 /var/lib/dpkg/status
12.2-2.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12.2-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12.1-2.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12.1-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12.0-2.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12.0-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12~rc1-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12~beta4-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12~beta3-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12~beta2-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
12~beta1-1.pgdg+1 500
500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
</pre>
<p>Because this is hosted on S3, browsing directories is only supported indirectly via
static index.html files, so if you want to look at a specific URL, append
"/index.html" to see it.</p>
<p>The archive is powered by a
<a href="https://git.postgresql.org/gitweb/?p=pgapt.git;a=tree;f=pgapt-db/sql">PostgreSQL database</a> and a
<a href="https://git.postgresql.org/gitweb/?p=pgapt.git;a=tree;f=repo/bin">bunch of python/shell scripts</a>,
from which the apt index files are built.</p>
<h2>Archiving old distributions</h2>
<p>I'm also using the opportunity to remove some long-retired distributions from the main
repository host. The following distributions have been moved over:</p>
<ul>
<li>Debian etch (4.0)</li>
<li>Debian lenny (5.0)</li>
<li>Debian squeeze (6.0)</li>
<li>Ubuntu lucid (10.04)</li>
<li>Ubuntu saucy (13.10)</li>
<li>Ubuntu utopic (14.10)</li>
<li>Ubuntu wily (15.10)</li>
<li>Ubuntu zesty (17.04)</li>
<li>Ubuntu cosmic (18.10)</li>
</ul>
<p>They are available as "<em>DIST</em>-pgdg" from the archive, e.g. squeeze:</p>
<pre>
deb https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
deb-src https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
</pre>
<h1><a href="https://www.df7cb.de/blog/2018/paste.html">Cool Unix Features: paste</a> (2018-03-09)</h1>
<p><em>paste</em> is one of those tools nobody uses [1]. It puts two files side by side,
line by line.</p>
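<p>A minimal illustration (file names and contents made up):</p>

```shell
# Two files, two lines each.
printf 'a\nb\n' > left
printf '1\n2\n' > right
# paste joins them line by line, separated by a tab.
paste left right
# a	1
# b	2
```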
<p>One application for this came up today where some tool was called for several
files at once and would spit out one line per file, but unfortunately not
including the filename.</p>
<pre><code>$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)
</code></pre>
<p>[1] See "J" in <a href="http://ifaq.wap.org/computers/abcsofunix.html">The ABCs of Unix</a></p>
<p>[<em>PS: I meant to blog this in 2011, but apparently never committed the file...</em>]</p>
<h1><a href="https://www.df7cb.de/blog/2018/Stepping_down_as_DAM.html">Stepping down as DAM</a> (2018-03-09)</h1>
<p>After quite some time (years actually) of inactivity as
<a href="https://www.debian.org/intro/organization#dam">Debian Account Manager</a>,
I finally decided to give back that Debian hat.
<a href="https://lists.debian.org/debian-devel-announce/2018/03/msg00001.html">I'm stepping down as DAM</a>.
I will still be around for the occasional comment from the peanut gallery, or
to provide input if anyone actually cares to ask me about the old times.</p>
<p>Thanks for the fish!</p>
<h1><a href="https://www.df7cb.de/blog/2017/Salsa_batch_import.html">Salsa batch import</a> (2017-12-25)</h1>
<p>Now that <a href="https://salsa.debian.org/">Salsa</a> is in
<a href="http://blog.snow-crash.org/blog/salsa.debian.org-git.debian.org-replacement-going-into-beta/">beta</a>,
it's time to import projects (GitLab speak for "repositories"). This is
best done in an automated fashion. Head to
<a href="https://salsa.debian.org/profile/personal_access_tokens">Access Tokens</a>
and generate a token with "api" scope, which you can then use with curl:</p>
<pre>
$ cat salsa-import
#!/bin/sh
set -eux
PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_GROUP="debian" # "debian" has id 2
SALSA_TOKEN="yourcryptictokenhere"
# map group name to namespace id (this is slow on large groups, see https://gitlab.com/gitlab-org/gitlab-ce/issues/42415)
SALSA_NAMESPACE=$(curl -s "$SALSA_URL/groups/$SALSA_GROUP" | jq '.id')
# trigger import
curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
--data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"
</pre>
<p>This will create the GitLab project in the chosen namespace, and import the repository from Alioth.</p>
<p>Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:</p>
<pre>
for f in *.git; do sh salsa-import $f; done
</pre>
<p>(<em>Update 2018-02-04</em>: Query namespace ID via the API)</p>
<h1><a href="https://www.df7cb.de/blog/2016/vcswatch_is_now_looking_for_tags.html">vcswatch is now looking for tags</a> (2016-05-29)</h1>
<p>About a week ago, I extended
<em><a href="https://qa.debian.org/cgi-bin/vcswatch">vcswatch</a></em>
to also look at tags in git repositories.</p>
<p>Previously, it was solely paying attention to the version number in the top
paragraph of debian/changelog, and would alert if that version didn't match the
package version in Debian unstable or experimental. The idea is that <em>"UNRELEASED"</em>
versions will keep nagging the maintainer
(via <a href="https://qa.debian.org/developer.php">DDPO</a>)
not to forget that some day this package needs an upload. This works for git,
svn, bzr, hg, cvs, mtn, and darcs repositories (in decreasing order of actual
usage in Debian; I had tried to add arch support as well, but
that VCS is so weird that it wasn't worth the trouble).</p>
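<p>The changelog side of that check boils down to reading the top entry's
version and distribution; a rough sketch without dpkg-parsechangelog (entry
contents made up):</p>

```shell
# A changelog whose top entry is still marked UNRELEASED.
cat > changelog <<'EOF'
hello (2.10-3) UNRELEASED; urgency=medium

  * Work in progress.
EOF
# First line has the form "source (version) distribution; urgency=..." --
# extract version and distribution from it.
sed -n '1s/^[^ ]* (\([^)]*\)) \([^;]*\);.*/\1 \2/p' changelog
# → 2.10-3 UNRELEASED
```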
<p>There are several shortcomings in that simple approach:</p>
<ul>
<li>Some packages update debian/changelog only at release time, e.g. auto-generated from the git changelog using <em>git-dch</em></li>
<li>Missing or misplaced release tags are not detected</li>
</ul>
<p>The new mechanism fixes this for git repositories by also looking at the output
of <em>git describe --tags</em>. If there are any commits since the last tag, and the
vcswatch status according to debian/changelog would otherwise be <em>"OK"</em>, a new
status <em>"COMMITS"</em> is set. DDPO will report e.g. "1.4-1+2", to be read as "2
commits since the tag [debian/]1.4-1".</p>
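<p>The tag-side computation can be sketched as follows (repository, tag, and
commit messages made up); <em>git describe --tags</em> encodes the commit count in
its output, and <em>git rev-list --count</em> yields the same number directly:</p>

```shell
# Toy repo: one tagged release, then two more commits.
dir=$(mktemp -d); cd "$dir"; git init -q
git config user.email you@example.com; git config user.name you
git commit -q --allow-empty -m "release 1.4-1"
git tag debian/1.4-1
git commit -q --allow-empty -m "fix one"
git commit -q --allow-empty -m "fix two"

git describe --tags     # prints debian/1.4-1-2-g<hash>: 2 commits since the tag
# Count commits since the most recent tag explicitly:
git rev-list --count "$(git describe --tags --abbrev=0)..HEAD"
# → 2
```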
<p>Of the 16644 packages using git in Debian, currently 7327 are "OK", 2649 are in
the new "COMMITS" state, and 4227 are "NEW". 723 are "OLD" and 79 are "UNREL"
which indicates that the package in Debian is ahead of the git repository. 1639
are in an ERROR state.</p>
<p>So far the new mechanism works for git only, but other VCSes could be added as
well.</p>
<h1><a href="https://www.df7cb.de/blog/2015/10_Years_Debian_Developer.html">10 Years Debian Developer</a> (2015-09-05)</h1>
<p>I knew it was about this time of the year 10 years ago when my Debian account
was created, but I couldn't remember the exact date until I looked it up
earlier this evening: today :). Rene Engelhard had been my advocate, and Marc
Brockschmidt my AM. Thanks guys!</p>
<p>A lot of time has passed since then, and I've worked in various parts of the
project. I became an application manager almost immediately, and quickly got
into the NM front desk as well, revamping parts of the NM process which had
become pretty bureaucratic (I think we are now, 10 years later, back where we
should be, with almost all of the paperwork automated; thanks, Enrico!). I've
processed 37 NMs, most of them between 2005 and 2008; later I was only active
as front desk and eventually as Debian account manager. I've recently picked
up AMing again, which I still find quite refreshing, as the AM
will always also learn new things.</p>
<p>Quality Assurance was and is the other big field. Starting by doing QA uploads
of orphaned packages, I attended some QA meetings around Germany, and picked up
maintenance of the DDPO pages, which I still maintain. The link between QA and
NM is the MIA team where I was active for some years until they kindly kicked
me out because I was MIA there myself. I'm glad they are still using some of
the scripts I was writing to automate some things.</p>
<p>My favorite MUA is mutt, of which I became co-maintainer in 2007, and later
maintainer. I'm still listed in the uploaders field, but admittedly I haven't
really done anything there lately.</p>
<p>Also in 2007 I started working at credativ, after having been a research
assistant at the university, which meant making my Debian work professional. Of
course it also meant more real work and less time for the hobby part, but I was
still very active around that time. In 2010 I got married, and we had
two kids, at which point family was of course much more important, so my Debian
involvement dropped to a minimum. (Mostly lurking on IRC ;)</p>
<p>Being a PostgreSQL consultant at work, it was natural to start looking into the
packaging, so I started submitting patches to postgresql-common in 2011, and
became a co-maintainer in 2012. Since then, I've mostly been working on
PostgreSQL-related packages, of which far too many have my (co-)maintainer
stamp on them. To link the Debian and PostgreSQL worlds together, we started an
external repository (apt.postgresql.org) that contains packages for the
PostgreSQL major releases that Debian doesn't ship. Most of my open source time
at the moment is spent on getting all PostgreSQL packages in shape for Debian
and this repository.</p>
<p>According to minechangelogs, currently 844 changelog entries in Debian mention
my name or were authored by me. Scrolling back yields memories of packages
that are long gone from unstable, or that I passed on to other maintainers.
There are way too many people in Debian that I enjoy(ed) working with to list
them here, and many of them are my friends. Debian is really the extended
family on the internet. My last DebConf before this year had been in Mar del
Plata - I had met some people at other conferences like FOSDEM, but meeting
(almost) everyone again in Heidelberg was very nice. I even remembered all
basic Mao rules :D.</p>
<p>So, thanks to everyone out there for making Debian such a wonderful place to
be!</p>