LoRa APRS iGate

The official documentation of https://github.com/lora-aprs/LoRa_APRS_iGate uses the PlatformIO plugin for MS Visual Studio Code. Here are the commands to get it running without the GUI:

git clone https://github.com/lora-aprs/LoRa_APRS_iGate.git
cd LoRa_APRS_iGate
  • Edit data/is-cfg.json with your station info
  • Edit platformio.ini: board = ttgo-lora32-v21
pip3 install platformio

pio run
...
Building .pio/build/lora_board/firmware.bin
...

pio run --target upload
...
Uploading .pio/build/lora_board/firmware.bin
...

pio run --target uploadfs
...
Building SPIFFS image from 'data' directory to .pio/build/lora_board/spiffs.bin
/is-cfg.json
...
Uploading .pio/build/lora_board/spiffs.bin
...

LoRa APRS Tracker

The procedure for the tracker is the same, but the GPS module might need a reset first:

git clone https://github.com/lora-aprs/TTGO-T-Beam_GPS-reset.git
cd TTGO-T-Beam_GPS-reset
pio run -e ttgo-t-beam-v1
pio run --target upload -e ttgo-t-beam-v1
# screen /dev/ttyACM0 115200

... and then upload https://github.com/lora-aprs/LoRa_APRS_Tracker.git

Posted Fri Apr 1 10:49:04 2022 Tags:

Classic ham radio transceivers have physical connectors for morse keys and microphones. When the transceiver is a software defined radio (SDR) device, voice operation is easy by attaching a headset, but solutions for connecting a morse key, be it a straight key or paddles, to a modern PC are rare. In the old days, machines had serial ports with RTS/DTR lines that could be used for keying, but these have disappeared, so a new interface is needed.

I am using a LimeSDR as ground station for the QO-100 satellite, and naturally also wanted to do CW operation there. I started with SDRangel, which has a built-in morse generator, but of course I wanted to connect a real CW key. At first sight, all the bits are there: a tune button that could be used as a straight key, as well as keyboard bindings for dots and dashes. But the key-to-local-audio delay is almost a full second, so that's a no-go. I then hacked my K3NG keyer to output ^ (high) and _ (low) markers on the USB interface, with a smallish Python program reading them and sending SDRangel REST API requests. That worked, but the solution always felt "too big" to me, and the sidetone from the buzzer inside the Arduino case could be heard in the whole house. On top of that, the total TX-RX delay was well over a second.

Next I tried building GNU Radio flowgraphs to solve the same problem, but they all shared the same trouble: the buffers grew way too big for the sidetone to be usable for keying. At the same time, I switched the transceiver from SDRangel to another GNU Radio flowgraph, which reduced the overall TX-RX delay to something much shorter, but the local audio delay was still too long for CW.

So after some back and forth, I came up with this solution: the external interface from the CW paddles to the PC is a small DigiSpark board programmed to output MIDI signals, and on the (Linux) PC side, a Python program listens for MIDI and acts as an iambic CW keyer. The morse dots and dashes are uploaded as "samples" to PulseAudio, where they are played both on the local sidetone channel (usually headphones) and on the audio channel driving the SDR transceiver. There is no delay. :)

DigiSpark hardware

The DigiSpark is a very small embedded computer that can be programmed using the Arduino toolchain.

Of the 6 IO pins, two are used for the USB bus, two connect the dit and dah lines of the CW paddle, one connects to a potentiometer for adjusting the keying speed, and the last one is unconnected in this design, but could be used for keying a physical transceiver. (The onboard LED uses this pin.)

            +---------------+
            |            P5 o  -- 10k potentiometer middle pin
        =====  Attiny85  P4 o  -- USB (internal)
   USB  -----            P3 o  -- USB (internal)
        -----            P2 o  -- dah paddle
        =====   78M05    P1 o  -- (LED/TRX)
            |            P0 o  -- dit paddle
            +---o-o-o-------+

There is an extra 27 kΩ resistor in the ground connection of the potentiometer to keep the P5 voltage > 2.5 V, or else the DigiSpark resets. (This could be changed by blowing some fuses, but is not necessary.)
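As a sanity check (assuming the potentiometer is wired across the 5 V supply, with the 27 kΩ resistor between its lower end and ground): even with the wiper turned all the way down, the divider still yields 5 V × 27 kΩ / (10 kΩ + 27 kΩ) ≈ 3.6 V at P5, comfortably above the 2.5 V reset threshold.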

DigiSpark keyer

The Arduino sketch for the keyer uses the DigisparkMIDI library. The code is quite simple: when a paddle is pressed, send a MIDI note_on event (dit = note 1, dah = note 2); when it is released, send note_off. When the potentiometer is changed, send a control_change event (controller 3); the value read is conveniently scaled to WPM speed values between 8 and 40.

    // report paddle state as MIDI notes: pressed = note_on, released = note_off
    if (dit)
      midi.sendNoteOn(NOTE_DIT, 1);
    else
      midi.sendNoteOff(NOTE_DIT, 0);

    if (dah)
      midi.sendNoteOn(NOTE_DAH, 1);
    else
      midi.sendNoteOff(NOTE_DAH, 0);

    // report speed pot changes as control_change on controller 3
    if (new_speed != old_speed)
      midi.sendControlChange(CHANNEL_SPEED, new_speed);

The device uses a generic USB id that is recognized by Linux as a MIDI device:

$ lsusb
Bus 001 Device 008: ID 16c0:05e4 Van Ooijen Technische Informatica Free shared USB VID/PID pair for MIDI devices

$ amidi -l
Dir Device    Name
IO  hw:2,0,0  MidiStomp MIDI 1

$ aseqdump -l
 Port    Client name                      Port name
 24:0    MidiStomp                        MidiStomp MIDI 1

$ aseqdump --port MidiStomp
Source  Event                  Ch  Data
 24:0   Control change          0, controller 3, value 24
 24:0   Note on                 0, note 1, velocity 1
 24:0   Note on                 0, note 2, velocity 1
 24:0   Note off                0, note 1, velocity 0
 24:0   Note off                0, note 2, velocity 0
 24:0   Control change          0, controller 3, value 25
 24:0   Control change          0, controller 3, value 26

Python and PulseAudio software

On the Linux host side, a Python program listens for MIDI events and acts as an iambic CW keyer, converting the stream of note on/off events into CW signals.

Instead of providing a full audio stream, dit and dah "samples" are uploaded to PulseAudio and triggered via the pulsectl library. On speed changes, new samples are uploaded. The samples are played on two channels: one for the sidetone on the operator's headphones, and one on the audio input device driving the SDR transmitter.

[Audio sample: 24 wpm dit (50 ms)]
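
To illustrate the sample approach, here is a minimal sketch (not the actual midicwkeyer code, which uses pulsectl; this one shells out to pactl, and the file name, sample name, tone frequency, and speed are made up for illustration, only the "tx0" sink name is from this article). It generates one dit tone, uploads it into the PulseAudio sample cache once, and then plays it on both sinks without any per-element upload cost:

#!/usr/bin/env python3
# Sketch: cache a CW dit in PulseAudio and trigger it on two sinks.
import math, struct, subprocess, wave

WPM = 24
RATE = 48000
FREQ = 600  # sidetone pitch in Hz, pick to taste

dit_len = 1.2 / WPM  # standard CW timing: dit length = 1.2/wpm seconds

# Generate a mono 16-bit sine tone (a real keyer would shape the
# envelope to avoid key clicks).
n = int(RATE * dit_len)
with wave.open("dit.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(
        struct.pack("<h", int(32000 * math.sin(2 * math.pi * FREQ * i / RATE)))
        for i in range(n)))

# Upload once; PulseAudio keeps the sample in its cache until replaced.
subprocess.run(["pactl", "upload-sample", "dit.wav", "dit"], check=True)

# Trigger the cached sample: once on the TX sink, once on the default
# sink as sidetone. No audio data is transferred at keying time.
subprocess.run(["pactl", "play-sample", "dit", "tx0"], check=True)
subprocess.run(["pactl", "play-sample", "dit"], check=True)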

The virtual "tx0" audio device can be created on boot using this systemd config snippet:

# $HOME/.config/systemd/user/pulseaudio.service.d/override.conf
[Service]
ExecStartPost=/usr/bin/pacmd load-module module-null-sink sink_name=tx0 sink_properties=device.description=tx0
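
After adding the override, reload the user units and restart PulseAudio so the sink appears (assuming PulseAudio runs as a systemd user service, as the override path above implies):

$ systemctl --user daemon-reload
$ systemctl --user restart pulseaudio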

The CW text sent is printed on stdout:

$ ./midicwkeyer.py
TX port is tx0 (3)
Sidetone port is Plantronics Blackwire 3225 Series Analog Stereo (7)
 CQ CQ DF7CB

Download

Needless to say, this is open source: https://github.com/df7cb/df7cb-shack/tree/master/midicwkeyer

Posted Wed Mar 23 18:53:16 2022 Tags:

pg_dirtyread

Earlier this week, I updated pg_dirtyread to work with PostgreSQL 14. pg_dirtyread is a PostgreSQL extension that allows reading "dead" rows from tables, i.e. rows that have already been deleted or updated. Of course, that only works if the table has not been cleaned up yet by a VACUUM command or by autovacuum, PostgreSQL's garbage collection machinery.

Here's an example of pg_dirtyread in action:

# create table foo (id int, t text);
CREATE TABLE
# insert into foo values (1, 'Doc1');
INSERT 0 1
# insert into foo values (2, 'Doc2');
INSERT 0 1
# insert into foo values (3, 'Doc3');
INSERT 0 1

# select * from foo;
 id │  t
────┼──────
  1 │ Doc1
  2 │ Doc2
  3 │ Doc3
(3 rows)

# delete from foo where id < 3;
DELETE 2

# select * from foo;
 id │  t
────┼──────
  3 │ Doc3
(1 row)

Oops! The first two documents have disappeared.

Now let's use pg_dirtyread to look at the table:

# create extension pg_dirtyread;
CREATE EXTENSION

# select * from pg_dirtyread('foo') t(id int, t text);
 id │  t
────┼──────
  1 │ Doc1
  2 │ Doc2
  3 │ Doc3
(3 rows)

All three documents are still there, but only one of them is visible.

pg_dirtyread can also show PostgreSQL's system columns with the row location and visibility information. For the first two documents, xmax is set, which means the rows have been deleted:

# select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
 ctid  │ xmin │ xmax │ id │  t
───────┼──────┼──────┼────┼──────
 (0,1) │ 1577 │ 1580 │  1 │ Doc1
 (0,2) │ 1578 │ 1580 │  2 │ Doc2
 (0,3) │ 1579 │    0 │  3 │ Doc3
(3 rows)

Undelete

Caveat: I'm not promising that any of the ideas described below will actually work in practice. There are a few caveats, and a good portion of intricate knowledge about PostgreSQL internals may be required to succeed. Consider consulting your favorite PostgreSQL support channel for advice if you need to recover data on a production system. Don't try this at work.

I always had plans to extend pg_dirtyread to include some "undelete" command to make deleted rows reappear, but never got around to trying that. But rows can already be restored by using the output of pg_dirtyread itself:

# insert into foo select * from pg_dirtyread('foo') t(id int, t text) where id = 1;

This is not a true "undelete", though - it just inserts new rows from the data read from the table.

pg_surgery

Enter pg_surgery, a new PostgreSQL extension supplied with PostgreSQL 14. It contains two functions to "perform surgery on a damaged relation". As a side effect, they can also make deleted tuples reappear.

As I have now discovered, one of the functions, heap_force_freeze(), works nicely with pg_dirtyread. It takes a list of ctids (row locations) of tuples that it marks as "frozen" and, at the same time, as "not deleted".

Let's apply it to our test table, using the ctids that pg_dirtyread can read:

# create extension pg_surgery;
CREATE EXTENSION

# select heap_force_freeze('foo', array_agg(ctid))
    from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text) where id = 1;
 heap_force_freeze
───────────────────

(1 row)

Et voilà, our deleted document is back:

# select * from foo;
 id │  t
────┼──────
  1 │ Doc1
  3 │ Doc3
(2 rows)

# select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
 ctid  │ xmin │ xmax │ id │  t
───────┼──────┼──────┼────┼──────
 (0,1) │    2 │    0 │  1 │ Doc1
 (0,2) │ 1578 │ 1580 │  2 │ Doc2
 (0,3) │ 1579 │    0 │  3 │ Doc3
(3 rows)

Disclaimer

Most importantly, none of the above methods will work if the data you just deleted has already been purged by VACUUM or autovacuum. These actively zero out reclaimed space. Restore from backup to get your data back.

Since both pg_dirtyread and pg_surgery operate outside the normal PostgreSQL MVCC machinery, it's easy to create corrupt data using them. This includes duplicated rows, duplicated primary key values, indexes being out of sync with tables, broken foreign key constraints, and others. You have been warned.

pg_dirtyread does not work (yet) if the deleted rows contain any toasted values. Possible other approaches include using pageinspect or pg_filedump to retrieve the ctids of deleted rows.
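
For example, pageinspect's heap_page_items() lists all item pointers on a heap page, including dead ones; tuples with a non-zero t_xmax have been deleted or updated. A sketch for page 0 of our test table (output omitted):

# create extension pageinspect;
# select lp, t_ctid, t_xmin, t_xmax from heap_page_items(get_raw_page('foo', 0));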

Please make sure you have working backups and don't need any of the above.

Posted Wed Nov 17 16:46:51 2021 Tags:

The apt.postgresql.org repository has been extended to cover the arm64 architecture.

We had occasionally received user requests to add "arm" in the past, but it was never really clear which kind of "arm" made sense to target for PostgreSQL. In terms of Debian architectures, there are (at least) armel, armhf, and arm64. Furthermore, Raspberry Pis are very popular (and indeed what most users seemed to be asking about), but the Raspbian "armhf" port is incompatible with the Debian "armhf" port.

Now that most hardware has moved to 64-bit, it was becoming clear that "arm64" was the way to go. Amit Khandekar made it happen that HUAWEI Cloud Services donated an arm64 build host with enough resources to build the arm64 packages at the same speed as the existing amd64, i386, and ppc64el architectures. A few days later, all the build jobs were done, including passing all test suites. Very few arm-specific issues were encountered, which makes me confident that arm64 is a solid architecture to run PostgreSQL on.

We are targeting Debian buster (stable), bullseye (testing), and sid (unstable), and Ubuntu bionic (18.04) and focal (20.04). To use the arm64 archive, just add the normal sources.list entry:

deb https://apt.postgresql.org/pub/repos/apt buster-pgdg main

Ubuntu focal

At the same time, I've added the next Ubuntu LTS release to apt.postgresql.org: focal (20.04). It ships amd64, arm64, and ppc64el binaries.

deb https://apt.postgresql.org/pub/repos/apt focal-pgdg main

Old PostgreSQL versions

Many PostgreSQL extensions still support older server versions that are EOL. For testing these extensions, server packages need to be available. I've built packages for PostgreSQL 9.2+ on all Debian distributions, and on all Ubuntu LTS distributions. 9.1 will follow shortly.

This means people can move to newer base distributions in their .travis.yml, .gitlab-ci.yml, and other CI files.

Posted Mon May 4 11:20:28 2020 Tags:

Users had often asked where they could find older versions of packages from apt.postgresql.org. I had been collecting these since about April 2013, and in July 2016, I made the packages available via an ad-hoc URL on the repository master host, called "the morgue". There was little repository structure: all files belonging to a source package were stuffed into a single directory, no matter what distribution they belonged to. Besides this not being particularly accessible for users, the main problem was the ever-increasing need for more disk space on the repository host. We are now at 175 GB for the archive, of which 152 GB is for the morgue.

Our friends from yum.postgresql.org have had a proper archive host (yum-archive.postgresql.org) for some time already, so it was about time to follow suit and implement a proper archive for apt.postgresql.org as well, usable from apt.

So here it is: apt-archive.postgresql.org

The archive covers all past and current Debian and Ubuntu distributions. The apt sources.list entries are similar to those for the main repository, just with "-archive" appended to the host name and the distribution:

deb https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
deb-src https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main

The oldest PostgreSQL server versions covered there are 8.2.23, 8.3.23, 8.4.17, 9.0.13, 9.1.9, 9.2.4, 9.3beta1, and everything newer.

An example:

$ apt-cache policy postgresql-12
postgresql-12:
  Installed: 12.2-2.pgdg+1+b1
  Candidate: 12.2-2.pgdg+1+b1
  Version table:
 *** 12.2-2.pgdg+1+b1 900
        500 http://apt.postgresql.org/pub/repos/apt sid-pgdg/main amd64 Packages
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
        100 /var/lib/dpkg/status
     12.2-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.2-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.1-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.0-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.0-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~rc1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta4-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta3-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta2-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages

Because this is hosted on S3, browsing directories is only supported indirectly by static index.html files, so if you want to look at some specific URL, append "/index.html" to see it.

The archive is powered by a PostgreSQL database and a bunch of python/shell scripts, from which the apt index files are built.

Archiving old distributions

I'm also using the opportunity to remove some long-retired distributions from the main repository host. The following distributions have been moved over:

  • Debian etch (4.0)
  • Debian lenny (5.0)
  • Debian squeeze (6.0)
  • Ubuntu lucid (10.04)
  • Ubuntu saucy (13.10)
  • Ubuntu utopic (14.10)
  • Ubuntu wily (15.10)
  • Ubuntu zesty (17.04)
  • Ubuntu cosmic (18.10)

They are available as "DIST-pgdg" from the archive, e.g. squeeze:

deb https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
deb-src https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
Posted Tue Mar 24 12:08:48 2020 Tags:

paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
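
For example (output columns are separated by tabs):

$ paste <(seq 3) <(seq 101 103)
1	101
2	102
3	103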

One application for this came up today: some tool was called for several files at once and would spit out one line per file, unfortunately without including the filename.

$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)

[1] See "J" in The ABCs of Unix

[PS: I meant to blog this in 2011, but apparently never committed the file...]

Posted Fri Mar 9 10:06:21 2018 Tags:

After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.

Thanks for the fish!

Posted Fri Mar 9 09:58:06 2018 Tags:

Now that Salsa is in beta, it's time to import projects (= GitLab speak for "repository"). This is probably best done in an automated fashion. Head to Access Tokens and generate a token with "api" scope, which you can then use with curl:

$ cat salsa-import
#!/bin/sh

set -eux

PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_GROUP="debian" # "debian" has id 2
SALSA_TOKEN="yourcryptictokenhere"

# map group name to namespace id (this is slow on large groups, see https://gitlab.com/gitlab-org/gitlab-ce/issues/42415)
SALSA_NAMESPACE=$(curl -s https://salsa.debian.org/api/v4/groups/$SALSA_GROUP | jq '.id')

# trigger import
curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
  --data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"

This will create the GitLab project in the chosen namespace, and import the repository from Alioth.

Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:

for f in *.git; do sh salsa-import $f; done

(Update 2018-02-04: Query namespace ID via the API)

Posted Mon Dec 25 16:43:30 2017 Tags:

This blog is powered by ikiwiki.