Server moves

Up until now I have been running three Raspberry Pi 4 systems, all held in a metal frame with fans, which makes a nice neat setup. One Pi does the home automation, one runs Pi-hole (really useful!), and one is a server with an SSD attached. Not long ago, while we were out of the country (of course), the website hosted by the server failed. I did not have remote ssh access set up, nor a VPN. When we got back home the Pi had lost the filesystem on the SSD. The disk was still mounted, but not accessible. Being a server, all logging was on the SSD, so no errors were captured. A reboot was the only way to cure it.

I thought it was a one-off until it happened again, this time while I was nearby. After that our broadband was upgraded to FTTP, and with PlusNet giving a fixed IP and no blocks I moved my production websites and email server across to the Pi, saving the rental of the VPS I had been using until then. The cost of the VPS covered the annual cost of the broadband, so it worked out well.

Then the Pi lost the SSD twice in three days. This time it was worse: on reboot it did not start Apache or fail2ban, even though both started fine by hand once I realised. So something is wrong in the setup and I cannot work out what.

So I pressed an old Lenovo miniPC into action and rebuilt the server onto it. One thing I learned ages ago was to document everything that goes onto the server (and indeed all my other systems) so I can simply run through the list, add everything back in and copy /home across. Relatively QED.

But this time round something caught me out. After everything was moved across to the Lenovo and the IP addresses were swapped so it became the server, web access was still going to the Pi. It transpired that the PlusNet broadband router associates port forwarding with a physical device, not an IP address. Easy enough to sort out via web access to the broadband hub, but one more thing to remember (and duly documented!)

RTTY Contest Operation and Messages

In 1985, I built a home-brew decoder and experimented with RTTY, but I never got it to work. I've since decided that I didn't know how to tune RTTY properly. Things changed in 2005, when I downloaded CocoaModem and made my first RTTY contacts.

Since I was involved in contesting, I naturally turned to RTTY contesting. Today, it is unusual to hear RTTY signals on the bands except during contests. Thirty or more years ago, RTTY was commonly heard on 80 and 20m. 

Characteristics

Several characteristics of RTTY must be understood in order to communicate effectively: 
  • RTTY has no error correction or detection -- unlike AMTOR, Packet, FT4 or FT8. This means that whatever prints might be wrong. And if it is wrong, you will not know. 
  • RTTY prints garbage. Without a signal, random characters print. This further complicates determining what is correct and what is not. 
  • RTTY does not handle multiple signals well. When two or more stations call at the same time, RTTY will not print reliably. Certain decoders may print the strongest signal, if you are lucky.
  • RTTY text comes in a continuous stream. Long lines wrap to the next, or one can force a new line by sending a carriage return / line feed combination. Wrapped lines are often difficult to read.
  • RTTY has two shift states, LETTERS and FIGURES, in the Baudot encoding. RTTY rests in the LETTERS state. An unprinted FIGURES character is transmitted to shift to the FIGURES state. A similar unprinted LETTERS character can be sent to shift back, or one can automatically unshift on a space character. 

Principles

For effective RTTY contest communication, several principles apply. 

  • Brevity - every character sent must have a purpose. There should be no wasted characters.
  • Duplication - every important element should be sent twice. This contradicts the brevity principle. Because RTTY prints incorrect characters, sending important elements twice helps ensure correct reception.
  • Scrolling - each message starts a new line, but ends with a space. This technique keeps lines from wrapping, and avoids the end of message being confused by garbage characters when the signal drops. 
  • Shifts - avoid needless shifts. Any sequence involving the unprinted FIGURES or LETTERS characters takes longer to send (a short counting sketch follows this list). 
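
To make the cost of shifts concrete, here is a small counting sketch of my own (not taken from any contest software, and with an abbreviated character set) that tallies how many Baudot characters a message actually occupies once the unprinted FIGS/LTRS characters are inserted, assuming the stream starts in LETTERS and unshifts on space:

#include <cctype>
#include <cstring>
#include <iostream>
#include <string>

// Count Baudot characters actually transmitted, including unprinted shifts.
// Assumes the stream starts in LETTERS and unshifts on every space.
int baudotLength(const std::string &msg) {
  const char *figures = "0123456789-?:$!&#'()\"/;,.";  // abbreviated FIGURES case
  bool letters = true;                                  // rest state is LETTERS
  int count = 0;
  for (char c : msg) {
    if (c == ' ') {
      count++;
      letters = true;                                   // unshift on space
    } else if (std::isalpha((unsigned char)c)) {
      if (!letters) { count++; letters = true; }        // unprinted LTRS
      count++;
    } else if (std::strchr(figures, c)) {
      if (letters) { count++; letters = false; }        // unprinted FIGS
      count++;
    }
  }
  return count;
}

int main() {
  // "599" costs 4 characters (FIGS 5 9 9); "5NN" costs 5 (FIGS 5 LTRS N N).
  std::cout << "599 -> " << baudotLength("599") << "\n";
  std::cout << "5NN -> " << baudotLength("5NN") << "\n";
}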

Messages

(I'm using N1MM messages for my examples. Other software may have different macro names and techniques, but the same principles apply)

Every message starts with a {TX} and ends with {RX}. This transitions the software to transmit and back to receive. 

S & P

Let's say you want to answer someone's CQ. This means you need to send your call. For that, you'd use a macro like this:

{TX}{ENTERLF}{MYCALL} {MYCALL} {RX}

or

{TX}{ENTERLF}* * {RX}

(For N1MM, the asterisk and {MYCALL} macros are the same)

Notice the message starts with {TX}, performs a carriage return / line feed with {ENTERLF}, sends the call twice, ends with a space and then {RX} to go back to receive. Sending the call twice helps to ensure the recipient receives it correctly.

If you are lucky enough to get a response, you'll have to send the exchange. The exchange will vary by contest, but it could be a message like this:

{TX}{ENTERLF}! 599 GA GA DE {MYCALL} {RX}

This is what I send in the RTTY Roundup. First is the recipient's call (!). Then 599 -- don't use 5NN, because the N's force an extra unprinted LETTERS shift after the FIGURES shift, so 5NN actually takes longer to send in RTTY -- and send it only once, because it isn't important. Then the exchange is sent twice, followed by the prosign DE, my call, and a trailing space. 

N1MM's authors recommend you use the ! character rather than the {CALL} macro. The reason is that {CALL} isn't subject to correction -- it sends the contents of the Call field at the start of the message. The ! character will send the Call field as it is being corrected in real time. As a practical matter, most RTTY contest contacts involve pointing and clicking on callsigns, so there's less typing, and therefore fewer corrections involved.

A couple of things here. Notice I did not use the {EXCH} macro above. When there are multiple elements to the exchange, I put the repetitions together, so I tend to put the exchange information into the macro directly. For example, here's an S & P exchange for CQWW RTTY:

{TX}{ENTERLF}! 599 GA GA 5 5 DE {MYCALL} {RX}

GA for Georgia, and 5 for zone 5. For NAQP RTTY, it would be:

{TX}{ENTERLF}! 599 BILL BILL GA GA DE {MYCALL} {RX}

Some might balk at the use of the DE prosign, particularly for exchanges that involve a state or section, since DE might be confused with Delaware. However, I think this prosign is useful, as it establishes the callsign is of the answering station, and not the CQing station.

CQing

When running in a contest, the CQ call is the most-used message:

{TX}{ENTERLF}CQ RU {MYCALL} {MYCALL} CQ {RX}

Note that the important information -- the callsign -- is repeated. The other curious thing is the "CQ" at the end. This indicates I finished a CQ message. This is important because one cannot tell when potential callers tune in to your signal. If they do so during the first callsign, they can't tell whether you are calling or answering a CQ. Putting "CQ" at the end establishes that you are calling CQ. And it is shorter than "QRZ?".

Naturally, one indicates the contest in the CQ message. Here it is "RU" for Round Up. Use whatever is appropriate for the contest, or simply "TEST".

When someone answers your call, you send an exchange message:

{TX}{ENTERLF}! 599 GA GA ! {RX}

Note that the exchange is sent twice, and if there were more than one element to the exchange, I'd send those twice as well:

{TX}{ENTERLF}! 599 BILL BILL GA GA ! {RX}

Another item to notice is that there is no {MYCALL} macro in this message. Instead, the caller's callsign (!) is sent twice, once at the beginning and once at the end. There are two reasons for this. First, it follows the principle of sending important information twice. The caller's callsign may have printed incorrectly to me, or it may print incorrectly when I send the message back. If I only send the callsign once and it is wrong, the caller may or may not catch and correct it. 

Unnecessary corrections are a waste of time, but necessary corrections are desired. 

Second, other stations may still be calling while you send the exchange. That creates a good chance that the initial callsign in the response will print incorrectly. If you don't send the callsign again at the end, it could be unclear whom you responded to. 

Once you've received the exchange from the caller, you send an acknowledgement:

{TX}{ENTERLF}! TU DE {MYCALL} CQ {RX}

Short and simple, with two features: the DE prosign, to indicate this is the transmitting station's call, and the closing "CQ" to invite new callers.

Turnaround

Occasionally, multiple callsigns will print in response to a CQ. You can only respond to one at a time, which leaves someone waiting. Rather than have them call again, you can use a turnaround message, which acknowledges the completed contact and starts a new one:

{TX}{ENTERLF}! TU {LOGTHENGRAB}NOW..{ENTERLF}{F5} 599 GA GA {F5} {RX}

This message omits {MYCALL}, and uses the {LOGTHENGRAB} macro to first log, then grab the callsign off the automatic decode stack, then it follows with the normal exchange. If you use Single Operator Call Stacking, you can use {LOGTHENPOP} instead. See the N1MM manual.

Note that instead of using the exclamation point (!), we use the {F5} macro. Neither the exclamation point nor the {CALL} macro is updated by the {LOGTHENGRAB} macro, but {F5} is.

Short

When signals are strong and the bands are quiet, perhaps the principle of sending information twice doesn't apply. Most RTTY contests allow contacts on multiple bands, and the exchange doesn't change. In these cases, you may want to have short messages handy. Here are some examples:

{TX}{ENTERLF}! 599 BILL GA DE {MYCALL} {RX} -- short S & P exchange

{TX}{ENTERLF}! 599 BILL GA {RX} -- short exchange for S & P or CQing

{TX}{ENTERLF}599 BILL BILL GA GA {RX} -- repeat of just the exchange 

{TX}{ENTERLF}CQ RU {MYCALL} CQ {RX} -- short CQ

{TX}{ENTERLF}TU DE {MYCALL} CQ {RX} -- short acknowledgement

All these should be used when you have solid copy, want to get back to other callers quickly, or you are fairly certain the other operator already has your exchange information from a previous contact.

Tips

Here are some helpful tips I've picked up over the last decade:
  • Use Slow AGC - Fast AGC can confuse decoders and introduce print errors
  • Use TX Filtering on AFSK - If you are using MMTTY or similar software, use the 512 tap TX Filter. It helps transmit a cleaner signal.
  • Listen with Headphones - sometimes you can hear signals that don't print; with headphones, you can hear the stations calling you. It also helps you improve your timing in a pileup.
On that last tip, turn the volume on the headphones way down. You just have to sense when signals are there; you aren't decoding them. (I believe it was the late Irv Hoff, W6FFC (SK) -- an RTTY pioneer -- who suffered hearing loss at 2125 and 2295 Hz from listening to RTTY signals.)

Practical Messages 

There are a handful of other messages you may wish to have handy. Here's one I use often, for when you didn't copy anything that was sent:

{TX}{ENTERLF}AGN AGN {RX}

Or perhaps you need a fill of one element:

{TX}{ENTERLF}STATE? STATE? {RX}

{TX}{ENTERLF}NR? NR? {RX}

{TX}{ENTERLF}NAME? NAME? {RX} 

Before you open up with a CQ on a frequency, this is a good one:

{TX}{ENTERLF}QRL? DE {MYCALL} {RX}

 Maybe if you are not sure someone is calling you:

{TX}{ENTERLF}QRZ DE {MYCALL} {MYCALL} {RX}

Or the short version:

{TX}{ENTERLF}QRZ DE {MYCALL} {RX}

Every once in a while, a directed call is useful, especially when two stations are calling CQ on top of each other:

{TX}{ENTERLF}! DE {MYCALL} {MYCALL} {RX} 

Conclusion

RTTY contests are a ton of fun. Program a set of messages and try it. You'll like it.

The Great LoTW Outage - Continues.

Update July 1, 2024. LoTW is back up! It is running slow, but it is available. Thank goodness.

--

When I wrote the article back in May, I hardly thought that LoTW would be down a month later.

Sadly, the outage continues. 

My suspicions were correct, however, that this was something more than a simple networking problem. The ARRL has since admitted their network was viciously and uniquely hacked. I can certainly understand their caution to make sure that every system linked to LoTW is given a clean bill of health before turning the system back on.

Earlier this week, on Tuesday, there was apparently a brief period when LoTW was accessible. A couple of my ham buddies managed to upload some contacts. They'll have to wait for confirmations until the rest of us can get in.

I do hope it is soon. I'm really missing this service.

Arduino thermometer using DS18B20 and OLED display

This article describes an Arduino-based thermometer using a 1-Wire DS18B20 digital temperature sensor and an SSD1306 or SH1106 OLED display.

The DS18B20 is a digital sensor, chosen for its relative noise immunity, especially given the choice of an OLED display.

This is a basis for tinkering, for modification and a vehicle for learning.

Above is the sample display.

The code is written to support multiple sensors on the 1-Wire bus; it cycles through the sensors, displaying each for 1 s.

Parasitic power

The DS18B20 can be connected using a two wire connection and using “parasitic power”.

Above is a simple scheme for parasitic power which should work with one master and one sensor at the other end of the cable for tens of metres. For longer cables and multiple sensors, see (Maxim 2014).

Note that the Vdd pin is tied to ground.

1-Wire

Above is a capture of the DQ line for search and read of a single DS18B20.

Above is a zoomed in view of the 1-wire encoding format.

I2C display

The display uses I2C.

It takes just under 25ms to paint the display using the example code.
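
That figure came from simple instrumentation. A minimal way to measure it yourself (my own addition, assuming the display and MySerial objects declared in the listing below) is to wrap the frame-buffer transfer:

// Hypothetical helper, not part of the original sketch: report how long
// display.display() takes to push the frame buffer over I2C.
void timedDisplay() {
  unsigned long t0 = millis();
  display.display();                 // I2C transfer of the whole frame buffer
  MySerial.print(F("paint ms: "));
  MySerial.println(millis() - t0);   // just under 25 ms on this 128x32 panel
}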

Source code

Here is source code that compiles in the Arduino IDE v2.3.2.

#define VERSION "0.02"
#include <Wire.h> // I2C (assumed; the header name is missing from the listing)
#define SCREEN_WIDTH 128 // OLED display width, in pixels
#define SCREEN_HEIGHT 32 // OLED display height, in pixels
#define TEMPMIN -20
#define BARGRAPH
#define PPD 2 //pixels per degree, must be +ve integer
#define TICKMIN 5
#define TICKMAJ 10
#define SSD1306_DISPLAY
//#define SH1106G_DISPLAY
#if defined(SSD1306_DISPLAY)
#define OLED_RESET    -1 // Reset pin # (or -1 if sharing Arduino reset pin)
#include <Adafruit_SSD1306.h> // (assumed; the header name is missing from the listing)
Adafruit_SSD1306 display=Adafruit_SSD1306(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, OLED_RESET);
#endif
#if defined(SH1106G_DISPLAY)
#include <Adafruit_SH110X.h> // provides Adafruit_SH1106G (assumed; header name missing from the listing)
#define WHITE SH110X_WHITE
#define BLACK SH110X_BLACK
Adafruit_SH1106G display=Adafruit_SH1106G(SCREEN_WIDTH,SCREEN_HEIGHT,&Wire);
#endif
#include <DS18B20.h> // (assumed; the header name is missing from the listing)
DS18B20 ds(2);

#if defined(__AVR_ATmega328P__)
HardwareSerial &MySerial=Serial;
#endif

int i;
int barh=SCREEN_HEIGHT/2-2;
int basey=display.height()-1;
int tickh=barh/4;

void setup(){
  float adcref;
  long adcfs;

  #if defined(__AVR_ATmega328P__)
  analogReference(INTERNAL);
  adcref=1.10;
  adcfs=1024;
  #endif
  analogRead(A2); //Read ADC2
  delay (500); // Allow ADC to settle
  float vbat=analogRead(A2); //Read ADC again
  vbat=16*(vbat + 0.5)/(float)adcfs*adcref; //Battery voltage: ADC reading times divider ratio (150k+10k)/10k=16
  // Display startup screen
  MySerial.begin(9600);
  MySerial.println(F("Starting..."));
  #if defined(SSD1306_DISPLAY)
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  //Initialize with the I2C address 0x3C.
  #endif
  #if defined(SH1106G_DISPLAY)
  display.begin(0x3C, true); // Address 0x3C default
  #endif
  display.setTextColor(WHITE);
  display.clearDisplay();
  display.setTextSize(1);
  display.setCursor(0, 0);
  display.print("DS18B20 thermometer");
  display.setCursor(0, 12);
  display.print("ardds18b20 ver: ");
  display.println(VERSION);
  display.print("vbat: ");
  display.println(vbat,1);
  display.display();
  delay(1000);
}

void loop(){
  int i,j;
  float temp;
  uint8_t id[8];
  char buf[27];

  j=1;
  while (ds.selectNext()){
    //for each sensor
    ds.getAddress(id);
    sprintf(buf," %02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X ",id[0],id[1],id[2],id[3],id[4],id[5],id[6],id[7]);
    temp=(ds.getTempC());
    MySerial.print(j);
    MySerial.print(buf);
    MySerial.print(temp,2);
    MySerial.println(F(" °"));
    display.clearDisplay();
    display.setCursor (0,0);
    display.setTextSize(2);
    display.print(j);
    display.print(F(": "));
    display.print(temp,1);
    display.print((char)247);
    
    #if defined(BARGRAPH)
    int w=(temp-TEMPMIN)*PPD;
    //draw bar starting from left of screen:
    display.fillRect(0,display.height()-1-barh,w,barh,WHITE);
    display.fillRect(w+1,display.height()-barh-1,display.width()-w,barh,BLACK);
    //draw tick marks
    for(int i=0;i<SCREEN_WIDTH;i=i+PPD*TICKMIN) display.fillRect(i,basey-barh+3*tickh,1,barh-3*tickh,i>w?WHITE:BLACK);
    for(int i=0;i<SCREEN_WIDTH;i=i+PPD*TICKMAJ) display.fillRect(i,basey-barh+2*tickh,1,barh-2*tickh,i>w?WHITE:BLACK);
    if(TEMPMIN<0) display.fillRect((0-TEMPMIN)*PPD,basey-barh+tickh,1,barh-tickh,(0-TEMPMIN)*PPD>w?WHITE:BLACK); //mark 0 degrees in the contrasting colour
    #endif

    display.display();
    j++;
    delay(1000);
    }
  delay(100);
  }

This code suits a 128x32 pixel display. Changes will be needed to optimise it for other display resolutions.
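
As a rough guide (an untested assumption on my part, not something tested for the article), a 128x64 panel mainly needs the height define changed; the bar-graph geometry is derived from the display height, so it is chiefly the text layout that may need attention:

#define SCREEN_HEIGHT 64 // OLED display height, in pixels (was 32)
// barh, basey and tickh are all computed from SCREEN_HEIGHT / display.height(),
// so the bar graph scales automatically; the setCursor() and setTextSize() calls
// in setup() and loop() may need re-spacing for the taller panel.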

Github repository

See https://github.com/owenduffy/ardds18b20 for code updates.

See https://github.com/owenduffy/tds18b20 for a platformio implementation.

References

Maxim. 2014. Guidelines for long 1-wire networks.

Last update: 7th July, 2024, 3:58 AM

RealVNC Changes Terms, without Notice.

Just over three years ago, I figured out how to remotely operate FT8 using a product called RealVNC. 

RealVNC had a Home plan that allowed up to 3 users and up to 5 devices for non-commercial use. Perfect for remotely controlled computers in a ham radio shack.

Today, without any notice, RealVNC disabled my Home plan, and I had to choose between paying each month for a plan, or adopting their Lite plan, which allows 1 user and up to 3 devices for non-commercial use.

That's fine. They allow me to use their secure remote access software without fees. I can understand they might want to change the terms.

The Lite plan fits my usage. I've only ever had two devices active anyway, and it's just me as the user. 

But, without notice - that is just damned inconvenient. Since I switched plans, I need to visit each device and re-configure it to be part of the new plan, which means I can't remote into those computers until that is done. 

And, of course, since I'm remote, I'm not there.

Quite inconvenient.



The Great LoTW Outage

On May 16th, there was an issue with Logbook of The World (LoTW). I could not load the main page at all -- I received an error indicating the server wasn't responding.

That's pretty normal stuff, actually. There are dozens of problems that can result in this kind of error, so I wasn't surprised. I figured the ARRL staff would address it quickly. But, after much of the day, I was still getting the error. 

So, I sent a message to lotw-help@arrl.org, informing them that the web site wasn't responding, kindly asking when they expected it to be back up. I mentioned I was surprised there was no notice of the outage on the ARRL.org web site.

Later that day, the ARRL put up a notice that there was a service disruption involving access to the network, and that it affected LoTW and the ARRL Learning Center. They even updated it the next day, addressing concerns users had over information privacy.

But then, nothing happened. Not until May 22nd, when they updated the notice without really adding any information. 

Now, part of this delay may be because much of the ARRL staff were out at the Xenia Hamvention. But that was a week ago.

What gives? Sure, networking problems. Honestly, though, as a computer professional I know networking problems generally don't take more than a week to solve. I'm beginning to suspect there's something more that the ARRL hasn't told us, but I can't be sure.

I'm really missing access to LoTW. In the last 20 years, it has really become central in my enjoyment of the hobby. I do hope I'm wrong, and that ARRL manages to fix this problem soon.

Goodbye Mr. Chip…

“Zilog has called time on the Z80 CPU.” (https://www.theregister.com/2024/04/29/opinion_z80/) Wow. Actually I had no idea (through never having checked) that it was still being produced.

And a fine chip it was too. I never built a system from wires up using the Z80 though. My first system, designed and built from chips and wire-wrap, was an 8080 system, hand programmed to control an x-ray diffractometer. This was decades ago now but I still remember it, although I have no photos unfortunately. The system had a timer chip for a 1-second count and was interfaced to a Nuclear Engineering (I think it was!) counter that used nixies.

But I did at least use Z80s, just that they came as boards. The first was a Transam Triton computer, and by then I was programming in Turbo Pascal – back then this was really neat, as one could have procedures full of assembler code, which made interfacing easy. Later I used Gemini boards, which also gave the ability to have a graphics card. By then my interfacing to the diffractometer included a stepper motor and shaft encoder to control the arc motor.

In the end there were two sets of Gemini Z80 boards, one for the x-ray diffractometer and one for an optical microdensitometer. Both gathered data and were interfaced to a mainframe computer for the processing using a suite of Algol 60 programs. Good old days…

Personally my first system was a 6502 Newbear single board, followed by the ubiquitous Nascom 1 which was, of course, Z80 based.

Farewell, Z80…

Screen moves

I now have a Raspberry Pi set up on one of the four monitors in the shack. The original layout was two screens at the top on Linux, Windows bottom left, and the Mac bottom right – central to where I sit – as the main screen. But that layout had two major issues…

I use a program called Barrier to basically act as a KVM switch for the three systems, with the Mac as server. That way the Mac mouse and keyboard control any of the systems, although it can be awkward sometimes where Windows expects keys which Apple doesn't have. But Barrier does not understand dual monitors, so moving the mouse up from the Mac got to the Linux box fine, but moving it down from the left-hand screen would not get to Windows, as the program does not know it is physically there. I could live with that, except for issue two…

The main issue was with the Linux screens being at the top, making me sit back or crane my neck upwards – not a good position.

So…

I got to realising that although I use both screens on the Linux box for radio stuff, this tends to be wsjt-x on the right screen and pskreporter on the left.

The solution, which somehow never occurred to me, was simple. Move all the wiring about so that the Mac is right and central, Linux is to the left at eye level so no neck ache, Windows is top right because I rarely use it anyway, and that left a dead screen top left. Enter a Pi 4B. So now I can arrange the four screens with pskreporter top right, Hamclock top left, wsjt-x bottom left and logging bottom right. QED.

QSO logging

Some time ago I wanted a logging program that would do things my way. Although there is absolutely nothing wrong with any of the various offerings they generally try to be everything for everyone and none of them really sat well with me. So I wrote my own in PHP (learning Python is high on my list of things to do, along with Mandarin, Morse, cooking…) which uses the QRZ.com logbook as the backend. Ok then, really I wrote a series of various scripts in PHP that make it all work. The advantage is it does just what I need and nothing more and can easily be modified to add functionality. The downside is I never was a coder (well, ok, I have a certification in COBOL from the 1970’s!) and it is not going anywhere other than my own server. So you can’t have it…

The way I tend to log stuff is via wsjt-x or other software that logs to a local file. I then have a script that takes the ADIF data and populates QRZ.com on a QSO-by-QSO basis. Somehow having to actually do something after each QSO feels like I am actually engaging in the process. But I am not a contester… it would simply not work for any stress situations (but then I could easily make it work if I so desired…)

With QRZ.com being the master a script then populates a local database which does all manner of stuff that I personally need. For example, it holds records of eQSL sent/received, real QSL sent/received, and various tabular data for Worked All Britain (WAB).

Scripts also modify the wsjt-x log file on all my systems such that each has a record of all QSOs. As QRZ.com is globally accessible (not tried from China mind… not that I plan to take any radio gear there anyway) and my main database is on a VPS, which is also globally accessible, the various scripts work from anywhere.

I do plan to move the database from the VPS to a system at home once we get FTTP broadband and use the VPS as a backup, synchronising between the two. But that will wait.

One plan which is more immediate is LoTW integration, because as yet my LoTW logging is via QRZ.com, which means an extra step. No biggie, I mean it's just a few clicks and a password… but it would be nice to integrate it. The same goes for eQSL sends, but as yet I only send on receipt and I have scripts to deal with that anyway.

Pi reduction

I’ve been rationalising hardware, in particular as the PoE HAT on the Pi running the GB7RVB packet mailbox was noticeably noisy and needs replacing. I had originally moved the packet mailbox off of my AMPRnet router Pi as I needed to install a VPN and the networking was becoming a bit too complex for my liking. In the end I had no use for the VPN, so GB7RVB has gone back, removing one Pi.

Linbpq went across just fine – there is an apt package for it (https://wiki.oarc.uk/packet:linbpq-apt-installation) so installation is easy. Just install and copy across the config and the files under /opt/oarc/bpq (there are neater ways, but this sledgehammer method works). With the node running I could access it via the web interface as expected, but then the axudp route disappeared.

Then I realised that our broadband router had a NAT rule for the UDP port needed for axudp and that was still pushing it to the now switched off Pi. And I’m sure I’ve forgotten this same thing before! So now I have a note as a reminder, assuming I bother to check the note…

Now having removed one Pi with a noisy fan the NTP server Pi is also whining. Grumble.

How 1984 wasn't like "1984."

In 1984, I was working at Hayes Microcomputer Products. They were the premier modem manufacturer for small computers, back in the days when modems over telephone lines were a primary means of computer-to-computer and user-to-computer communications. 

In my job, I created communications software to talk to the modems. The software dialed the modem, established connection, provided terminal emulation (my specialty), allowed for the capture of the data stream to files, printing, file transfer with the remote computer (using protocols like XMODEM and YMODEM), and other features. 

These were the early days of personal computing. IBM introduced the PC in 1981, and it had rapidly evolved into a de facto standard computer, shoving out various CP/M designs from the previous decade. Personal computers were so new, people were trying to figure out what to do with them. Word processing, spreadsheets and other office applications had just been introduced. 

Hayes was trying to stay at the forefront. We had a laboratory filled with pretty much one of every personal computer, and when new ones came out, we would buy one. In late 1983, we got an Apple Lisa. It was a very different kind of computing experience. It was a curiosity to us, and as there was no programming environment available, we didn't see how we could build software to talk to a modem. Plus, at the price point, there were few buyers.

The Macintosh

Though the Macintosh was introduced in January of 1984, I didn't get my hands on one until the late spring of 1984. Yes, we brought one into the lab, and it immediately garnered a lot of attention. 

While there were similarities to the Apple Lisa, the small screen with square pixels just seemed sharper and more distinct. The whole interface was friendly and approachable. We messed with MacWrite, MacPaint, and MacDraw. We printed on an ImageWriter, making appreciably decent images unlike anything we could do on another type of computer. There were several of us hooked and enthusiastic.

It's hard to describe those days. At this point, everyone has had decades to become familiar with computers that use a graphical user interface and a mouse or other pointing device to interact. Back then, it was a revelation. It was much more approachable than the command-line interfaces of the day. 

As I described it to someone in the early 90s -- other computer interfaces required one to reach toward the computer. You had to learn the special language and commands of that computer. The Macintosh was the first computer that reached back toward you -- the user.

The Machine

The Macintosh was based on a 16-bit Motorola MC68000 processor, running at 8 MHz. This was more than competitive with the Intel-based IBM clones circulating at the time. This processor was a great choice by Apple. It had many registers and powerful instructions for manipulating the bit-mapped screen.

The biggest constraint was memory. Of the 128 KB in the Macintosh, 24 KB was used for the screen and several more KB for the operating system, leaving about 90 KB to run your program. Most of the critical operating system routines were in the Macintosh ROMs, which saved space. Building a program of any sophistication was difficult -- it was very tight to work with.

The single 400 KB floppy disk drive was also a limitation. Trying to save a file to another diskette could produce an endless amount of swapping. It was the lack of additional storage that kept me from buying a Mac until the Mac SE/20 was introduced in 1987. 

Next Steps

By summer, Hayes hired some consultants to look into the feasibility of developing communications software for the Macintosh. In just a few weeks, they had some rudimentary software going and concluded that it was quite feasible. 

We were soon given the green light to create a product for the Macintosh.

Forty Years of Personal Computing - Gimix 256 KB Static RAM

256 KB Gimix Static RAM board, sans battery.
In 1991, my employer moved to a new building. Before the move, we cleaned out storage closets containing old equipment. Much of this was obsolete gear. Things like pairs of "twiggy" disk drives removed from early Apple Lisa systems upgraded to 3 1/2" disks in 1985.

In one closet, we discovered something unusual. It was a complete Gimix III "Ghost" system. This was a 2 MHz 6809 system sporting a fifteen-slot SS-50 motherboard with eight SS-30 slots, plus floppy disks: a top-of-the-line 6809 system from the early 1980s. 

By 1991, the company had no use for this equipment. I had the impulse to take the entire system home, but I didn't have room. My wife and I were living in a small house and the garage was already packed. She would not have been happy if I brought home a bunch of equipment. 

Instead, I salvaged exactly one board -- a Gimix 256K CMOS Static RAM board. It sported 256 KB of memory, with several options, including battery backup. The rest was scrapped by an electronics recycler. 

After obtaining the board, I tried it out in my system. I was able to map in 4 KB blocks of memory and test them. They all worked. I figured I might use the additional memory as part of a virtual disk drive. 

In 1994, I moved, and the entire system was stored away for over 25 years. Looking at it recently, I found it needed repair. Over the years, the backup battery failed and leaked electrolyte on the board and motherboard. Several Molex connectors are damaged, and need to be replaced. Some of the components show signs of corrosion from the battery electrolyte. 

I removed the failed battery. I do hope the rest of the board still works once the repairs are complete. Perhaps I'll fix it in my retirement.

Remembering the old school dial-up BBS

All this packet radio progressing around the place reminds me of a time long ago, pre-Internet, when dial-up BBSs became the new thing in town. Back then I had a BBC Micro and a modem that ran at two speeds – I forget which now (will edit later!) – and I persuaded my mum to get BT in to fit a socket rather than the hard-wired phone we had then. This let me plug the modem in. I used to use a BBS called ‘More Summer Wine’ plus one other whose name I forget. Much of the activity back then is lost in the mists of time (or rather I just can’t remember), but sending and receiving mail was fun. BBS systems were all a part of the wider FidoNet. Mail would be routed between the various BBS systems, many of which only had the one telephone line and so would be inaccessible while that was happening. Indeed, they were mostly single user anyway, although if the sysop was there you could message them via the console of the BBS, which was probably sitting in someone’s bedroom. I am reminded of the many times I would set the BBC and modem up on the hall floor because we only had the one telephone socket. In fact, it would be quite some time between then and when we finally got broadband Internet, which for us was not until the late 1990s in our new home.

During that time, working in academia, I had routine access to networks and mail, and so my interest in BBS systems dwindled. There was a time before the wider Internet became available when we could gain network access to remote systems, all typically mini- or mainframe computers. One such system ran a MUD – a Multi-User Dungeon – another angle to remote access, but this time for gaming rather than BBS. That provided an introduction to online chatrooms, because the MUD we used to play on had that feature. One could not only progress through the game but also exchange messages online, the latter becoming the wanted feature vice the game itself.

And here we are. I was never involved in packet radio when it first came to be, but now it has reminded me a lot of those old days of the dial-up BBS.

And FidoNet? It is still there https://www.fidonet.org/

See: https://spectrum.ieee.org/social-medias-dialup-ancestor-the-bulletin-board-system

Forty Years of Personal Computing - MC6809 V2

MC6809 CPU card, version 2.
By March 1988, the MC6809E V1 card I designed in 1983 needed updates. I built an entirely new card with new features intended to run OS-9 more effectively. 

CPU

An MC6809 chip simplified things with its on-chip clock oscillator. The chip handled M.RDY without extra logic, and the rising edge of the Q clock did not need a delay.

Memory

The MC6809E V1 card had no on-board RAM; there wasn't room. By 1988, a number of manufacturers had 32 KB static RAMs in 28-pin packages, so 64 KB of memory could be realized with a couple of chips. 

For the V2 board, I allowed for eight chips, totaling 256 KB of memory. This was a good compromise between cost and the space available. The memory is logically separate from the rest of the card -- decoding from the physical address and data bus, using appropriate buffers. In this way, the memory can be accessed by a bus master other than the CPU. It responds to physical addresses C0000-FDFFF or FEFFF, jumper selectable. For years, it held two chips -- 64 KB on the board -- with only 56 KB accessible. The six remaining chips were added recently, making 248 KB or 252 KB accessible. 

Buffering

20-pin bus driver chips reduced the chip count, even with two sets of bus drivers, one for the CPU, and one for the memory array.

Program ROM

The design allows for a much larger ROM. The MC6809E V1 card originally had two 2 KB 2716-compatible sockets -- one for a ROM and another for ROM or RAM. To make swapping OS-9 and BBUG easier, I changed this to a single 4 KB 2732-compatible ROM socket.

For the MC6809 V2 board, the ROM can be a 2764, 27128 or 27256-compatible device, holding 8 KB, 16 KB or 32 KB, respectively. The larger ROM permitted more OS-9 modules to reside there, if desired. 

As built, a 2764-compatible EPROM is used, containing a BBUG image in one 4 KB half, and the OS-9 ROM image in the other 4 KB half. A jumper selects which half is active. This is much easier than swapping chips to go between BBUG and OS-9.

Accessing the correct amount of the ROM requires clever decoding. 

Decoder

A hard-wired decoder would limit the flexibility of the system, and it would be complex and difficult to change. Rather than discrete logic, the decoder consists of a Cypress Semiconductor CY7C291 2K x 8 EPROM. This is a fast device with a 70 ns access time. The CPU address lines A5 to A15 are connected directly to A0 to A10 on the chip. The decoder is enabled with the logical OR of E and Q, which asserts during three quarters of the memory cycle. This way, the eight data output pins can be used as decoder selects, programmable for every 32-byte segment of memory.

Three select lines are used: one for bus access (including the on-board memory array), one for the program ROM, and one for the DAT. Each select line is pulled up to +5v. Placing a 0 bit in the decoder ROM data array makes the select line active for that 32-byte memory segment. 

Modifying the memory map becomes a simple matter of programming the decoder ROM. I programmed the following logical memory map:
  • 0000-EFFF - Bus
  • F000-F77F - Program ROM
  • F780-F7FF - Bus
  • F800-FFFF - Program ROM
  • FFE0-FFFF - DAT (writes only)
This configuration is compatible with the existing ROMs for BBUG and OS-9, which require I/O at E000-E07F. It has 4KB of program ROM, except for the hole at F780-F7FF. This hole deserves a bit of explanation. 
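
For illustration, here is a small generator of my own (not the tooling I used at the time; the select-to-data-bit assignments are assumed) that builds the 2 KB decoder image for the map above, one byte per 32-byte segment, with active-low selects:

#include <cstdint>
#include <cstdio>

int main() {
    // Assumed data-bit assignments for the three select outputs.
    const uint8_t SEL_BUS = 0x01, SEL_ROM = 0x02, SEL_DAT = 0x04;
    uint8_t image[2048];                  // one byte per 32-byte logical segment

    for (int seg = 0; seg < 2048; seg++) {
        unsigned addr = seg * 32;         // start of this 32-byte segment
        uint8_t active = 0;

        if (addr < 0xF000 || (addr >= 0xF780 && addr < 0xF800))
            active |= SEL_BUS;            // 0000-EFFF and F780-F7FF: bus (and on-board RAM)
        else
            active |= SEL_ROM;            // F000-F77F and F800-FFFF: program ROM
        if (addr >= 0xFFE0)
            active |= SEL_DAT;            // FFE0-FFFF: DAT (write side)

        image[seg] = (uint8_t)~active;    // a 0 bit makes a select active; unused outputs stay high
    }

    FILE *f = fopen("decoder.bin", "wb"); // raw image for an EPROM programmer
    if (!f) return 1;
    fwrite(image, 1, sizeof image, f);
    fclose(f);
    return 0;
}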

I/O Port Address Migration

BBUG occupies the top 2 KB of ROM. The OS-9 ROMs take up nearly 4 KB. However, OS9p2 doesn't use the last 128 bytes of that space. This unused space became an alternate location for the I/O ports. If the I/O ports moved from E000-E07F to F780-F7FF, the MC6809 could use RAM in the logical E block (E000-EFFF), for a total of 60 KB of RAM, up from 56 KB. 

Moving the I/O address requires motherboard decoder changes and software changes to the BBUG and OS-9 ROMs, as well as revision to Flex09 and OS-9 I/O configurations. The V2 board decoder ROM would work with the existing motherboard, or with the motherboard and ROMs altered for the new I/O addresses.

Larger ROM

Once the I/O addresses are moved, the decoder can be reprogrammed to allow for more ROM space. This opens the option of moving OS-9 modules into ROM. The decoder allows the lower limit of the ROM to be changed in 32-byte increments. This allows an OS-9 system to be entirely in ROM. OS-9 would start from the reset button without requiring a boot disk.

DAT

Back side of MC6809 V2 card.

The DAT configuration is similar to the MC6809E V1 board, with one important difference. In the SWTPc MP-09 board, as well as my V1 board, the outputs of the DAT are inverted on the lower four bits (A12-A15), but non-inverted on the higher four bits (S0-S3). 

This means that values programmed into the DAT must be one's complemented on the lower four bits (A12-A15), with the higher four bits (S0-S3) not complemented. 

For the V2 board, all eight bits of the DAT are inverted on the bus. Thus, the value programmed into the DAT is the one's complement of the highest eight physical address bits (A12-A15, S0-S3). 

This makes programming correct DAT values simpler, since the entire byte is complemented.
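
A tiny sketch of the difference (my own illustration; the byte layout, with A12-A15 in the low nibble and S0-S3 in the high nibble, is an assumption based on the description above):

#include <cstdint>

// Value to write to a DAT location for a given physical 4 KB block number.
uint8_t datValueV1(uint8_t physBlock) {   // MP-09 / V1: complement the low nibble only
    return (uint8_t)((physBlock & 0xF0) | (~physBlock & 0x0F));
}

uint8_t datValueV2(uint8_t physBlock) {   // V2: complement the whole byte
    return (uint8_t)~physBlock;
}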

I introduced a hardware bug in the DAT decoder. More on this later.

Building

Rather than wirewrap, I opted to try something new. A technician from work gave me a couple of 3M Scotchflex Breadboarding kits. This breadboarding system was brilliant. Chip sockets connected to IDC pins. Wiring is accomplished by forcing wire-wrap wire between the IDC pins with a special tool. 

It is way easier than wire-wrap, because there's no tedious cutting, stripping, threading and winding of wire. One lays the wire down and pushes it on to the pins. Wiring several connections in succession, such as with a bus, is a breeze. The results also look great. The IDC pins are low profile, so there's less chance of shorting a connection than with wire-wrap.

It's sad 3M discontinued this product. It was great. 3M has since re-used the Scotchflex brand on three other products.

Fixing the Bug

The MC6809 V2 board worked great. There were no wiring errors. I did find a problem with the DAT.

In the default BBUG and OS-9 configuration, the DAT is written once during reset and never touched. And that seemed to work just fine.

Then I started playing with an OS-9 driver called VDisk. It created a virtual disk from selected extended memory blocks. At the time, I had 56 KB of memory from the MC6809 V2 card, plus another 60 KB from the Digital Research Computers / Tanner card. That made possible a 60 KB virtual disk.

Every time I tried to access the virtual disk, the computer would crash. This took a while to track down. 

I eventually realized the new decoder did not take into account the clock cycle when accessing the DAT. Transients on the R/W* line early in the clock cycle could cause bad data to be written to the DAT. After I added the missing gate, the VDisk driver worked perfectly. 

Usage

Like the MC6809E V1 board, this V2 board was exactly how I wanted it. There are only two jumpers. 

The jumper at the top edge of the board selects the 4KB portion of the EPROM. This makes it easy to switch between OS-9 and BBUG. No more hassle of changing out chips - just move a jumper.

The jumper in the middle of the board, just above the decoder ROM, enables the FE000-FEFFF block of on-board memory. It would be installed once the motherboard I/O addresses are moved out of the E-block of memory, allowing 60 KB of RAM to be used.

Future

Moving the I/O addresses out of the E-block gains 4KB more usable memory for OS-9. Perhaps I'll try that in my retirement.

Another fun project would be to put a full OS-9 Level I system into ROM. Unfortunately, all of the essential modules take up just over 16 KB of memory, so the division doesn't fall on a natural 4 KB boundary. This might cause a conflict accessing extended memory with the DAT. I'd also have to figure out how to program the decoder ROM. There are not many EPROM programmers that can program the Cypress Semiconductor CY7C291 devices, and I no longer have access to the ones I originally used. 

OS-9 Level II

This design works well for OS-9 Level I. Running OS-9 Level II, which allows each process to have a full 64 KB address space, requires more hardware. First, a second set of DAT memory chips would allow the user and supervisor states to have separate memory maps. Second, a means of switching between those maps automatically is needed -- such as when servicing and returning from interrupts. Third, the ROM would need to be accessible from an extended memory address, and then mapped into the supervisor space. 

Those requirements go beyond the scope of this design. Perhaps there's room for a V3 board. All of this assumes access to a copy of OS-9 Level II, which may be difficult to find. 

    Forty Years of Personal Computing - 5 1/4" WD2797 Disk Controller

    WD2797 controller card for 5 1/4" drives
    To work on OS-9, I borrowed some 5 1/4" drives, and used the SWTPc DC-2 controller. This allowed me to boot up OS-9. Single-sided, single-density, 40-track diskettes hold about 100 KB -- they were quite limited on space.

    Running OS-9 on single-sided, single-density 8" disks, the situation was a little better, as each drive has about 300 KB of storage. But my two-drive system was limited. Plus, I was something of an island. None of my friends using OS-9 had 8" disks, so I couldn't exchange data with them. It was time to consider 5 1/4" drives.

    5 1/4" disk drives went through considerable evolution since their 1976 introduction. The early drives were single-sided, single-density with only 35 tracks. By 1987, double-sided, double-density drives sporting 80 tracks were common. These disks could hold about 640 KB, more than twice what my single-sided, single-density 8" drives held. (And more than single-sided, double-density 8" drives could as well)

    Disk Controller

    In August 1987, I designed a 5 1/4" floppy disk controller. The 5 1/4" controller is very similar to the 8" design, with appropriate changes for the disk interface. 

    A MOTOR ON* signal is generated any time the WD2797 is accessed, with a one-shot multivibrator holding that signal for 10 seconds. Another one-shot asserts the READY signal on the WD2797 after a second of MOTOR ON*. 5 1/4" disks always have the heads loaded, so HLD is tied to HLT.
    Back side of 5 1/4" controller

    Double-density is jumper-selectable to either follow drive select bit 7, or the SSO output. Side selection is controlled by drive select bit 6. Write pre-compensation isn't used, as it was unnecessary for 5 1/4" disks. 

    I built the controller on the same piece of 0.1" perfboard that originally held the FD1771 disk controller for 8" disks. The board is a little bit smaller than the WD2797 controller for 8" disks, so it appears more densely packed. Wire-wrap techniques are used for the wiring, and a handful of connectors and discrete parts are soldered.

    Drives

    For initial troubleshooting, I borrowed the two drives and power supply from a Sage II computer from work, which I had to return. I needed my own drives.

    How many drives did I need? I decided three would be sufficient -- one boot disk, and two working disks. This would allow me to copy disk to disk, while still having the boot disk with commands in place. (And no crazy disk-swapping for copies like the original Macintosh with its single disk drive!)

    I bought two Tandon TM100-4 drives at a local hamfest. These were common surplus from Lanier word processing units at that time. When I went to buy a third drive, I could no longer find any. I ended up with a Mitsubishi M4853 drive. The specs of the drives are virtually identical, except the Mitsubishi is a half-height drive.  

    Drive Cabinet

    5 1/4" Drive Cabinet
    Finding a cabinet to house three drives was a problem. New metal cabinets are very expensive, particularly in larger sizes, and I couldn't find anything suitable on the surplus market. 

    In September 1987, I built a wooden cabinet sized for three TM100-4 drives. I used 1/4" plywood, reinforced at the corners with 1x1/2 strips. The bottom, back, sides and one quarter front panel are all glued together as one unit. The top screws on to the four corner posts. The finished unit is quite sturdy. 

    As originally built, the cabinet was plain unfinished plywood. I recently sanded and finished it with a couple of coats of polyurethane.
    Inside the box, plenty of room.

    Power comes from a 12 volt, 5 amp supply. 5 volts is provided from a single LM7805 regulator mounted to that supply. In retrospect, the LM7805 might be a bit over-taxed. I suspect the drives draw less power than their maximum specifications. Heat is removed from the cabinet by a small (but noisy) muffin fan on the back panel.

    A power switch and neon pilot light round out the front panel, giving a clear indication the unit is on.

    The controller and drives work great, easily formatting double-sided, double-density disks with 80 tracks. 

    Drives & Software

    In April of 1989, I revised all the disk drivers to handle double-density, double-sided drives. The BBUG monitor "D" command code was updated to look for double-density sectors, and the boot loader for Flex09 updated to read double-density, double-sided disks.

    For OS-9, I modified an existing driver (FD2) for the Processor Technology PT69 to work with my disk controller and created a new boot disk with several drive descriptors. The drivers and descriptors allowed for 40-track disks (which required double-stepping of tracks, and adjusting the track register), and SWTPc format, where track 0 is formatted single-density -- as well as the standard, double-density, double-sided, 80-track format.

    I updated the Boot module to handle double-density, double-sided disks and burned a new OS-9 ROM. 

    The result is a smart, efficient unit roughly the same size as the SWTPc 6800 Computer System cabinet. The fan is a little noisy, but was typical for the day. 

    Future

    The Tandon and Mitsubishi drives only require 250 ms to get up to speed after MOTOR ON*. I can shorten the timing on the one-shot driving the READY signal.

    If I can manage to find a second Mitsubishi M4853 drive, four drives would fit into the cabinet. I'd need to add a second LM7805 regulator for the 5-volt supply and split the load, two drives per regulator.

    One limitation of the WD2797 is the track to track and head settling time. These drives can move track to track in 3 ms and need 15 ms for the head to settle. The WD2797, using a 1 MHz clock for 5 1/4" drives, can only do 6 ms and 30 ms, respectively.

    Western Digital did manufacture another device, the WD1772-00. This was a 28-pin floppy disk controller for 5 1/4" drives that is software compatible with the WD179x and WD279x devices. The WD1772-00 allows faster track to track and head settling times -- up to 2 ms and 15 ms. 

    The biggest problem is finding one, as the WD1772-00 wasn't used in a lot of designs, and Western Digital stopped manufacturing them over a decade ago. Might be interesting for a V3 floppy disk controller card.




    Cariboulite success

    Well. Further to my previous post, where all hope seemed to have gone out of the window, I finally made progress today, but not the way I set out to.

    First off, I pulled a Raspberry Pi 4 from another project and sat the CaribouLite HAT on it.

    Next was a fresh installation of DragonOS. But this time it did open the ssh port – I’ve no idea why it did not before and note I am being unscientific here as I changed the Pi, but I am not going backwards.

    Then, time for install.sh…

    All seemed to go well, but the software failed to compile completely. Searching on the errors, I added #include <memory> to two source files, cariboulite/software/libcariboulite/src/CaribouLiteCpp.cpp and cariboulite/software/libcariboulite/src/CaribouLite.hpp. After stripping out all the ‘apt’s and ‘depmod’ from install.sh and running it again, the software compilation completed! I had already added and commented out the necessary lines in /boot/firmware/config.txt, so a reboot was all that was needed to kick it into life. The driver was loaded – lsmod | grep smi showed this, and /proc/device-tree/hat now exists, both precursors to success according to the notes and YouTube videos.

    Running sudo SoapySDRUtil --find showed the card, and, finally (!), running SoapySDRServer --bind let CubicSDR on the Mac mini find the server on the Pi, and I can tune to the local radio station.

    Success, but that was a struggle. Mind you, I learned stuff at least!
