
More F2 Skip Cesium Development Notes

Here's the query I'm using for Burr Trail:



============================================

select
  tx_lng,
  tx_lat,
  rx_lng,
  rx_lat,
  rm_rnb_history_pres.timestamp,
  rm_rnb_history_pres.dB,
  rm_rnb_history_pres.Spotter,
  haversine(tx_lat, tx_lng, rx_lat, rx_lng) as total_path,
  gis_partial_path_lat(tx_lat, tx_lng, rx_lat, rx_lng, 200) as el_lat,
  gis_partial_path_lng(tx_lat, tx_lng, rx_lat, rx_lng, 200) as el_lng,
  id,
  strftime('%Y%m%d', timestamp) as date,
  strftime('%H%M', timestamp) as time,
  'US-4399' as park,
  'KD0FNR' as call,
  1770.829467773438 as elev_tx
from
  rm_rnb_history_pres
where
  dB > 100
  and timestamp > '2024-05-27'
  and timestamp < '2024-05-28'
order by
  rm_rnb_history_pres.timestamp desc

============================================
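
A quick aside on the custom functions in that query: haversine and the gis_partial_path_* helpers come from our plugin code. Here's a minimal sketch of the math I believe they implement (the standalone functions, kilometer units, and 6371 km Earth radius are my assumptions, not the plugin's actual code):

import math

def haversine(lat1, lng1, lat2, lng2):
    # Great-circle distance in kilometers between two lat/lng points
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def partial_path(lat1, lng1, lat2, lng2, meters):
    # Walk `meters` along the great circle from point 1 toward point 2
    # and return the (lat, lng) of the partial path endpoint
    p1, l1 = math.radians(lat1), math.radians(lng1)
    p2, l2 = math.radians(lat2), math.radians(lng2)
    # Initial bearing from point 1 to point 2
    theta = math.atan2(
        math.sin(l2 - l1) * math.cos(p2),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(l2 - l1),
    )
    d = (meters / 1000.0) / 6371.0  # angular distance
    p3 = math.asin(math.sin(p1) * math.cos(d) + math.cos(p1) * math.sin(d) * math.cos(theta))
    l3 = l1 + math.atan2(
        math.sin(theta) * math.sin(d) * math.cos(p1),
        math.cos(d) - math.sin(p1) * math.sin(p3),
    )
    return math.degrees(p3), math.degrees(l3)

The partial path endpoint is just the point 200 meters along the great circle from the transmitter toward the receiver, which is what el_lat and el_lng capture above.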


At some point I'll add an ionosonde field using this map for each QSO. For the Boulder, CO ionosonde, the query will look like:

=========================================================

select
  tx_lng,
  tx_lat,
  rx_lng,
  rx_lat,
  rm_rnb_history_pres.timestamp,
  rm_rnb_history_pres.dB,
  rm_rnb_history_pres.Spotter,
  haversine(tx_lat, tx_lng, rx_lat, rx_lng) as total_path,
  gis_partial_path_lat(tx_lat, tx_lng, rx_lat, rx_lng, 200) as el_lat,
  gis_partial_path_lng(tx_lat, tx_lng, rx_lat, rx_lng, 200) as el_lng,
  id,
  strftime('%Y%m%d', timestamp) as date,
  strftime('%H%M', timestamp) as time,
  'US-4399' as park,
  'KD0FNR' as call,
  'BC840' as ionosonde,
  1770.829467773438 as elev_tx
from
  rm_rnb_history_pres
where
  dB > 100
  and timestamp > '2024-05-27'
  and timestamp < '2024-05-28'
order by
  rm_rnb_history_pres.timestamp desc
limit 1

=========================================================

The extra ionosonde field works!!! 

References

Ionosonde Data

This was made possible with data collected by 


Things I (didn't) Learn (yet): Setting Elevation of Line Endpoints in Cesium CZML

 The F2 skip traces in Cesium have looked a bit odd:


Whyyyyy?????

It's because they're starting from sea level, not the altitude of the station.... I think... What follows are somewhat random development notes.

Relative to Ground: True, Alt: 0


Relative to Ground: True, Alt: 3600 meters



Relative to Ground: True, Alt: 1772.717



Almost!!!

Relative to Ground: true, elevation: 1770.829467773438



I got the more accurate elevation from the elevation plugin using:
{{ tx_lat }},{{ tx_lng }}|{{ rx_lat }},{{ rx_lng }}



Not quite there yet....
To get this close, I had to run with 



But the CZML schema says:

So, it seems like the correct altitude in this case should be 0. (Start the line on the ground.) But, if I put in zero for altitude, I get the result shown above.


Found a reference to quaternions. Ahhhh, quaternions.




References

Ionosonde Data

This was made possible with data collected by 


Cesium Maps Embedded in Datasette for the Ham Radio QSO Logger

 KO6BTY and I attended office hours with Simon Willison yesterday.


At the very end of our call, Simon showed us how we could open a CZML file directly from our localhost Datasette server in Cesium. After about an hour this morning, the video below shows the results.

Here's what you're seeing. There's an SQL query in our Datasette instance that grabs only the QSOs from our Cibola National Forest POTA back in March. That query resulted in 9 QSOs shown in the table. Our additional CZML (and now Cesium viewer) plugin creates a CZML map that is loaded into a Cesium Ion viewer at the bottom of the page.

I'm not releasing the code that loads up the viewer quite yet because I have to get the access token squirreled away and all that good stuff. Meanwhile the CZML plugin has been released for a few days now.



By the way, we need a way to pull ADIF files from POTA activations into Datasette. Keep an eye out :)

Things I Learned: Datasette and Windows Paths

 I mentioned that I'd made a pull request for datasette and Windows path handling, and then forgot to talk about what my fix involved.

Basically, Datasette as it stands feels that colons have no business in file paths. Windows on the other hand?

C:\

Consequently my pull request revolved around changing datasette/utils/__init__.py by adding the following code:


The code checks for a Windows style path specification. If it finds it, it gently moves the rest of the code around that first offending colon.
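
As a hedged sketch of the gist (the helper name and return value are made up for illustration; the real change lives in datasette/utils/__init__.py), the idea looks like this:

import re

WINDOWS_DRIVE = re.compile(r"^[A-Za-z]:[\\/]")

def strip_drive(path):
    # Returns (drive, remainder); drive is "" for non-Windows paths.
    # Later code can then treat any remaining colon as a real delimiter
    # instead of tripping over the drive-letter colon.
    if WINDOWS_DRIVE.match(path):
        return path[:2], path[2:]
    return "", path

# Example: strip_drive(r"C:\data\qsos.db") -> ("C:", r"\data\qsos.db")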


Week Notes: Elevation Specified and Mostly Solved; Datasette Spatialite Windows Fix

 It's been a busy and fun week again!

Most of my time was spent finalizing what the elevation profile Datasette enrichment would look like. I finally found the last piece of the puzzle lying around in a pile of Jinja template practice. One of the goals this week will be to release the elevation package.

I made my first pull request for an open source project, datasette, in this case. (I make them at work, but this is my first foray into the wider world.)

ChatGPT provided the final kick to get the elevation maps plotted really easily.

Subsequently, I learned about the Pyplot package and how to add labels to charts.

I got a start back towards writing about dadding. I spruced up some writing I had lying around about cleaning the kitchen with the gang. I found another older piece that I love while checking out old blog entries here: Meerkats and Ravens. Oh, and then this one about what unschooling with toddlers looks like.

Goals

  • Release the elevation package
  • Release a QRZ QSL enrichment for Datasette
  • Write more about dadding and unschooling
  • Release the KML QSO mapping plug-in for Datasette


My First GitHub Pull Request; Datasette Windows and Spatialite

 I made my first ever pull request on GitHub this morning!

The gang and I still mostly develop code on Windows boxes. It has more to do with the machines we have available to us on any given day than anything else. (The just-sit-down-and-type machine in our house is also for playing video games, so... Windows.)

This has led to issues when using our Datasette-enabled QSO logger and mapping applications.


Datasette on Windows has an issue with finding the SpatiaLite DLL file. The gang and I worked through the issues and documented our really clunky, blunt-force fix. The fix, however, amounted to changing the source code of Datasette itself on a per-installation basis. That meant that if I moved to a new Windows machine I had to find the fix and re-implement it. Even worse, it meant that if I wanted to try something that worked only in a newer version of Datasette, I also had to re-implement the fix.

That... Was a Lot.

So, I was totally psyched when I got a little bit of time yesterday to crank out a fix that should work with the existing install and keep working as I move forward. Towards that end though, I need to get my fix added to the Datasette project if at all possible, and that's where the pull request came in. It turned out to be far easier than I might have thought.

First, I forked the datasette repo and moved changes to my clone of the new fork. 

I'm a big Simon Willison fan, a big agile development fan, and a big Datasette fan, so I already knew I'd need test cases to keep from breaking existing code. (Also, I work in silicon functional verification, so, yeah, test cases are good.)


The next step was to review the documentation for how to contribute to Datasette. I'd forgotten the documentation step. Sure, I'd documented my fix in the original issue, but did I also need to update the Datasette documentation? I decided I needed to add a small comment there about using Spatialite with Datasette for Windows. I modeled my documentation addition after the similar section on Django.

Then, I made my pull request using the GitHub site. The process was simple. And then, the pull request action for the Datasette repo kicked off automated checks of my pull request! So cool! So far, so good!




Things I Learned: Adding new DLL locations to Windows via the PATH Environment Variable

 This is a note for myself for later when I try to construct a better set of instructions for adding Spatialite to Windows for use with Datasette.


You can instruct Windows to look for DLLs in a new directory simply by adding that directory to your PATH environment variable via:


set PATH=C:\path to your dll;%PATH%
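
The same idea works from inside Python before Datasette connects, if you'd rather not touch the global PATH; a sketch (the DLL directory is a placeholder):

import os

# Prepend the DLL's directory to this process's PATH so Windows
# can resolve mod_spatialite.dll when SQLite loads the extension
os.environ["PATH"] = r"C:\path to your dll" + os.pathsep + os.environ["PATH"]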



Things I Learned: Datasette-Enrichments: The Template Can Be The Optional Arguments

 This is a small thing, but it's going to give me a lot of utility with the enrichments I'm writing.

More than once, I've run into the need for an input argument to the enrichments I'm creating. Take the elevation enrichment for example. I'd like the user to be able to specify how long their elevation profile path should be in meters. My slop and drop method for adding an argument to rows of table data would be to simply specify a constant in my SQL query ala 

select tx_lat, tx_lng, rx_lat, rx_lng, 200 as optional_meters from qso_table

That would result in an extra column containing the number 200 on every row.

That also won't work with enrichments for the moment because they operate on filtered tables, not generic SQL queries. That was irksome. The answer though, is simple, elegant even.

For each enrichment, the user specifies their input using a template. For the elevation enrichment, the template looks like this:

{{ tx_lat }},{{ tx_lng }}|{{ rx_lat }},{{ rx_lng }}

but I could have just as easily typed

{{ tx_lat }},{{ tx_lng }}|{{ rx_lat }},{{ rx_lng }},200

and now, that ',200' will give me the 200 argument I wanted to include on every row! It's that easy. I just hadn't thought through all of the tools that I had available.
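
To see why that works, here's a sketch of the mechanism as I understand it (this is my illustration, not the enrichment's actual code): the template renders once per row, so a constant typed into the template rides along with every row's values.

from jinja2 import Template

row = {"tx_lat": 35.0, "tx_lng": -106.0, "rx_lat": 43.0, "rx_lng": -116.0}  # made-up row
template = Template("{{ tx_lat }},{{ tx_lng }}|{{ rx_lat }},{{ rx_lng }},200")

rendered = template.render(**row)      # "35.0,-106.0|43.0,-116.0,200"
*coords, meters = rendered.replace("|", ",").split(",")
print(coords, meters)                  # the last field is the constant 200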



Things I Learned... Again: Spatialite with Datasette on Windows

 I'm mostly writing this so that the next time I run into this, I'll have a handy set of notes.


Many months ago, I found out that Datasette on Windows doesn't like this:


            for extension in self.sqlite_extensions:
                # "extension" is either a string path to the extension
                # or a 2-item tuple that specifies which entrypoint to load.
                if isinstance(extension, tuple):
                    path, entrypoint = extension
                    conn.execute("SELECT load_extension(?, ?)", [path, entrypoint])
                else:
                    conn.execute("SELECT load_extension(?)", [extension])

Instead, it wants to see:

            for extension in self.sqlite_extensions:
                # "extension" is either a string path to the extension
                # or a 2-item tuple that specifies which entrypoint to load.
                #if isinstance(extension, tuple):
                #    path, entrypoint = extension
                #    conn.execute("SELECT load_extension(?, ?)", [path, entrypoint])
                #else:
                # raw string so the backslashes in the DLL path survive
                conn.execute(r"SELECT load_extension('C:\Windows\System32\mod_spatialite.dll')")

The convenient thing about all this is that datasette will tell you where to find the code to change:

  File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\datasette\app.py", line 713, in _prepare_connection
    conn.execute("SELECT load_extension(?)", [extension])
sqlite3.OperationalError: The specified module could not be found.

NOTE: pip show datasette will also show you the path.

One day, one day, I will work on debugging the root cause and maybe making a pull request!


ChatGPT, Datasette-Extract, and the US Ham Radio General Exam Question Pool

 I started a project, ahem, yesterday to 'quickly' see if ChatGPT could read the entire United States general class amateur radio exam question pool into a Datasette instance using the datasette-extract plugin. As of this morning, I haven't been able to coax it out of ChatGPT using the gpt-4-turbo model. I capture my rather raw notes below. The short version is that I was never able to get the AI to capture more than 19 questions at a time. I'm hopeful that the pool could be moved into a database table iteratively, but for now, I've run out of time for this quick project :)

Occasionally ChatGPT seemed to hallucinate part of its process into the table.


Notes Follow

I'm going to track how easy it is to get the general exam question pool into a database using the Datasette Plugin. I started this endeavor at 20:37 UTC.


Get my already existent OpenAI API key ready to go

20:43: Done. As usual with OpenAI, the hardest part was finding login screens and then the API. Finally did a Google search to find the API.


Install the datasette-extract plugin

I've run into an issue here. I think I have too old a version of Datasette, and Windows can't figure out how to uninstall it:

Using cached datasette_extract-0.1a6-py3-none-any.whl (815 kB)
Using cached datasette-1.0a13-py3-none-any.whl (302 kB)
Using cached datasette_secrets-0.1a4-py3-none-any.whl (12 kB)
Installing collected packages: datasette, datasette-secrets, datasette-extract
  Attempting uninstall: datasette
    Found existing installation: datasette 1.0a3
    Uninstalling datasette-1.0a3:
ERROR: Could not install packages due to an OSError: [WinError 32] The process cannot access the file because it is being used by another process: 'c:\\users\\m3n7es\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\\localcache\\local-packages\\python311\\scripts\\datasette.exe'
Check the permissions.

I'll clone a dev environment for the plugin and then run in venv. Time now 21:00.

Still Installing

21:05 OK! pytest passes!

Adding Table Column Names

This is easy since I've already got a table for the general exam pool. The headings are:

id question class subelement group_index group_number answer answer_a answer_b answer_c answer_d 

21:21 The column names have been defined with hints. 

id: primary key
question: follows a line starting with G, ends with '?'
class: defaults to G for every question
subelement: a number following G before a second letter
group_index: the letter following subelement's number; (G)(\d)(A-Z)(\d\d), use $3
group_number: two digit number following group_index; (G)(\d)(A-Z)(\d\d), use $4
answer: a single letter between parentheses that indicates the correct answer choice
answer_a: next line starting with 'A.'
answer_b: next line starting with 'B.'
answer_c: next line starting with 'C.'
answer_d: next line starting with 'D.'

I added the additional instructions

The questions and answers are in line sorted by headings that contain class (always G), then subelement (a single digit following G), then group_index (a single letter following the subelement), then group_number (a question number within the group_index), then the single letter correct answer enclosed in parentheses. The next line contains the entire question text for the question field. The next four lines in each question contain the four possible answers. The end of each question is denoted by '~~'.
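
(As an aside, the format is regular enough that a plain Python parser could probably do this without any AI. A quick sketch based only on the pattern described above; the heading regex and field handling are my guesses:)

import re

HEADING = re.compile(r"^(G)(\d)([A-J])(\d\d)\s*\(([A-D])\)")

def parse_pool(text):
    # Split on the '~~' end-of-question markers, then pick apart each chunk
    questions = []
    for chunk in text.split("~~"):
        lines = [ln.strip() for ln in chunk.strip().splitlines() if ln.strip()]
        if len(lines) < 6:
            continue  # group descriptions and stray text
        match = HEADING.match(lines[0])
        if not match:
            continue
        questions.append({
            "class": match.group(1),
            "subelement": match.group(2),
            "group_index": match.group(3),
            "group_number": match.group(4),
            "answer": match.group(5),
            "question": lines[1],
            "answer_a": lines[2],
            "answer_b": lines[3],
            "answer_c": lines[4],
            "answer_d": lines[5],
        })
    return questions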

I've copied the entire question pool starting at 





and ending at 



into the tool. Now, I'll press 'Extract'



Time is 21:26 UTC

Extracting to Table

Got back this error message:

Error: Error code: 404 - {'error': {'message': 'The model `gpt-4-turbo` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

Extraction failed


OK. Looking at my OpenAI account I see:


No gpt-4-turbo. So, that's a bit of a challenge.

OK! The API is like using a Clipper Card on BART. You have to pay up front


I put some money in the account. 

I'll try to extract again. It's 21:43.

It's Working!!!

[
  {
    "id": 1,
    "question": "On which HF and/or MF amateur bands are there portions where General class licensees cannot transmit?",
    "class": "G",
    "subelement": "G1",
    "group_index": "A",
    "group_number": "01",
    "answer": "C",
    "answer_a": "60 meters, 30 meters, 17 meters, and 12 meters",
    "answer_b": "160 meters, 60 meters, 15 meters, and 12 meters",
    "answer_c": "80 meters, 40 meters, 20 meters, and 15 meters",
    "answer_d": "80 meters, 20 meters, 15 meters, and 10 meters"
  },
  {
    "id": 2,
    "question": "On which of the following bands is phone operation prohibited?",

The engine is still cranking along at 21:47.

And Then </exceeds>

  {
    "id": 19,
    "question": "When is it permissible to communicate with amateur stations in countries outside the areas administered by the Federal Communications Commission?",
    "class": "G",
    "subelement": "G1",
    "group_index": "B",
    "group_number": "08",
    "answer": "B",
    "answer_a": "Only when the foreign country has a formal third-party agreement filed with the FCC",
    "answer_b": "When the contact is with amateurs in any country except those whose administrations have notified the ITU that they object to such communications",
    "answer_c": "Only when the contact is with amateurs licensed by a country whic...  Click to expand ... <exceeds maximum number of characters> ,,groupId,,quizzes,,element,,data,,result,,direct,,[]}]}]}</exceeds>}]}]}</exceeds>}]}]}</exceeds>}]}]}</exceeds>}]}]}</exceeds>}]}]}</exceeds>}]}]}</exceeds>}]}]}</exceeds>}]}]},"
  }
]

Did I hit the end of my billing envelope?

21:51 No, billing seems fine. I wonder if I need to add the file in as a PDF because of this message:

exceeds maximum number of characters

Trying again with a pdf file

21:59 Dropping in a PDF file resulted in a 'Processing...' message for the last 8 minutes. Trying this a subelement (subelement group? since it didn't complete a subelement) at a time.

Full Subelement at a time

Back up and running at 22:01.
 
Well, shucks, that time it only pulled out two questions. Also, it didn't create the table even though it said it did:







Error 404

Table not found: ham_exam_general_question_pool


I'll try a db that doesn't revolve around a memory table next.

No Memory Table DBs

What could have been really bothersome was a breeze. The table columns auto-populated for me!

'Additional instructions' was not auto-populated, so WooooHooooo!!! for blogging. Meaning, I'm really happy I documented my instructions a few paragraphs back.

22:11 Pushed the 'Extract' button. Results started coming in a few seconds later.

Nuts! It got three questions out this time, but that's it! What's the difference in setups???

Adding Remaining SubElement Group by Hand 


Starting at 22:22

22:24 That worked. The entire G1A subelement group is in the table.

Can it do two subelement groups?

22:26 Input subelement group B and C

22:27 Both subelement groups have been successfully added.

The rest of the groups in the subelement?

Again, that's two subelement groups, D and E, but it only pulled out one question: the last one in the C group that I accidentally copied back in. Nuts!

Removed the row, removed the input, trying again at 22:32

Made it through the D subelement group and then stopped on 

"G1E – Control categories; repeater regulations; third-party rules; ITU regions; automatically controlled digital station"

I think I see the game. I'll take out the group descriptions and add all the text in to see if I can be done with this. 23:34

Descriptions Removed

23:42 back up and running with all the descriptions removed. We'll see how this goes.

It's taking about four seconds per exam question to figure out the correct extraction.

After

 "id": "G1E12",

it decided it was done.

Remember how the ids started out as numbers? Weird.
Note: Updating the following morning. Not weird. I forgot to set the field type to integer.

More Instructions

22:49
Added these additional instructions:

"When the subelement changes, or the subelement group changeds, keep going please. The end of the question pool is deonted by '~~~end of question pool text~~~' You're doing a great job, but please get every additional question this time."

and trying again.

22:49 Three questions have come back. It seems to be thinking now?

22:50 (Yes, I know it's not actually thinking.)

22:51 Calling this. Still at three additional questions.

Don't give away the ending

I took away the instruction about how to find the end of the pool, as well as the line about 'every additional question'.

22:54 Successfully crossed from G2A12 to G2B01

22:54 And now from G2B11 to G2C01

22:55 Stopped at G2C08. Why???

Did ChatGPT read the question? 'What prosign is sent to indicate the end of a formal message when using CW?'

22:59 Made the hop to G3A01 and then promptly decided it was done again.

There were two blank lines above that question rather than one. Is that why?

23:02 started it back up.

23:02 Stopped again at G3A14.

Again, there are three blank lines after this question rather than one.

23:05 Added 'The number of blank lines between questions is NOT significant.' to the Additional instructions.

Stopped two questions later at G3B02.

23:06 Starting again.

Two questions again. Taking away the last instruction.


23:38 So Tired
Got this error a few rows in:


After changing 'Additional instructions' to 

"IGNORE ALL BLANK LINES in content. Extract all data from content according to the following instructions. Rows will always begin with the pattern (G)(\d)([A-J])(\d\d)(\s*)([A-D]) and end with a line containing '~~' The questions and answers are in line sorted by headings that contain class (always G), then subelement (a single digit following G), then group_index (a single letter following the subelement), then group_number (a question number within the group_index), then the single letter correct answer enclosed in parentheses. The next line contains the entire question text for the question field. The next four lines in each question contain the four possible answers. The end of each question is denoted by '~~'"

Let's flush the state and start over

Looking above, the plug-in did as well as it ever did before I tried all the above experiments. One thing I hadn't realized, (although I'd documented it), was that I accidentally changed the key to be text on my second try. I'm moving back to the original material copied in and the original instructions with a numeric key.

First, I tried without a new key and wound up only getting two questions back. Just as bad as ever. 
Changing all the fields with numbers to integer resulted in one question.

I'm going to create a new OpenAI key and start on a clean database.

New database, new key, new table name wound up with 13 questions on the first try. I don't think I'

Things I'm Learning: Adding Text Encoded Images to Google Earth (Pro and Web)

 This project has been going on for a while, but remember, the point of the elevation profile project was to add the elevation profile between our ham radio—Project TouCans—and the receiving station at the other end of a QSO to the map of the QSO itself. This week, I finally got to start work on the maps themselves, and I've found a few things.

First, an image added to a Placemark description without a 

width="100%"

tag will overflow the pane that Google Earth Web creates for the Placemark


A user can scroll the chart horizontally, but the information isn't all visible at once.

Adding the width tag back in fixes the issue on Google Earth Web, but leaves Google Earth Pro with a very small chart
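
For concreteness, the description markup in play is roughly this (my sketch, with the base64 data URI elided):

# Sketch of the Placemark description HTML; the wrapping div is the
# compromise attempt described below
description = (
    '<div style="width:100%">'
    '<img width="100%" src="data:image/png;base64,..." />'
    '</div>'
)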



I thought a good compromise between the two might be to house the image in its own div. For Google Earth Web we get


but on Google Earth Pro we still get the tiny chart:


It seems the only way to get this done will be to learn the new Google Earth kml 'Carousel' extension shown here from the Jane Goodall example:


The dream would be to get something like the following from the example



But! Oh No! When I open the Jane Goodall example in Google Earth Pro, instead of seeing the above, I see


The pictures are gone!!!

So! It looks like a Datasette plugin for Google Earth Pro and a separate plugin (or at least a selectable option) for Google Earth Web are what I'm going to need to function on both platforms.


Finalizing Thoughts on the Elevation API Datasette Enrichment for Ham Radio QSO Paths

 I've outlined the work on the ham radio QSO Google Maps Elevation API Datasette Enrichment  (try saying that 3 times fast) over the last several days.  What I haven't talked about much, but thought about pretty often is what the released elevation enrichment should do. Here goes.

The Datasette enrichment will store the JSON data returned from the Elevation API in a database column named 'elevation'. Remember, the enrichment is taking two coordinate pairs that form a QSO like the brown line shown between Villa Nueva, NM and Idaho (KJ7DT's QTH).


It then determines the endpoint of a line that only covers the first 200 meters of that path, shown as the blue line below:


(Yes, 8 year-old Tawnse and I were sitting on the side of the 'cliff' on the other side of the Pecos River from the campground during that QSO. Yes, Tawnse does have a lower center of gravity than I do and led me up that path. Yes, I did have a heck of a time getting back down once I turned around and saw where I was. Please reference Tigger. Up the Tree. With Roo. for a literary example.)


Using that shorter path segment, the enrichment calls the Google Maps Elevation API for a 'path', asking for 200 samples to be returned, and winds up with JSON data formatted as shown.

{
   "results" : 
   [
      {
         "elevation" : 123.6165237426758,
         "location" : 
         {
            "lat" : 37.72489522009444,
            "lng" : -122.4229361744053
         },
         "resolution" : 9.543951988220215
      },
      {
         "elevation" : 123.4337463378906,
         "location" : 
         {
            "lat" : 37.72489438728456,
            "lng" : -122.4229250613661
         },
         "resolution" : 9.543951988220215
      },...

From there, I've written at length over the last few days about how to make an elevation chart like the one shown below. 






Only the data labeled as "elevation" is used in the chart even though there's more information to be had in the JSON response. At the moment, that data is simply being discarded. Other users of the enrichment might have a need for it. Other users might also have uses for the data altogether different from an elevation chart in base64 text-encoded png format (the sole output of the enrichment at this time).

So! The enrichment will be modified to simply store the JSON data. I'll make a second enrichment that will create the charts I require. The inputs to datasette-enrichments-gmap-elevation will be:

latitude1,longitude1,latitude2,longitude2,optional_sub_path_length_default_200_m

Where the first four arguments are the start and end coordinates of the entire path, and the final, optional argument specifies how many meters long the path segment should be, defaulting to 200 m.

The output will simply be the returned JSON data stored in a column titled 'elevation'.

I'll create a second enrichment datasette-enrichments-elevation-chart that consumes JSON elevation data and produces text encoded png charts.
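
As a sketch of what the first enrichment's per-row work will look like (the function name and error handling are mine; the URL and parameters follow Google's public Elevation API docs):

import requests

def fetch_elevation_json(lat1, lng1, lat2, lng2, samples=200, api_key="YOUR_KEY"):
    # Ask the Google Maps Elevation API for `samples` points along the
    # (already shortened) path; the raw JSON is what lands in the
    # 'elevation' column
    url = "https://maps.googleapis.com/maps/api/elevation/json"
    params = {
        "path": f"{lat1},{lng1}|{lat2},{lng2}",
        "samples": samples,
        "key": api_key,
    }
    response = requests.get(url, params=params)
    response.raise_for_status()
    return response.text  # store verbatim; parse later in the chart enrichment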


Fun References:

KML/KMZ revelation inspired by the POTA from the above map.

UAP sighting on the same trip




Things I Learned: Text Alignment in the PyPlot text Method

 I learned how to arrange labels on PyPlot charts yesterday. 

I've mentioned that the scales on the ham radio QSO elevation profiles weren't entirely comparable to one another. Due to the variety of hills, mountain ridges, and coasts the kids and I make POTA activations on, the vertical scales on the elevation chart y axes range from three meters in total to dozens of meters. What's important (and interesting) about the various sites though isn't the sheer number of meters of descent or ascent, but rather, the angle of the terrain with respect to level. That angle gets subtracted from the natural radiation angle of the dipole that houses Project TouCans and, theoretically, extends the distance that TouCans can transmit before bouncing off the F2 layer of the ionosphere.

Consequently, rather than determining a scaling algorithm to make all the elevation profile graphs returned for the various QSOs comparable, I elected to simply calculate the slope of the terrain over the first two wavelengths (using two 20 meter wavelengths in these examples since Project TouCans is a 20 meter ham radio). That just gave me the job of adding a label to each graph (after using scipy.stats to calculate the slope). Getting the label to the correct spot on the graph took a little more work than I initially expected.

ChatGPT's original suggestion was to use:

plt.text(0.5, 50, slope_st, fontsize=12, ha='center')

That gave back graphs with a slope message that scootched around.

It was centered, but centered on what? Turns out, the text was centered on the coordinates specified in the argument, and those coordinates were literally tied to the data values. So, centering wasn't the easy way for me. I moved to 'left' for the ha argument. Sometimes things looked good:

But also? Sometimes, the label just disappeared:


After a bit of puzzling, I figured out that the 50 specified as the y coordinate in the line of code above was well off the screen in the plot above.

Ultimately I arrived at the correct—for me—answer. I set ha, the horizontal alignment field, to 'left' rather than center. That caused the text to place its left-most point at the coordinates specified in the method call: two tenths of the way to the right across the plot. I also used the transform argument:

transform=ax.transAxes

Which made the coordinates relative to the graph's axes, scaled from (0,0) (lower left corner) to (1,1) (upper right corner). I had to add a line of code to get the axes object before sending in the transform argument. So I wound up with:

ax = plt.gca()

Followed by 

plt.text(0.2, 0.1, slope_st, fontsize=12, ha='left', transform=ax.transAxes)

Which gave me a label at the same spot on every plot independent of the length of the label or the values on either data axis:
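
Putting the pieces together, here's a minimal runnable sketch with made-up data (the real code pulls distance and elevation from the Elevation API response):

import matplotlib.pyplot as plt
from scipy.stats import linregress

# Made-up elevation profile data
distance = [0, 10, 20, 30, 40]
elevation = [1770, 1772, 1771, 1774, 1776]

# Slope of the terrain over the profile
slope = linregress(distance, elevation).slope
slope_st = f"slope: {slope:.3f}"

plt.plot(distance, elevation)
ax = plt.gca()
# Axes coordinates run from (0,0) lower left to (1,1) upper right,
# so the label lands in the same spot on every plot
plt.text(0.2, 0.1, slope_st, fontsize=12, ha='left', transform=ax.transAxes)
plt.savefig('labeled_profile.png')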






Ham Radio QSO Elevation Profiles Up and Running or ChatGPT to the Rescue

 The initial release of the ham radio QSO elevation profiles Datasette enrichment is up and running! 

(In the next few days, I expect there to be several posts on this.)

First, let's talk about the beauty of ChatGPT for personal coding challenges, my personal coding challenges. Over the course of the last week, I've tugged back and forth at the problem of moving data from the Google Maps Elevation API into a png chart that I thought was going to be provided by the Google Chart API. All of this work taught me a lot, but did not produce useful results.

Yesterday though, I had ChatGPT open seeing what sort of job it could do creating electronic circuit schematics (spoiler: the results aren't pragmatically useful yet, although, they are pretty.) Since I had the app open anyway, I popped over to my ongoing conversation with ChatGPT about my fictional friend Penelope. (You can see her prompt below.) Here's what I jotted down to see if Penelope could solve my charting issues:

How would penelope make a line chart of two dimensional data (distance on the x axis , elevation on the y axis) in python, then create a png file of the chart, then dump out the PNG encoding of the chart so that it could be included in a data:image/png;base64 html tag

ChatGPT came back with an answer a few seconds later that was unlike anything I'd tried so far.

import matplotlib.pyplot as plt

# Sample data (distance and elevation)
distance = [0, 1, 2, 3, 4, 5]
elevation = [100, 120, 110, 130, 125, 140]

# Create line chart
plt.plot(distance, elevation)
plt.xlabel('Distance')
plt.ylabel('Elevation')
plt.title('Elevation Profile')
plt.grid(True)

# Save the chart as a PNG file
plt.savefig('elevation_chart.png')

# Show the chart (optional)
# plt.show()

That would get me a chart. But how to get the chart back as png encoded text? ChatGPT wasn't done yet...

import base64

# Read the PNG file as binary
with open('elevation_chart.png', 'rb') as file:
    png_data = file.read()

# Encode the PNG image as base64 string
encoded_png = base64.b64encode(png_data).decode('utf-8')

and that was that!
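
From there, dropping the encoded string into the data:image/png;base64 tag the prompt asked about is one more line (my addition, not ChatGPT's):

html_img = f'<img src="data:image/png;base64,{encoded_png}">'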

It took some tinkering, but I wound up with an enrichment that returned text based png images per row in our Datasette QSO log like the following:


Corresponding to transmitting from Lands End just north of the Legion of Honor towards Idaho.


There are definitely scaling issues that could be worked out, but I think the most useful thing would be to simply print a caption on the figure that indicates the angle of the slope over the first two wavelengths, (so in this case, the first 40 meters.)

To wrap up the post, I was curious about the size of png text encoded data vs the actual charts. It turns out that the chart shown above when stored as an image consumes 23kb of disc space. The encoded text when stored in a text file consumes 30kb of disc space, so not a whole lot of overhead. Awesome!
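
That overhead is exactly what base64 predicts: every 3 bytes of binary become 4 text characters, so a 23kb image should encode to about 23 × 4/3 ≈ 30.7kb of text, right in line with the 30kb measured.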

Penelope's Prompt

Let's talk about my friend, no, my hero, whose name is Penelope. Penelope learned to program when she was 12. Back then, there were lots of data feeds served over telnet. As she grew in her chosen profession, Penelope became an expert at Python.




Week Notes: Time Stamps Restored; Elevation Elusive

 Eeeeeep! It's been, well, 11 days since my last week notes. Sigh....

Here's what's been going on. I spent the first half of that time working on getting the time stamps back into the ham radio qso kml maps, and I did it!




While working on this, I discovered that I didn't quite know how to include jinja templates inside other jinja templates. I figured it out though.
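
A minimal sketch of one way to do it, Jinja's include tag (my guess at the mechanism, with made-up template names):

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))

# outer.kml contains: {% include 'placemark.kml' %}
# so placemark.kml renders inline wherever that tag appears
print(env.get_template("outer.kml").render(name="KD0FNR"))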

Meanwhile, I've been methodically trying things as far as adding elevation plots to the kml maps. Things are moving along slowly... and!!! ChatGPT may have just told me how to do the whole thing??? Anyway...

Also! Data (and therefore Datasette) is good!

Data, Debug, and Past Ham Radio POTA Activations

 It's kinda awesome to have data!

 Earlier today I was working on the elevation profile project I've been writing about throughout the week. I've moved the project up to the point where an enrichment provides the elevation at the transmit location of the QSO (my end of the path). Looking at the results, at first I thought I must have introduced a bug at some point. The first two QSO elevations were 2 meters and 3 meters high. That seemed awfully close to sea level, especially considering that most of my POTAs are made from the sides of various mountains. Looking at the table row for the 3 meter QSO I inspected the date: March 14th of 2024. I checked my photos to see where the gang and I were on that day and found this


So! We were at just about sea level, that being the sea just to the right-hand side of the picture.

What about the two meter high QSO? At this point it occurred to me that I didn't need to look back at pictures to see where we'd been on the 23rd of March of last year. Instead I could simply look at the QSO on the maps produced by the kml plugin for Datasette. When I did, I found this:


I was at Robert W. Crown Memorial State Beach on Alameda just south of Oakland. 


Automated elevation profiles aren't a part of the QSO logging package quite yet, but being able to easily visualize data that was easy to capture and store from over a year ago has made debug a breeze in this case! After reviewing the data above, I found video from that POTA site as well.

Paradigm Shift: Separating Data from Views aka Elevation Profiles aren't Ham Radio QSO Database Material

 I've been slowly but surely working through this week's ham radio QSO elevation profile project. The way I want to use the tool (Datasette) doesn't feel like a good fit for the tool. That's made things more difficult. In short, here's what I hope to achieve:

  1. Plot a partial QSO path for each contact from the transmitting station (me), about 10 wavelengths or so out towards the receiving station, so about 200 meters or so in this case. This path should be on the kml map of the QSO. This is done.
  2. Place a png picture of the elevation profile into the Datasette row for that QSO.
    1. This is what this post was about.
  3. Add this picture to the kml map as well.

So! I think I was wrong! The point of this whole operation was to have elevation profiles directly available in kml animated maps. I wanted them to be automatically included with each QSO, but my own specification was that they should be in the map. Not the database!

Basically, there's data: the dates, times, callsigns, and RST reports of QSOs. And then there are visualizations of data. Things like animated KML maps, maps of F2 skips,



 and elevation profiles.



The database does not contain raw map data. It also does not include F2 data. Especially in the case of F2 data, one day the database might contain it, but for now, the point of maps, F2 skips, and elevation profiles, (and sometimes weather radar), is to visualize what was going on during the QSO.

Now that I'm thinking about a better separation of concerns, the application has also become simpler. Where the elevation profiles belong is in the already existing—but not quite released—kml plugin. Here's how things will work:

  1. The kml plugin code will calculate the partial path, (or more likely, just require them as a field in the query).
  2. Given the partial path endpoint, the kml plugin will call out for the elevation data.
  3. Using the returned elevation data, the plugin will create a google chart visualization, targeting it to a 'who cares' div added to the Datasette page.
  4. I say 'who cares' because the real point of the chart will be to call my_chart.getImageURI to return a png of the map in text form that can be inserted in the kml without needing to turn it into a kmz file.
  5. The reason to care about that is that most people looking at the maps do not have access to Google Earth Pro. I think it's free at this point, but it's still an extra install. Most people do have access to Google Earth Web where I can put direct links to the kml files displayed in Google Earth Web. (Whether or not the text data encoded pictures will work there remains to be seen.)
And that's that! I'll keep you posted on the rest!


Things I Learned: Pytest and Python git Installs on Windows

 All of the Datasette plugins and enrichments contain this development instruction towards the bottom of their documentation page:


Now install the dependencies and test dependencies:

pip install -e '.[test]'

However, on Windows, the above pip install line does not work. It returns an error message:

ERROR: '.[test]'

Note that the single quotes are still in the message. That is in fact the issue. This command:

pip install -e .[test]

does work on Windows. I've updated the documentation on the datasette-enrichments-gmap-geocode package to reflect this.



Scoping out the Elevation Map Enhancement

 One of my goals this week is to add elevation profiles between QSO stations as a Datasette enrichment to our ham radio QSO Datasette utility. It's turning into a bit more of a convoluted project than I'd expected, so I'm taking the time to write down the first pass of the steps in the process. Here are the broad strokes:


  1. Complete a partial path Datasette plugin to find the coordinates of a small portion of the path between stations to use as the endpoint for an elevation profile path. This is already done.
  2. Use those coordinates as an input to a Datasette plugin that returns the elevation profile as json data. The first step of this has been prototyped. (It's looking like installing jq for Python on Windows is going to be a bit of an issue, so I'll find a different way to parse the returned json; see the sketch after this list. I've done similar tasks with the ionosonde data and the rx station geocoding data.)
  3. Produce charts as images for each row so that the query immediately presents a summary of all elevation data.
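
Here's that jq-free parsing sketch, using only the standard library against the Elevation API response shape shown in an earlier post (the function name is mine):

import json

def elevations_from_response(response_text):
    # Pull just the elevation values out of the Elevation API JSON
    data = json.loads(response_text)
    return [result["elevation"] for result in data["results"]]

# e.g. elevations_from_response(row_json) -> [123.61..., 123.43..., ...]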


As for actual steps to be completed, here's what I have so far. More on this very, very soon.

Add a div, then, using the DOM, execute the JavaScript code to draw the graph ala this page.

Use jq to format the elevation data into JSON using the definitions here.


