Paolo Amoroso's Journal

Tech projects, hobby programming, and geeky thoughts of Paolo Amoroso

This blog is hosted at Write.as and uses the default commenting system of the sister platform Remark.as. The Discuss... link in the footer of every post leads to the Remark.as comment box, which does the job but requires a paid account.

To provide alternative options, I added a link to the footer for emailing me, and now I'm experimenting with comments from the Fediverse.

The blog has a Fediverse presence, @paolo@journal.paoloamoroso.com, which you can follow. On Mastodon you'll receive a toot for every new blog post. To comment, reply to the toot as usual. But be sure not to delete my Mastodon handle @amoroso@fosstodon.org prefilled in the edit box, which is different from @paolo@journal.paoloamoroso.com.

This setup is motivated by a limitation of the blog's Fediverse presence and is based on a workaround Write.as founder Matt Baer suggested.

The problem is that the blog can't receive Fediverse mentions or reactions, so Matt suggested mentioning a Mastodon handle anywhere in the blog posts. This way the posts' toots automatically include the right Mastodon account in replies, in my case @amoroso@fosstodon.org.

When you reply to a toot, Mastodon highlights the textual handles of any extra mentioned accounts, and typing anything deletes them. That's why you need to pay attention: for example, press the right arrow key to un-highlight the handle and move the cursor to the right spot before typing the reply.

How did I set up Fediverse commenting on Write.as? In the blog settings, under Customize > Post Signature I inserted this code in the signature, i.e. the footer Write.as appends to posts:

<!-- comment -->
[Email](mailto:info@paoloamoroso.com?subject=Reply%20to%20Paolo%20Amoroso%27s%20Journal) | Reply @amoroso@fosstodon.org

The <!-- comment --> shortcode inserts the Remark.as link (I had to insert spaces to escape it here; remove them in your own signature). The email link is an ordinary mailto URL. And the Fediverse commenting option is just my Mastodon handle.
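
For reference, the subject in the mailto URL is percent-encoded: %20 stands for a space and %27 for an apostrophe, so it decodes to the subject line "Reply to Paolo Amoroso's Journal".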

#blogging

Turbo Pascal 3 for CP/M comes preinstalled with the Z80-MBC2 and V20-MBC homebrew computers. Checking out the development environment made me rediscover Turbo Pascal and realize its potential for programming these computers.

Although I owned Turbo Pascal for MS-DOS in the early 1990s, I didn't use it much. Between other languages later getting my attention and Borland losing its market leadership, I eventually forgot about Turbo Pascal. Now, with the development environment handy on the Z80-MBC2 and V20-MBC, I began checking out the Turbo Pascal CP/M version I had never played with.

Since I was familiar with the Turbo Pascal MS-DOS IDE, which features a nice text user interface with pull-down menus and dialogs, the CP/M version seemed spartan and primitive by comparison.

But I pressed ahead, tried the various commands, edited and compiled some code, and got familiar with the keystrokes and workflow. I soon felt at ease with Turbo Pascal for CP/M. The environment is still surprisingly usable and productive, allowing fast edit-compile-run cycles with short compilation times even on the 8-bit Z80-MBC2.

I now understand why Turbo Pascal caused such a sensation at the time and revolutionized development tools.

To learn the Turbo Pascal environment and language I began reading the manual, as well as books about Turbo Pascal and Pascal. The more I used Turbo Pascal and read about it, the more I enjoyed it and wanted to learn and explore.

Next thing I knew, I was down a rabbit hole.

This experimentation and reading made me realize the potential of Turbo Pascal as an ideal tool for hobby projects with these homebrew computers.

Pascal is an easy-to-understand, readable, and expressive language. Despite its age and design flaws, it lets you write fairly advanced code. Pascal makes practicality win over language purity.

Sitting at a sweet spot between ease of use, features, and power, Turbo Pascal is a perfect fit for CP/M: it consumes limited resources, generates moderately small and fast executables, and can access all the features of the system. That's why it's a good environment for quickly developing small tools or programs for the Z80-MBC2 and V20-MBC.

#pascal #retrocomputing #z80mbc2 #v20mbc

I was a heavy user of shareware software, but my experience was like a story with missing clues and no ending. Reading Shareware Heroes: The renegades who redefined gaming at the dawn of the internet by Richard Moss filled the gaps, completed the story, and gave a sense of resolution.

Android tablet with the cover of the Shareware Heroes ebook open in a reading app.

I encountered shareware via the Amiga Fish disk collection, and later MS-DOS productivity software and utilities such as the PC-Write word processor and the CompuShow image viewer.

As an Italian student I loved the affordable programs and the wide selection of shareware, much wider than the few, expensive packages from traditional American software houses that local retailers carried. I assumed everyone else loved shareware, so I always found it puzzling that this distribution model was little known even among computer geeks. Equally puzzling was why shareware seemed to have faded since the late 1990s.

Later I realized my narrow focus on productivity software and programming tools made me miss major events, hits, and market players of gaming shareware, which I was never into.

There were other things I didn't know or understand at the time, such as why some shareware never made it to Europe. And, not having owned a Mac until well into the Internet era, I wasn't aware of the role of Mac shareware. Finally, I always wondered about the business side of shareware.

Thanks to accurate and extensive research based on original sources and interviews, Shareware Heroes puts the pieces together and presents a complete, coherent history of shareware from the early days to the Internet era. It paints the big picture, discusses shareware in the context of the computer industry, traces the evolution of shareware business models, and ties the past with the present from early shareware titles to the contemporary indie scene.

Although I'm less focused on gaming, the book also has a lot of material on the application software and utilities at the roots of shareware. And I found the coverage of gaming equally interesting even as a non-gamer. For example, I realized the key role of Apogee and id in the evolution of both gaming and software business models.

Interestingly, Shareware Heroes indirectly provides some historical context on the dispute between Epic Games, Apple, and Google over app store fees. Epic Games founder Tim Sweeney has been highly competitive since the company's early days, for example in his rivalry with Apogee and id.

Sweeney is a tough leader; Apple and Google should have seen it coming. Their executives may want to read Shareware Heroes.

#retrocomputing #books

On this blog I regularly share my retrocomputing experience and projects with the Z80-MBC2 and the V20-MBC homebrew computers. In addition, on my Mastodon account @amoroso@fosstodon.org I often post screenshots, links, videos, and other short updates grouped under the #z80mbc2 and #v20mbc hashtags.

#z80mbc2 #v20mbc

I needed to write some Bash scripts on Linux that read their input from stdin or from a file passed as an optional argument, but couldn't figure out how.

Googling turned up several designs and examples, such as on StackOverflow, where the script directly processes the input. But I actually wanted the script to assemble a pipeline, feed the input into the beginning, and delegate the processing to the programs and filters in the pipeline.

More googling turned up exactly what I wanted, a reply by the user Daniel buried in a long StackExchange thread.

The trick is to assign the input source to a variable and feed it into the first program of the pipeline. To demonstrate the technique, the script unlc (unique line count) prints the number of unique lines in its input:

#!/usr/bin/env bash

# unlc - print number of unique lines in the optional input file or stdin
#
# Usage:
#
#   unlc [input-file]

input_file="${1:-/dev/stdin}"

cat "$input_file" | sort | uniq | wc -l

The code assigns to input_file the first argument $1 passed to the script or, if none is supplied, the standard input device /dev/stdin. Then cat feeds the content of input_file to the rest of the pipeline. The script is invoked by passing a file as an argument or by feeding the data into the script's standard input:

$ cat input-file.txt 
1
2
2
3
4
4
4
4
$ unlc input-file.txt 
4
$ cat input-file.txt | unlc
4
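
The same trick generalizes to any pipeline. As a hypothetical variation of mine, not from the thread, a wordfreq script could print the ten most frequent words in the optional input file or stdin:

#!/usr/bin/env bash

# wordfreq - print the ten most frequent words in the optional input file or stdin
#
# Usage:
#
#   wordfreq [input-file]

input_file="${1:-/dev/stdin}"

# Split the input into one word per line, then count and rank the words
cat "$input_file" | tr -cs '[:alpha:]' '\n' | sort | uniq -c | sort -rn | head -10

Only the pipeline changes; the input handling is identical.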

Simple and brilliant.

#Linux

In this digital era there's a growing interest in analog writing with pen and paper, possibly fueled by a reaction to technology and a romantic vision of art.

Almost four decades ago, in 1983, I wrote a full-length book with these tools, all 216 pages of it, plus an additional third of the manuscript's worth of text I cut while editing. Also, for years I kept a personal journal with pen and paper.

Although a practical necessity back then, handwriting was an awful, time-consuming experience that brought no value to me.

Not anymore.

Now I use pen and paper only for occasional short notes of up to a couple of lines, and computers or other digital devices for everything else. I don't miss the fascination with handwriting at all.

You’ll have to pry my digital writing tools from my cold, dead hands.

#misc #publishing

CP/M-86 is a footnote to the history of the personal computer, which is part of why it's interesting.

The downside is that the operating system's limited popularity makes it difficult to discover online resources, particularly software. I face this issue when looking for software and tools for my V20-MBC homebrew computer, which can run CP/M-86 thanks to the 8088-compatible NEC V20 CPU.

Therefore, I'm keeping track of the programs and software collections I run across online.

I include a list here, which I'll revise and expand with more entries. On the V20-MBC I tested only a small fraction of this software, so some programs designed for vendor-specific CP/M-86 versions or machines may not run on the device.

Repositories and collections

Old BBS archives, repositories, personal websites, and CD-ROM collections are good starting points. CP/M-86 software in executable form is usually in a section under general CP/M resources.

I found these repositories and websites:

Programs and utilities

Some applications are provided for download from their own websites or distribution archives:

Source code

Some software that works on CP/M-86 is distributed in source form with no prepackaged binaries. It's usually available at general CP/M repositories, in sections specific to the programming language or environment it was developed with, such as Turbo Pascal or BASIC. This code may need some tweaks to run on CP/M-86.

For example, the Walnut Creek CP/M CD-ROM has a Turbo Pascal section.

CP/M-86 Miscellaneous Ports is a collection of C and Unix tools ported to CP/M-86, such as yacc.

#v20mbc #retrocomputing

Most instructional and tutorial screencast videos have a common flaw that makes them less effective.

The videos zip past details such as menu and option selections, changes of settings, and manipulation of user interface elements. These key decision points and actions, which users can glimpse only briefly, are the whole point of a screencast. And they play out fast, too damn fast.

What makes the issue worse is screencasts are often published as animated GIFs, which don’t provide any control over playback speed or pausing.

Instead, let each action remain visible and motionless for at least 3-5 seconds. Leave menus open longer, and don't release the mouse button too soon after a click. Screens with a lot of text or elaborate charts should remain motionless even longer.

#misc

Delivering files to the Z80-MBC2 and V20-MBC homebrew computers is an essential capability for bringing new software and data to these CP/M devices.

In particular I need a file upload capability, a way of transferring text files from the host system to the remote CP/M devices. Why just text? Because a text stream is the lowest common denominator: the simplest, most ubiquitous, and most versatile communication channel.

Encoding binary files as text, such as the Intel HEX format for executables or uuencoding, enables moving arbitrary files over text streams.
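
As a rough sketch of the idea, and assuming the sharutils package is installed on the Linux side, an executable could be wrapped as plain ASCII text with uuencode before traveling over a text channel; decoding it back requires a uudecode utility, or an Intel HEX loader, on the CP/M side:

# Hypothetical example; program.com is a placeholder name, not a file from this post
uuencode program.com program.com > program.uue

The resulting program.uue is ordinary text and can be sent like any other text file.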

The problem

I want to transfer files in the most practical way, i.e. via the serial USB connection from the host system, my Chromebox (where the terminal emulator for controlling the devices runs under Crostini Linux), to the remote devices. I could copy the files to the microSD cards the homebrew computers use to simulate storage devices like hard disks, but this would require additional steps.

An obvious solution would be a file transfer protocol like the XMODEM utility that comes with the Z80-MBC2. But XMODEM file upload to the Z80-MBC2 has issues and the V20-MBC doesn't have XMODEM or other file transfer software preinstalled.

My initial workaround, dumping a text file from the terminal into a CP/M text editor, does the job but creates friction. I wanted a minimal upload channel with less friction that relies only on native CP/M features and also works on the V20-MBC.

The solution: a minimal file transfer channel

I came up with a similar but more streamlined solution.

As in the workaround, on Linux the process consists of dumping a text file from the terminal emulator.

But on CP/M, instead of running the ed editor to collect the file and manually save it, PIP automatically receives and saves the file. An additional advantage is that PIP can handle arbitrarily long files, whereas ed is limited by available memory.

There's a reason the CP/M system utility PIP is called Peripheral Interchange Program — emphasis mine. In addition to copying, renaming, and combining files, PIP can transfer data to and from the console and other peripherals. The new transfer channel relies on this feature by receiving the text coming from the console associated with the terminal emulator, and saving it to a file.

The process

How do transfers over this channel work? I initiate file uploads on Linux from the Minicom terminal emulator.

First, to introduce a character transmission delay I change Minicom's settings with the Ctrl-A T F command, Terminal settings > Character tx delay (ms). A value of 1 ms works well on both the Z80-MBC2 and the V20-MBC.

Why a delay? Although the homebrew computers are connected via a 115200 bps serial link, these 8-bit and 16-bit systems can't keep up with the full speed at which the 64-bit Intel i7 Chromebox can pump data. Hence the need for a transmission delay.
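
A rough estimate of my own: at 115200 bps a character takes less than 0.1 ms on the wire, so the 1 ms per-character delay dominates and caps throughput at roughly 1,000 characters per second. A 60 KB text file thus takes about a minute to upload.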

Next, at the CP/M prompt I launch PIP:

G>pip filename.txt=con:

In PIP's syntax the destination before the = symbol precedes the source, so the command instructs PIP to save to FILENAME.TXT the data coming from the logical device CON:, the source. By default CON: is associated with the console, i.e. Minicom on Linux.

The above command makes PIP receive the text stream Minicom sends over the serial line as if the user typed it at the keyboard. How can Minicom type text virtually? The program's Ctrl-A Y Paste file command lets me select and dump a Linux file, which is the last step of the transfer.

Then, on CP/M, the incoming text is saved to a file and rapidly printed on the screen line by line. The transfer may take up to a few minutes depending on the file size.

When PIP terminates, the new file is ready. A caveat is that CP/M expects text files to use specific control characters, i.e. ^M^J line endings and a ^Z end-of-file marker, rather than the plain ^J line endings of Linux text files, which have no end-of-file character. If the ^Z is missing, PIP pauses until the keystroke is entered manually.
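
As a minimal sketch, not part of my workflow, a Linux text file could be converted to CP/M conventions before pasting it. This assumes GNU sed and uses a hypothetical helper name:

# to_cpm - hypothetical helper: emit a text file with CP/M line endings
# (^M^J) and a trailing ^Z end-of-file marker. Assumes GNU sed.
to_cpm() {
    sed 's/$/\r/' "$1"    # turn ^J line endings into ^M^J
    printf '\032'         # append a ^Z (octal 032) end-of-file marker
}

to_cpm notes.txt > notes-cpm.txt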

The procedure works well on both the Z80-MBC2 and the V20-MBC.

Next steps

Dumping text files over a serial line is slow and more involved than dedicated file transfer protocols such as XMODEM, and works on only one file at a time.

But text streams are universal, easy to use, and reliable. More importantly, when no file transfer utility is available on the remote device, these streams are the only way of uploading binary files encoded as text, such as executable programs. For example, neither XMODEM nor other file transfer utilities are preinstalled under CP/M-86 on the V20-MBC.

I'll leverage this text channel to upload to the V20-MBC the Kermit communication program, which implements the transfer protocol by the same name. I'll see if Kermit can upload from Linux to the V20-MBC, and then the Z80-MBC2.

#z80mbc2 #v20mbc #retrocomputing

Getting errors for basic disk access functions was a reminder that WordStar must be configured for the CP/M system it runs on. The garbled text WordStar 3.30 rendered on the terminal under CP/M-86 on the V20-MBC drove the point home. Setting the terminal type to ANSI with the configuration utility WINSTALL.CMD fixed the issue.

#v20mbc #retrocomputing
