Added I and the vertical consonants: D-, B-, L-, and -N. Back at 33 WPM with nearly 98% accuracy. I think that's going to be a general principle for me, to hit those targets before moving on. 96% accuracy is low enough that I still find myself training in uncertainty and wrong moves. But 98% definitely feels like it's just the occasional error. And 33 WPM isn't an outrageous speed, but it's fast enough that it has to be at least partly automatic. So I think that's a good speed threshold for measuring "have I freed up enough brain cells to start learning something new?"
I've spent some time on my Python scripts. I decided to do some more work on my word sets first, and postpone the nonsense sentence generator. My scripts are currently taking the SCOWL level 35 wordlist, matching that up against the CMU pronouncing dictionary (which removes about 7000 words), computing ordinals based on the word counts from the Google trillion-word web corpus as extracted by Peter Norvig, collecting all the Plover strokes for each word and attempting to convert them to pseudo-steno for readability, then dumping that out to a file. Those word counts are a little wonky from being a web corpus (e.g. "information" and "search" are way up at the top): I should probably go back to using the ones from Martin's autodidict, which IIRC are from the American National Corpus?
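For the curious, the shape of that pipeline is roughly the following. This is a minimal sketch with inline toy data standing in for the real SCOWL list, CMU dictionary, frequency counts, and Plover dictionary; the function name and data shapes are mine, not the actual script's.

```python
# Toy sketch of the word-set pipeline: filter a wordlist against a
# pronouncing dictionary, rank by corpus frequency, and attach Plover
# strokes. All inputs here are hypothetical stand-ins for the real files.

def build_word_set(scowl_words, cmu_dict, freq_counts, plover_dict):
    """Return (ordinal, word, strokes) tuples, most frequent first."""
    # Invert the Plover dictionary: word -> list of strokes for it.
    strokes_for = {}
    for stroke, word in plover_dict.items():
        strokes_for.setdefault(word, []).append(stroke)

    # Drop words with no known pronunciation (this is the ~7000-word cut).
    known = [w for w in scowl_words if w in cmu_dict]

    # Ordinals come from descending corpus frequency.
    ranked = sorted(known, key=lambda w: freq_counts.get(w, 0), reverse=True)
    return [(rank, w, strokes_for.get(w, []))
            for rank, w in enumerate(ranked, 1)]

scowl = ["the", "cat", "zyzzyva"]
cmu = {"the": "DH AH0", "cat": "K AE1 T"}   # no entry for "zyzzyva"
counts = {"the": 1000, "cat": 10}
plover = {"-T": "the", "KAT": "cat"}

for rank, word, strokes in build_word_set(scowl, cmu, counts, plover):
    print(rank, word, strokes)
```

The real script would read these from files and then run the pseudo-steno conversion over the collected strokes before dumping the result, but the filter-rank-attach core is the same idea.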
The pseudo-steno conversion is buggy. I have to work on that. And some key sequences overlap, so I probably have to get the pronunciation involved to do a good job. But the leftmost-longest match would be a good start, and for some reason it's not even getting that right. So from there I've been manually grabbing the canonical strokes and putting them into my drill word-sets. In another few days I should have a respectable set of words that I can filter based on which pseudo-steno elements you know. Then I'll go back and add parts of speech and mess with sentence generation.
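The leftmost-longest idea can be sketched in a few lines: walk the stroke left to right, and at each position take the longest chord in the table that matches, falling back to the raw key. The chord table below is a tiny hypothetical sample, nowhere near Plover's full mapping.

```python
# Minimal leftmost-longest pseudo-steno converter (sketch, not the real
# script). CHORDS is a small sample mapping of multi-key chords to their
# pseudo-steno letters.

CHORDS = {"TK": "D", "PW": "B", "HR": "L", "PB": "N", "KWR": "Y"}

def to_pseudo(stroke):
    out, i = [], 0
    keys = sorted(CHORDS, key=len, reverse=True)  # try longest chords first
    while i < len(stroke):
        for chord in keys:
            if stroke.startswith(chord, i):
                out.append(CHORDS[chord])
                i += len(chord)
                break
        else:
            out.append(stroke[i])  # no chord matched; keep the raw key
            i += 1
    return "".join(out)

print(to_pseudo("TKOG"))     # TK -> D, so "DOG"
print(to_pseudo("PWHRAPB"))  # PW -> B, HR -> L, PB -> N, so "BLAN"
```

This also shows where the overlap bites: with a fuller table, a chord like TKPW (which is G in Plover theory) would be greedily misread here as TK + PW, "DB", which is presumably the kind of ambiguity that makes pronunciation useful as a tiebreaker.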