[rescue] Parallel ports [was Re: Slightly OT: ?Bad Cap Saga]
jorge234q at yahoo.com
Fri Aug 22 00:27:52 CDT 2008
--- On Thu, 8/21/08, der Mouse <mouse at Rodents-Montreal.ORG> wrote:
> As these machines fail, I will gradually lose the capabilities they
> provide. Perhaps I have enough spares to last the rest of my life, but
> perhaps not, too.
But, my point is, you can move to a different interface *if*
you rethink what you expect *of* that interface. E.g., if you
can ignore temporal aspects of the "data" stream, (or, if
you can embed that information *in* the data stream!), then
you can just send everything "in band" over whatever sort of
interface you happen to have available.
I.e., that "data stream" could be pushed down a serial port.
Or, a parallel port. Or, over a network to <whatever> hardware,
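A minimal sketch of embedding the timing in-band (the escape byte and
record format here are my own invention for illustration, not any
standard):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define ESC 0x1B  /* hypothetical escape byte -- any reserved value works */

/* Append one payload byte, escaping it if it collides with ESC. */
static size_t put_byte(uint8_t *out, size_t n, uint8_t b)
{
    if (b == ESC) {
        out[n++] = ESC;
        out[n++] = 0x00;            /* 0x00 after ESC = literal ESC byte */
    } else {
        out[n++] = b;
    }
    return n;
}

/* Append an in-band timing record: ESC, 0x01, delay in ms (big-endian).
 * The receiver replays the pause before emitting the next byte, so the
 * temporal information survives any byte-stream transport. */
static size_t put_delay(uint8_t *out, size_t n, uint16_t ms)
{
    out[n++] = ESC;
    out[n++] = 0x01;
    out[n++] = (uint8_t)(ms >> 8);
    out[n++] = (uint8_t)(ms & 0xFF);
    return n;
}
```

The resulting stream is just bytes, so it rides equally well over a
serial port, a parallel port, or a socket.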
> >> If someone has a real USB parallel-port interface, as opposed to a
> >> USB printer interface that speaks parallel-port printer protocol out
> >> the printer end, I'd be interested.
> > What do you think "parallel port printer protocol" is?
> My impression of it is, basically unidirectional output, pulsing STROBE
> for each byte output, with rudimentary output flow control based on
> signals like "out of paper". But I don't use it, so that's quite
> likely missing some important aspects.
> Yes, it is a defensible position that I'm abusing printer interfaces
> when I use them as parallel ports. The traditional hardware is
> actually both at once, by accident of history, and it's the parallel
> port I'm talking about missing, not the printer interface.
But, that's just the "original PC XT" implementation you're
talking about. Recall the original PC also used one of the
programmable timers to control the refresh of the DRAM (?).
And, timing loops written to run on the original PC don't
work on modern machines.
I.e., in each case, if you embrace a particular *implementation*
aspect, you set yourself up to get screwed sooner or later.
Like expecting ints to be 32 bits, etc.
> > Centronics has defined how the interface is supposed to work. If you
> > are taking advantage of a particular bastardized implementation of
> > that interface, then how can you complain that the "new interface"
> > isn't as good as the old?
> Because it doesn't support something the old one did, of course.
But, the old one only supported it as a fluke.
You could overwrite CODE in the old MS-DOS apps.
Would you be upset if your application "suddenly"
stopped working when that OS "grew up"?
E.g., when writing Z80 applications, I would always use 0xFF
as a string terminator, flag value, etc. This was because
I could do an:
    INC (HL)
(which reads the memory address referenced by register HL,
adds one to it, SETS THE CONDITION CODES ACCORDINGLY, and
writes the new value back) to test for that flag/terminator.
This relied on the fact that my code was running out of
read-only memory -- so, the "write" to update the value
after the increment was ineffective.
Eventually, we designed some development tools to expedite
software testing (burning 2708's and 2716's would take
almost 3 hours -- so, you got two turns of the
crank in an 8 hour shift! :-/ ).
Suddenly, all my code stopped working! (because,
not only those 0xFF's but *every* byte that I tested
in this way would get incremented and the NEW value
written to memory! :< )
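In C terms, the failure mode looks like this (a toy model of the trick,
not the actual Z80 code -- the function name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the INC (HL) trick: increment the byte, test the result,
 * write it back.  With write-protected memory the write-back is silently
 * dropped; once writes actually "stick", every tested byte is corrupted. */
static int is_terminator(uint8_t *p, int writes_stick)
{
    uint8_t v = (uint8_t)(*p + 1);  /* read, add one, "set the flags" */
    if (writes_stick)
        *p = v;                     /* the write-back that ROM ignored */
    return v == 0;                  /* 0xFF + 1 wraps to 0: terminator */
}
```

Run against "ROM" (writes ignored) the data survives; run against "RAM"
(writes stick) every byte you merely *test* gets incremented behind your
back -- exactly the breakage described above.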
> >> Not really; the application-facing interface presented by a USB
> >> stack is substantially more complicated than that presented by a
> >> serial port.
> > You can't say?
> > mystring = "Aseriesofbytecodes\022\033\044\055"
> > print(mystring);
> I've moved small fractions of my applications into the kernel, inside
> the lpt driver. This was feasible because I was talking to, y'know, a
> parallel port: write bits here and they show up on these wires, read
> from there and you get the signals from those wires. If I'd had
> something like USB getting underfoot, this would probably have been
> somewhere between annoying and impossible, instead of
> trivially easy.
You would have had to move it to user-land and use the established
interfaces.
And, you would have been able to move it to some *other* OS/kernel!
> >> This goes back to my remark about the difference between a parallel
> >> port and a printer interface that happens to speak parallel-port
> >> printer protocol.
> > So, you're not lamenting the loss of a parallel port but, rather, the
> > loss of a particular *style* of parallel port implementation! <grin>
> No...it would be more accurate to say that I'm not lamenting the loss
> of the printer interface (which, as various people have pointed out,
> hasn't really been lost), but rather the loss of the parallel-port
> implementation of the printer interface.
> > Why should a manufacturer have to keep implementing their parallel
> > ports the same way you want them?
> They don't, of course. It's not that any particular vendor doesn't
> that bothers me; it's the apparently-total loss of them across the
> board that bothers me.
Note that you can't find a discrete floppy disk controller in
machines made in the past 7 or 8 years -- it's been integrated
into "SuperI/O" chips.
And, of course, now you can't even find floppy disk *drives*,
let alone the floppy disk controller! :<
> What someone, Gadi I think, recently said leads me to think
> that it may not be quite as total a loss as I thought. This gives me
> some hope.
> > E.g., when I put a parallel (printer) port in a piece
> > of equipment,
> If you're putting in a printer port, do it however you
> like as long as it speaks printer.
> If you're putting in a parallel port, make it a parallel port, dammit!
> Just because the same hardware has served for both at once in the past
> does not make them the same thing. (Admittedly, it doesn't help that a
> lot of people confuse the two, including many people selling USB
> "parallel port" interfaces.)
But, the "parallel port" was always a "printer interface". Note
the names of the signals assigned to the original pins. If it
was intended to be a "parallel I/O" (i.e. PIO), the pins would
have names like D0-D7, STBA, ACKA, etc. And, it would have been
bidirectional from the start! Instead of having dedicated pins
for things like "select", "error", etc.
> >> whereas a real parallel port I can talk to even with discrete
> >> transistors if I want to.
> > Well, in a few years, you'll be stuck with a *computer* that doesn't
> > have those things.
> I don't think so. I've got enough spares to last more than "a few
> years", absent something catastrophic like a fire where most of them
> are stored. But yes, it'll suck if all my spares die before I do.
> It'll _really_ suck if all my spares die before I do and nobody has
> come up with a sane implementation of the parallel-port concept for
> then-modern hardware. (Which, as I remarked above, I now suspect may
> not be quite as likely a possibility as I've feared.)
You may find that you end up stuck with an old kernel/OS as
new kernels move to embrace features that older machines
won't support. Then you end up having to back-port changes, etc.
> > So, the UART needs an oscillator. Big deal! Give it an oscillator.
> > It also needs *power* -- you don't object to giving it *that*, do
> > you?? :>
> Not unless the power it needs is substantially different
> from what the rest of the circuit needs.
But, what makes "power" and "bypass capacitors" any different
of a requirement than "an oscillator"? If I told you to put
power on pins 8 and 4 of this DIP8 and hook the output to
pin 27 of this "UART", what's the big deal? Now the UART
has its "clock"...
It's entirely different than requiring you to design some
oscillator circuit from scratch.
> So far, the stuff I've built to be driven off a parallel port has been
> clockless, so a clock for a UART would be otherwise unnecessary.
Yeah, and the +-12V that you might add to power EIA232 drivers
would also be "otherwise unnecessary". <shrug> Just think of
it as part of the infrastructure.
> Which makes me view it as unnecessary complication, especially
> when parallel ports are so simple.
Sure! But, parallel ports are also a limitation. E.g., you can't
*really* run 100 ft of cable off that port. You only have
~8 outputs and ~5 inputs. You probably max out at about 100KB/s
even on really fast machines. You can't guarantee that bytes
appear on the output at specific timing relationships to each other
(unless you disable interrupts and hope you don't get an SMI, etc.)
The amount of current available is limited to a standard TTL
output (i.e., decent Isink but lousy Isource). Etc.
So, you're already bastardizing any approach you come up with to
fit those constraints. Why not pick a different set of constraints?
And, use that to give you another set of *capabilities*??
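For reference, those limits fall straight out of the classic PC "SPP"
register map (base 0x378 being the usual LPT1 address) -- 8 data outputs,
5 status inputs, 4 control outputs:

```c
#include <assert.h>
#include <stdint.h>

/* Classic SPP parallel port: three registers at base (0x378 for LPT1). */
#define LPT_DATA(base)    ((base) + 0)  /* 8 output lines, D0-D7       */
#define LPT_STATUS(base)  ((base) + 1)  /* 5 input lines (bits 3-7)    */
#define LPT_CONTROL(base) ((base) + 2)  /* 4 more outputs (bits 0-3)   */

/* Status register inputs -- note BUSY is inverted by the hardware. */
#define ST_ERROR    0x08
#define ST_SELECT   0x10
#define ST_PAPEROUT 0x20
#define ST_ACK      0x40
#define ST_BUSY     0x80

/* Count the distinct input lines the status byte exposes. */
static int input_line_count(void)
{
    uint8_t mask = ST_ERROR | ST_SELECT | ST_PAPEROUT | ST_ACK | ST_BUSY;
    int n = 0;
    while (mask) {
        n += mask & 1;
        mask >>= 1;
    }
    return n;
}
```

Everything you can do with the port is some combination of poking those
~8 outputs and polling those ~5 inputs, which is the constraint set
being argued about here.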
> > With the standalone UART, the received data is present on a set of 8
> > pins. Always. You don't have to talk to the chip. Likewise, the
> > data being transmitted is accepted on ANOTHER set of 8 pins.
> ...which, actually, calls out a significant difference
> between the two:
> a parallel port is "host push, host pull",
> whereas a serial port is "host push, device push".
> (The difference is which end initiates the
> transfer of data in the device->host direction.)
Note that in my example, the host is actually initiating the
read -- since the act of sending a character results in the
UART *replying* with a character. I.e., you could just keep
pushing constant data out and keep getting new "updates"
of the input pins on the UART -- admittedly skewed in time
by one round trip delay.
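A toy model of that arrangement (struct and function names are mine,
purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the wiring described: every character the host sends causes
 * the remote UART to reply with a snapshot of its 8 input pins.  So the
 * host "initiates" every read, at the cost of one round-trip of skew. */
struct loopback_uart {
    uint8_t input_pins;     /* whatever the external hardware drives */
};

/* Host sends any byte; the device answers with the pin snapshot. */
static uint8_t poll_pins(struct loopback_uart *u, uint8_t dummy)
{
    (void)dummy;            /* content of the outgoing byte is irrelevant */
    return u->input_pins;   /* reply = current state of the input pins */
}
```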
> > See above. I can show you how to make a serial port driven PROM
> > programmer *without* a microcontroller -- the biggest hassle is
> > controlling the Vpp needed. :>
> I've considered building one, both in order to be able to burn PROMs
> and because I think it'd be fun to design and build.
Spend your time on something more constructive and useful -- design
a parallel port replacement technology, etc. The *few* times
you need a PROM burned you can no doubt find someone who will
do it for you (it takes damn near no time!). If you find yourself
needing to repeatedly reburn PROMs (e.g., when developing
firmware), invest in a BBSRAM module, instead, and modify your
prototype so you can *write* to "ROM", then flip a switch to
inhibit the "WE" signal while your code runs.
> > A better approach, nowadays, is to have everything network aware.
> Better in what respect? Higher bandwidth and better availability of
> the hardware are about the only advantages I can think of offhand.
Bandwidth is high enough for *any* (practical) imagined use.
But, more importantly, it's very portable -- you can plug it
into another "network" and have your application talk to it
in just the same way (i.e., sockets or whatever).
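That portability claim in code (a socketpair() stands in here for the
real network connection; the command string is a made-up example):

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Once the device is "network aware", the host side is just bytes on a
 * file descriptor -- the same write() works whether fd came from a TCP
 * connect(), a serial device node, or (as in the test) a socketpair(). */
static int send_command(int fd, const char *cmd)
{
    return (int)write(fd, cmd, strlen(cmd));
}
```

Swap the transport underneath and the application code above it doesn't
change at all, which is the whole argument.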
*Really*, look at some of the little MCU's out there. If you're
hacking kernel code, you can figure out how to use one of these
in a few *days*! Some vendors (e.g., Microchip) even give away
"simulators" so you can write and debug your code on a PC
without ever building any hardware. You can play with these
to see just how painful/easy it would be to implement "whatever"
you want to do. Then, if you feel like actually *doing* it,
spend $10 for QTY 1 and program it using your PC and a special
cable, etc. (e.g., when I put PICs in designs, I think it
costs me two or three "pins" on a header to attach a "programming
cable" and an external power source (e.g. 12V?) needed just
during the programming phase)
You will be amused at what a different world it is. And, possibly
challenged by it -- "Gee, I wonder if I could write this in
*40* opcodes instead of 50"...