[SunRescue] PCI bus requirements
BrianMcCloskeycheckinmystyle at hotmail.com
Fri Aug 18 11:34:32 CDT 2000
I don't really understand how more arbitration queues is supposed to equal
better performance. Based on your explanation, it would seem that one would
want to avoid bridging at all costs. This scenario becomes even more
problematic with Intel architecture, because the processor(s), memory, and
I/O devices are all on a single shared bus as well, so that only one
communication operation among any of them can occur at a time. This
introduces more wait queues, and it is one reason Intel architecture has so
many interrupts. It is also why Sun introduced the crossbar switch and
multiple SBuses in their newer machines: all of those operations can occur
simultaneously at the full bandwidth of each peripheral card, so those
machines can definitely handle 12 10/100 Mbit channels at full capacity.
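To put rough numbers on the shared-bus vs. crossbar distinction, here is a
back-of-the-envelope sketch (the model and figures are my own illustration,
not taken from any Sun or Intel spec): on a shared bus, N cards time-slice
one channel, while a crossbar gives each card its own full-rate path.

```python
# Rough model of per-card bandwidth under two interconnect styles.
# All numbers are illustrative, not real hardware specifications.

def shared_bus_per_card(bus_mb_per_s, n_cards):
    """On a shared bus only one card transfers at a time, so each
    card effectively gets a time-sliced fraction of the bus."""
    return bus_mb_per_s / n_cards

def crossbar_per_card(link_mb_per_s, n_cards):
    """With a crossbar, each card has an independent path to memory,
    so every card can run at its full link rate simultaneously."""
    return link_mb_per_s

n = 12                 # twelve 10/100 ethernet channels
need = 100 / 8         # 100 Mbit/s = 12.5 MB/s needed per channel

print(shared_bus_per_card(132, n))   # 11.0 MB/s each: below 12.5, saturated
print(crossbar_per_card(need, n))    # 12.5 MB/s each: full capacity
```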
>From: Maxwell Spangler <maxwax at mindspring.com>
>Reply-To: rescue at sunhelp.org
>To: "'rescue at sunhelp.org'" <rescue at sunhelp.org>, Maxwell Spangler
><maxwax at mindspring.com>
>Subject: RE: [SunRescue] PCI bus requirements
>Date: Thu, 17 Aug 2000 14:18:27 -0400 (EDT)
>On Thu, 17 Aug 2000, Gregory Leblanc wrote:
> > > From: apotter at icsa.net [mailto:apotter at icsa.net]
> > > Sent: Thursday, August 17, 2000 7:10 AM
> > > My solution to this was a newer (also surplus) WinTel board,
> > > three PCI quad ethernet cards and OpenBSD (Linux didn't like
> > > the cards).
> > I just did the math real quick here.
> > 32-bits times 33MHz yields a throughput of 132MB/sec.
> > 12 ports at 100Mbit yields a throughput of 150MB/sec.
> > So, uhm, those had best not be all 100MBit ports. I doubt that you can
> > even get 100MBit from a standard P-II/III motherboard, probably much less
> > from older boards. Not that you'll be using that much bandwidth, but that's
> > certainly more than it can handle. That's probably why none of the
> > commercial switches/routers use PCI. :-)
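Greg's arithmetic above can be sanity-checked directly (pure unit
conversion, using 1 MB = 10^6 bytes as the original figures do):

```python
# PCI throughput: 32 bits wide x 33 MHz clock, 8 bits per byte.
pci_mb_per_s = 32 * 33e6 / 8 / 1e6        # 132.0 MB/s

# Twelve 100 Mbit/s ethernet ports, converted to megabytes per second.
ports_mb_per_s = 12 * 100e6 / 8 / 1e6     # 150.0 MB/s

print(pci_mb_per_s, ports_mb_per_s)
# Aggregate port demand exceeds the raw bus rate, as Greg computed.
assert ports_mb_per_s > pci_mb_per_s
```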
>This is a common mistake; I made it a couple weeks ago.
>PCI is a bus: a shared communication channel that multiple devices connect
>to so that they can all share it, but only one can use it at a time.
>So in the case above where someone is using multiple quad ethernet cards, one
>only has to ensure that the bandwidth requirements of a single card can be
>satisfied by the PCI bus. There will never be a case where two cards are
>sending/receiving data on the same PCI bus simultaneously. If one PCI device
>is using the bus, the others will just wait in queue.
>In addition to this, a single quad ethernet card might actually be the
>equivalent of four single ethernet cards connected to the primary PCI bus
>through a technique known as "bridging". Bridging allows two independent
>buses to be daisy-chained together to increase the number of devices that can
>be connected to the computer.
>Here's an example of a basic desktop PCI motherboard:
>CPU---|PCI Slot #1|---|PCI Slot #2|---|PCI Slot #3|--|PCI Slot #4|
>In this case, only one of the slots can talk on the bus at any given time,
>and the maximum number of PCI devices you can put in this computer is four,
>because you have four slots. To add more slots, a motherboard designer might
>hang additional slots behind a bridge, and the layout would look like this:
>CPU---|PCI Slot #1|---|PCI Slot #2|---|PCI Slot #3|--(PCI Slot #4)
>             PCI bus 0                                  ^^^^ |
>                                                 PCI bridge chip
>                                                             |
>                    PCI bus 1, Slot 1, "Slot 4" label on motherboard
>                    PCI bus 1, Slot 2, "Slot 5" label on motherboard
>                    PCI bus 1, Slot 3, "Slot 6" label on motherboard
>If the slot labeled "Slot 5" on the motherboard wants to send data to the
>CPU, it has to do this:
> * First, arbitrate for time on PCI bus #1. This might mean waiting for
>Slots 4 or 6 to finish.
> * Second, let the PCI bridge chip arbitrate for time on PCI bus 0. This
>might mean waiting for PCI slot 1, 2 or 3 to finish.
> * Finally, Slot 5 has control of PCI bus #1 and PCI bus #0, and can talk to
>the host CPU.
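The two-step arbitration above behaves like a pair of nested locks: the
device must win its local bus before the bridge can contend for the upstream
bus. A toy sketch of that idea in Python (my own model of the queueing
behavior, not how real PCI arbiters are implemented; the slot names come
from the diagram above):

```python
import threading

bus0 = threading.Lock()   # motherboard PCI bus (bus #0)
bus1 = threading.Lock()   # secondary bus behind the bridge (bus #1)

def transfer_from_slot5(data):
    """A device on bus 1 ("Slot 5") sending data to the host CPU."""
    with bus1:        # Step 1: win arbitration on the local bus,
                      #         possibly waiting for Slots 4 or 6.
        with bus0:    # Step 2: the bridge wins arbitration on bus 0,
                      #         possibly waiting for Slots 1, 2 or 3.
            # Step 3: both buses are held; the transfer proceeds.
            return f"sent {data} to CPU"

print(transfer_from_slot5("packet"))
```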
>The host CPU sees the entire PCI bus #1 as a single device hooked into PCI bus
>#0's slot 4. That's the magic of the bridge chip.
>Quad Ethernet cards that include bridge chips should follow this exact same
>layout. So "PCI bus 1" as shown above might just be a single Quad Ethernet
>card plugged into Slot 4 on the motherboard. When one of the 10/100 ethernet
>chips wants to send data, it arbitrates for time on the card's own PCI bus,
>then arbitrates for time on the motherboard's bus.
>Why I learned this:
>A few weeks ago my company bought a Compaq ML350, a contemporary Pentium600
>class system. It has a motherboard with CPU, RAM, keyboard, mouse, floppy and
>PCI slots on it. A *SINGLE 32-bit, 33Mhz PCI SLOT* contained a
>"multifunction" card that held 10/100 networking, VGA video and two 80MBps
>SCSI Ultra2 i/o channels.
>I added up the bandwidth requirements of just the two SCSI channels
>(80x2=160MBps) and thought it would be too much for the 132MBps 32-bit,
>33Mhz slot. Turns out that this card used bridging. Of the four or so devices
>on the card (net, video, disk, disk), only ONE could talk through its single
>PCI slot connector to the host CPU at one time. So because the maximum
>requirement is 80MBps for a single SCSI i/o channel, a single 32bit/33Mhz PCI
>slot is sufficient.
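Maxwell's conclusion generalizes into a simple rule: behind a bridge,
compare the bus capacity against the *peak* demand of any single device, not
the sum. A sketch of that check (the per-device figures are illustrative,
loosely based on the card described above):

```python
# Bandwidth needs in MB/s for the devices behind the bridge on the
# multifunction card (illustrative figures; only the SCSI numbers
# come from the message above).
devices = {"ethernet": 12.5, "video": 30, "scsi_a": 80, "scsi_b": 80}

bus_capacity = 132        # one 32-bit/33MHz PCI slot, MB/s

# Wrong check: summing every device suggests the slot is overloaded.
print(sum(devices.values()) > bus_capacity)   # True  (202.5 > 132)

# Right check: only one device talks at a time, so compare the peak.
print(max(devices.values()) <= bus_capacity)  # True  (80 <= 132)
```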
>Disclaimer: I'm no EE or device driver writer, so my terminology might be
>incorrect, but I'm pretty confident in the concepts.
>Maxwell Spangler              "Don't take the penguin too seriously.
>Program Writer                 It's supposed to be kind of goofy and fun,
>Greenbelt, Maryland, U.S.A.    that's the whole point" - Linus Torvalds
>Rescue maillist - Rescue at sunhelp.org