jmcgrath at qntm.com
Wed May 24 17:59:35 PDT 1995
Reply to: RE>>FASTER SCSI
We have talked about extending addressing to > 16 devices
(although it appears to be a low priority for most people).
One way is to change arbitration to decouple the number of
devices from the number of data lines. This is straightforward,
but obviously creates a backward compatibility problem: all
devices must know the new protocol (although there may be some
trick to support some legacy devices).
The other is to use a SCSI bridge to take a SCSI ID and map
it into multiple LUNs, each actually a SCSI device on the other
side of the bridge on a different bus. Here current drives can
be used, but the bridge adds cost and (more importantly) is pretty
complex (it has to handle arbitration, selection, and messaging).
At the end of the day, multiple SCSI protocol chips on some
backplane might be easier and cheaper. In that case the
bandwidth/overhead concerns focus on the backplane bus (e.g. PCI).
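A minimal sketch of the bridge-style mapping described above. The layout
(one host-side ID fanning out to 8 LUNs across two internal buses of four
targets each) and all names are invented for illustration; a real bridge
would also have to proxy arbitration, selection, and messaging, which is
where the complexity lives.

```python
# Hypothetical address map for a SCSI bridge: the bridge occupies one
# SCSI ID on the host bus and presents each of its 8 LUNs as a real
# drive on one of two buses behind it. Layout is an assumption.

BRIDGE_ID = 5  # assumed SCSI ID the bridge occupies on the host bus

def route(lun):
    """Map a host-side LUN to (internal_bus, target_id) behind the bridge."""
    if not 0 <= lun < 8:
        raise ValueError("LUN out of range for 8-LUN SCSI-2 addressing")
    bus = lun // 4      # two internal buses, four targets each (assumption)
    target = lun % 4
    return bus, target

print(route(0), route(5))
```

The host still sees plain SCSI-2 addressing; only the bridge knows the
second-level topology, which is why current drives work unmodified.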
Date: 5/23/95 4:43 PM
To: Jim McGrath
From: stevew at abbott.SanDiegoCA.ATTGI
Message-Id: <199505232328.RAA20932 at Symbios.COM>
From: stevew at abbott.SanDiegoCA.ATTGIS.COM
Subject: Re: FASTER SCSI
To: scsi at wichitaks.ncr.com
Date: Tue, 23 May 1995 16:26:34 -0700 (PDT)
From: "Stephen Wall" <stevew at abbott>
In-Reply-To: <n1410895454.75253 at qm_smtpgw.qntm.com> from "Jim McGrath" at May
23, 95 11:10:31 am
X-Mailer: ELM [version 2.4 PL24]
> Reply to: RE>FASTER SCSI
> I did a similar analysis for the interface forum a while back and got
> similar (although not identical) numbers. The bottom line is that
> overhead is NOT the right metric to use - bus saturation is instead.
> Adding the 4 Kbyte Fast-40 wide data transfer time to the bus overhead
> yields 65.6 us/4 K command. This implies that we can process
> 15243 such commands/second on a SCSI bus before saturation.
> Note that with 16 drives that is 942 IOs/drive before saturation, or
> 1.06 ms/command. While our solid state drives can match this, our
> electromechanical disk drives are lucky to do 240 IO/s on a good
> day - for a random transaction type environment the drives, not
> the bus, are the bottleneck.
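The quoted figures can be reproduced as follows. The 80 MB/sec rate
(Fast-40 at 40 MT/s, 16-bit wide) and the ~14.4 us of per-command bus
overhead are inferred from the quoted 65.6 us total, not stated in the
post; the per-drive number comes out near 953 rather than the quoted 942,
so the original likely assumed slightly higher overhead.

```python
# Reproducing the bus-saturation arithmetic quoted above.
# Assumption: Fast-40 wide = 40 MT/s x 2 bytes = 80 MB/s; the overhead
# is whatever remains of the quoted 65.6 us after data transfer.

BUS_RATE = 80e6                  # bytes/sec, Fast-40 wide (assumed)
BLOCK = 4096                     # bytes per command

xfer_us = BLOCK / BUS_RATE * 1e6  # 51.2 us of pure data transfer
total_us = 65.6                   # quoted us per 4 KB command
overhead_us = total_us - xfer_us  # ~14.4 us implied bus overhead

cmds_per_sec = 1e6 / total_us     # ~15244 commands/sec at saturation
per_drive = cmds_per_sec / 16     # ~953 IOs/drive with 16 drives

print(round(overhead_us, 1), round(cmds_per_sec), round(per_drive))
```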
Isn't this assuming one drive per SCSI ID? What about something like a
disk array where we can have multiple drives per LUN and multiple
LUNs per SCSI ID? Are the disks still the bottleneck?
What the overhead numbers tell me (someone correct me if I'm wrong)
is the maximum effective throughput I can expect from my 40 MB/sec bus
as a function of blocksize when the bus is saturated
(i.e. 40 MB/sec - (%overhead * 40 MB/sec)).
Most (non-SCSI) users here have a hard time when you explain to them
that doubling the bus link rate won't help as much as they think if they
are using small blocksizes (e.g. database records, kernel paging, backups...).
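A concrete illustration of that point, assuming a fixed per-command
overhead of 14.4 us (consistent with the 65.6 us per 4 KB command quoted
earlier in the thread); the helper name is invented.

```python
# Effective throughput vs. blocksize on a saturated bus with a fixed
# per-command overhead. bytes / (MB/s) conveniently comes out in us,
# since the 1e6 factors in MB and us cancel.

def effective_mb_per_sec(rate_mb, block_bytes, overhead_us=14.4):
    xfer_us = block_bytes / rate_mb
    return rate_mb * xfer_us / (xfer_us + overhead_us)

# Doubling the link rate helps little at small blocksizes:
for rate in (40, 80):
    print(rate,
          round(effective_mb_per_sec(rate, 512), 1),      # 512 B blocks
          round(effective_mb_per_sec(rate, 65536), 1))    # 64 KB blocks
```

At 64 KB blocks, doubling the link rate nearly doubles effective
throughput; at 512-byte blocks the overhead dominates and the gain is
only about 30%.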
Thanks for the post on this - the numbers are interesting!
Stephen Wall (619) 485-2700
AT&T Global Information Solutions stephen.wall at sandiegoCA.ATTGIS.COM
Parallel Systems Software Division
17095 Via del Campo
San Diego CA. 92127-1711