DiffSense Timing
Gene_Milligan at notes.seagate.com
Fri Nov 20 07:09:22 PST 1998
* From the T10 (formerly SCSI) Reflector (t10 at symbios.com), posted by:
* Gene_Milligan at notes.seagate.com
*
<<I think the maximum should be about 300ms.>>
OK by me. I wanted the 100 ms to be lower, and I urge our folks to be
well under 300, say 110. The issue, according to my gut, is thermal. It is
no sweat for devices that are tristated in a bus free mode to remain
tristated for 100 ms, or heaven forbid 300 ms (an OK number for what you
propose), but a device transmitting a gigabyte of data may get somewhat
warm.
I have not gone back to look, but I think there is an out: nothing
prevents a device from going unexpectedly bus free within the 100 ms, as
long as the device tristates but leaves the prior bus mode unchanged for
the next time it is not tristated within the minimum of 100 ms. But this
is not in the full spirit of noise filtering. The tradeoff here is whether
or not the DIFFSENS line is actually that noisy, and if it is, do the data
lines still achieve adequate error rates?
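The noise filtering being debated can be sketched as a simple debounce: a sensed mode change is honored only after the level has held for the full filter window. This is a hypothetical illustration of the idea, not anything from a SCSI specification; the class, mode names, and 100 ms constant are all assumptions for the sketch.

```python
FILTER_MS = 100  # minimum stable time before a sensed mode change is accepted

class DiffsensFilter:
    """Hypothetical DIFFSENS debounce: glitches shorter than FILTER_MS are ignored."""

    def __init__(self, initial_mode):
        self.mode = initial_mode        # last accepted (filtered) bus mode
        self.candidate = initial_mode   # most recently sensed level
        self.stable_since = 0           # time the current candidate was first seen

    def sample(self, now_ms, sensed_mode):
        """Feed one DIFFSENS sample; return the filtered mode."""
        if sensed_mode != self.candidate:
            # Level changed: restart the stability window.
            self.candidate = sensed_mode
            self.stable_since = now_ms
        elif sensed_mode != self.mode and now_ms - self.stable_since >= FILTER_MS:
            # Candidate has held for the full window: accept the new mode.
            self.mode = sensed_mode
        return self.mode
```

For example, a spike to single ended that ends before 100 ms elapses never changes the filtered mode, while a level that holds for the full window does.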
I presume the noisy case would be postulated as the environment with
high common mode noise, perhaps due to the longer cable runs. For these
cases the assumption is that LVD is required not simply because the highest
rates are specified only for LVD, but because differential is required at
any rate to reject the common mode offsets. But with the specified common
mode limits, I suspect DIFFSENS gives false indications well beyond 100 ms
or any other value. I have argued that the common mode specification is
wrong for SCSI (too large), but with only modest success, near failure (the
values were reduced from EIA 485 apparent values to ISO/IEC, more clearly
circuit test, values). The other side's position is not so much that the
values are not too large, but fear of changing them due to inadequate data
on what to change them to. I think the evidence that the d.c. common mode
voltage specifications are too large is the reality that DIFFSENS actually
does work in all the SCSI environments.
So while not meeting the spirit, I suspect going bus free is not a
serious problem, since the feared condition will not occur anyway as long
as a reasonable filter is employed.
But I started with the bottom line at the top. 300 ms is OK with me.
<<I have talked to a few people who said it was never intended that mode
transitions happen in a real system, it is only a stocking issue.>>
Yes, this was argued when the proposal came up that a mode change had
to be accompanied by an implied reset and renegotiation of synchronous data
rates for the changed mode. But the winning side was emphatic that they
wanted to be able to bring their LVD hummer down to a crawl when they
plugged in their old single ended CDROM.
<<Companies are shipping systems that expect hot plug to work, even
if it involves a mode change. >>
Yes, agreed. But it is a zener diode hot plug regarding Case 4.
Hot plugging like devices should not cause a mode change of the devices
previously working on the bus, and should usually not cause any errors in
an operation in progress on the bus. Plugging an LVD or multimode device
into a single ended operating bus should have essentially the same results.
But plugging a single ended device into an operating LVD bus will bring the
bus to an immediate error condition (the old CDROM case), without a 100 ms
delay, followed by an implied reset with a 100 ms delay and renegotiation
of synchronous data rates for the changed mode. In the latter case any LVD
only devices are not heard from again, and if that is the HBA, of course
no one is heard from again.
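The hot-plug cases above can be summarized as a small decision function. This is only my shorthand for the outcomes described in the paragraph; the function name, mode strings, and return values are illustrative assumptions, not spec terminology.

```python
def hotplug_outcome(bus_mode, plugged_device):
    """Hypothetical summary of hot-plug outcomes: returns (resulting bus mode, effect)."""
    # Like-for-like plug, or a multimode device: no mode change, and an
    # operation in progress should usually survive.
    if plugged_device == bus_mode or plugged_device == "multimode":
        return bus_mode, "no mode change; operations usually unaffected"
    # Single ended device forced onto an operating LVD bus: immediate error,
    # implied reset, renegotiation in SE mode. Any LVD-only devices on the
    # bus are never heard from again.
    if bus_mode == "LVD" and plugged_device == "SE":
        return "SE", "immediate error; implied reset; LVD-only devices lost"
    # LVD or multimode device plugged into a single ended operating bus
    # behaves essentially like the like-for-like case.
    return bus_mode, "no mode change; operations usually unaffected"
```

The asymmetry is the whole point: only the SE-into-LVD case drags the bus down, and it does so without waiting out the 100 ms filter.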
Finally, I just want to point out that the standard would need to be
clear about when the 300 ms (which is OK by me) starts. For hot plugging,
the mode change, if any, occurs 100 ms after the DIFFSENS change is sensed.
But for the device that is plugged in, some time is consumed while it
enables power, resets the resetables, does a sanity check to determine it
is eligible to participate, waits (primed and eager) a first 100 ms, then
makes an initial DIFFSENS determination, considers DIFFSENS for 100 ms to
see that it stays in one mode (what about the alleged noise during that
100 ms?), and finally does its SCSI thing. As a minimum this takes 200 ms
plus a skosch. Depending upon how the 300 ms is measured, the minimum time
for a worst case device could be 600 ms plus a skosch. (Do devices already
powered up on the bus treat the implied reset as an event to go through
this whole sequence, or do they become immediately available for
arbitration/selection with the reset?)
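The arithmetic behind the 200 ms and 600 ms figures is just the filter time counted twice: once for the initial wait and once for the stability observation. A sketch, assuming the "skosch" (power enable, resets, sanity checks) is unbounded and left out:

```python
def min_ready_time_ms(filter_ms):
    """Minimum time from plug-in to SCSI participation, excluding the skosch."""
    first_wait = filter_ms   # initial wait before the first DIFFSENS determination
    observe = filter_ms      # watch DIFFSENS stay in one mode for the filter time
    return first_wait + observe

assert min_ready_time_ms(100) == 200   # "200 ms plus a skosch"
assert min_ready_time_ms(300) == 600   # worst case if 300 ms is counted twice
```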
But it is only recommended that the maximum reset to selection time be
250 ms (another of my failures; I recently tried to make that mandatory).
I guess I was wrong saying the 300 ms is OK by me, since I am easily
influenced by recommendations. From all this I conclude that the maximum
should be 124 ms. I will eagerly watch to see if those that cannot adhere
to the recommendation insist that the maximum can only be a recommendation.
Remember that we should only specify requirements that can be observed on
the bus, and the only way to observe compliance with this maximum time is
to see when selection and/or arbitration participation occur.
Well, 300 ms is OK by me if the host treats it simply as the real
mandatory requirement for reset to selection, and if the SCSI devices are
designing with their own DIFFSENS maximum filter time to meet the reset to
selection time. (Does the HBA pass the mode change to the driver, or is
the HBA doing the timing and buffering?)
Gene
*
* For T10 Reflector information, send a message with
* 'info t10' (no quotes) in the message body to majordomo at symbios.com