Duplicate QTAG detection

Kurt Chan kc at core.rose.hp.com
Thu Apr 14 17:07:57 PDT 1994


| I have the same concern about making the target responsible for
| checking for duplicate queue tags.  Indeed, the number of possible
| tag duplications is on the order Steve pointed out -- 65,280 cases.
| There are a few ways to verify tag duplication:
| 
| 1. Firmware:  This verification requires firmware to search the
| 65,280 possibilities to find a duplicate.  With performance in
| mind, the target is forced to implement more MIPS, a higher-cost
| CPU.  Even then, it is not clear to me what we are buying.
| 
| 2. Silicon:  Everything is possible given enough cost and die size.
| My study shows that a significant increase in die size is required
| for this implementation (if it is even feasible for 65,280 cases).
| A RAM-based tag table has to be constructed for the hardware (or
| firmware) to check against.  Outrageous, to me.
| 
| Any other thought?
| 
| Joe Chen
| Cirrus Logic, Inc.
| (510) 226-2101

Joe,

HP has lobbied against the requirement to check for redundant QTAGs
for some time now, and the best our efforts have achieved seems to be
deferral to other standards.  I agree with everything Steve said, and
will add to the argument a perspective from Fibre Channel.
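
As an aside on mechanism: for scale, the RAM-based tag table you
describe works out to 8 Kbytes if the 65,280 cases are indexed as a
16-bit value (initiator plus tag) - trivial in host memory, but real
die area in silicon, which is your point.  A rough sketch in C
follows; the 16-bit index and the routine names are my own
illustration, not a proposal:

    #define TAG_SPACE 65536u              /* 16-bit (initiator, tag) index */

    static unsigned char tag_map[TAG_SPACE / 8];   /* 8 Kbytes of RAM */

    /* Mark a tag in use; return nonzero if it was already in use,
     * i.e. a duplicate QTAG was received. */
    static int tag_mark(unsigned idx)
    {
        unsigned char mask = (unsigned char)(1u << (idx & 7));
        int dup = (tag_map[idx >> 3] & mask) != 0;

        tag_map[idx >> 3] |= mask;
        return dup;
    }

    /* Clear the tag when the command completes or is aborted. */
    static void tag_clear(unsigned idx)
    {
        tag_map[idx >> 3] &= (unsigned char)~(1u << (idx & 7));
    }

The per-command cost is a test-and-set, so to me the interesting
question is not whether the check can be built, but whether it should
be required at all.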

A similar controversy over the "correct" implementation of Fibre
Channel is going on behind the scenes.  The controversy involves how
thoroughly a Recipient should check CRC-protected fields for errors in
the header.  Since it is virtually impossible for header errors to be
introduced once the frame has been CRC-protected, the only way they
could be corrupted is as a result of misoperation - a defect in an
implementation (host, host adapter, fabric, or peripheral) that put
bad data into a good frame, or sent a frame to a destination it never
should have reached, etc.

At a Gbit, the cost of essentially performing protocol analysis is not
only extraordinarily high, but impractical.  In order to do a thorough
job of checking an FC header for even a moderate number of errors, SW
is needed - the job is too big for silicon today.  However, it is
almost impossible for SW to perform this analysis at a Gbit on a
frame-by-frame basis - at roughly 100 MB/s of payload after 8b/10b
encoding, a full 2 Kbyte frame arrives essentially every 20us.  Two
solutions have therefore evolved (contrasted in the sketch after the
list):

 1.  a HW solution which only does minimal checking but which can
     acknowledge individual frames at link rate continuously, and

 2.  a SW solution which does more thorough checking, but cannot
     accept frames at sustained link rate without flow controlling
     inbound traffic in some other fashion, or without checking only
     every N frames.
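
To make the contrast concrete, here is a rough sketch in C.  The
header structure is a simplified view of the FC-PH frame header, and
the split between the two routines is my own illustration:

    /* Simplified FC-PH frame header (24-bit fields widened). */
    struct fc_header {
        unsigned char  r_ctl;
        unsigned long  d_id;      /* 24-bit destination address */
        unsigned long  s_id;      /* 24-bit source address      */
        unsigned char  type;
        unsigned long  f_ctl;     /* 24 bits of frame control   */
        unsigned char  seq_id;
        unsigned short seq_cnt;
        unsigned short ox_id;
        unsigned short rx_id;
    };

    /* Per-exchange context held by the Recipient. */
    struct exchange {
        unsigned short ox_id;
        unsigned long  peer_id;
        unsigned short expected_seq_cnt;
    };

    /* (1) HW-style check: cheap enough to run on every frame at
     * link rate; the CRC has already been verified by link HW.  */
    static int hw_check(const struct fc_header *h, unsigned long my_id,
                        int crc_ok)
    {
        return crc_ok && h->d_id == my_id;
    }

    /* (2) SW-style check: validates the header against exchange
     * state.  A thorough version would also walk R_CTL, TYPE, and
     * F_CTL for consistency - too much work every 20us.          */
    static int sw_check(const struct fc_header *h,
                        const struct exchange *x)
    {
        return h->ox_id   == x->ox_id
            && h->s_id    == x->peer_id
            && h->seq_cnt == x->expected_seq_cnt;
    }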

HP's position is that high speed interfaces need to be trusted
interfaces in order to provide the value they were intended to
provide.  Rather than assume a network which can inject bad
payload/headers or misroute data and thus design a cumbersome solution
around that architecture, we feel it would be better to limit the
scope of the network to the point where it CAN be trusted, and pay as
you go for all the other scenarios where you MUST support traffic
across an unreliable network (e.g., the wide area).

I'm told that the original requirement for the Target to check for
redundant QTAGs was the result of a fear of defective SW that might
find its way into the field; rather than repair the defects or
jeopardize the data, the architecture was changed to check the areas
of the protocol most likely to be defective (in this case, queuing SW).

The revelation of the 90s should be that this paradigm cannot survive
as we move towards higher performance interfaces.  While channel folks
often like to poke fun at networking stacks for their redundancy,
designing a check for defective SW into the performance path of a
channel is exactly the same concept.

This is NOT to say that we shouldn't put appropriate checking into our
devices for errors that WE EXPECT to occur in normal operation (like
bit errors).  However, I don't consider a redundant QTAG to be
something we should EXPECT to happen (and therefore have to architect
a check for).  Typically, these checks are lobbied for out of
desperation by developers who have not planned for or executed
rigorous test and qualification cycles.  My response is that the
checking should therefore be made OPTIONAL, and only those who feel
they are operating in an untrusted environment need pay the penalties.
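
In code terms, that is one conditional in the command-receipt path,
not a change to the architecture.  A sketch, reusing the tag_mark()
routine above (the configuration flag is again my own invention):

    /* Per-target configuration: duplicate QTAG checking disabled
     * unless the environment is untrusted.                       */
    struct target_cfg {
        int check_dup_tags;
    };

    /* Return nonzero if the command must be rejected as a
     * duplicate; what exception the target then reports is a
     * separate (standards) question.                             */
    static int on_tagged_command(const struct target_cfg *cfg,
                                 unsigned tag_idx)
    {
        if (!cfg->check_dup_tags)
            return 0;              /* trusted: no table, no penalty */
        return tag_mark(tag_idx);  /* untrusted: 8K table + test    */
    }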

Regards,

Kurt Chan       Hewlett-Packard, IND-SIL         916-785-5621
                8000 Foothills Blvd, MS R5NF     916-785-2875 fax
                Roseville, CA 95747              kc at core.rose.hp.com 
