Re: RedBoot gets() problems


On Fri, Mar 02, 2001 at 09:48:06AM -0700, Gary Thomas wrote:

> On 02-Mar-2001 Grant Edwards wrote:
> >> > I think I'm going to have to re-design the input scheme so that
> >> > Redboot still responds to the network and to all ports while in
> >> > the "middle" of reading an input line.
> >> 
> >> I'm not convinced that this is the right thing to do.  Maybe
> >> the check for network packets is OK (but I ruled out doing it
> >> all the time because of overhead costs),
> > 
> > I'm not sure what you mean by "overhead costs".  Are you
> > concerned about not handling characters fast enough once they
> > start to arrive?  The minimum inter-character gap is already
> > defined by the length of time it takes to do a network poll.
> 
> I'm mostly concerned about the cost of checking the network
> interface for data. This involves seeing if any packets have
> arrived, processing them if they have and then checking to see
> if a request to make a Telnet/TCP connection has been made.
> This is all quite expensive

Right.  But it's not the absolute cost, only the opportunity
cost that counts.  It only matters if there's something else
that we could be doing but can't do because we're processing
network traffic.

> and should not be incurred for every input character; hence the
> choice to make such a check only when the "console" port is
> idle.

That cost is already incurred.  If the gap between rx
characters is smaller than the time it takes to do a network
poll, then the initial characters of a command can be lost.  If
I send a command line to the board at 57600 baud, I generally
lose 2-3 characters, because rx characters are only processed
between network polls.
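
(To put a number on it: with 8N1 framing a character occupies 10
bit times, so at 57600 baud one arrives about every 10/57600 s,
i.e. roughly 174 us.  Losing 2-3 characters per line suggests
each network poll takes somewhere in the 350-500 us range; that
is a back-of-the-envelope guess on my part, not a measurement.)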

[I could probably get around that problem by setting up the DMA
controller to handle incoming serial data, but that's too
complicated.  This problem would also be alleviated by UARTs
with FIFOs.]

Ignoring the network while reading a line isn't really gaining
me anything: I can't tolerate rx characters arriving any faster
than one per network poll anyway, so checking the network while
reading a line wouldn't impose any additional restriction on
baud rate.
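
Concretely, the sort of loop I have in mind looks roughly like
this (a sketch only; net_poll() and serial_getc_nonblock() are
made-up names standing in for whatever the real RedBoot hooks
turn out to be):

    /* Sketch: read a command line while still servicing the network.
       net_poll() and serial_getc_nonblock() are placeholder names,
       not actual RedBoot calls. */
    extern void net_poll(void);             /* process any pending packets */
    extern int  serial_getc_nonblock(void); /* -1 if no character waiting */

    static int
    gets_with_net(char *buf, int len)
    {
        int i = 0;

        while (i < len - 1) {
            int c;

            net_poll();                 /* network serviced between chars */
            c = serial_getc_nonblock();
            if (c < 0)
                continue;               /* nothing yet; keep polling */
            if (c == '\r' || c == '\n')
                break;                  /* end of line */
            buf[i++] = (char)c;
        }
        buf[i] = '\0';
        return i;
    }

The tolerable inter-character gap is still one net_poll() per
character, but as argued above, that restriction exists already.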

> One change which might help (wrt network packets) is to treat
> all characters the same, i.e. with a timeout; that would let
> you check for network activity while input was being received
> on a serial port.  I think the code would get very upset if a
> Telnet/TCP connection arrived while a command was being
> entered, though, which is another reason to handle it only at
> the pure idle point, when no [serial] characters have arrived
> at all.

I can see how the current scheme works well if there are FIFOs
in the UARTs -- we can tolerate the initial read delay due to
network polling (the FIFO starts to fill), but once we notice
characters are coming in, we stop polling the network until we
see the EOL.
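
In other words, the current behaviour amounts to something like
this (again with invented names, not the actual RedBoot calls):

    extern int  serial_char_ready(void);  /* placeholder: nonzero if a
                                             character is waiting */
    extern void net_poll(void);           /* placeholder, as above */

    static void
    wait_for_console(void)
    {
        /* Current scheme as I understand it: poll the network only
           while the console is idle. */
        while (!serial_char_ready())
            net_poll();
        /* A character has arrived: from here the line is read with
           no network polling until EOL, relying on the UART FIFO to
           absorb whatever comes in meanwhile. */
    }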

This reflects (I think) the assumption that serial port traffic
is more important than network traffic.  For me, however,
handling network traffic is the top priority. I can tolerate
the serial ports going dead occasionally, but if the network is
ignored, I'm sunk.

-- 
Grant Edwards
grante@visi.com

