This is the mail archive of the
ecos-discuss@sources.redhat.com
mailing list for the eCos project.
Re: amount of cpu state to be saved for exceptions
- From: sandeep <shimple0 at yahoo dot com>
- To: ecos-discuss at ecos dot sourceware dot org
- Date: Mon, 13 Dec 2004 15:16:04 +0530
- Subject: Re: [ECOS] amount of cpu state to be saved for exceptions
- References: <41BB1C0F.4060606@yahoo.com>
In __default_exception_handler, cyg_hal_exception_handler is passed the
address of the saved CPU state.
Since we do not know how an application-installed handler will use that
saved state (it might look at some or all of the saved registers), we
cannot drop any of the saves, even in a production kernel, whereas for
interrupts we could speed things up by not saving the callee-saved
registers.
Is the above right?
A related doubt I have: is it necessary to use the same
HAL_SavedRegisters structure for interrupts, exceptions and context
switches? Could a variation of the structure be used for each of them?
The following issues prompted the question.
The architecture I have worked on can save/restore two registers at a
time, but that requires the save/restore address to be 64-bit aligned.
Furthermore:
- during a context switch, only the callee-saved registers and minor
housekeeping information are saved (<50 bytes)
- for interrupts, all the general registers (though the callee-saved ones
could be omitted) and a bit of housekeeping information are saved (<150 bytes)
- for exceptions, everything saved for an interrupt plus some additional
information that is significant in size (300+ bytes)
Using the same structure for all three cases would consume considerable
stack space during context switches (so we already avoid the shared
structure there), and in any case interrupts and exceptions are infrequent.
Currently, there is no GDB support for this HAL.
sandeep
--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss