This is the mail archive of the
gsl-discuss@sources.redhat.com
mailing list for the GSL project.
Re: a question about GSL matrix
- To: Jianhua Zhu <jianhua at bu dot edu>
- Subject: Re: a question about GSL matrix
- From: Brian Gough <bjg at network-theory dot co dot uk>
- Date: Sun, 2 Jul 2000 11:55:54 +0100 (BST)
- Cc: gsl-discuss at sourceware dot cygnus dot com
- References: <Pine.A41.4.10.10006302126440.133204-100000@acs3.bu.edu>
- Reply-To: gsl-discuss at sourceware dot cygnus dot com
Good question. GSL's matrix and vector types store their data in a single
contiguous block of memory (a plain type *) for compatibility with BLAS.
We follow the NAG and IMSL libraries in this respect. The additional size,
stride and tda (trailing dimension) parameters also come from BLAS; they
allow arbitrary submatrix and subvector views to be passed by reference to
functions as gsl_matrix and gsl_vector types in their own right.
For efficiency, the goal is for computationally intensive matrix and
vector algorithms to be built on optimized BLAS operations rather than
on functions like gsl_matrix_get which access individual elements.
regards
Brian Gough
Jianhua Zhu writes:
> Dear GSL Team,
>
> I am wondering why GSL chose such a complicated representation for
> matrices. As an engineering student, I have some experience with
> several numerical libraries. They simply use the type * and type **
> for vectors and matrices respectively. GSL's scheme looks professional.
> But I cannot see the advantage. In addition, I feel GSL does not pay
> enough attention to efficiency. To read and write a cell in a
> matrix, at least one multiplication and three additions are needed (as
> shown in the frequently called function gsl_matrix_get()).
>
> Sincerely,
> Jianhua