Category Archives: Octave

Octave – Vectorization II – 107


Copying here, continuing from here.

Broadcasting
Note to self: in other languages, broadcasting means something else.
Broadcasting refers to how Octave binary operators and functions behave when their matrix or array operands or arguments differ in size. […] Octave […] automatically broadcasts vectors, matrices, and arrays when using elementwise binary operators and functions. Broadly speaking, smaller arrays are “broadcast” across the larger one, until they have a compatible shape. The rule is that corresponding array dimensions must either

  1. be equal, or
  2. one of them must be 1.

In case all dimensions are equal, no broadcasting occurs and ordinary element-by-element arithmetic takes place. For arrays of higher dimensions, if the number of dimensions isn’t the same, then missing trailing dimensions are treated as 1. When one of the dimensions is 1, the array with that singleton dimension gets copied along that dimension until it matches the dimension of the other array. For example, consider

x = [1 2 3;
     4 5 6;
     7 8 9];

y = [10 20 30];

x + y

Without broadcasting, x + y would be an error because the dimensions do not agree. However, with broadcasting it is as if the following operation were performed:

x = [1 2 3
     4 5 6
     7 8 9];

y = [10 20 30
     10 20 30
     10 20 30];

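Checking in a session, the broadcast sum gives the ordinary elementwise result:

x + y
ans =

   11   22   33
   14   25   36
   17   28   39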

That is, the smaller array of size [1 3] gets copied along the singleton dimension (the number of rows) until it is [3 3]. No actual copying takes place, however. The internal implementation reuses elements along the necessary dimension in order to achieve the desired effect without copying in memory.

Both arrays can be broadcast across each other, for example, all pairwise differences of the elements of a vector with itself:

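For instance, with a small vector of my own choosing:

y = [1 2 3];
y - y'
ans =

   0   1   2
  -1   0   1
  -2  -1   0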

Here the vectors of size [1 3] and [3 1] both get broadcast into matrices of size [3 3] before ordinary matrix subtraction takes place.
A special case of broadcasting that may be familiar is when all dimensions of the array being broadcast are 1, i.e., the array is a scalar. Thus for example, operations like x - 42 and max (x, 2) are basic examples of broadcasting.

For a higher-dimensional example, suppose img is an RGB image of size [m n 3] and we wish to multiply each color by a different scalar. The following code accomplishes this with broadcasting,

img .*= permute ([0.8, 0.9, 1.2], [1, 3, 2]);

Note the usage of permute to match the dimensions of the [0.8, 0.9, 1.2] vector with img.

For functions that are not written with broadcasting semantics, bsxfun can be useful for coercing them to broadcast.

Built-in Function: bsxfun (f, A, B)
The binary singleton expansion function performs broadcasting, that is, it applies a binary function f element-by-element to two array arguments A and B, and expands as necessary singleton dimensions in either input argument.
f is a function handle, inline function, or string containing the name of the function to evaluate. The function f must be capable of accepting two column-vector arguments of equal length, or one column vector argument and a scalar.
The dimensions of A and B must be equal or singleton. The singleton dimensions of the arrays will be expanded to the same dimensionality as the other array.

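As a quick sketch, bsxfun can redo the earlier broadcast sum explicitly:

x = [1 2 3; 4 5 6; 7 8 9];
y = [10 20 30];
bsxfun (@plus, x, y)    # same result as x + y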

Broadcasting is only applied if either of the two broadcasting conditions holds. As usual, however, broadcasting does not apply when two dimensions differ and neither is 1:

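A minimal example of the failing case (the sizes are my choice; the message text may vary slightly between versions):

x = ones (3, 2);
y = ones (3, 3);
x + y
error: operator +: nonconformant arguments (op1 is 3x2, op2 is 3x3)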

Besides common arithmetic operations, several functions of two arguments also broadcast. The full list of functions and operators that broadcast is

      plus      +  .+
      minus     -  .-
      times     .*
      rdivide   ./
      ldivide   .\
      power     .^  .**
      lt        <
      le        <=
      eq        ==
      gt        >
      ge        >=
      ne        !=  ~=
      and       &
      or        |
      atan2
      hypot
      max
      min
      mod
      rem
      xor

      +=  -=  .+=  .-=  .*=  ./=  .\=  .^=  .**=  &=  |=

Beware of resorting to broadcasting if a simpler operation will suffice. For matrices a and b, consider the following:

c = sum (permute (a, [1, 3, 2]) .* permute (b, [3, 2, 1]), 3);

This operation broadcasts the two matrices with permuted dimensions across each other during elementwise multiplication in order to obtain a larger 3-D array, and this array is then summed along the third dimension. A moment of thought will prove that this operation is simply the much faster ordinary matrix multiplication, c = a*b;.
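
A quick check of that claim, with sizes of my own choosing (timings vary by machine):

a = rand (200);
b = rand (200);
c1 = sum (permute (a, [1, 3, 2]) .* permute (b, [3, 2, 1]), 3);
c2 = a * b;
norm (c1 - c2)    # negligibly small: the two computations agree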

A note on terminology: “broadcasting” is the term popularized by the Numpy numerical environment in the Python programming language. In other programming languages and environments, broadcasting may also be known as binary singleton expansion (BSX, in MATLAB, and the origin of the name of the bsxfun function), recycling (R programming language), single-instruction multiple data (SIMD), or replication. Ah! that's exactly what I was saying 😉

Broadcasting and legacy code
Yesss! I don't know how to translate it; it would be something like “inheritance” or “how it used to be done”.
The new broadcasting semantics almost never affect code that worked in previous versions of Octave. Consequently, all code inherited from MATLAB that worked in previous versions of Octave should still work without change in Octave. The only exception is code such as

try
  c = a.*b;
catch
  c = a.*a;
end_try_catch

that may have relied on matrices of different sizes producing an error. Because such an operation is now valid Octave syntax, this will no longer produce an error. Instead, the following code should be used:

if (isequal (size (a), size (b)))
  c = a .* b;
else
  c = a .* a;
endif

Doubts; who knows whether this will need looking into 😦
:mrgreen:

Octave – Vectorization I – 106


A new chapter, here, while I carry on from here.

Vectorizing for speed

Vectorization is a programming technique that uses vector operations instead of element-by-element loop-based operations. Besides frequently producing more succinct Octave code, vectorization also allows for better optimization in the subsequent implementation. The optimizations may occur either in Octave’s own Fortran, C, or C++ internal implementation, or even at a lower level depending on the compiler and external numerical libraries used to build Octave. The ultimate goal is to make use of your hardware’s vector instructions if possible or to perform other optimizations in software.
Vectorization is not a concept unique to Octave, but it is particularly important because Octave is a matrix-oriented language. Vectorized Octave code will see a dramatic speed up (10X–100X) in most cases.
This chapter discusses vectorization and other techniques for writing faster code.

Ready to move on here.

Basic vectorization
To a very good first approximation, the goal in vectorization is to write code that avoids loops and uses whole-array operations. As a trivial example, consider

for i = 1:n
  for j = 1:m
    c(i,j) = a(i,j) + b(i,j);
  endfor
endfor

compared to the much simpler

c = a + b;

This isn’t merely easier to write; it is also internally much easier to optimize. Octave delegates this operation to an underlying implementation which, among other optimizations, may use special vector hardware instructions or could conceivably even perform the additions in parallel. In general, if the code is vectorized, the underlying implementation has more freedom about the assumptions it can make in order to achieve faster execution.
This is especially important for loops with “cheap” bodies. Often it suffices to vectorize just the innermost loop to get acceptable performance. A general rule of thumb is that the “order” of the vectorized body should be greater or equal to the “order” of the enclosing loop.
As a less trivial example, instead of

for i = 1:n-1
  a(i) = b(i+1) - b(i);
endfor

write

a = b(2:n) - b(1:n-1);

This shows an important general concept about using arrays for indexing instead of looping over an index variable. See Index Expressions. Also use boolean indexing generously. If a condition needs to be tested, this condition can also be written as a boolean index. For instance, instead of

for i = 1:n
  if (a(i) > 5)
    a(i) -= 20;
  endif
endfor

write

a(a>5) -= 20;

which exploits the fact that a > 5 produces a boolean index.

Use elementwise vector operators whenever possible to avoid looping (operators like .* and .^). See Arithmetic Ops. For simple inline functions, the vectorize function can do this automatically.

Built-in Function: vectorize (fun)
Create a vectorized version of the inline function fun by replacing all occurrences of *, /, etc., with .*, ./, etc.
This may be useful, for example, when using inline functions with numerical integration or optimization where a vector-valued function is expected.

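A minimal sketch of the kind of use the manual has in mind:

fcn = vectorize (inline ("x^2 - 1"));   # becomes f(x) = x.^2 - 1
quadv (fcn, 0, 3)                       # integrates x^2 - 1 over [0, 3], giving 6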

Note: the mysterious quadv is here.

Also exploit broadcasting in these elementwise operators both to avoid looping and unnecessary intermediate memory allocations. See Broadcasting.
Use built-in and library functions if possible. Built-in and compiled functions are very fast. Even with an m-file library function, chances are good that it is already optimized, or will be optimized more in a future release.

For instance, even better than a = b(2:n) - b(1:n-1); is a = diff (b);.

Most Octave functions are written with vector and array arguments in mind. If you find yourself writing a loop with a very simple operation, chances are that such a function already exists.
A list of vectorized functions follows; I won't copy it.

:mrgreen:

Octave – Linear Algebra – IX – 105


Continuing from here, copying here.

Specialized solvers for matrices

Function File: x = bicg (A, b, rtol, maxit, M1, M2, x0)
Function File: x = bicg (A, b, rtol, maxit, P)
Function File: [x, flag, relres, iter, resvec] = bicg (A, b, ...)

Solve A x = b using the Bi-conjugate gradient iterative method.
rtol is the relative tolerance; if not given or set to [], the default value 1e-6 is used.
maxit is the maximum number of outer iterations; if not given or set to [], the default value min (20, numel (b)) is used.
x0 is the initial guess; if not given or set to [], the default value zeros (size (b)) is used.
A can be passed as a matrix or as a function handle or inline function f such that f(x, "notransp") = A*x and f(x, "transp") = A'*x.
The preconditioner P is given as P = M1 * M2. Both M1 and M2 can be passed as a matrix or as a function handle or inline function g such that g(x, "notransp") = M1 \ x or g(x, "notransp") = M2 \ x and g(x, "transp") = M1' \ x or g(x, "transp") = M2' \ x.

If called with more than one output parameter

  • flag indicates the exit status:
    • 0: the iteration converged to within the chosen tolerance
    • 1: the maximum number of iterations was reached before convergence
    • 3: the algorithm reached stagnation
    (the value 2 is unused but skipped for MATLAB compatibility).
  • relres is the final value of the relative residual.
  • iter is the number of iterations performed.
  • resvec is a vector containing the relative residual at each iteration.

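A minimal sketch of a call (the test matrix and tolerances are my own assumptions):

n = 100;
A = 4 * speye (n) + sprandn (n, n, 0.05);   # sparse, strongly diagonal test matrix
b = A * ones (n, 1);                        # so the exact solution is all ones
[x, flag, relres, iter] = bicg (A, b, 1e-8, 50);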

Function File: x = bicgstab (A, b, rtol, maxit, M1, M2, x0)
Function File: x = bicgstab (A, b, rtol, maxit, P)
Function File: [x, flag, relres, iter, resvec] = bicgstab (A, b, ...)

Solve A x = b using the stabilized Bi-conjugate gradient iterative method.
See bicg for the options.


Function File: x = cgs (A, b, rtol, maxit, M1, M2, x0)
Function File: x = cgs (A, b, rtol, maxit, P)
Function File: [x, flag, relres, iter, resvec] = cgs (A, b, ...)

Solve A x = b, where A is a square matrix, using the Conjugate Gradients Squared method.
No example: for me it errors with sh: 1: most: not found.

Function File: x = gmres (A, b, m, rtol, maxit, M1, M2, x0)
Function File: x = gmres (A, b, m, rtol, maxit, P)
Function File: [x, flag, relres, iter, resvec] = gmres (...)

Solve A x = b using the Preconditioned GMRES iterative method with restart, a.k.a. PGMRES(m).
Here too I get the same error as before.

Function File: x = qmr (A, b, rtol, maxit, M1, M2, x0)
Function File: x = qmr (A, b, rtol, maxit, P)
Function File: [x, flag, relres, iter, resvec] = qmr (A, b, ...)

Solve A x = b using the Quasi-Minimal Residual iterative method (without look-ahead).
For me it errors here too. Should it ever be needed, there'd be some investigating to do.

Coming up next: a new chapter 😀
:mrgreen:

Octave – Linear Algebra – VIII – 104


New topic, here, continuing from here.

Functions of a matrix

Function File: expm (A)
Return the exponential of a matrix.
The matrix exponential is defined as the infinite Taylor series expm (A) = I + A + A^2/2! + A^3/3! + ....
However, the Taylor series is not the way to compute the matrix exponential; see Moler and Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, SIAM Review, 1978. This routine uses Ward’s diagonal Padé approximation method with three step preconditioning (SIAM Journal on Numerical Analysis, 1977). Diagonal Padé approximations are rational polynomials of matrices

[Dq(A)]^(-1) * Nq(A)

whose Taylor series matches the first 2q+1 terms of the Taylor series above; direct evaluation of the Taylor series (with the same preconditioning steps) may be desirable in lieu of the Padé approximation when Dq(A) is ill-conditioned.

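A tiny hand-checkable case: on a diagonal matrix, expm just exponentiates the diagonal entries:

expm (diag ([1 2]))
ans =

   2.7183        0
        0   7.3891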

Function File: s = logm (A)
Function File: s = logm (A, opt_iters)
Function File: [s, iters] = logm (...)

Compute the matrix logarithm of the square matrix A.
The implementation utilizes a Padé approximant and the identity logm (A) = 2^k * logm (A^(1 / 2^k)).
The optional input opt_iters is the maximum number of square roots to compute and defaults to 100.
The optional output iters is the number of square roots actually computed.

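A minimal sketch (my own matrix): logm undoes expm:

A = [1 0; 0 2];
logm (expm (A))    # recovers A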

Built-in Function: s = sqrtm (A)
Built-in Function: [s, error_estimate] = sqrtm (A)

Compute the matrix square root of the square matrix A.
Ref: N.J. Higham. A New sqrtm for MATLAB. Numerical Analysis Report No. 336, Manchester Centre for Computational Mathematics, Manchester, England, January 1999.

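Again a hand-checkable sketch:

sqrtm ([4 0; 0 9])
ans =

   2   0
   0   3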

Built-in Function: kron (A, B)
Built-in Function: kron (A1, A2, ...)

Form the Kronecker product of two or more matrices.
This is defined block by block as x = [ a(i,j)*b ].

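A small example of my own, easy to verify block by block:

kron ([1 2], eye (2))
ans =

   1   0   2   0
   0   1   0   2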

If there are more than two input arguments A1, A2, ..., An the Kronecker product is computed as kron (kron (A1, A2), ..., An). Since the Kronecker product is associative, this is well-defined.

Built-in Function: blkmm (A, B)
Compute products of matrix blocks.
The blocks are given as 2-dimensional subarrays of the arrays A, B. The size of A must have the form [m,k,...] and size of B must be [k,n,...]. The result is then of size [m,n,...] and is computed as follows:

for i = 1:prod (size (A)(3:end))
  C(:,:,i) = A(:,:,i) * B(:,:,i)
endfor


Built-in Function: X = syl (A, B, C)

Solve the Sylvester equation A X + X B = C using standard LAPACK subroutines.

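A check with diagonal matrices of my own choosing, easy to verify by hand:

A = [1 0; 0 2];
B = [3 0; 0 4];
C = [4 5; 5 6];
X = syl (A, B, C)    # gives ones (2, 2), since A*X + X*B == C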

:mrgreen:

Octave – Linear Algebra – VII – 103


Continuing from here; today I'm copying here.

Matrix factorizations – III

Built-in Function: lambda = qz (A, B)
Built-in Function: lambda = qz (A, B, opt)

QZ decomposition of the generalized eigenvalue problem (A x = s B x).

The options are too messy to copy; they're over there.

Function File: [aa, bb, q, z] = qzhess (A, B)
Compute the Hessenberg-triangular decomposition of the matrix pencil (A, B), returning aa = q * A * z, bb = q * B * z, with q and z orthogonal.

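A minimal sketch of a call, presumably along the lines of the manual's own example:

[aa, bb, q, z] = qzhess ([1, 2; 3, 4], [5, 6; 7, 8]);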

The Hessenberg-triangular decomposition is the first step in Moler and Stewart’s QZ decomposition algorithm.

Built-in Function: S = schur (A)
Built-in Function: S = schur (A, "real")
Built-in Function: S = schur (A, "complex")
Built-in Function: S = schur (A, opt)
Built-in Function: [U, S] = schur (...)

The Schur decomposition is defined as S = U' * A * U where U is a unitary matrix (U'* U is identity) and S is upper triangular. The eigenvalues of A (and S) are the diagonal elements of S. If the matrix A is real, then the real Schur decomposition is computed, in which the matrix U is orthogonal and S is block upper triangular with blocks of size at most 2 x 2 along the diagonal. The diagonal elements of S (or the eigenvalues of the 2 x 2 blocks, when appropriate) are the eigenvalues of A and S.
The default for real matrices is a real Schur decomposition. A complex decomposition may be forced by passing the flag "complex".
The eigenvalues are optionally ordered along the diagonal according to the value of opt. opt = "a" indicates that all eigenvalues with negative real parts should be moved to the leading block of S (used in are), opt = "d" indicates that all eigenvalues with magnitude less than one should be moved to the leading block of S (used in dare), and opt = "u", the default, indicates that no ordering of eigenvalues should occur. The leading k columns of U always span the A-invariant subspace corresponding to the k leading eigenvalues of S.
The Schur decomposition is used to compute eigenvalues of a square matrix, and has applications in the solution of algebraic Riccati equations in control (see are and dare).

Function File: [U, T] = rsf2csf (UR, TR)
Convert a real, upper quasi-triangular Schur form TR to a complex, upper triangular Schur form T.
Note that the following relations hold: UR * TR * UR' = U * T * U' and U' * U is the identity matrix I.
Note also that U and T are not unique.
Note: for me it gives an error, who knows; but then who knows if it's even needed 😦

Loadable Function: [UR, SR] = ordschur (U, S, select)
Reorders the real Schur factorization (U, S) obtained with the schur function, so that selected eigenvalues appear in the upper left diagonal blocks of the quasi triangular Schur matrix.
The logical vector select specifies the selected eigenvalues as they appear along S’s diagonal.


Built-in Function: s = svd (A)
Built-in Function: [U, S, V] = svd (A)
Built-in Function: [U, S, V] = svd (A, econ)

Compute the singular value decomposition of A: A = U*S*V'.
The function svd normally returns only the vector of singular values. When called with three return values, it computes U, S, and V.


If given a second argument, svd returns an economy-sized decomposition, eliminating the unnecessary rows or columns of U or V.
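
A minimal round-trip sketch (the matrix is my choice):

A = [1 2; 3 4; 5 6];
[U, S, V] = svd (A);
norm (A - U*S*V')    # ~ 0: the factors reconstruct A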

Built-in Function: val = svd_driver ()
Built-in Function: old_val = svd_driver (new_val)
Built-in Function: svd_driver (new_val, "local")

Query or set the underlying LAPACK driver used by svd.
Currently recognized values are "gesvd" and "gesdd". The default is "gesvd".
When called from inside a function with the "local" option, the variable is changed locally for the function and any subroutines it calls. The original variable value is restored when exiting the function.


Function File: [housv, beta, zer] = housh (x, j, z)
Compute Householder reflection vector housv to reflect x to be the j-th column of identity, i.e.,

(I - beta*housv*housv')x =  norm (x)*e(j) if x(j) < 0,
(I - beta*housv*housv')x = -norm (x)*e(j) if x(j) >= 0

Inputs:

  • x vector
  • j index into vector
  • z threshold for zero (usually should be the number 0)

Outputs (see Golub and Van Loan):

  • beta If beta = 0, then no reflection need be applied (zer set to 0)
  • housv Householder vector

Function File: [u, h, nu] = krylov (A, V, k, eps1, pflg)
Construct an orthogonal basis u of block Krylov subspace [v a*v a^2*v ... a^(k+1)*v] using Householder reflections to guard against loss of orthogonality.
If V is a vector, then h contains the Hessenberg matrix such that a*u == u*h+rk*ek', in which rk = a*u(:,k)-u*h(:,k), and ek' is the vector [0, 0, ..., 1] of length k. Otherwise, h is meaningless.
If V is a vector and k is greater than length (A) - 1, then h contains the Hessenberg matrix such that a*u == u*h.
The value of nu is the dimension of the span of the Krylov subspace (based on eps1).
If V is a vector and k is greater than m-1, then h contains the Hessenberg decomposition of A.
The optional parameter eps1 is the threshold for zero. The default value is 1e-12.
If the optional parameter pflg is nonzero, row pivoting is used to improve numerical behavior. The default value is 0.

Reference: A. Hodel, P. Misra, Partial Pivoting in the Computation of Krylov Subspaces of Large Sparse Systems, Proceedings of the 42nd IEEE Conference on Decision and Control, December 2003.

Come on, next time we change topic 😉
:mrgreen:

Octave – Linear Algebra – VI – 102


Still copying here, continuing from here.

Matrix factorizations – II

Built-in Function: H = hess (A)
Built-in Function: [P, H] = hess (A)

Compute the Hessenberg decomposition of the matrix A.
The Hessenberg decomposition is P * H * P' = A where P is a square unitary matrix (P' * P = I, using complex-conjugate transposition) and H is upper Hessenberg (H(i, j) = 0 for all i > j+1).

The Hessenberg decomposition is usually used as the first step in an eigenvalue computation, but has other applications as well (see Golub, Nash, and Van Loan, IEEE Transactions on Automatic Control, 1979).

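A quick sketch checking that the factors recombine (the matrix is mine):

A = magic (4);
[P, H] = hess (A);
norm (P*H*P' - A)    # ~ 0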

Built-in Function: [L, U] = lu (A)
Built-in Function: [L, U, P] = lu (A)
Built-in Function: [L, U, P, Q] = lu (S)
Built-in Function: [L, U, P, Q, R] = lu (S)
Built-in Function: [...] = lu (S, thres)
Built-in Function: y = lu (...)
Built-in Function: [...] = lu (..., "vector")

Compute the LU decomposition of A.
If A is full, subroutines from LAPACK are used; if A is sparse, then UMFPACK is used.
The result is returned in a permuted form, according to the optional return value P.


The matrix is not required to be square.
When called with two or three output arguments and a sparse input matrix, lu does not attempt to perform sparsity preserving column permutations. Called with a fourth output argument, the sparsity preserving column transformation Q is returned, such that P * A * Q = L * U.
Called with a fifth output argument and a sparse input matrix, lu attempts to use a scaling factor R on the input matrix such that P * (R \ A) * Q = L * U. This typically leads to a sparser and more stable factorization.
An additional input argument thres, that defines the pivoting threshold can be given. thres can be a scalar, in which case it defines the UMFPACK pivoting tolerance for both symmetric and unsymmetric cases. If thres is a 2-element vector, then the first element defines the pivoting tolerance for the unsymmetric UMFPACK pivoting strategy and the second for the symmetric strategy. By default, the values defined by spparms are used ([0.1, 0.001]).
Given the string argument "vector", lu returns the values of P and Q as vector values, such that for full matrix, A (P,:) = L * U, and R (P,:) * A (:, Q) = L * U.
With two output arguments, returns the permuted forms of the upper and lower triangular matrices, such that A = L * U. With one output argument y, then the matrix returned by the LAPACK routines is returned. If the input matrix is sparse then the matrix L is embedded into U to give a return value similar to the full case. For both full and sparse matrices, lu loses the permutation information.
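
A minimal sketch of the usual check, on a small matrix of my own:

A = [2 1; 6 4];
[L, U, P] = lu (A);
norm (P*A - L*U)    # ~ 0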

Built-in Function: [L, U] = luupdate (L, U, x, y)
Built-in Function: [L, U, P] = luupdate (L, U, P, x, y)

Given an LU factorization of a real or complex matrix A = L*U, L lower unit trapezoidal and U upper trapezoidal, return the LU factorization of A + x*y.’, where x and y are column vectors (rank-1 update) or matrices with equal number of columns (rank-k update).
Optionally, row-pivoted updating can be used by supplying a row permutation (pivoting) matrix P; in that case, an updated permutation matrix is returned. Note that if L, U, P is a pivoted LU factorization as obtained by lu: [L, U, P] = lu (A); then a factorization of A + x*y.' can be obtained either as [L1, U1] = luupdate (L, U, P*x, y) or [L1, U1, P1] = luupdate (L, U, P, x, y).

The first form uses the unpivoted algorithm, which is faster, but less stable. The second form uses a slower pivoted algorithm, which is more stable.
The matrix case is done as a sequence of rank-1 updates; thus, for large enough k, it will be both faster and more accurate to recompute the factorization from scratch.

Loadable Function: [Q, R, P] = qr (A)
Loadable Function: [Q, R, P] = qr (A, "0")
Loadable Function: [C, R] = qr (A, B)
Loadable Function: [C, R] = qr (A, B, "0")

Compute the QR factorization of A, using standard LAPACK subroutines.

The qr factorization has applications in the solution of least squares problems min norm(A x - b) for overdetermined systems of equations (i.e., A is a tall, thin matrix). The QR factorization is Q * R = A where Q is an orthogonal matrix and R is upper triangular.
If given a second argument of '0', qr returns an economy-sized QR factorization, omitting zero rows of R and the corresponding columns of Q.
If the matrix A is full, the permuted QR factorization [Q, R, P] = qr (A) forms the QR factorization such that the diagonal entries of R are decreasing in magnitude order.


The permuted QR factorization [Q, R, P] = qr (A) allows the construction of an orthogonal basis of span (A).
If the matrix A is sparse, then compute the sparse QR factorization of A, using CSPARSE. As the matrix Q is in general a full matrix, this function returns the Q-less factorization R of A, such that R = chol (A' * A).
If the final argument is the scalar 0 and the number of rows is larger than the number of columns, then an economy factorization is returned. That is, R will have only size (A,2) rows.
If an additional matrix B is supplied, then qr returns C, where C = Q' * B. This allows the least squares approximation of A \ B to be calculated as [C, R] = qr (A, B) and x = R \ C.
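
A least-squares sketch along those lines (the data is made up):

A = [1 1; 1 2; 1 3];   # overdetermined: 3 equations, 2 unknowns
b = [1; 2; 2];
[C, R] = qr (A, b);
x = R \ C              # same answer as A \ b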

Loadable Function: [Q1, R1] = qrupdate (Q, R, u, v)
Given a QR factorization of a real or complex matrix A = Q*R, Q unitary and R upper trapezoidal, return the QR factorization of A + u*v', where u and v are column vectors (rank-1 update) or matrices with equal number of columns (rank-k update). Notice that the latter case is done as a sequence of rank-1 updates; thus, for k large enough, it will be both faster and more accurate to recompute the factorization from scratch.
The QR factorization supplied may be either full (Q is square) or economized (R is square).

Loadable Function: [Q1, R1] = qrinsert (Q, R, j, x, orient)
Given a QR factorization of a real or complex matrix A = Q*R, Q unitary and R upper trapezoidal, return the QR factorization of [A(:,1:j-1) x A(:,j:n)], where x is a column vector to be inserted into A (if orient is "col"), or the QR factorization of [A(1:j-1,:); x; A(j:n,:)], where x is a row vector to be inserted into A (if orient is "row").
The default value of orient is "col". If orient is "col", x may be a matrix and j an index vector resulting in the QR factorization of a matrix B such that B(:,j) gives x and B(:,j) = [] gives A. Notice that the latter case is done as a sequence of k insertions; thus, for k large enough, it will be both faster and more accurate to recompute the factorization from scratch.
If orient is "col", the QR factorization supplied may be either full (Q is square) or economized (R is square).
If orient is "row", full factorization is needed.

Loadable Function: [Q1, R1] = qrdelete (Q, R, j, orient)
Given a QR factorization of a real or complex matrix A = Q*R, Q unitary and R upper trapezoidal, return the QR factorization of [A(:,1:j-1) A(:,j+1:n)], i.e., A with one column deleted (if orient is "col"), or the QR factorization of [A(1:j-1,:);A(j+1:n,:)], i.e., A with one row deleted (if orient is "row").
The default value of orient is "col".
If orient is "col", j may be an index vector resulting in the QR factorization of a matrix B such that A(:,j) = [] gives B. Notice that the latter case is done as a sequence of k deletions; thus, for k large enough, it will be both faster and more accurate to recompute the factorization from scratch.
If orient is "col", the QR factorization supplied may be either full (Q is square) or economized (R is square).
If orient is "row", full factorization is needed.

Loadable Function: [Q1, R1] = qrshift (Q, R, i, j)
Given a QR factorization of a real or complex matrix A = Q*R, Q unitary and R upper trapezoidal, return the QR factorization of A(:,p), where p is the permutation p = [1:i-1, shift(i:j, 1), j+1:n] if i < j or p = [1:j-1, shift(j:i,-1), i+1:n] if j < i.

Break 😉
:mrgreen:

Octave – Linear Algebra – V – 101


Copying here, continuing from here.

Matrix factorizations – I

Loadable Function: R = chol (A)
Loadable Function: [R, p] = chol (A)
Loadable Function: [R, p, Q] = chol (S)
Loadable Function: [R, p, Q] = chol (S, "vector")
Loadable Function: [L, ...] = chol (..., "lower")
Loadable Function: [L, ...] = chol (..., "upper")

Compute the Cholesky factor, R, of the symmetric positive definite matrix A.
The Cholesky factor is defined by R' * R = A.
Called with one output argument chol fails if A or S is not positive definite. With two or more output arguments p flags whether the matrix was positive definite and chol does not fail. A zero value indicates that the matrix was positive definite and R gives the factorization; otherwise p will have a positive value.
If called with 3 outputs then a sparsity preserving row/column permutation is applied to A prior to the factorization. That is, R is the factorization of A(Q, Q) such that R' * R = Q' * A * Q.
The sparsity preserving permutation is generally returned as a matrix. However, given the flag "vector", Q will be returned as a vector such that R' * R = A(Q, Q).
Called with either a sparse or full matrix and using the "lower" flag, chol returns the lower triangular factorization such that L * L' = A.
For full matrices, if the "lower" flag is set only the lower triangular part of the matrix is used for the factorization, otherwise the upper triangular part is used.
In general the lower triangular factorization is significantly faster for sparse matrices.

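A quick check on a small positive definite matrix of my own choosing:

A = [4 2; 2 3];
R = chol (A);
norm (R' * R - A)    # ~ 0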

Loadable Function: cholinv (A)
Compute the inverse of the symmetric positive definite matrix A using the Cholesky factorization.


Loadable Function: chol2inv (U)
Invert a symmetric, positive definite square matrix from its Cholesky decomposition, U.
Note that U should be an upper-triangular matrix with positive diagonal elements. chol2inv (U) provides inv (U'*U) but it is much faster than using inv.


Loadable Function: [R1, info] = cholupdate (R, u, op)
Update or downdate a Cholesky factorization.
Given an upper triangular matrix R and a column vector u, attempt to determine another upper triangular matrix R1 such that

  • R1'*R1 = R'*R + u*u' if op is "+"
  • R1'*R1 = R'*R - u*u' if op is "-"

If op is "-", info is set to

  • 0 if the downdate was successful,
  • 1 if R'*R - u*u' is not positive definite,
  • 2 if R is singular.

If info is not present, an error message is printed in cases 1 and 2.


Loadable Function: R1 = cholinsert (R, j, u)
Loadable Function: [R1, info] = cholinsert (R, j, u)

Given a Cholesky factorization of a real symmetric or complex Hermitian positive definite matrix A = R'*R, R upper triangular, return the Cholesky factorization of A1, where A1(p, p) = A, A1(:,j) = A1(j,:)' = u and p = [1:j-1,j+1:n+1]. u(j) should be positive.

On return, info is set to

  • 0 if the insertion was successful,
  • 1 if A1 is not positive definite,
  • 2 if R is singular.

Loadable Function: R1 = choldelete (R, j)
Given a Cholesky factorization of a real symmetric or complex Hermitian positive definite matrix A = R'*R, R upper triangular, return the Cholesky factorization of A(p,p), where p = [1:j-1,j+1:n+1].

Loadable Function: R1 = cholshift (R, i, j)
Given a Cholesky factorization of a real symmetric or complex Hermitian positive definite matrix A = R'*R, R upper triangular, return the Cholesky factorization of A(p,p), where p is the permutation p = [1:i-1, shift(i:j, 1), j+1:n] if i < j or p = [1:j-1, shift(j:i,-1), i+1:n] if j < i.

The functions cholinsert, choldelete, and cholshift are not present (or I haven't found them) in MATLAB.

:mrgreen:

Octave – Linear Algebra – IV – 100


Continuing from here, I'm copying here.


Basic matrix functions – III

Built-in Function: norm (A)
Built-in Function: norm (A, p)
Built-in Function: norm (A, p, opt)

Compute the p-norm of the matrix A.
If the second argument is missing, p = 2 is assumed.

If A is a matrix (or sparse matrix):

  • p = 1: the 1-norm, the largest column sum of the absolute values of A.
  • p = 2: the largest singular value of A.
  • p = Inf or "inf": the infinity norm, the largest row sum of the absolute values of A.
  • p = "fro": the Frobenius norm of A, sqrt (sum (diag (A' * A))).
  • other p, p > 1: the maximum of norm (A*x, p) such that norm (x, p) == 1.

If A is a vector or a scalar:

  • p = Inf or "inf": max (abs (A)).
  • p = -Inf: min (abs (A)).
  • p = "fro": the Frobenius norm of A, sqrt (sumsq (abs (A))).
  • p = 0: the Hamming norm – the number of nonzero elements.
  • other p, p > 1: the p-norm of A, (sum (abs (A) .^ p)) ^ (1/p).
  • other p, p < 1: the p-pseudonorm defined as above.

If opt is the value "rows", treat each row as a vector and compute its norm. The result is returned as a column vector. Similarly, if opt is "columns" or "cols" then compute the norms of each column and return a row vector.

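Two tiny hand-checkable cases of my own:

norm ([3 4])           # 5, the Euclidean norm
norm (eye (2), "fro")  # sqrt (2)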

Function File: null (A)
Function File: null (A, tol)

Return an orthonormal basis of the null space of A.
The dimension of the null space is taken as the number of singular values of A not greater than tol. If the argument tol is missing, it is computed as max (size (A)) * max (svd (A)) * eps.


Function File: orth (A)
Function File: orth (A, tol)

Return an orthonormal basis of the range space of A.
The dimension of the range space is taken as the number of singular values of A greater than tol. If the argument tol is missing, it is computed as max (size (A)) * max (svd (A)) * eps.


Built-in Function: [y, h] = mgorth (x, v)
Orthogonalize a given column vector x with respect to a set of orthonormal vectors comprising the columns of v using the modified Gram-Schmidt method.

On exit, y is a unit vector such that:

norm (y) = 1
v' * y = 0
x = [v, y]*h'

Built-in Function: pinv (x)
Built-in Function: pinv (x, tol)

Return the pseudoinverse of x.
Singular values less than tol are ignored.
If the second argument is omitted, it is taken to be tol = max (size (x)) * sigma_max (x) * eps, where sigma_max (x) is the maximal singular value of x.


Function File: rank (A)
Function File: rank (A, tol)

Compute the rank of matrix A, using the singular value decomposition.
The rank is taken to be the number of singular values of A that are greater than the specified tolerance tol. If the second argument is omitted, it is taken to be tol = max (size (A)) * sigma(1) * eps; where eps is machine precision and sigma(1) is the largest singular value of A.
The rank of a matrix is the number of linearly independent rows or columns and determines how many particular solutions exist to a system of equations. Use null for finding the remaining homogeneous solutions.

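Two classic hand-checkable cases (my own picks):

rank (ones (3))     # 1: every row is the same
rank (magic (4))    # 3: magic (4) is singular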

Built-in Function: c = rcond (A)
Compute the 1-norm estimate of the reciprocal condition number as returned by LAPACK.
If the matrix is well-conditioned then c will be near 1 and if the matrix is poorly conditioned it will be close to 0.
The matrix A must not be sparse. If the matrix is sparse then condest (A) or rcond (full (A)) should be used instead.


Function File: trace (A)
Compute the trace of A, the sum of the elements along the main diagonal.
The implementation is straightforward: sum (diag (A)).


Function File: rref (A)
Function File: rref (A, tol)
Function File: [r, k] = rref (...)

Return the reduced row echelon form of A.
tol defaults to eps * max (size (A)) * norm (A, inf).
The optional return argument k contains the vector of “bound variables”, which are those columns on which elimination has been performed.


:mrgreen:

Octave – Linear Algebra – III – 99


Continuing, copying here.

Basic matrix functions – II

Function File: [G, y] = planerot (x)
Given a two-element column vector, return the 2 by 2 orthogonal matrix G such that y = G * x and y(2) = 0.

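A sketch with the classic 3-4-5 vector (my choice):

x = [3; 4];
[G, y] = planerot (x)    # y is [5; 0], and G * x reproduces y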

Built-in Function: x = inv (A)
Built-in Function: [x, rcond] = inv (A)

Compute the inverse of the square matrix A.
Return an estimate of the reciprocal condition number if requested, otherwise warn of an ill-conditioned matrix if the reciprocal condition number is small.
In general it is best to avoid calculating the inverse of a matrix directly. For example, it is both faster and more accurate to solve systems of equations (A*x = b) with y = A \ b, rather than y = inv (A) * b.
If called with a sparse matrix, then in general x will be a full matrix requiring significantly more storage. Avoid forming the inverse of a sparse matrix if possible.

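A tiny illustration of that advice, with values of my own:

A = [2 0; 0 4];
b = [2; 8];
x1 = A \ b          # preferred: solve directly
x2 = inv (A) * b    # same answer here, but slower and less accurate in general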

Function File: x = linsolve (A, b)
Function File: x = linsolve (A, b, opts)
Function File: [x, R] = linsolve (...)

Solve the linear system A*x = b.
With no options, this function is equivalent to the left division operator (x = A \ b) or the matrix-left-divide function (x = mldivide (A, b)).
Octave ordinarily examines the properties of the matrix A and chooses a solver that best matches the matrix. By passing a structure opts to linsolve you can inform Octave directly about the matrix A. In this case Octave will skip the matrix examination and proceed directly to solving the linear system.
Warning: If the matrix A does not have the properties listed in the opts structure then the result will not be accurate AND no warning will be given. When in doubt, let Octave examine the matrix and choose the appropriate solver as this step takes little time and the result is cached so that it is only done once per linear system.

Possible opts fields (set value to true/false):

  • LT A is lower triangular
  • UT A is upper triangular
  • UHESS A is upper Hessenberg (currently makes no difference)
  • SYM A is symmetric or complex Hermitian (currently makes no difference)
  • POSDEF A is positive definite
  • RECT A is general rectangular (currently makes no difference)
  • TRANSA Solve A'*x = b by transpose (A) \ b

The optional second output R is the inverse condition number of A (zero if matrix is singular).

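A minimal sketch of passing an opts structure (the matrix is my own construction):

A = tril (rand (4)) + 4 * eye (4);   # lower triangular by construction
b = rand (4, 1);
opts.LT = true;                      # promise linsolve that A is lower triangular
x = linsolve (A, b, opts);
norm (A*x - b)                       # ~ 0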

Built-in Function: type = matrix_type (A)
Built-in Function: type = matrix_type (A, "nocompute")
Built-in Function: A = matrix_type (A, type)
Built-in Function: A = matrix_type (A, "upper", perm)
Built-in Function: A = matrix_type (A, "lower", perm)
Built-in Function: A = matrix_type (A, "banded", nl, nu)

Identify the matrix type or mark a matrix as a particular type.
This allows more rapid solutions of linear equations involving A to be performed.
Called with a single argument, matrix_type returns the type of the matrix and caches it for future use.
Called with more than one argument, matrix_type allows the type of the matrix to be defined.
If the option "nocompute" is given, the function will not attempt to guess the type if it is still unknown. This is useful for debugging purposes.

The possible matrix types depend on whether the matrix is full or sparse, and can be one of the following

  • "unknown" Remove any previously cached matrix type, and mark type as unknown.
  • "full" Mark the matrix as full.
  • "positive definite" Probable full positive definite matrix.
  • "diagonal" Diagonal matrix. (Sparse matrices only)
  • "permuted diagonal" Permuted Diagonal matrix. The permutation does not need to be specifically indicated, as the structure of the matrix explicitly gives this. (Sparse matrices only)
  • "upper" Upper triangular. If the optional third argument perm is given, the matrix is assumed to be a permuted upper triangular with the permutations defined by the vector perm.
  • "lower" Lower triangular. If the optional third argument perm is given, the matrix is assumed to be a permuted lower triangular with the permutations defined by the vector perm.
  • "banded", "banded positive definite" Banded matrix with the band size of nl below the diagonal and nu above it. If nl and nu are 1, then the matrix is tridiagonal and treated with specialized code. In addition the matrix can be marked as probably a positive definite. (Sparse matrices only)
  • "singular" The matrix is assumed to be singular and will be treated with a minimum norm solution.

Note that the matrix type will be discovered automatically on the first attempt to solve a linear equation involving A. Therefore matrix_type is only useful to give Octave hints of the matrix type. Incorrectly defining the matrix type will result in incorrect results from solutions of linear equations; it is entirely the responsibility of the user to correctly identify the matrix type.

Also, the test for positive definiteness is a low-cost test for a Hermitian matrix with a real positive diagonal. This does not guarantee that the matrix is positive definite, but only that it is a probable candidate. When such a matrix is factorized, a Cholesky factorization is first attempted, and if that fails the matrix is then treated with an LU factorization. Once the matrix has been factorized, matrix_type will return the correct classification of the matrix.


Break 😀
:mrgreen:

Octave – Linear Algebra – II – 98


Continuing from here; I'm here.

Basic matrix functions – I

Built-in Function: AA = balance (A)
Built-in Function: AA = balance (A, opt)
Built-in Function: [DD, AA] = balance (A, opt)
Built-in Function: [D, P, AA] = balance (A, opt)
Built-in Function: [CC, DD, AA, BB] = balance (A, B, opt)

Balance the matrix A to reduce numerical errors in future calculations.
Compute AA = DD \ A * DD in which AA is a matrix whose row and column norms are roughly equal in magnitude, and DD = P * D, in which P is a permutation matrix and D is a diagonal matrix of powers of two. This allows the equilibration to be computed without round-off. Results of eigenvalue calculation are typically improved by balancing first.
If three output values are requested, balance returns the diagonal D and the permutation P separately as vectors. In this case, DD = eye(n)(:,P) * diag (D), where n is the matrix size.
If four output values are requested, compute AA = CC*A*DD and BB = CC*B*DD, in which AA and BB have nonzero elements of approximately the same magnitude and CC and DD are permuted diagonal matrices as in DD for the algebraic eigenvalue problem.

The eigenvalue balancing option opt may be one of:

  • "noperm", "S" Scale only; do not permute.
  • "noscal", "P" Permute only; do not scale.

Algebraic eigenvalue balancing uses standard LAPACK routines.
Generalized eigenvalue problem balancing uses Ward’s algorithm (SIAM Journal on Scientific and Statistical Computing, 1981).


Function File: bw = bandwidth (A, type)
Function File: [lower, upper] = bandwidth (A)

Compute the bandwidth of A.
The type argument is the string "lower" for the lower bandwidth and "upper" for the upper bandwidth. If no type is specified return both the lower and upper bandwidth of A.
The lower/upper bandwidth of a matrix is the number of subdiagonals/superdiagonals with nonzero entries.


Function File: cond (A)
Function File: cond (A, p)

Compute the p-norm condition number of a matrix.
cond (A) is defined as norm (A, p) * norm (inv (A), p).
By default, p = 2 is used which implies a (relatively slow) singular value decomposition. Other possible selections are p = 1, Inf, "fro" which are generally faster. See norm for a full discussion of possible p values.
The condition number of a matrix quantifies the sensitivity of the matrix inversion operation when small changes are made to matrix elements. Ideally the condition number will be close to 1. When the number is large this indicates small changes (such as underflow or round-off error) will produce large changes in the resulting output. In such cases the solution results from numerical computing are not likely to be accurate.


Built-in Function: det (A)
Built-in Function: [d, rcond] = det (A)

Compute the determinant of A.
Return an estimate of the reciprocal condition number if requested.
Programming Notes: Routines from LAPACK are used for full matrices and code from UMFPACK is used for sparse matrices.
The determinant should not be used to check a matrix for singularity. For that, use any of the condition number functions: cond, condest, rcond.


Built-in Function: lambda = eig (A)
Built-in Function: lambda = eig (A, B)
Built-in Function: [V, lambda] = eig (A)
Built-in Function: [V, lambda] = eig (A, B)

Compute the eigenvalues (and optionally the eigenvectors) of a matrix or a pair of matrices
The algorithm used depends on whether there are one or two input matrices, if they are real or complex, and if they are symmetric (Hermitian if complex) or non-symmetric.
The eigenvalues returned by eig are not ordered.

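A tiny hand-checkable case:

eig ([2 1; 1 2])    # eigenvalues 1 and 3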

Built-in Function: G = givens (x, y)
Built-in Function: [c, s] = givens (x, y)

Compute the Givens rotation matrix G.
The Givens matrix is a 2 by 2 orthogonal matrix g = [c s; -s' c] such that g [x; y] = [*; 0] with x and y scalars.

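The 3-4-5 pair again, my own pick:

G = givens (3, 4);
G * [3; 4]          # gives [5; 0]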

d'oh! I didn't know it had this name 😉

A break, but then it continues 😀
:mrgreen: