Category Archives: Julia

Julia – 74 – parallel computing – 1

Continuing from here, copying here.

Most modern computers possess more than one CPU, and several computers can be combined together in a cluster. Harnessing the power of these multiple CPUs allows many computations to be completed more quickly. There are two major factors that influence performance: the speed of the CPUs themselves, and the speed of their access to memory. In a cluster, it’s fairly obvious that a given CPU will have fastest access to the RAM within the same computer (node). Perhaps more surprisingly, similar issues are relevant on a typical multicore laptop, due to differences in the speed of main memory and the cache. Consequently, a good multiprocessing environment should allow control over the “ownership” of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing to allow programs to run on multiple processes in separate memory domains at once.

Julia’s implementation of message passing is different from other environments such as MPI [1]. Communication in Julia is generally “one-sided”, meaning that the programmer needs to explicitly manage only one process in a two-process operation. Furthermore, these operations typically do not look like “message send” and “message receive” but rather resemble higher-level operations like calls to user functions.

Note 1. In this context, MPI refers to the MPI-1 standard. Beginning with MPI-2, the MPI standards committee introduced a new set of communication mechanisms, collectively referred to as Remote Memory Access (RMA). The motivation for adding RMA to the MPI standard was to facilitate one-sided communication patterns. For additional information on the latest MPI standard, see here.

Parallel programming in Julia is built on two primitives: remote references and remote calls. A remote reference is an object that can be used from any process to refer to an object stored on a particular process. A remote call is a request by one process to call a certain function on certain arguments on another (possibly the same) process.

Remote references come in two flavors: Future and RemoteChannel.

A remote call returns a Future to its result. Remote calls return immediately; the process that made the call proceeds to its next operation while the remote call happens somewhere else. You can wait for a remote call to finish by calling wait() on the returned Future, and you can obtain the full value of the result using fetch().

On the other hand, RemoteChannels are rewritable. For example, multiple processes can co-ordinate their processing by referencing the same remote Channel.

Each process has an associated identifier. The process providing the interactive Julia prompt always has an id equal to 1. The processes used by default for parallel operations are referred to as “workers”. When there is only one process, process 1 is considered a worker. Otherwise, workers are considered to be all processes other than process 1.

Let’s try this out. Starting with julia -p n provides n worker processes on the local machine. Generally it makes sense for n to equal the number of CPU cores on the machine.

The first argument to remotecall() is the function to call. Most parallel programming in Julia does not reference specific processes or the number of processes available, but remotecall() is considered a low-level interface providing finer control. The second argument to remotecall() is the id of the process that will do the work, and the remaining arguments will be passed to the function being called.
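
A sketch of the kind of session this describes, following the 0.6 manual (start Julia with julia -p 2; the random values will differ):

julia> r = remotecall(rand, 2, 2, 2)     # ask process 2 for a 2×2 random matrix
Future(2, 1, 4, Nullable{Any}())

julia> s = @spawnat 2 1 .+ fetch(r)      # ask process 2 to add 1 to it
Future(2, 1, 5, Nullable{Any}())

julia> fetch(s)
2×2 Array{Float64,2}:
 1.18526  1.50912
 1.16296  1.60607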

As you can see, in the first line we asked process 2 to construct a 2-by-2 random matrix, and in the second line we asked it to add 1 to it. The result of both calculations is available in the two futures, r and s. The @spawnat macro evaluates the expression in the second argument on the process specified by the first argument.

Occasionally you might want a remotely-computed value immediately. This typically happens when you read from a remote object to obtain data needed by the next local operation. The function remotecall_fetch() exists for this purpose. It is equivalent to fetch(remotecall(...)) but is more efficient.
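
For example, to fetch one element of the future r from the session above (the value will differ):

julia> remotecall_fetch(getindex, 2, r, 1, 1)
0.18526337335308085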

Remember that getindex(r,1,1) is equivalent to r[1,1], so this call fetches the first element of the future r.

The syntax of remotecall() is not especially convenient. The macro @spawn makes things easier. It operates on an expression rather than a function, and picks where to do the operation for you:
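
A sketch mirroring the remotecall() session above, again per the 0.6 manual:

julia> r = @spawn rand(2,2)
Future(2, 1, 4, Nullable{Any}())

julia> s = @spawn 1 .+ fetch(r)
Future(3, 1, 5, Nullable{Any}())

julia> fetch(s)
2×2 Array{Float64,2}:
 1.01331  1.85459
 1.74775  1.64791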

Note that we used 1 .+ fetch(r) instead of 1 .+ r. This is because we do not know where the code will run, so in general a fetch() might be required to move r to the process doing the addition. In this case, @spawn is smart enough to perform the computation on the process that owns r, so the fetch() will be a no-op (no work is done).

(It is worth noting that @spawn is not built-in but defined in Julia as a macro. It is possible to define your own such constructs.)

An important thing to remember is that, once fetched, a Future will cache its value locally. Further fetch() calls do not entail a network hop. Once all referencing Futures have fetched, the remote stored value is deleted.

Looks intriguing 😊

:mrgreen:


Julia – 73 – networking and streams – 2


Continuing from here, copying here.

Example: a simple TCP server
Let’s jump right in with a simple example involving TCP sockets. Let’s first create a simple server:
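
The server, sketched from the 0.6 manual (the task address will differ):

julia> @async begin
         server = listen(2000)
         while true
           sock = accept(server)
           println("Hello World\n")
         end
       end
Task (runnable) @0x00007fd31dc11ae0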

To those familiar with the Unix socket API, the method names will feel familiar, though their usage is somewhat simpler than the raw Unix socket API. The first call to listen() will create a server waiting for incoming connections on the specified port (2000) in this case. The same function may also be used to create various other kinds of servers:
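
For instance (per the 0.6 manual):

julia> listen(2000)                # listens on localhost:2000 (IPv4)
Base.TCPServer(active)

julia> listen(ip"127.0.0.1",2000)  # equivalent to the first call

julia> listen(ip"::1",2000)        # listens on localhost:2000 (IPv6)

julia> listen(IPv4(0),2001)        # listens on port 2001 on all IPv4 interfaces

julia> listen(IPv6(0),2001)        # listens on port 2001 on all IPv6 interfaces

julia> listen("testsocket")        # listens on a UNIX domain socket/named pipe
Base.PipeServer(active)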

Note that the return type of the last invocation is different. This is because this server does not listen on TCP, but rather on a named pipe (Windows) or UNIX domain socket. The difference is subtle and has to do with the accept() and connect() methods. The accept() method retrieves a connection to the client that is connecting on the server we just created, while the connect() function connects to a server using the specified method. The connect() function takes the same arguments as listen(), so, assuming the environment (i.e. host, cwd, etc.) is the same you should be able to pass the same arguments to connect() as you did to listen to establish the connection. So let’s try that out (after having created the server above):
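
Something like this (the file descriptor number will differ):

julia> connect(2000)
TCPSocket(RawFD(16) open, 0 bytes waiting)

julia> Hello World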

As expected, we saw “Hello World” printed. So, let’s analyze what happened behind the scenes. When we called connect(), we connected to the server we had just created. Meanwhile, the accept() function returned a server-side connection to the newly created socket and printed “Hello World” to indicate that the connection was successful.

A great strength of Julia is that since the API is exposed synchronously even though the I/O is actually happening asynchronously, we didn’t have to worry about callbacks or even about making sure that the server gets to run. When we called connect() the current task waited for the connection to be established and only continued executing after that was done. In this pause, the server task resumed execution (because a connection request was now available), accepted the connection, printed the message and waited for the next client. Reading and writing work in the same way. To see this, consider the following simple echo server:
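
A sketch of the echo server, assuming port 2001 is free:

julia> @async begin
         server = listen(2001)
         while true
           sock = accept(server)
           @async while isopen(sock)
             write(sock, readline(sock))
           end
         end
       end

julia> clientside = connect(2001)

julia> @async while true
           write(STDOUT, readline(clientside))
       end

julia> println(clientside, "Hello World from the Echo Server")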

Uhmmm… it doesn’t print anything 😡

As with other streams, use close() to disconnect the socket:
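
For the echo-server client above:

julia> close(clientside)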

Resolving (😊) IP addresses
One of the connect() methods that does not follow the listen() methods is connect(host::String,port), which will attempt to connect to the host given by the host parameter on the port given by the port parameter. It allows you to do things like:
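
For instance, connecting to a well-known host on port 80 (the file descriptor will vary):

julia> connect("google.com", 80)
TCPSocket(RawFD(30) open, 0 bytes waiting)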

At the base of this functionality is getaddrinfo(), which will do the appropriate address resolution:
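
For example (the resolved address depends on where and when you run this):

julia> getaddrinfo("google.com")
ip"172.217.22.14"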

:mrgreen:

Julia – 72 – networking and streams – 1

Continuing from here, copying here.

Julia provides a rich interface to deal with streaming I/O objects such as terminals, pipes and TCP sockets. This interface, though asynchronous at the system level, is presented in a synchronous manner to the programmer and it is usually unnecessary to think about the underlying asynchronous operation. This is achieved by making heavy use of Julia cooperative threading (coroutine) functionality.

Basic I/O
All Julia streams expose at least a read() and a write() method, taking the stream as their first argument, e.g.:
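
A sketch of the session, per the 0.6 manual (the read() call waits for keyboard input; Enter was pressed):

julia> write(STDOUT, "Hello World");  # the ; suppresses the returned byte count
Hello World
julia> read(STDIN, Char)

'\n': ASCII/Unicode U+000a (category Cc: Other, control)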

Note that write() returns 11, the number of bytes (in “Hello World”) written to STDOUT, but this return value is suppressed with the ;.

Here Enter was pressed again so that Julia would read the newline. Now, as you can see from this example, write() takes the data to write as its second argument, while read() takes the type of the data to be read as the second argument. Actually, I cheated and typed J+Enter 😜

For example, to read a simple byte array, we could do:
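
Per the manual, with abcd typed at the keyboard:

julia> x = zeros(UInt8, 4)
4-element Array{UInt8,1}:
 0x00
 0x00
 0x00
 0x00

julia> read!(STDIN, x)
abcd
4-element Array{UInt8,1}:
 0x61
 0x62
 0x63
 0x64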

However, since this is slightly cumbersome, there are several convenience methods provided. For example, we could have written the above as:
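
Typing abcd again:

julia> read(STDIN, 4)
abcd
4-element Array{UInt8,1}:
 0x61
 0x62
 0x63
 0x64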

or if we had wanted to read the entire line instead:
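
Typing abcd followed by Enter:

julia> readline(STDIN)
abcd
"abcd"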

Note that depending on your terminal settings, your TTY may be line buffered and might thus require an additional enter before the data is sent to Julia.

To read every line from STDIN you can use eachline():
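
For example (per the manual):

julia> for line in eachline(STDIN)
           print("Found $line")
       end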

or read() if you wanted to read by character instead:
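
For example:

julia> while !eof(STDIN)
           x = read(STDIN, Char)
           println("Found: $x")
       end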

Text I/O
Note that the write() method mentioned above operates on binary streams. In particular, values do not get converted to any canonical text representation but are written out as is:
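
For example (the printed a and the returned 1 run together at the REPL):

julia> write(STDOUT, 0x61)
a1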

Note that a is written to STDOUT by the write() function and that the returned value is 1 (since 0x61 is one byte).

For text I/O, use the print() or show() methods, depending on your needs (see the standard library reference for a detailed discussion of the difference between the two):
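
For example:

julia> print(STDOUT, 0x61)
97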

Contextual properties of I/O
Sometimes IO output can benefit from the ability to pass contextual information into show methods. The IOContext object provides this framework for associating arbitrary metadata with an IO object. For example, showcompact adds a hinting parameter to the IO object that the invoked show method should print a shorter output (if applicable).
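
A minimal sketch (the exact rendering may vary between versions):

julia> showcompact(Float64(pi))
3.14159

julia> show(IOContext(STDOUT, :compact => true), Float64(pi))
3.14159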

Working with files
Like many other environments, Julia has an open() function, which takes a filename and returns an IOStream object that you can use to read and write things from the file. For example if we have a file, hello.txt, whose contents are Hello, World!:
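
Assuming hello.txt exists with that content:

julia> f = open("hello.txt")
IOStream(<file hello.txt>)

julia> readlines(f)
1-element Array{String,1}:
 "Hello, World!"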

If you want to write to a file, you can open it with the write ("w") flag:
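
For example:

julia> f = open("hello.txt", "w")
IOStream(<file hello.txt>)

julia> write(f, "Hello again.")
12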

If you examine the contents of hello.txt at this point, you will notice that it is empty; nothing has actually been written to disk yet. This is because the IOStream must be closed before the write is actually flushed to disk:
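
That is:

julia> close(f)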

Examining hello.txt again will show its contents have been changed.

Opening a file, doing something to its contents, and closing it again is a very common pattern. To make this easier, there exists another invocation of open() which takes a function as its first argument and filename as its second, opens the file, calls the function with the file as an argument, and then closes it again. For example, given a function:
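
For instance (readstring() is the 0.6-era way to read a whole stream into a String):

function read_and_capitalize(f::IOStream)
    return uppercase(readstring(f))
end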

You can call:
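
julia> open(read_and_capitalize, "hello.txt")
"HELLO AGAIN."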

to open hello.txt, call read_and_capitalize on it, close hello.txt and return the capitalized contents.

To avoid even having to define a named function, you can use the do syntax, which creates an anonymous function on the fly:
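
For example:

julia> open("hello.txt") do f
           uppercase(readstring(f))
       end
"HELLO AGAIN."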

:mrgreen:

Julia – 71 – linear algebra – 2

Continuing from here, copying here.

Special matrices
Matrices with special symmetries and structures arise often in linear algebra and are frequently associated with various matrix factorizations. Julia features a rich collection of special matrix types, which allow for fast computation with specialized routines that are specially developed for particular matrix types.

The following tables summarize the types of special matrices that have been implemented in Julia, as well as whether hooks to various optimized methods for them in LAPACK are available.

Type            Description
Hermitian       Hermitian matrix
UpperTriangular Upper triangular matrix
LowerTriangular Lower triangular matrix
Tridiagonal     Tridiagonal matrix
SymTridiagonal  Symmetric tridiagonal matrix
Bidiagonal      Upper/lower bidiagonal matrix
Diagonal        Diagonal matrix
UniformScaling  Uniform scaling operator

elementary operations

Matrix type       +   -   *   \   Other functions with optimized methods
Hermitian                     MV  inv(), sqrtm(), expm()
UpperTriangular           MV  MV  inv(), det()
LowerTriangular           MV  MV  inv(), det()
SymTridiagonal    M   M   MS  MV  eigmax(), eigmin()
Tridiagonal       M   M   MS  MV     
Bidiagonal        M   M   MS  MV     
Diagonal          M   M   MV  M   inv(), det(), logdet(), /()
UniformScaling    M   M   MVS MVS /()

legend

Key         Description
M (matrix)  An optimized method for matrix-matrix operations is available
V (vector)  An optimized method for matrix-vector operations is available
S (scalar)  An optimized method for matrix-scalar operations is available

the UniformScaling operator
A UniformScaling operator represents a scalar times the identity operator, λ*I. The identity operator I is defined as a constant and is an instance of UniformScaling. The sizes of these operators are generic and match the other matrix in the binary operations +, -, * and \. For A+I and A-I this means that A must be square. Multiplication with the identity operator I is a no-op (except for checking that the scaling factor is one) and therefore almost without overhead.
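
For example (a quick sketch; here 2I is UniformScaling(2)):

julia> A = [1 2; 3 4]
2×2 Array{Int64,2}:
 1  2
 3  4

julia> A + 2I     # I adapts its size to match A
2×2 Array{Int64,2}:
 3  2
 3  6

julia> A - I
2×2 Array{Int64,2}:
 0  2
 3  3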

Matrix factorizations
Matrix factorizations (a.k.a. matrix decompositions) compute the factorization of a matrix into a product of matrices, and are one of the central concepts in linear algebra.

The following table summarizes the types of matrix factorizations that have been implemented in Julia. Details of their associated methods can be found in the Linear Algebra section of the standard library documentation.

Type            Description
Cholesky        Cholesky factorization
CholeskyPivoted Pivoted Cholesky factorization
LU              LU factorization
LUTridiagonal   LU factorization for Tridiagonal matrices
UmfpackLU       LU factorization for sparse matrices (computed by UMFPack)
QR              QR factorization
QRCompactWY     Compact WY form of the QR factorization
QRPivoted       Pivoted QR factorization
Hessenberg      Hessenberg decomposition
Eigen           Spectral decomposition
SVD             Singular value decomposition
GeneralizedSVD  Generalized SVD

A topic that deserves to be expanded, in my opinion 😊

:mrgreen:

Julia – 70 – linear algebra – 1

Continuing from here, copying here.

In addition to (and as part of) its support for multi-dimensional arrays, Julia provides native implementations of many common and useful linear algebra operations. Basic operations, such as trace, det, and inv are all supported:
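
With the matrix used in the manual:

julia> A = [1 2 3; 4 1 6; 7 8 1]
3×3 Array{Int64,2}:
 1  2  3
 4  1  6
 7  8  1

julia> trace(A)
3

julia> det(A)
104.0

julia> inv(A)
3×3 Array{Float64,2}:
 -0.451923   0.211538    0.0865385
  0.365385  -0.192308    0.0576923
  0.240385   0.0576923  -0.0673077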

As well as other useful operations, such as finding eigenvalues or eigenvectors:
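
For example (the eigenvector output is suppressed here):

julia> A = [-4. -17.; 2. 2.]
2×2 Array{Float64,2}:
 -4.0  -17.0
  2.0    2.0

julia> eigvals(A)
2-element Array{Complex{Float64},1}:
 -1.0+5.0im
 -1.0-5.0im

julia> eigvecs(A);   # the corresponding eigenvectors, one per column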

In addition, Julia provides many factorizations which can be used to speed up problems such as linear solve or matrix exponentiation by pre-factorizing a matrix into a form more amenable (for performance or memory reasons) to the problem. See the documentation on factorize for more information. As an example:
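
Sketched from the manual:

julia> A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
3×3 Array{Float64,2}:
   1.5   2.0  -4.0
   3.0  -1.0  -6.0
 -10.0   2.3   4.0

julia> factorize(A)   # returns a Base.LinAlg.LU object holding L, U and the pivots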

Since A is not Hermitian, symmetric, triangular, tridiagonal, or bidiagonal, an LU factorization may be the best we can do. Compare with:
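
Again sketched from the manual:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Array{Float64,2}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> factorize(B)   # symmetry detected: a Bunch-Kaufman factorization is returned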

Here, Julia was able to detect that B is in fact symmetric, and used a more appropriate factorization. Often it’s possible to write more efficient code for a matrix that is known to have certain properties e.g. it is symmetric, or tridiagonal. Julia provides some special types so that you can “tag” matrices as having these properties. For instance:
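
Continuing with the B above:

julia> sB = Symmetric(B)
3×3 Symmetric{Float64,Array{Float64,2}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0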

sB has been tagged as a matrix that’s (real) symmetric, so for later operations we might perform on it, such as eigenfactorization or computing matrix-vector products, efficiencies can be found by only referencing half of it. For example:
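
Solving a system with the tagged matrix:

julia> x = [1; 2; 3];

julia> sB \ x
3-element Array{Float64,1}:
 -1.73913
 -1.1087
 -1.45652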

The \ operation here performs the linear solution. Julia’s parser provides convenient dispatch to specialized methods for the transpose of a matrix left-divided by a vector, or for the various combinations of transpose operations in matrix-matrix solutions. Many of these are further specialized for certain special matrix types. For example, A\B will end up calling Base.LinAlg.A_ldiv_B! while A'\B will end up calling Base.LinAlg.Ac_ldiv_B, even though we used the same left-division operator. This works for matrices too: A.'\B.' would call Base.LinAlg.At_ldiv_Bt. The left-division operator is pretty powerful and it’s easy to write compact, readable code that is flexible enough to solve all sorts of systems of linear equations.

:mrgreen:

Julia – 69 – multidimensional arrays – 6


Continuing from here, copying here.

Sparse matrices
Sparse matrices are matrices that contain enough zeros that storing them in a special data structure leads to savings in space and execution time. Sparse matrices may be used when operations on the sparse representation of a matrix lead to considerable gains in either time or space when compared to performing the same operations on a dense matrix.

Compressed sparse column (CSC) storage
In Julia, sparse matrices are stored in the Compressed Sparse Column (CSC) format. Julia sparse matrices have the type SparseMatrixCSC{Tv,Ti}, where Tv is the type of the stored values, and Ti is the integer type for storing column pointers and row indices:

struct SparseMatrixCSC{Tv,Ti<:Integer} <: AbstractSparseMatrix{Tv,Ti}
    m::Int                  # Number of rows
    n::Int                  # Number of columns
    colptr::Vector{Ti}      # Column i is in colptr[i]:(colptr[i+1]-1)
    rowval::Vector{Ti}      # Row indices of stored values
    nzval::Vector{Tv}       # Stored values, typically nonzeros
end

The compressed sparse column storage makes it easy and quick to access the elements in the column of a sparse matrix, whereas accessing the sparse matrix by rows is considerably slower. Operations such as insertion of previously unstored entries one at a time in the CSC structure tend to be slow. This is because all elements of the sparse matrix that are beyond the point of insertion have to be moved one place over.

All operations on sparse matrices are carefully implemented to exploit the CSC data structure for performance, and to avoid expensive operations.

If you have data in CSC format from a different application or library, and wish to import it in Julia, make sure that you use 1-based indexing. The row indices in every column need to be sorted. If your SparseMatrixCSC object contains unsorted row indices, one quick way to sort them is by doing a double transpose.

In some applications, it is convenient to store explicit zero values in a SparseMatrixCSC. These are accepted by functions in Base (but there is no guarantee that they will be preserved in mutating operations). Such explicitly stored zeros are treated as structural nonzeros by many routines. The nnz() function returns the number of elements explicitly stored in the sparse data structure, including structural nonzeros. In order to count the exact number of actual values that are nonzero, use countnz(), which inspects every stored element of a sparse matrix.

Sparse matrix constructors
The simplest way to create sparse matrices is to use functions equivalent to the zeros() and eye() functions that Julia provides for working with dense matrices. To produce sparse matrices instead, you can use the same names with an sp prefix:
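
For example:

julia> spzeros(3,5)
3×5 SparseMatrixCSC{Float64,Int64} with 0 stored entries

julia> speye(3,5)
3×5 SparseMatrixCSC{Float64,Int64} with 3 stored entries:
  [1, 1]  =  1.0
  [2, 2]  =  1.0
  [3, 3]  =  1.0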

The sparse() function is often a handy way to construct sparse matrices. It takes as its input a vector I of row indices, a vector J of column indices, and a vector V of stored values. sparse(I,J,V) constructs a sparse matrix such that S[I[k], J[k]] = V[k].
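
For example (findn(), covered next, recovers the index vectors):

julia> I = [1; 2; 3]; J = [1; 2; 3]; V = [1; 2; 3];

julia> S = sparse(I, J, V)
3×3 SparseMatrixCSC{Int64,Int64} with 3 stored entries:
  [1, 1]  =  1
  [2, 2]  =  2
  [3, 3]  =  3

julia> findn(S)
([1, 2, 3], [1, 2, 3])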

The inverse of the sparse() function is findn(), which retrieves the inputs used to create the sparse matrix.

Another way to create sparse matrices is to convert a dense matrix into a sparse matrix using the sparse() function:
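
For example:

julia> sparse(eye(5))
5×5 SparseMatrixCSC{Float64,Int64} with 5 stored entries:
  [1, 1]  =  1.0
  [2, 2]  =  1.0
  [3, 3]  =  1.0
  [4, 4]  =  1.0
  [5, 5]  =  1.0

julia> issparse(ans)   # full(ans) converts back to a dense matrix
true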

You can go in the other direction using the full() function. The issparse() function can be used to query if a matrix is sparse.

Operations on sparse matrices
Arithmetic operations on sparse matrices also work as they do on dense matrices. Indexing of, assignment into, and concatenation of sparse matrices work in the same way as dense matrices. Indexing operations, especially assignment, are expensive, when carried out one element at a time. In many cases it may be better to convert the sparse matrix into (I,J,V) format using findnz(), manipulate the values or the structure in the dense vectors (I,J,V), and then reconstruct the sparse matrix.

Correspondences between dense and sparse matrix methods
The following table gives a correspondence between built-in methods on sparse matrices and their corresponding methods on dense matrix types. In general, methods that generate sparse matrices differ from their dense counterparts in that the resulting matrix follows the same sparsity pattern as a given sparse matrix S, or that the resulting sparse matrix has density d, i.e. each matrix element has a probability d of being non-zero.

Details can be found in the Sparse Vectors and Matrices [coming soon] section of the standard library reference.

Sparse            Dense         Description
spzeros(m,n)      zeros(m,n)    Creates an m-by-n matrix of zeros.
                                (spzeros(m,n) is empty.)
spones(S)         ones(m,n)     Creates a matrix filled with ones. Unlike the
                                dense version, spones() has the same sparsity
                                pattern as S.
speye(n)          eye(n)        Creates an n-by-n identity matrix.
full(S)           sparse(A)     Interconverts between dense and sparse formats.
sprand(m,n,d)     rand(m,n)     Creates an m-by-n random matrix (of density d)
                                with iid non-zero elements distributed uniformly
                                on the half-open interval [0,1).
sprandn(m,n,d)    randn(m,n)    Creates an m-by-n random matrix (of density d)
                                with iid non-zero elements distributed according
                                to the standard normal (Gaussian) distribution.
sprandn(m,n,d,X)  randn(m,n,X)  Creates an m-by-n random matrix (of density d)
                                with iid non-zero elements distributed according
                                to the X distribution. (Requires the
                                Distributions package.)

:mrgreen:

Julia – 68 – multidimensional arrays – 5

Continuing from here, copying here.

broadcasting
It is sometimes useful to perform element-by-element binary operations on arrays of different sizes, such as adding a vector to each column of a matrix. An inefficient way to do this would be to replicate the vector to the size of the matrix:
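
A sketch with random data (outputs suppressed with ; since the values differ each run):

julia> a = rand(2,1); A = rand(2,3);

julia> repmat(a, 1, 3) + A;   # replicates a into a full 2×3 matrix, then adds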

This is wasteful when dimensions get large, so Julia offers broadcast(), which expands singleton dimensions in array arguments to match the corresponding dimension in the other array without using extra memory, and applies the given function elementwise:
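
With the same a and A:

julia> broadcast(+, a, A);    # same 2×3 result, without materializing the replicated copy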

Dotted operators [here] such as .+ and .* are equivalent to broadcast calls (except that they fuse, as described below). There is also a broadcast!() function to specify an explicit destination (which can also be accessed in a fusing fashion by .= assignment), and functions broadcast_getindex() and broadcast_setindex!() that broadcast the indices before indexing. Moreover, f.(args...) is equivalent to broadcast(f, args...), providing a convenient syntax to broadcast any function (dot syntax). Nested “dot calls” f.(...) (including calls to .+ etcetera) automatically fuse into a single broadcast call.

Additionally, broadcast() is not limited to arrays (see the function documentation), it also handles tuples and treats any argument that is not an array, tuple or Ref (except for Ptr) as a “scalar”.

implementation
The base array type in Julia is the abstract type AbstractArray{T,N}. It is parametrized by the number of dimensions N and the element type T. AbstractVector and AbstractMatrix are aliases for the 1-d and 2-d cases. Operations on AbstractArray objects are defined using higher level operators and functions, in a way that is independent of the underlying storage. These operations generally work correctly as a fallback for any specific array implementation.

The AbstractArray type includes anything vaguely array-like, and implementations of it might be quite different from conventional arrays. For example, elements might be computed on request rather than stored. However, any concrete AbstractArray{T,N} type should generally implement at least size(A) (returning an Int tuple), getindex(A,i) and getindex(A,i1,...,iN); mutable arrays should also implement setindex!(). It is recommended that these operations have nearly constant time complexity, or technically Õ(1) complexity, as otherwise some array functions may be unexpectedly slow. Concrete types should also typically provide a similar(A,T=eltype(A),dims=size(A)) method, which is used to allocate a similar array for copy() and other out-of-place operations. No matter how an AbstractArray{T,N} is represented internally, T is the type of object returned by integer indexing (A[1, ..., 1], when A is not empty) and N should be the length of the tuple returned by size().

DenseArray is an abstract subtype of AbstractArray intended to include all arrays that are laid out at regular offsets in memory, and which can therefore be passed to external C and Fortran functions expecting this memory layout. Subtypes should provide a method stride(A,k) that returns the “stride” of dimension k: increasing the index of dimension k by 1 should increase the index i of getindex(A,i) by stride(A,k). If a pointer conversion method Base.unsafe_convert(Ptr{T}, A) is provided, the memory layout should correspond in the same way to these strides.

The Array type is a specific instance of DenseArray where elements are stored in column-major order (see additional notes in Performance Tips [coming soon]). Vector and Matrix are aliases for the 1-d and 2-d cases. Specific operations such as scalar indexing, assignment, and a few other basic storage-specific operations are all that have to be implemented for Array, so that the rest of the array library can be implemented in a generic manner.

SubArray is a specialization of AbstractArray that performs indexing by reference rather than by copying. A SubArray is created with the view() function, which is called the same way as getindex() (with an array and a series of index arguments). The result of view() looks the same as the result of getindex(), except the data is left in place. view() stores the input index vectors in a SubArray object, which can later be used to index the original array indirectly. By putting the @views macro in front of an expression or block of code, any array[...] slice in that expression will be converted to create a SubArray view instead.

StridedVector and StridedMatrix are convenient aliases defined to make it possible for Julia to call a wider range of BLAS and LAPACK functions by passing them either Array or SubArray objects, and thus saving inefficiencies from memory allocation and copying.

The following example computes the QR decomposition of a small section of a larger array, without creating any temporaries, and by calling the appropriate LAPACK function with the right leading dimension size and stride parameters.
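
Sketched from the manual:

julia> a = rand(10,10);

julia> b = view(a, 2:2:8, 2:2:4);   # a 4×2 SubArray; the data stays in place

julia> (q, r) = qr(b);              # LAPACK works directly on the strided view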

:mrgreen:

Julia – 67 – multidimensional arrays – 4

Continuing from here, copying here.

logical indexing
Often referred to as logical indexing or indexing with a logical mask, indexing by a boolean array selects elements at the indices where its values are true. Indexing by a boolean vector B is effectively the same as indexing by the vector of integers that is returned by find(B). Similarly, indexing by a N-dimensional boolean array is effectively the same as indexing by the vector of CartesianIndex{N}s where its values are true. A logical index must be a vector of the same length as the dimension it indexes into, or it must be the only index provided and match the size and dimensionality of the array it indexes into. It is generally more efficient to use boolean arrays as indices directly instead of first calling find().
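
For example (from the manual):

julia> x = reshape(1:16, 4, 4)
4×4 Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}}:
 1  5   9  13
 2  6  10  14
 3  7  11  15
 4  8  12  16

julia> x[[false, true, true, false], :]   # keep rows 2 and 3
2×4 Array{Int64,2}:
 2  6  10  14
 3  7  11  15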

iteration
The recommended ways to iterate over a whole array are

for a in A
    # Do something with the element a
end

for i in eachindex(A)
    # Do something with i and/or A[i]
end

The first construct is used when you need the value, but not index, of each element. In the second construct, i will be an Int if A is an array type with fast linear indexing; otherwise, it will be a CartesianIndex:
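
Per the manual, iterating over a view that has no fast linear indexing:

julia> A = rand(4,3);

julia> B = view(A, 1:3, 2:3);

julia> for i in eachindex(B)
           @show i
       end
i = CartesianIndex{2}((1, 1))
i = CartesianIndex{2}((2, 1))
i = CartesianIndex{2}((3, 1))
i = CartesianIndex{2}((1, 2))
i = CartesianIndex{2}((2, 2))
i = CartesianIndex{2}((3, 2))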

In contrast with for i = 1:length(A), iterating with eachindex provides an efficient way to iterate over any array type.

If you write a custom AbstractArray type, you can specify that it has fast linear indexing using

Base.IndexStyle(::Type{<:MyArray}) = IndexLinear()

This setting will cause eachindex iteration over a MyArray to use integers. If you don’t specify this trait, the default value IndexCartesian() is used.

arrays and vectorized operations and functions
The following operators are supported for arrays:

  • Unary arithmetic: -, +
  • Binary arithmetic: -, +, *, /, \, ^
  • Comparison: ==, !=, ≈ (isapprox), ≉

Most of the binary arithmetic operators listed above also operate elementwise when one argument is scalar: -, +, and * when either argument is scalar, and / and \ when the denominator is scalar.
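
For example, with a small integer vector:

julia> v = [1, 2, 3];

julia> v / 2        # scalar denominator: applied elementwise
3-element Array{Float64,1}:
 0.5
 1.0
 1.5

julia> 2 * v        # scalar factor: applied elementwise
3-element Array{Int64,1}:
 2
 4
 6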

Additionally, to enable convenient vectorization of mathematical and other operations, Julia provides the dot syntax [here] f.(args...), e.g. sin.(x) or min.(x,y), for elementwise operations over arrays or mixtures of arrays and scalars (a Broadcasting operation [coming soon]); these have the additional advantage of “fusing” into a single loop when combined with other dot calls, e.g. sin.(cos.(x)).

Also, every binary operator supports a dot version that can be applied to arrays (and combinations of arrays and scalars) in such fused broadcasting operations [same post], e.g. z .== sin.(x .* y).

Note that comparisons such as == operate on whole arrays, giving a single boolean answer. Use dot operators like .== for elementwise comparisons. (For comparison operations like <, only the elementwise .< version is applicable to arrays.)

Also notice the difference between max.(a,b), which broadcasts max() elementwise over a and b, and maximum(a), which finds the largest value within a. The same relationship holds for min.(a,b) and minimum(a).

:mrgreen:

Julia – 66 – multidimensional arrays – 3

Continuing from here, copying here.

Assignment
The general syntax for assigning values in an n-dimensional array A is:

A[I_1, I_2, ..., I_n] = X

where each I_k may be a scalar integer, an array of integers, or any other supported index. This includes Colon (:) to select all indices within the entire dimension, ranges of the form a:c or a:b:c to select contiguous or strided subsections, and arrays of booleans to select elements at their true indices.

If X is an array, it must have the same number of elements as the product of the lengths of the indices: prod(length(I_1), length(I_2), ..., length(I_n)). The value in location I_1[i_1], I_2[i_2], ..., I_n[i_n] of A is overwritten with the value X[i_1, i_2, ..., i_n]. If X is not an array, its value is written to all referenced locations of A.

Just as in Indexing [post precedente], the end keyword may be used to represent the last index of each dimension within the indexing brackets, as determined by the size of the array being assigned into. Indexed assignment syntax without the end keyword is equivalent to a call to setindex!():

setindex!(A, X, I_1, I_2, ..., I_n)

Example:
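
(a sketch, along the lines of the manual)

julia> x = collect(reshape(1:9, 3, 3))
3×3 Array{Int64,2}:
 1  4  7
 2  5  8
 3  6  9

julia> x[1:2, 2:3] = -1   # a non-array value is written to every referenced location
-1

julia> x
3×3 Array{Int64,2}:
 1  -1  -1
 2  -1  -1
 3   6   9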

supported index types
In the expression A[I_1, I_2, ..., I_n], each I_k may be a scalar index, an array of scalar indices, or an object that represents an array of scalar indices and can be converted to such by to_indices:

1. A scalar index. By default this includes:

  • Non-boolean integers
  • CartesianIndex{N}s, which behave like an N-tuple of integers spanning multiple dimensions (see below for more details)

2. An array of scalar indices. This includes:

  • Vectors and multidimensional arrays of integers
  • Empty arrays like [], which select no elements.
  • Ranges of the form a:c or a:b:c, which select contiguous or strided subsections from a to c (inclusive).
  • Any custom array of scalar indices that is a subtype of AbstractArray
  • Arrays of CartesianIndex{N} (see below for more details)

3. An object that represents an array of scalar indices and can be converted to such by to_indices. By default this includes:

  • Colon() (:), which represents all indices within an entire dimension or across the entire array
  • Arrays of booleans, which select elements at their true indices (see below for more details)

Cartesian indices
The special CartesianIndex{N} object represents a scalar index that behaves like an N-tuple of integers spanning multiple dimensions. For example:
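
(from the 0.6 manual)

julia> A = reshape(1:32, 4, 4, 2);

julia> A[3, 2, 1]
7

julia> A[CartesianIndex(3, 2, 1)] == A[3, 2, 1] == 7
true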

Considered alone, this may seem relatively trivial; CartesianIndex simply gathers multiple integers together into one object that represents a single multidimensional index. When combined with other indexing forms and iterators that yield CartesianIndexes, however, this can lead directly to very elegant and efficient code. See Iteration below, and for some more advanced examples, see this blog post on multidimensional algorithms and iteration.

Arrays of CartesianIndex{N} are also supported. They represent a collection of scalar indices that each span N dimensions, enabling a form of indexing that is sometimes referred to as pointwise indexing. For example, it enables accessing the diagonal elements from the first “page” of A from above:
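
(continuing with the A above)

julia> page = A[:,:,1]
4×4 Array{Int64,2}:
 1  5   9  13
 2  6  10  14
 3  7  11  15
 4  8  12  16

julia> page[[CartesianIndex(1,1),
             CartesianIndex(2,2),
             CartesianIndex(3,3),
             CartesianIndex(4,4)]]
4-element Array{Int64,1}:
  1
  6
 11
 16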

This can be expressed much more simply with dot broadcasting and by combining it with a normal integer index (instead of extracting the first page from A as a separate step). It can even be combined with a : to extract both diagonals from the two pages at the same time:
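
(again per the manual)

julia> A[CartesianIndex.(indices(A, 1), indices(A, 2)), 1]
4-element Array{Int64,1}:
  1
  6
 11
 16

julia> A[CartesianIndex.(indices(A, 1), indices(A, 2)), :]
4×2 Array{Int64,2}:
  1  17
  6  22
 11  27
 16  32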

Warning: CartesianIndex and arrays of CartesianIndex are not compatible with the end keyword to represent the last index of a dimension. Do not use end in indexing expressions that may contain either CartesianIndex or arrays thereof.

:mrgreen:

Julia – 65 – multidimensional arrays – 2

Continuing from here, copying here.

Comprehensions
I really don’t know how to translate it 😯 but that’s what it is.

Comprehensions provide a general and powerful way to construct arrays. Comprehension syntax is similar to set construction notation in mathematics:

A = [ F(x,y,...) for x=rx, y=ry, ... ]

The meaning of this form is that F(x,y,...) is evaluated with the variables x, y, etc. taking on each value in their given list of values. Values can be specified as any iterable object, but will commonly be ranges like 1:n or 2:(n-1), or explicit arrays of values like [1.2, 3.4, 5.7]. The result is an N-d dense array with dimensions that are the concatenation of the dimensions of the variable ranges rx, ry, etc. and each F(x,y,...) evaluation returns a scalar.

The following example computes a weighted average of the current element and its left and right neighbor along a 1-d grid:
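
(a sketch with random data; the result is suppressed since the values differ each run)

julia> x = rand(8);

julia> [ 0.25*x[i-1] + 0.5*x[i] + 0.25*x[i+1] for i=2:length(x)-1 ];  # a 6-element Array{Float64,1}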

The resulting array type depends on the types of the computed elements. In order to control the type explicitly, a type can be prepended to the comprehension. For example, we could have requested the result in single precision by writing:

Float32[ 0.25*x[i-1] + 0.5*x[i] + 0.25*x[i+1] for i=2:length(x)-1 ]

Generator expressions
Comprehensions can also be written without the enclosing square brackets, producing an object known as a generator. This object can be iterated to produce values on demand, instead of allocating an array and storing them in advance (see Iteration [coming soon]). For example, the following expression sums a series without allocating memory:
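
(from the manual)

julia> sum(1/n^2 for n=1:1000)
1.6439345666815615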

When writing a generator expression with multiple dimensions inside an argument list, parentheses are needed to separate the generator from subsequent arguments:
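
Without the parentheses, the call is a syntax error (per the manual):

julia> map(tuple, 1/(i+j) for i=1:2, j=1:2, [1 3; 2 4])
ERROR: syntax: invalid iteration specification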

All comma-separated expressions after for are interpreted as ranges. Adding parentheses lets us add a third argument to map:
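
(from the manual)

julia> map(tuple, (1/(i+j) for i=1:2, j=1:2), [1 3; 2 4])
2×2 Array{Tuple{Float64,Int64},2}:
 (0.5, 1)       (0.333333, 3)
 (0.333333, 2)  (0.25, 4)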

Ranges in generators and comprehensions can depend on previous ranges by writing multiple for keywords:
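
(from the manual)

julia> [(i,j) for i=1:3 for j=1:i]
6-element Array{Tuple{Int64,Int64},1}:
 (1, 1)
 (2, 1)
 (2, 2)
 (3, 1)
 (3, 2)
 (3, 3)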

In such cases, the result is always 1-d.

Generated values can be filtered using the if keyword:
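
(from the manual)

julia> [(i,j) for i=1:3 for j=1:i if i+j == 4]
2-element Array{Tuple{Int64,Int64},1}:
 (2, 2)
 (3, 1)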

Indexing
The general syntax for indexing into an n-dimensional array A is:

X = A[I_1, I_2, ..., I_n]

where each I_k may be a scalar integer, an array of integers, or any other supported index. This includes Colon (:) to select all indices within the entire dimension, ranges of the form a:c or a:b:c to select contiguous or strided subsections, and arrays of booleans to select elements at their true indices.

If all the indices are scalars, then the result X is a single element from the array A. Otherwise, X is an array with the same number of dimensions as the sum of the dimensionalities of all the indices.

If all indices are vectors, for example, then the shape of X would be (length(I_1), length(I_2), ..., length(I_n)), with location (i_1, i_2, ..., i_n) of X containing the value A[I_1[i_1], I_2[i_2], ..., I_n[i_n]]. If I_1 is changed to a two-dimensional matrix, then X becomes an n+1-dimensional array of shape (size(I_1, 1), size(I_1, 2), length(I_2), ..., length(I_n)). The matrix adds a dimension. The location (i_1, i_2, i_3, ..., i_{n+1}) contains the value at A[I_1[i_1, i_2], I_2[i_3], ..., I_n[i_{n+1}]]. All dimensions indexed with scalars are dropped. For example, the result of A[2, I, 3] is an array with size size(I). Its ith element is populated by A[2, I[i], 3].

uhmmm… 😯

As a special part of this syntax, the end keyword may be used to represent the last index of each dimension within the indexing brackets, as determined by the size of the innermost array being indexed. Indexing syntax without the end keyword is equivalent to a call to getindex:

X = getindex(A, I_1, I_2, ..., I_n)

Example:
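
(from the manual)

julia> x = reshape(1:16, 4, 4)
4×4 Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}}:
 1  5   9  13
 2  6  10  14
 3  7  11  15
 4  8  12  16

julia> x[2:3, 2:end-1]
2×2 Array{Int64,2}:
 6  10
 7  11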

Empty ranges of the form n:n-1 are sometimes used to indicate the inter-index location between n-1 and n. For example, the searchsorted() function uses this convention to indicate the insertion point of a value not found in a sorted array:
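
(from the manual)

julia> a = [1,4];

julia> searchsorted(a, 3)   # empty range: 3 would be inserted at index 2
2:1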

:mrgreen: