Category Archives: Python

SciPy – 10 – special functions

Continuing from here, copying from here.

The main feature of the scipy.special package is the definition of numerous special functions of mathematical physics. Available functions include airy, elliptic, bessel, gamma, beta, hypergeometric, parabolic cylinder, mathieu, spheroidal wave, struve, and kelvin. There are also some low-level stats functions that are not intended for general use as an easier interface to these functions is provided by the stats module. Most of these functions can take array arguments and return array results following the same broadcasting rules as other math functions in Numerical Python. Many of these functions also accept complex numbers as input. For a complete list of the available functions with a one-line description type help(special) — it doesn't work for me 😡 Each function also has its own documentation accessible using help. If you don’t see a function you need, consider writing it and contributing it to the library. You can write the function in either C, Fortran, or Python. Look in the source code of the library for examples of each of these kinds of functions.

Bessel
Bessel functions are a family of solutions to Bessel’s differential equation with real or complex order alpha: x²y″ + xy′ + (x² − α²)y = 0.

Among other uses, these functions arise in wave propagation problems such as the vibrational modes of a thin drum head. Here is an example of a circular drum head anchored at the edge:

I put the code in the file bs.py, which I run with python3 bs.py.

import numpy as np
from scipy import special

def drumhead_height(n, k, distance, angle, t):
    # the k-th zero of the Bessel function J_n fixes the vibration mode
    kth_zero = special.jn_zeros(n, k)[-1]
    # special.jv is the current name; special.jn is a deprecated alias
    return np.cos(t) * np.cos(n*angle) * special.jv(n, distance*kth_zero)

theta = np.r_[0:2*np.pi:50j]
radius = np.r_[0:1:50j]
x = np.array([r * np.cos(theta) for r in radius])
y = np.array([r * np.sin(theta) for r in radius])
z = np.array([drumhead_height(1, 1, r, theta, 0.5) for r in radius])

import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # Axes3D(fig) is deprecated in newer matplotlib
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap='jet')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
fig.savefig('sp32.png')

I get a warning: UserWarning: ‘Matplotlib is building the font cache using fc-list. This may take a moment.’

and at the end, here it is

Cython bindings for special functions
Scipy also offers Cython bindings for scalar, typed versions of many of the functions in special. The following Cython code gives a simple example of how to use these functions:

I didn't manage it. If need be it will have to be explored further, but the documentation doesn't seem all that clear to me.

And so for everything concerning Cython. Who knows whether… It's just a configuration problem, messes due to Anaconda; if needed, it can be done (cit.) 😯

:mrgreen:

SciPy – 9 – basic functions

Continuing from here, copying from here.

Interaction with NumPy
Scipy builds on Numpy, and for all basic array handling needs you can use Numpy functions:

import numpy as np
np.some_function()

Rather than giving a detailed description of each of these functions (which is available in the Numpy Reference Guide or by using the help, info and source commands), this tutorial will discuss some of the more useful commands which require a little introduction to use to their full potential.

To use functions from some of the Scipy modules, you can do:

from scipy import some_module
some_module.some_function()

Tricks
There are some class instances that make special use of the slicing functionality to provide efficient means for array construction. This part will discuss the operation of np.mgrid, np.ogrid, np.r_, and np.c_ for quickly constructing arrays.

For example, rather than writing something like the following

a = np.concatenate(([3], [0]*5, np.arange(-1, 1.002, 2/9.0)))

with the r_ command one can enter this as

a = np.r_[3,[0]*5,-1:1:10j]

which can ease typing and make for more readable code. Notice how objects are concatenated, and the slicing syntax is (ab)used to construct ranges. The other term that deserves a little explanation is the use of the complex number 10j as the step size in the slicing syntax. This non-standard use allows the number to be interpreted as the number of points to produce in the range rather than as a step size (note we would have used the long integer notation, 10L, but this notation may go away in Python as the integers become unified). This non-standard usage may be unsightly to some, but it gives the user the ability to quickly construct complicated vectors in a very readable fashion. When the number of points is specified in this way, the end-point is inclusive.
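
A quick check (my addition, values taken from the tutorial text) that the two constructions really produce the same array:

```python
import numpy as np

# long form: explicit concatenation of a scalar, five zeros, and a 10-point range
a = np.concatenate(([3], [0]*5, np.arange(-1, 1.002, 2/9.0)))

# short form: the r_ index trick (10j means "10 points", endpoint included)
b = np.r_[3, [0]*5, -1:1:10j]
```

Both give a 16-element array ending exactly at 1.0.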

The “r” stands for row concatenation because if the objects between commas are 2 dimensional arrays, they are stacked by rows (and thus must have commensurate columns). There is an equivalent command c_ that stacks 2d arrays by columns but works identically to r_ for 1d arrays.

Another very useful class instance which makes use of extended slicing notation is the function mgrid. In the simplest case, this function can be used to construct 1d ranges as a convenient substitute for arange. It also allows the use of complex-numbers in the step-size to indicate the number of points to place between the (inclusive) end-points. The real purpose of this function however is to produce N, N-d arrays which provide coordinate arrays for an N-dimensional volume. The easiest way to understand this is with an example of its usage:
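
The example here was a screenshot in the original post; a minimal reconstruction (my own values):

```python
import numpy as np

# 1-d range, a convenient substitute for arange
r = np.mgrid[0:5]

# two dense 3x3 coordinate arrays covering a 2-d grid:
# xx varies along the first axis, yy along the second
xx, yy = np.mgrid[0:3, 0:3]
```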

Having meshed arrays like this is sometimes very useful. However, it is not always needed just to evaluate some N-dimensional function over a grid due to the array-broadcasting rules of Numpy and SciPy. If this is the only purpose for generating a meshgrid, you should instead use the function ogrid which generates an “open” grid using newaxis judiciously to create N, N-d arrays where only one dimension in each array has length greater than 1. This will save memory and create the same result if the only purpose for the meshgrid is to generate sample points for evaluation of an N-d function.
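
A small sketch (mine) showing that the "open" grid broadcasts to the same result as the dense one:

```python
import numpy as np

# open grid: only one dimension of each array has length > 1
ox, oy = np.ogrid[0:3, 0:3]        # shapes (3, 1) and (1, 3)

# broadcasting expands the open grid to what a dense mgrid would give
dense_x, dense_y = np.mgrid[0:3, 0:3]
s_open = ox + oy                   # broadcasts to 3x3
s_dense = dense_x + dense_y
```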

Shape manipulation
In this category of functions are routines for squeezing out length-one dimensions from N-dimensional arrays, ensuring that an array is at least 1-, 2-, or 3-dimensional, and stacking (concatenating) arrays by rows, columns, and “pages” (in the third dimension). Routines for splitting arrays (roughly the opposite of stacking arrays) are also available.
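
A tiny sketch (my addition) of the first two kinds of routines mentioned above:

```python
import numpy as np

a = np.zeros((1, 3, 1))
b = np.squeeze(a)             # length-one dimensions removed -> shape (3,)
c = np.atleast_2d([1, 2, 3])  # guarantees at least 2 dimensions -> shape (1, 3)
```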

Polynomials
There are two (interchangeable) ways to deal with 1-d polynomials in SciPy. The first is to use the poly1d class from Numpy. This class accepts coefficients or polynomial roots to initialize a polynomial. The polynomial object can then be manipulated in algebraic expressions, integrated, differentiated, and evaluated. It even prints like a polynomial:
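
The example here was an image in the original post; a minimal reconstruction with coefficients of my choosing:

```python
import numpy as np

p = np.poly1d([3, 4, 5])   # the polynomial 3x^2 + 4x + 5
q = p * p                  # algebraic manipulation
ip = p.integ(k=6)          # antiderivative, with integration constant 6
dp = p.deriv()             # derivative: 6x + 4
val = p(4)                 # evaluation: 3*16 + 4*4 + 5 = 69

print(p)                   # prints with the powers laid out like a polynomial
```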

The other way to handle polynomials is as an array of coefficients with the first element of the array giving the coefficient of the highest power. There are explicit functions to add, subtract, multiply, divide, integrate, differentiate, and evaluate polynomials represented as sequences of coefficients.

Vectorizing functions
One of the features that NumPy provides is a class vectorize to convert an ordinary Python function which accepts scalars and returns scalars into a “vectorized-function” with the same broadcasting rules as other Numpy functions (i.e. the Universal functions, or ufuncs). For example, suppose you have a Python function named addsubtract defined as:

which defines a function of two scalar variables and returns a scalar result. The class vectorize can be used to “vectorize” this function so that vec_addsubtract = np.vectorize(addsubtract) returns a function which takes array arguments and returns an array result:
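
The addsubtract definition and its output were missing here; this is a reconstruction following the SciPy tutorial's example:

```python
import numpy as np

def addsubtract(a, b):
    # scalar function: difference when a > b, sum otherwise
    if a > b:
        return a - b
    else:
        return a + b

vec_addsubtract = np.vectorize(addsubtract)
result = vec_addsubtract([0, 3, 6, 9], [1, 3, 5, 7])
```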

This particular function could have been written in vector form without the use of vectorize. But what if the function you have written is the result of some optimization or integration routine? Such functions can likely only be vectorized using vectorize.

Type handling
Note the difference between np.iscomplex / np.isreal and np.iscomplexobj / np.isrealobj. The former command is array based and returns byte arrays of ones and zeros providing the result of the element-wise test. The latter command is object based and returns a scalar describing the result of the test on the entire object.

Often it is required to get just the real and/or imaginary part of a complex number. While complex numbers and arrays have attributes that return those values, if one is not sure whether or not the object will be complex-valued, it is better to use the functional forms np.real and np.imag. These functions succeed for anything that can be turned into a Numpy array. Consider also the function np.real_if_close which transforms a complex-valued number with tiny imaginary part into a real number.
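
A quick sketch (mine) of those functional forms:

```python
import numpy as np

z = np.array([3 + 4j, 1 + 1e-14j])
re = np.real(z)                   # works whether or not the input is complex
im = np.imag(z)

# tiny imaginary part -> transformed into a plain real number
clean = np.real_if_close(z[1])
```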

Occasionally the need to check whether or not a number is a scalar (Python (long)int, Python float, Python complex, or rank-0 array) occurs in coding. This functionality is provided in the convenient function np.isscalar which returns True or False.

Finally, ensuring that objects are a certain Numpy type occurs often enough that it has been given a convenient interface in SciPy through the use of the np.cast dictionary. The dictionary is keyed by the type it is desired to cast to and the dictionary stores functions to perform the casting. Thus, np.cast['f'](d) returns an array of np.float32 from d. This function is also useful as an easy way to get a scalar of a certain type:
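
Note that the np.cast dictionary was deprecated and then removed in NumPy 2.0, so the docs quote above is dated; the dtype's type constructor does the same job. A sketch (mine):

```python
import numpy as np

d = np.array([1.7, 2.2])

# old interface: np.cast['f'](d) — removed in NumPy 2.0.
# np.dtype('f').type is np.float32, and calling it performs the cast:
f32 = np.dtype('f').type(d)      # array cast to float32
s = np.dtype('f').type(3.14)     # a float32 scalar
```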

Other useful functions
There are also several other useful functions which should be mentioned. For doing phase processing, the functions angle and unwrap are useful. Also, the linspace and logspace functions return equally spaced samples in a linear or log scale. Finally, it’s useful to be aware of the indexing capabilities of Numpy. Mention should be made of the function select which extends the functionality of where to include multiple conditions and multiple choices. The calling convention is select(condlist, choicelist, default=0). select is a vectorized form of the multiple if-statement. It allows rapid construction of a function which returns an array of results based on a list of conditions. Each element of the return array is taken from the array in choicelist corresponding to the first condition in condlist that is true. For example
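
The example that followed was an image; a reconstruction along the lines of the NumPy docs:

```python
import numpy as np

x = np.arange(10)
condlist = [x < 3, x > 5]
choicelist = [x, x**2]

# first true condition wins; elements 3..5 fall through to the default (0)
out = np.select(condlist, choicelist, default=0)
```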

Some additional useful functions can also be found in the module scipy.misc. For example the factorial and comb functions compute n! and n!/k!(n−k)! using either exact integer arithmetic (thanks to Python’s Long integer object), or by using floating-point precision and the gamma function. Another function returns a common image used in image processing: lena.
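
Note that the quote above is dated: scipy.misc has since been removed, factorial and comb now live in scipy.special, and lena was dropped entirely. A quick check (mine) with the current locations:

```python
from scipy import special

# exact integer arithmetic thanks to Python's arbitrary-precision ints
f = special.factorial(5, exact=True)   # 5! = 120
c = special.comb(5, 2, exact=True)     # 5! / (2! * 3!) = 10
```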

Finally, two functions are provided that are useful for approximating derivatives of functions using discrete-differences. The function central_diff_weights returns weighting coefficients for an equally-spaced N-point approximation to the derivative of order o. These weights must be multiplied by the function corresponding to these points and the results added to obtain the derivative approximation. This function is intended for use when only samples of the function are available. When the function is an object that can be handed to a routine and evaluated, the function derivative can be used to automatically evaluate the object at the correct points to obtain an N-point approximation to the o-th derivative at a given point.
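
scipy.misc.derivative and central_diff_weights have been removed from recent SciPy, so here is a standalone sketch (mine) of the underlying idea, a 3-point central difference:

```python
import numpy as np

def central_derivative(f, x0, h=1e-5):
    # weights (-1/2, 0, 1/2) over the points (x0-h, x0, x0+h),
    # multiplied by the function samples and summed
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

d = central_derivative(np.sin, 0.0)    # should be close to cos(0) = 1
```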

:mrgreen:

SciPy – 8 – why SciPy, and what for

Continuing from here, copying from here.

After the exam for prof Karlijn's course 🚀 and a few exchanges of opinions with a couple of nerds who use these things (I'm not saying it's their fault|merit, eh!) I believe an approach to SciPy similar to the one adopted for NumPy (&co.) would be useful. Unfortunately there is no Jake 🚀.

I think the official documentation suits my case. Right from the introduction:

SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension of Python. It adds significant power to the interactive Python session by providing the user with high-level commands and classes for manipulating and visualizing data. With SciPy an interactive Python session becomes a data-processing and system-prototyping environment rivaling systems such as MATLAB, IDL, Octave, R-Lab, and SciLab.

Two things to note, in my opinion: 1) these are not things you use every day; rather, only when needed, that is, rarely; and 2) there's MATLAB — people use it because they have been using it for a long time, the things are already done (that is, known), and everybody knows (uses?) it. And the free version, Octave, is (almost always) perfectly fine. I've talked about this in the past; I don't share this opinion but I understand it. Also because (I don't want to go into detail, it would be a long story) it can be used interactively in a really functional GUI.

And Python may well be sexy, but it is still something to learn, and some things are new 😯

In any case these posts are for nobody but me, so off I go 😊

A note right away, the usual convention: For brevity and convenience, we will often assume that the main packages (numpy, scipy, and matplotlib) have been imported as:

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

The organization of SciPy
SciPy is organized into subpackages covering different scientific computing domains. These are summarized in the following table:

Subpackage  Description
cluster     Clustering algorithms
constants   Physical and mathematical constants
fftpack     Fast Fourier Transform routines
integrate   Integration and ordinary differential equation solvers
interpolate Interpolation and smoothing splines
io          Input and Output
linalg      Linear algebra
ndimage     N-dimensional image processing
odr         Orthogonal distance regression
optimize    Optimization and root-finding routines
signal      Signal processing
sparse      Sparse matrices and associated routines
spatial     Spatial data structures and algorithms
special     Special functions
stats       Statistical distributions and functions

Scipy sub-packages need to be imported separately, for example:

from scipy import linalg, optimize

Because of their ubiquitousness, some of the functions in these subpackages are also made available in the scipy namespace to ease their use in interactive sessions and programs. In addition, many basic array functions from numpy are also available at the top-level of the scipy package. Before looking at the sub-packages individually, we will first look at some of these common functions.

Finding documentation
SciPy and NumPy have documentation versions in both HTML and PDF format available [here], that cover nearly all available functionality. However, this documentation is still work-in-progress and some parts may be incomplete or sparse. As we are a volunteer organization and depend on the community for growth, your participation – everything from providing feedback to improving the documentation and code – is welcome and actively encouraged.

Python’s documentation strings are used in SciPy for on-line documentation. There are two methods for reading them and getting help. One is Python’s command help in the pydoc module. Entering this command with no arguments (i.e. help) launches an interactive help session that allows searching through the keywords and modules available to all of Python. Secondly, running the command help(obj) with an object as the argument displays that object’s calling signature, and documentation string.

The pydoc method of help is sophisticated but uses a pager to display the text. Sometimes this can interfere with the terminal you are running the interactive session within. A numpy/scipy-specific help system is also available under the command numpy.info. The signature and documentation string for the object passed to the help command are printed to standard output (or to a writeable object passed as the third argument). The second keyword argument of numpy.info defines the maximum width of the line for printing. If a module is passed as the argument to help then a list of the functions and classes defined in that module is printed. For example: *** no, I didn't manage to do the example *** but there's the reference guide; for optimize it's this one.
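
For what it's worth, this variant works for me: numpy.info accepts an `output` argument, so the text can be captured instead of going to the terminal pager (np.polyval chosen arbitrarily as the target):

```python
import io
import numpy as np

# np.info can write to any writeable object via the output argument
buf = io.StringIO()
np.info(np.polyval, output=buf)
text = buf.getvalue()          # signature + docstring of np.polyval
```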

:mrgreen:

SciPy – 7 – eigenvalues and eigenvectors

Continuing from here, copying from here.
Really, "autovalori e autovettori" is the Italian translation of eigenvalues and eigenvectors; or at least I think so 😎

The first topic that you will tackle is eigenvalues and eigenvectors.

Eigenvalues are a new way to see into the heart of a matrix. But before you go more into that, let’s explain first what eigenvectors are. Almost all vectors change direction when they are multiplied by a matrix. Certain exceptional vectors, however, come out of the multiplication still pointing in the same direction. These are the eigenvectors.

In other words, multiply an eigenvector by a matrix, and the resulting vector of that multiplication is equal to a multiplication of the original eigenvector with λ, the eigenvalue: Ax=λx.

This means that the eigenvalue gives you very valuable information: it tells you whether one of the eigenvectors is stretched, shrunk, reversed, or left unchanged—when it is multiplied by a matrix.

and

You use the eig() function from the linalg SciPy module to solve ordinary or generalized eigenvalue problems for square matrices.

Note that the eigvals() function is another way of unpacking the eigenvalues of a matrix.

When you’re working with sparse matrices, you can fall back on the module scipy.sparse to provide you with the correct functions to find the eigenvalues and eigenvectors: la, v = sparse.linalg.eigs(myMatrix,1).

Note that the code above specifies the number of eigenvalues and eigenvectors that has to be retrieved, namely, 1.
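
myMatrix is not defined in the post, so here is a self-contained sketch (mine, with a diagonal matrix whose eigenvalues are known in advance):

```python
from scipy import sparse
from scipy.sparse import linalg as sla

# a 5x5 sparse diagonal matrix with eigenvalues 1..5
m = sparse.diags([1.0, 2.0, 3.0, 4.0, 5.0])

# ask for the single eigenvalue of largest magnitude (k=1)
vals, vecs = sla.eigs(m, k=1)
```

The largest-magnitude eigenvalue returned should be 5.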

The eigenvalues and eigenvectors are important concepts in many computer vision and machine learning techniques, such as Principal Component Analysis (PCA) for dimensionality reduction and EigenFaces for face recognition.

Singular Value Decomposition (SVD)
Next, you need to know about SVD if you want to really learn data science. The singular value decomposition of a matrix A is the decomposition or factorization of A into the product of three matrices: A = U·Σ·Vt

The size of the individual matrices is as follows if you know that matrix A is of size M x N:

  • Matrix U is of size M x M
  • Matrix V is of size N x N
  • Matrix Σ is of size M x N

The · indicates that the matrices are multiplied, and the t that you see in Vt means that the matrix is transposed, which means that the rows and columns are interchanged.

Simply stated, singular value decomposition provides a way to break a matrix into simpler, meaningful pieces. These pieces may contain some data we are interested in.
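
A minimal sketch (mine, with an arbitrary 2×3 matrix) showing the shapes listed above and that the three pieces really rebuild A:

```python
import numpy as np
from scipy import linalg

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])       # M x N with M=2, N=3

U, s, Vt = linalg.svd(A)           # U: M x M, Vt: N x N, s: singular values

# rebuild the M x N sigma matrix and verify A = U @ Sigma @ Vt
Sigma = np.zeros((2, 3))
Sigma[:2, :2] = np.diag(s)
A_rebuilt = U @ Sigma @ Vt
```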

As usual, the result is not the one expected 😡

Note that for sparse matrices, you can use the sparse.linalg.svds() function to perform the decomposition.

If you’re new to data science, the matrix decomposition will be quite opaque for you. Indeed, and the same goes for the example that follows, for which I'm not equipped. Maybe Karlijn's (rockz) tutorial is too specialized for me. A break, then we continue 😊

:mrgreen:

SciPy – 6 – operations on vectors

Continuing from here, copying from here.

Now that you have learned or refreshed the difference between vectors, dense matrices and sparse matrices, it’s time to take a closer look at vectors and what kind of mathematical operations you can do with them. The tutorial focuses here explicitly on mathematical operations so that you’ll come to see the similarities and differences with matrices, and because a huge part of linear algebra is, ultimately, working with matrices.

You have already seen that you can easily create a vector with np.array(). But now that you have vectors at your disposal, you might also want to know of some basic operations that can be performed on them. vector1 and vector2 are already loaded for you in the following code chunk:
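
The chunk with vector1 and vector2 is missing here, so this sketch defines them itself (arbitrary values) and runs a few basic operations:

```python
import numpy as np

# vector1 and vector2 were pre-loaded in the tutorial; defined here for the sketch
vector1 = np.array([1, 2, 3])
vector2 = np.array([4, 5, 6])

v_sum = vector1 + vector2        # element-wise sum
v_scaled = 3 * vector1           # scaling
dot = np.dot(vector1, vector2)   # dot product: 4 + 10 + 18 = 32
```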

Now that you have successfully seen some vector operations, it’s time to get started on to the real matrix work!

Matrices: operations and functions
Similarly to where you left it off at the start of the previous section, you know how to create matrices, but you don’t know yet how you can use them to your advantage. This section will provide you with an overview of some matrix functions and basic matrix routines that you can use to work efficiently.

Firstly, let’s go over some functions. These will come quite easily if you have worked with NumPy before, but even if you don’t have any experience with it yet, you’ll see that these functions are easy to get going.

Let’s look at some examples of functions.

There’s np.add() and np.subtract() to add and subtract arrays or matrices, and also np.divide() and np.multiply() for division and multiplication. This really doesn’t seem like a big mystery, does it? Also the np.dot() function that you have seen in the previous section, where it was used to calculate the dot product, can also be used with matrices. But don’t forget to pass in two matrices instead of vectors.

These are basic, right? Let’s go a bit less basic. When it comes to multiplications, there are also some other functions that you can consider, such as np.vdot() for the dot product of vectors, np.inner() or np.outer() for the inner or outer products of arrays, np.tensordot() for the tensor dot product, and np.kron() for the Kronecker product of two arrays:

Besides these, it might also be useful to consider some functions of the linalg module, starting with the matrix exponential function linalg.expm(). (The tutorial also mentions linalg.expm2() and linalg.expm3(), which computed the exponential in different ways, but they have since been deprecated and removed from SciPy; stick with linalg.expm().)
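
A quick sanity check (mine) on linalg.expm with two matrices whose exponential is known in closed form:

```python
import numpy as np
from scipy import linalg

A = np.zeros((2, 2))
E = linalg.expm(A)         # the exponential of the zero matrix is the identity

# for a diagonal matrix, expm simply exponentiates the diagonal
D = np.diag([1.0, 2.0])
ED = linalg.expm(D)
```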

There are also trigonometric functions such as linalg.cosm(), linalg.sinm() and linalg.tanm(), hyperbolic trigonometric functions such as linalg.coshm(), linalg.sinhm() and linalg.tanhm(), the sign function linalg.signm(), the matrix logarithm linalg.logm(), and the matrix square root linalg.sqrtm().

Additionally, you can also evaluate a matrix function with the help of the linalg.funm() function. OOPS 😯 I didn't manage to do it, funm() unknown 😡
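
For the record, scipy.linalg.funm does exist in current SciPy (so the failure above was probably an import or environment hiccup). A minimal sketch (mine) comparing it against expm on a symmetric matrix, where the two must agree:

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])

F = linalg.funm(A, np.exp)   # generic matrix function evaluation
E = linalg.expm(A)           # specialized matrix exponential
```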

Let’s now take a look at some basic matrix routines. The first thing that you probably want to check out are the matrix attributes: T for transposition, H for conjugate transposition, I for inverse, and A to cast as an array.

Panic again here; I need to dig deeper, for now like this:

When you transpose a matrix, you make a new matrix whose rows are the columns of the original. A conjugate transposition, on the other hand, interchanges the row and column index for each matrix element. The inverse of a matrix is a matrix that, if multiplied with the original matrix, results in an identity matrix.

But besides those attributes, there are also real functions that you can use to perform some basic matrix routines, such as np.transpose() and linalg.inv() for transposition and matrix inverse, respectively. Ah! now you tell me! The short names, though, remain to be discovered.

Besides these, you can also retrieve the trace or sum of the elements on the main matrix diagonal with np.trace(). Similarly, you can also retrieve the matrix rank or the number of Singular Value Decomposition singular values of an array that are greater than a certain threshold with linalg.matrix_rank from NumPy. Don’t worry if the matrix rank doesn’t make sense for now; you’ll see more on that later on in this tutorial.

For now, let’s focus on two more routines that you can use:

  • The norm of a matrix can be computed with linalg.norm: a matrix norm is a number defined in terms of the entries of the matrix. The norm is a useful quantity which can give important information about a matrix because it tells you how large the elements are.
  • On top of that, you can also calculate the determinant, which is a useful value that can be computed from the elements of a square matrix, with linalg.det(). The determinant boils down a square matrix to a single number, which determines whether the square matrix is invertible or not.
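
A small sketch (mine) of both routines on a matrix where the values are easy to check by hand:

```python
import numpy as np
from scipy import linalg

A = np.array([[1., 2.],
              [3., 4.]])

fro = linalg.norm(A)   # default (Frobenius) norm: sqrt(1 + 4 + 9 + 16) = sqrt(30)
d = linalg.det(A)      # determinant: 1*4 - 2*3 = -2, nonzero so A is invertible
```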

Lastly, solving large systems of linear equations is one of the most basic applications of matrices. If you have a system of Ax = b, where A is a square matrix and b a general matrix, you have two methods that you can use to find x, depending of course on which type of matrix you’re working with.

I can't manage the first part (well, I can, but I get different results); this is the second:

To solve sparse systems, you can use sparse.linalg.spsolve() (note: spsolve lives in scipy.sparse.linalg, not in scipy.linalg). When you cannot solve the equation, it might still be possible to obtain an approximate x with the help of the linalg.lstsq() command.
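
A minimal sketch (mine, with an arbitrary well-conditioned system) showing the dense solver, the sparse solver, and the least-squares fallback all agreeing:

```python
import numpy as np
from scipy import linalg, sparse
from scipy.sparse.linalg import spsolve

A = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])

x_dense = linalg.solve(A, b)                   # dense solver
x_sparse = spsolve(sparse.csr_matrix(A), b)    # sparse solver, same answer
x_lstsq = linalg.lstsq(A, b)[0]                # least-squares fallback
```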

Now that you have gotten a clue on how you can create matrices and how you can use them for mathematical operations, it’s time to tackle some more advanced topics that you’ll need to really get into machine learning.

:mrgreen:

SciPy – 5 – linear algebra with SciPy

Continuing from here, copying from here.
Done with the NumPy review, today SciPy enters the field 😁

Of course you first need to make sure that you have Python installed. Go to this page if you still need to do this 🙂 If you’re working on Windows, make sure that you have added Python to the PATH environment variable. In addition, don’t forget to install a package manager, such as pip, which will ensure that you’re able to use Python’s open-source libraries.

Karlijn suggests using pip, but having installed Anaconda I already have SciPy:

After these steps, you’re ready to go!

Uh! here it is, just as I was saying:

Tip: install the package by downloading the Anaconda Python distribution. It’s an easy way to get started quickly, as Anaconda not only includes 100 of the most popular [here] Python, R and Scala packages for data science, but also includes several open source development environments such as Jupyter and Spyder.

Vectors and matrices: the basics
Now that you have made sure that your workspace is prepped, you can finally get started with linear algebra in Python. In essence, this discipline is occupied with the study of vector spaces and the linear mappings that exist between them. These linear mappings can be described with matrices, which also makes it easier to calculate.

Remember that a vector space is a fundamental concept in linear algebra. It’s a space where you have a collection of objects (vectors) and where you can add or scale two vectors without the resulting vector leaving the space. Remember also that vectors are rows (or columns) of a matrix.

But how does this work in Python?

You can easily create a vector with the np.array() function. Similarly, you can give a matrix structure to every one-or two-dimensional ndarray with either the np.matrix() or np.mat() commands.

Well, not exactly. There are some differences:

  • A matrix is 2-D, while arrays are usually n-D,
  • As the functions above already implied, the matrix is a subclass of ndarray,
  • Both arrays and matrices have .T, but only matrices have .H and .I,
  • Matrix multiplication works differently from element-wise array multiplication, and
  • To add to this, the ** operation has different results for matrices and arrays

When you’re working with matrices, you might sometimes have some in which most of the elements are zero. These matrices are called “sparse matrices”, while the ones that have mostly non-zero elements are called “dense matrices”.

In itself, this seems trivial, but when you’re working with SciPy for linear algebra, this can sometimes make a difference in the modules that you use to get certain things done. More concretely, you can use scipy.linalg for dense matrices, but when you’re working with sparse matrices, you might also want to consider checking up on the scipy.sparse module, which also contains its own scipy.sparse.linalg.

For sparse matrices, there are quite a number of options to create them. The code chunk below lists some:
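
The chunk is an image in the original post; a minimal reconstruction (my choice of values) covering a couple of the constructors:

```python
import numpy as np
from scipy import sparse

# compressed sparse row / column matrices from a mostly-zero dense array
dense = np.array([[0, 0, 3],
                  [4, 0, 0]])
csr = sparse.csr_matrix(dense)
csc = sparse.csc_matrix(dense)

# dok_matrix: a dictionary-of-keys matrix, filled element by element
dok = sparse.dok_matrix((2, 3))
dok[0, 2] = 3
dok[1, 0] = 4
```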

Additionally, there are also some other functions that you might be able to use to create sparse matrices: Block Sparse Row matrices with bsr_matrix(), COOrdinate format sparse matrices with coo_matrix(), DIAgonal storage sparse matrices with dia_matrix(), and Row-based linked list sparse matrices with lil_matrix().

There are really a lot of options, but which one should you choose if you’re making a sparse matrix yourself? It’s not that hard.

Basically, it boils down to first is how you’re going to initialize it. Next, consider what you want to be doing with your sparse matrix.

More concretely, you can go through the following checklist to decide what type of sparse matrix you want to use:

  • If you plan to fill the matrix with numbers one by one, pick a coo_matrix() or dok_matrix() to create your matrix.
  • If you want to initialize the matrix with an array as the diagonal, pick dia_matrix() to initialize your matrix.
  • For sliced-based matrices, use lil_matrix().
  • If you’re constructing the matrix from blocks of smaller matrices, consider using bsr_matrix().
  • If you want to have fast access to your rows and columns, convert your matrices by using the csr_matrix() and csc_matrix() functions, respectively. The last two functions are not great to pick when you need to initialize your matrices, but when you’re multiplying, you’ll definitely notice the difference in speed.

Go ahead, no worries! says the prof 😊 Sometimes an instruction is missing, as in the previous screenshot, sometimes an import is missing, but it can be done (cit.). And Karlijn rockz! 🚀

:mrgreen:

SciPy – 4 – indexing and slicing arrays

Continuing from here, copying from here.

With indexing, you basically use square brackets [] to index the array values. In other words, you can use indexing if you want to gain access to selected elements -a subset- of an array. Slicing your data is very similar to subsetting it, but you can consider it to be a little bit more advanced. When you slice an array, you don’t consider just particular values, but you work with “regions” of data instead of pure “locations”.

I prepare the arrays

and here we go

Now that your mind is fresh with slicing and indexing, you might also be interested in some index tricks that can make your work more efficient when you’re going into scientific computing together with SciPy. There are four that will definitely come up, and these are np.mgrid, np.ogrid, np.r_ and np.c_.

You might already know the two last ones if you already have some experience with NumPy. np.r_ and np.c_ are often used when you need to stack arrays row-wise or column-wise, respectively. With these, you can quickly construct arrays instead of using the np.concatenate() function.

and

By looking at the two first of these four functions, you might ask yourself why you would need a meshgrid. You can use meshgrids to generate two arrays containing the x- and y-coordinates at each position in a rectilinear grid. The np.meshgrid() function takes two 1D arrays and produces two 2D matrices corresponding to all pairs of (x, y) in the two arrays.

  • The np.mgrid() function is an implementation of MATLAB’s meshgrid and returns arrays that have the same shape. That means that the dimensions and number of the output arrays are equal to the number of indexing dimensions.
  • The np.ogrid() function, on the other hand, gives an open meshgrid and isn’t as dense as the result that the np.mgrid() function gives. You can see the visual difference between the two in the code chunk above.

You can also read up on the differences between these two functions here.

Another function that you might be able to use for indexing/slicing purposes is the np.select() function. You can use it to return values from a list of arrays depending on conditions, which you can specify yourself in the first argument of the function. In the second argument, you pass the array that you want to consider for this selection process.

Awesome! Now that you have selected the right values of your original array, you can still select the shape and manipulate your new array.

Shape selection and manipulation
NumPy offers a lot of ways to select and manipulate the shape of your arrays and you’ll probably already know a lot of them. The following section will only give a short overview of the functions that might come in handy, so the post won’t cover all of them.

Now, is there such a thing as functions that are handy when you’re working with SciPy?

Well, the most useful ones are those that help you flatten, stack and split arrays. You have already seen the np.c_ and np.r_ index tricks, which you’ll often prefer over np.concatenate(), but there are many more that you’ll want to know!

Like np.hstack() to horizontally stack your arrays or np.vstack() to vertically stack your arrays. Similarly, you can use np.vsplit() and np.hsplit() to split your arrays vertically and horizontally. But you’ll probably know all of this already.
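As a refresher, a small sketch of the four routines together (my own toy arrays):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

h = np.hstack((a, b))    # shape (2, 4): arrays side by side
v = np.vstack((a, b))    # shape (4, 2): arrays on top of each other

# Splitting undoes the stacking
left, right = np.hsplit(h, 2)
top, bottom = np.vsplit(v, 2)
print(h.shape, v.shape, (left == a).all(), (top == a).all())
```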

Remember that the np.eye() function creates a 2×2 identity array in this case, which is perfect to stack with the 2-D array that has been loaded in for you in the code chunk above.

If you want to know more about the conditions that you need to take into account if you want to stack arrays, go here. However, you can also just look at the arrays and at what the functions do to gather an intuition of which ‘rules’ you need to respect if you want to join two arrays by row or column.

The most important thing that you need to take into account when splitting arrays is probably the shape of your array, because you want to select the correct index at which you want the split to occur.

Besides functions to stack and split arrays, keep in mind that functions which ensure you’re working with arrays of a certain dimension are indispensable when you’re diving deeper into scientific computing.

Consider the following functions:

Note: I followed Karlijn’s tips, but the transpose only makes sense for multidimensional arrays

Note the difference between reshaping and resizing your array. With the first, you change the shape of the data but you don’t change the data itself. When you resize, there is the possibility that the data that is contained within the array will change, depending on the shape that you select, of course.
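A short illustration of that difference (toy array of my own):

```python
import numpy as np

a = np.arange(6)

# reshape: same data, different shape (the sizes must match)
r = a.reshape(2, 3)

# np.resize: the new shape may hold a different number of elements;
# the data is repeated (or truncated) to fill it
s = np.resize(a, (2, 4))
print(r)
print(s)   # [[0 1 2 3]
           #  [4 5 0 1]]
```

Note that the np.resize() function repeats the data to fill the larger shape, while the in-place ndarray.resize() method pads with zeros instead.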

Besides the splitting and stacking routines and the functions that allow you to further manipulate your arrays, there is also something like “vectorization” that you need to consider. When you apply a function to an array, you usually apply it to each element of the array. Consider, for example, applying np.cos(), np.sin() or np.tan() to an array. You’ll see that it works on all array elements. Now, when you see this, you know that the function is vectorized.

But, when you define functions by yourself, as you will most likely do when you’re getting into scientific computing, you might also want to vectorize them. In those cases, you can call np.vectorize():
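A minimal sketch of vectorizing a scalar function of my own:

```python
import numpy as np

# A plain Python function that only handles scalars
def step(x):
    return 1 if x > 0 else 0

vstep = np.vectorize(step)        # now it maps over every element
out = vstep(np.array([-2.0, 0.0, 3.5]))
print(out)   # [0 0 1]
```

Keep in mind that np.vectorize() is essentially a convenience loop: it gives you the element-wise, broadcasting behavior, not the speed of a true NumPy ufunc.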

When it comes to other vectorized mathematical functions that you might want to know, you should consider np.angle(), which returns the angle of the elements of a complex array, but basic trigonometric, exponential and logarithmic functions will also come in handy.

:mrgreen:

SciPy – 3 – creating arrays

I continue from here, copying from here.

You have now seen how to inspect your array and to make adjustments to its data type, but you haven’t explicitly seen how to create arrays. You should already know that you can use np.array() to do this, but there are other routines for array creation that you should know about: np.eye() and np.identity().

The np.eye() function allows you to create a square matrix whose dimension equals the positive integer you give as an argument. The entries are filled with zeros, except the main diagonal, which is filled with ones. The np.identity() function does the same and also returns an identity array.

However, note that np.eye() can take an additional argument k that you can specify to pick the index of the diagonal that you want to populate with ones.
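For example:

```python
import numpy as np

i3 = np.eye(3)              # 3x3, ones on the main diagonal
id3 = np.identity(3)        # same result
shifted = np.eye(3, k=1)    # ones on the first superdiagonal
print(shifted)
# [[0. 1. 0.]
#  [0. 0. 1.]
#  [0. 0. 0.]]
```

Negative values of k pick subdiagonals below the main diagonal instead.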

A note for myself, since sometimes my memory… 😉: here you can find the indexes of the NumPy and SciPy functions, to bookmark right away 😊

Other array creation functions that will most definitely come in handy when you’re working with the matrices for linear algebra are the following:

  • The np.arange() function creates an array with uniformly spaced values between two numbers; you specify the spacing between the elements.
  • The latter also holds for np.linspace(), but with this function you specify the number of elements that you want in your array.
  • Lastly, the np.logspace() function also creates arrays with evenly spaced values, but on a logarithmic scale: its start and stop arguments are base-10 exponents, and the elements run from 10**start to 10**stop, evenly spaced in the exponent.
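The three routines side by side (a sketch of my own):

```python
import numpy as np

stepped = np.arange(0, 1, 0.25)    # you give the step
counted = np.linspace(0, 1, 5)     # you give the number of elements
logged = np.logspace(0, 2, 3)      # exponents 0..2 -> [1, 10, 100]
print(stepped, counted, logged, sep="\n")
```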

Now prof Karlijn says: Now that you have refreshed your memory and you know how to handle the data types of your arrays, it’s time to also tackle the topic of indexing and slicing. Coming up soon.

:mrgreen:

SciPy – 2 – essential NumPy objects

I continue from here, copying from here.

A quick review of things we have already seen.

An array is, structurally speaking, nothing but pointers. It’s a combination of a memory address, a data type, a shape and strides. It contains information about the raw data, how to locate an element and how to interpret an element.

The memory address and strides are important when you dive deeper into the lower-level details of arrays, while the data type and shape are things that beginners should surely know and understand. Two other attributes that you might want to consider are the data and size, which allow you to gather even more information on your array.
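Since the original code chunk isn’t reproduced here, a minimal stand-in (the variable name follows the tutorial):

```python
import numpy as np

myArray = np.array([[1, 2, 3], [4, 5, 6]])

print(myArray.dtype)     # data type (int64 on most 64-bit platforms)
print(myArray.shape)     # (2, 3)
print(myArray.strides)   # bytes to step along each dimension
print(myArray.size)      # 6, the total number of elements
print(myArray.data)      # the memory buffer behind the array
```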

You’ll see in the results of the code that is included in the code chunk above that the data type of myArray is int64. When you’re intensively working with arrays, you will definitely remember that there are ways to convert your arrays from one data type to another with the astype() method.
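A one-liner sketch of astype() (toy data of my own):

```python
import numpy as np

myArray = np.array([1, 2, 3])          # integer dtype
asFloat = myArray.astype(float)        # a new array with float64 dtype
print(asFloat, asFloat.dtype)          # [1. 2. 3.] float64
```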

Nevertheless, when you’re using SciPy and NumPy together, you might also find the following type handling NumPy functions very useful, especially when you’re working with complex numbers:

Try to add print() calls to see the results of the code that is given above. Then, you’ll see that complex numbers have a real and an imaginary part to them. The np.real() and np.imag() functions are designed to return these parts to the user, respectively.

Alternatively, you might also be able to use np.cast to cast an array object to a different data type, such as float in the example above. Note, though, that np.cast is a legacy alias (removed in NumPy 2.0); astype() is the preferred way to convert.

The only thing that really stands out in difficulty in the above code chunk is the np.real_if_close() function. When you give it a complex input, such as myArray, you’ll get a real array back if the complex parts are close to zero. This last part, “close to 0”, can be adjusted by yourself with the tol argument that you can pass to the function.
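A small sketch of the complex-number helpers (toy values of my own):

```python
import numpy as np

myArray = np.array([1 + 2j, 3 - 1j])

print(np.real(myArray))   # [1. 3.]
print(np.imag(myArray))   # [ 2. -1.]

# If the imaginary parts are negligible, real_if_close drops them;
# tol is measured in multiples of the machine epsilon
almostReal = np.real_if_close(np.array([3 + 1e-14j]), tol=1000)
print(almostReal, almostReal.dtype)   # [3.] float64
```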

OK? 😎 to be continued

:mrgreen:

SciPy – 1 – linear algebra

I’m still stuck at the start because well begun is… 😉 and I still have to decide where to copy from 😊; it would take someone like Jake, he explains things well, rockz 🚀

I could start from the SciPy Reference Guide, but it seems a bit too documentation-heavy; maybe the Scipy Tutorial: Vectors and Arrays (Linear Algebra) by Karlijn Willems is better.

Karlijn’s tutorial seems a bit short, and it only covers part of SciPy (judging from the index of the Reference), but it can be a start. So off I go; we’ll see 😉 Meanwhile, the usual mantra.

I continue from here, copying from here.

Much of what you need to know to really dive into machine learning is linear algebra, and that is exactly what this tutorial tackles. Today’s post goes over the linear algebra topics that you need to know and understand to improve your intuition for how and when machine learning methods work by looking at the level of vectors and matrices.

By the end of the tutorial, you’ll hopefully feel more confident to take a closer look at an algorithm!

Introduction
With Jake I went through an entire notebook on NumPy, one of the core libraries for scientific computing in Python. This library contains a collection of tools and techniques that can be used to solve mathematical models of problems in Science and Engineering on a computer. And then there is SciPy, a package that gives us better performance: built on NumPy’s powerful data structure, it allows you to efficiently compute with arrays and matrices.

Now, SciPy is basically NumPy.

It’s also one of the core packages for scientific computing that provides mathematical algorithms and convenience functions, but it’s built on the NumPy extension of Python. This means that SciPy and NumPy are often used together.

Later on in this tutorial, it will become clear to you how closely these two libraries collaborate.

Interacting with NumPy and SciPy
To interact efficiently with both packages, you first need to know some of the basics of this library and its powerful data structure. To work with these arrays, there’s a huge amount of high-level mathematical functions that operate on these matrices and arrays.

We’ll now see what it takes to use SciPy efficiently. In essence, you have to know about the array structure, how to handle data types and how to manipulate the shape of arrays. Ah! there’s a cheat sheet for both NumPy and SciPy.

A break 😊 after all, we’re still getting ready for departure 😎

:mrgreen: