Category Archives: SciPy

SciPy – 8 – why and what SciPy is for

I continue from here, copying from here.

After finishing prof Karlijn's course 🚀 and a few exchanges of opinions with a couple of nerds who use these things (I'm not saying it's their fault|merit, mind you!) I think an approach to SciPy similar to the one adopted for NumPy (&co.) would be useful. Unfortunately Jake is missing 🚀.

I think the official documentation suits my case. Right from the introduction, here it is:

SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension of Python. It adds significant power to the interactive Python session by providing the user with high-level commands and classes for manipulating and visualizing data. With SciPy an interactive Python session becomes a data-processing and system-prototyping environment rivaling systems such as MATLAB, IDL, Octave, R-Lab, and SciLab.

Two things worth noting, in my opinion: 1) these are not things you use every day; rather, you use them only when needed, i.e. rarely; and 2) there's MATLAB: people use it because they have been using it for a long time, the things are already done (i.e. known) there, and everybody knows (uses?) it. And the free version, Octave, is (almost always) perfectly fine. I've talked about this in the past; I don't share this opinion but I understand it. Also because (I don't want to go into detail, it would be a long story) it can be used interactively in a really functional GUI.

And Python may well be sexy, but it's still something to learn, and some things are new 😯

In any case these posts are for nobody but me, so off I go 😊

A note right from the start, the usual convention: For brevity and convenience, we will often assume that the main packages (numpy, scipy, and matplotlib) have been imported as:

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

The organization of SciPy
SciPy is organized into subpackages covering different scientific computing domains. These are summarized in the following table:

Subpackage  Description
cluster     Clustering algorithms
constants   Physical and mathematical constants
fftpack     Fast Fourier Transform routines
integrate   Integration and ordinary differential equation solvers
interpolate Interpolation and smoothing splines
io          Input and Output
linalg      Linear algebra
ndimage     N-dimensional image processing
odr         Orthogonal distance regression
optimize    Optimization and root-finding routines
signal      Signal processing
sparse      Sparse matrices and associated routines
spatial     Spatial data structures and algorithms
special     Special functions
stats       Statistical distributions and functions

Scipy sub-packages need to be imported separately, for example:

from scipy import linalg, optimize

Because of their ubiquitousness, some of the functions in these subpackages are also made available in the scipy namespace to ease their use in interactive sessions and programs. In addition, many basic array functions from numpy are also available at the top-level of the scipy package. Before looking at the sub-packages individually, we will first look at some of these common functions.

Finding documentation
SciPy and NumPy have documentation versions in both HTML and PDF format available [here], that cover nearly all available functionality. However, this documentation is still work-in-progress and some parts may be incomplete or sparse. As we are a volunteer organization and depend on the community for growth, your participation – everything from providing feedback to improving the documentation and code – is welcome and actively encouraged.

Python’s documentation strings are used in SciPy for on-line documentation. There are two methods for reading them and getting help. One is Python’s command help in the pydoc module. Entering this command with no arguments (i.e. help) launches an interactive help session that allows searching through the keywords and modules available to all of Python. Secondly, running the command help(obj) with an object as the argument displays that object’s calling signature, and documentation string.

The pydoc method of help is sophisticated but uses a pager to display the text. Sometimes this can interfere with the terminal you are running the interactive session within. A numpy/scipy-specific help system is also available under the command numpy.info. The signature and documentation string for the object passed to the help command are printed to standard output (or to a writeable object passed as the third argument). The second keyword argument of numpy.info defines the maximum width of the line for printing. If a module is passed as the argument to help then a list of the functions and classes defined in that module is printed. For example: *** no, I didn't manage to run the example *** but there's the reference guide; for optimize it's this one.
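Since I couldn't reproduce the example, here is a sketch of how numpy.info can be used (the choice of scipy.optimize and of the inspected functions is my own, not the tutorial's):

```python
import io
import numpy as np
from scipy import optimize

# Print the signature and docstring of a single object to stdout:
np.info(optimize.fmin)

# Passing a whole module lists the functions and classes it defines:
np.info(optimize)

# The output can be redirected to any writeable object
# via the `output` keyword argument:
buf = io.StringIO()
np.info(np.polyval, output=buf)
print(buf.getvalue()[:72])
```

The `maxwidth` keyword (second argument) controls the line width of the printed text.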

:mrgreen:

SciPy – 7 – eigenvalues and eigenvectors

I continue from here, copying from here.
Really, the original Italian title ("autovalori e autovettori") is the translation of eigenvalues and eigenvectors; or at least I think so 😎

The first topic that you will tackle are the eigenvalues and eigenvectors.

Eigenvalues are a new way to see into the heart of a matrix. But before you go more into that, let's explain first what eigenvectors are. Almost all vectors change direction when they are multiplied by a matrix. However, certain exceptional vectors keep the same direction: the result of the multiplication points the same way as the original vector. These are the eigenvectors.

In other words, multiply an eigenvector by a matrix, and the resulting vector of that multiplication is equal to a multiplication of the original eigenvector with λ, the eigenvalue: Ax=λx.

This means that the eigenvalue gives you very valuable information: it tells you whether one of the eigenvectors is stretched, shrunk, reversed, or left unchanged—when it is multiplied by a matrix.

and

You use the eig() function from the linalg SciPy module to solve ordinary or generalized eigenvalue problems for square matrices.

Note that the eigvals() function is another way of unpacking the eigenvalues of a matrix.

When you’re working with sparse matrices, you can fall back on the module scipy.sparse to provide you with the correct functions to find the eigenvalues and eigenvectors: la, v = sparse.linalg.eigs(myMatrix,1).

Note that the code above specifies the number of eigenvalues and eigenvectors that has to be retrieved, namely, 1.
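The tutorial's code chunks aren't reproduced above, so here is a sketch of eig(), eigvals() and the sparse fallback (the matrices are my own assumption; myMatrix from the quoted snippet is played here by a small diagonal sparse matrix):

```python
import numpy as np
from scipy import linalg, sparse
import scipy.sparse.linalg

A = np.array([[2., 0.], [0., 3.]])

# eig() solves the ordinary eigenvalue problem: eigenvalues
# come back complex-typed, eigenvectors as columns of v
la, v = linalg.eig(A)
print(la)
print(v)

# eigvals() is another way of unpacking just the eigenvalues
print(linalg.eigvals(A))

# For sparse matrices, sparse.linalg.eigs() retrieves k eigenpairs;
# here k=1, i.e. only the eigenvalue of largest magnitude
S = sparse.csc_matrix(np.diag([2., 3., 4.]))
la1, v1 = sparse.linalg.eigs(S, 1)
print(la1)   # the dominant eigenvalue, 4
```

Note that eigs() uses an iterative (ARPACK) method, so it can only compute a few eigenpairs, not all of them.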

The eigenvalues and eigenvectors are important concepts in many computer vision and machine learning techniques, such as Principal Component Analysis (PCA) for dimensionality reduction and EigenFaces for face recognition.

Singular Value Decomposition (SVD)
Next, you need to know about SVD if you want to really learn data science. The singular value decomposition of a matrix A is the decomposition or factorization of A into the product of three matrices, A = U Σ Vt:

The size of the individual matrices is as follows if you know that matrix A is of size M x N:

  • Matrix U is of size M x M
  • Matrix V is of size N x N
  • Matrix Σ is of size M x N

The product sign indicates that the matrices are multiplied, and the t that you see in Vt means that the matrix is transposed, which means that the rows and columns are interchanged.

Simply stated, singular value decomposition provides a way to break a matrix into simpler, meaningful pieces. These pieces may contain some data we are interested in.

As usual, the result is not the expected one 😡

Note that for sparse matrices, you can use the sparse.linalg.svds() function to perform the decomposition.
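Since my own attempt didn't give the expected result, here is a minimal sketch of the dense decomposition with linalg.svd() (the example matrix is my own choice, taken to be rectangular so the three shapes differ):

```python
import numpy as np
from scipy import linalg

# A is M x N with M=4, N=5
A = np.array([[1., 0., 0., 0., 2.],
              [0., 0., 3., 0., 0.],
              [0., 0., 0., 0., 0.],
              [0., 2., 0., 0., 0.]])

U, s, Vt = linalg.svd(A)     # s holds the singular values, sorted descending
print(U.shape, s.shape, Vt.shape)   # (4, 4) (4,) (5, 5)

# Rebuild A: put s on the main diagonal of an M x N matrix Sigma
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))   # True
```

For a sparse input, sparse.linalg.svds() plays the same role but, like eigs(), only returns a chosen number k of singular values.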

If you're new to data science, the matrix decomposition will be quite opaque for you. Indeed, and so is the example that follows, for which I'm not equipped. Maybe Karlijn's (rockz) tutorial is too specialized for me. A break, then we continue 😊

:mrgreen:

SciPy – 6 – operations on vectors

I continue from here, copying from here.

Now that you have learned or refreshed the difference between vectors, dense matrices and sparse matrices, it’s time to take a closer look at vectors and what kind of mathematical operations you can do with them. The tutorial focuses here explicitly on mathematical operations so that you’ll come to see the similarities and differences with matrices, and because a huge part of linear algebra is, ultimately, working with matrices.

You have already seen that you can easily create a vector with np.array(). But now that you have vectors at your disposal, you might also want to know of some basic operations that can be performed on them. vector1 and vector2 are already loaded for you in the following code chunk:
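The code chunk where vector1 and vector2 are "already loaded" isn't reproduced here, so the following sketch assumes two small example vectors of my own:

```python
import numpy as np

# vector1 and vector2 are the tutorial's placeholders;
# these particular values are my own assumption
vector1 = np.array([1, 2, 3])
vector2 = np.array([4, 5, 6])

print(vector1 + vector2)         # element-wise sum: [5 7 9]
print(vector1 - vector2)         # element-wise difference: [-3 -3 -3]
print(vector1 * vector2)         # element-wise product: [ 4 10 18]
print(np.dot(vector1, vector2))  # dot product: 4 + 10 + 18 = 32
print(np.linalg.norm(vector1))   # Euclidean length: sqrt(14)
```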

Now that you have successfully seen some vector operations, it’s time to get started on to the real matrix work!

Matrices: operations and functions
Similarly to where you left it off at the start of the previous section, you know how to create matrices, but you don’t know yet how you can use them to your advantage. This section will provide you with an overview of some matrix functions and basic matrix routines that you can use to work efficiently.

Firstly, let’s go over some functions. These will come quite easily if you have worked with NumPy before, but even if you don’t have any experience with it yet, you’ll see that these functions are easy to get going.

Let’s look at some examples of functions.

There's np.add() and np.subtract() to add and subtract arrays or matrices, and also np.divide() and np.multiply() for division and multiplication. This really doesn't seem like a big mystery, does it? Also the np.dot() function that you have seen in the previous section, where it was used to calculate the dot product, can be used with matrices. But don't forget to pass in two matrices instead of vectors.

These are basic, right? Let's go a bit less basic. When it comes to multiplications, there are also some other functions that you can consider, such as np.vdot() for the dot product of vectors, np.inner() or np.outer() for the inner or outer products of arrays, np.tensordot() for the tensor dot product, and np.kron() for the Kronecker product of two arrays:
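A sketch of these product functions on small example arrays (the values are my own, chosen so the differences between element-wise and matrix products are visible):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[0, 1], [1, 0]])

print(np.add(a, b))       # element-wise sum
print(np.multiply(a, b))  # element-wise (NOT matrix) product
print(np.dot(a, b))       # matrix product

x = np.array([1, 2])
y = np.array([3, 4])
print(np.vdot(x, y))      # dot product of the flattened inputs: 11
print(np.inner(x, y))     # inner product: 11
print(np.outer(x, y))     # 2x2 outer product
print(np.kron(x, y))      # Kronecker product: [3 4 6 8]
```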

Besides these, it might also be useful to consider some functions of the linalg module: the matrix exponential functions linalg.expm(), linalg.expm2() and linalg.expm3(). The difference between these three lies in the ways that the exponential is calculated. Stick to the first one for a general matrix exponential; note that expm2() and expm3() were later deprecated and removed, so in recent SciPy versions only expm() may be available.

Also trigonometric functions such as linalg.cosm(), linalg.sinm() and linalg.tanm(), hyperbolic trigonometric functions such as linalg.coshm(), linalg.sinhm() and linalg.tanhm(), the sign function linalg.signm(), the matrix logarithm linalg.logm(), and the matrix square root linalg.sqrtm().

Additionally, you can also evaluate a matrix function with the help of the linalg.funm() function. OOPS 😯 I didn't manage to run it, funm() unknown 😡
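For what it's worth, funm() does exist in scipy.linalg, so the failure above may have been an import or version issue; a sketch on a diagonal matrix of my own choosing, where the result is easy to check by hand:

```python
import numpy as np
from scipy import linalg

A = np.array([[1., 0.], [0., 2.]])

# Matrix exponential: on a diagonal matrix it just
# exponentiates the diagonal entries
print(linalg.expm(A))           # diag(e^1, e^2)

# funm() evaluates an arbitrary scalar function of the matrix
print(linalg.funm(A, np.exp))           # same result as expm() here
print(linalg.funm(A, lambda x: x**2))   # equals A @ A
```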

Let’s now take a look at some basic matrix routines. The first thing that you probably want to check out are the matrix attributes: T for transposition, H for conjugate transposition, I for inverse, and A to cast as an array.

Here panic again; I need to dig deeper, for now let's put it like this:

When you tranpose a matrix, you make a new matrix whose rows are the columns of the original. A conjugate transposition, on the other hand, interchanges the row and column index for each matrix element. The inverse of a matrix is a matrix that, if multiplied with the original matrix, results in an identity matrix.

But besides those attributes, there are also real functions that you can use to perform some basic matrix routines, such as np.transpose() and linalg.inv() for transposition and matrix inverse, respectively. Ah! now you tell me! The short names remain to be discovered, though.

Besides these, you can also retrieve the trace, or sum of the elements on the main matrix diagonal, with np.trace(). Similarly, you can also retrieve the matrix rank, i.e. the number of singular values of the array that are greater than a certain threshold, with linalg.matrix_rank from NumPy. Don't worry if the matrix rank doesn't make sense for now; you'll see more on that later on in this tutorial.

For now, let’s focus on two more routines that you can use:

  • The norm of a matrix can be computed with linalg.norm: a matrix norm is a number defined in terms of the entries of the matrix. The norm is a useful quantity which can give important information about a matrix because it tells you how large the elements are.
  • On top of that, you can also calculate the determinant, which is a useful value that can be computed from the elements of a square matrix, with linalg.det(). The determinant boils down a square matrix to a single number, which determines whether the square matrix is invertible or not.
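The routines just listed can be sketched together on one small matrix (the values are my own assumption):

```python
import numpy as np
from scipy import linalg

A = np.array([[3., 1.], [2., 4.]])

print(np.transpose(A))       # rows become columns
print(linalg.inv(A))         # inverse: A @ inv(A) is the identity
print(np.allclose(A @ linalg.inv(A), np.eye(2)))   # True

print(np.trace(A))           # sum of the diagonal: 3 + 4 = 7.0
print(np.linalg.matrix_rank(A))   # 2: the matrix has full rank

print(linalg.norm(A))        # Frobenius norm by default
print(linalg.det(A))         # determinant: 3*4 - 1*2 = 10.0
```

Since det(A) is nonzero, this particular A is invertible, which is exactly what the inverse check above confirms.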

Lastly, solving large systems of linear equations is one of the most basic applications of matrices. If you have a system of Ax = b, where A is a square matrix and b a general matrix, you have two methods that you can use to find x, depending of course on which type of matrix you're working with.

I can't do the first part (well, I can, but it gives me different results); this is the second one:

To solve sparse matrices, you can use sparse.linalg.spsolve(). When you cannot solve the equation, it might still be possible to obtain an approximate x with the help of the linalg.lstsq() command.
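A sketch of the three solvers on tiny systems of my own (the dense solve(), the sparse spsolve(), and lstsq() for an over-determined system that has no exact solution):

```python
import numpy as np
from scipy import linalg, sparse
import scipy.sparse.linalg

# Dense square system Ax = b: use linalg.solve()
A = np.array([[3., 1.], [1., 2.]])
b = np.array([9., 8.])
x = linalg.solve(A, b)
print(x)                             # [2. 3.]

# The same system stored sparsely: sparse.linalg.spsolve()
As = sparse.csc_matrix(A)
print(sparse.linalg.spsolve(As, b))  # [2. 3.]

# Over-determined system (3 equations, 2 unknowns): no exact
# solution, so ask lstsq() for the least-squares best fit
M = np.array([[1., 1.], [1., 2.], [1., 3.]])
y = np.array([1., 2., 2.])
coef, res, rank, sv = linalg.lstsq(M, y)
print(coef)                          # intercept and slope of the best-fit line
```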

Now that you have gotten a clue on how you can create matrices and how you can use them for mathematical operations, it’s time to tackle some more advanced topics that you’ll need to really get into machine learning.

:mrgreen:

SciPy – 5 – linear algebra with SciPy

I continue from here, copying from here.
Having finished the NumPy review, today SciPy takes the field 😁

Of course you first need to make sure that you have Python installed. Go to this page if you still need to do this 🙂 If you're working on Windows, make sure that you have added Python to the PATH environment variable. In addition, don't forget to install a package manager, such as pip, which will ensure that you're able to use Python's open-source libraries.

Karlijn suggests using pip but, having installed Anaconda, I already have SciPy:

After these steps, you're ready to go!

Uh! here it is, just as I was saying:

Tip: install the package by downloading the Anaconda Python distribution. It's an easy way to get started quickly, as Anaconda not only includes 100 of the most popular [here] Python, R and Scala packages for data science, but also includes several open source development environments such as Jupyter and Spyder.

Vectors and Matrices, the basics
Now that you have made sure that your workspace is prepped, you can finally get started with linear algebra in Python. In essence, this discipline is occupied with the study of vector spaces and the linear mappings that exist between them. These linear mappings can be described with matrices, which also makes it easier to calculate.

Remember that a vector space is a fundamental concept in linear algebra. It’s a space where you have a collection of objects (vectors) and where you can add or scale two vectors without the resulting vector leaving the space. Remember also that vectors are rows (or columns) of a matrix.

But how does this work in Python?

You can easily create a vector with the np.array() function. Similarly, you can give a matrix structure to every one- or two-dimensional ndarray with either the np.matrix() or np.mat() commands.

Well, not exactly. There are some differences:

  • A matrix is 2-D, while arrays are usually n-D,
  • As the functions above already implied, the matrix is a subclass of ndarray,
  • Both arrays and matrices have the .T attribute, but only matrices have .H and .I,
  • Matrix multiplication works differently from element-wise array multiplication, and
  • To add to this, the ** operation has different results for matrices and arrays

When you’re working with matrices, you might sometimes have some in which most of the elements are zero. These matrices are called “sparse matrices”, while the ones that have mostly non-zero elements are called “dense matrices”.

In itself, this seems trivial, but when you’re working with SciPy for linear algebra, this can sometimes make a difference in the modules that you use to get certain things done. More concretely, you can use scipy.linalg for dense matrices, but when you’re working with sparse matrices, you might also want to consider checking up on the scipy.sparse module, which also contains its own scipy.sparse.linalg.

For sparse matrices, there are quite a number of options to create them. The code chunk below lists some:

Additionally, there are also some other functions that you might be able to use to create sparse matrices: Block Sparse Row matrices with bsr_matrix(), COOrdinate format sparse matrices with coo_matrix(), DIAgonal storage sparse matrices with dia_matrix(), and Row-based linked list sparse matrices with lil_matrix().

There are really a lot of options, but which one should you choose if you’re making a sparse matrix yourself? It’s not that hard.

Basically, it boils down to two things: first, how you're going to initialize it; next, what you want to be doing with your sparse matrix.

More concretely, you can go through the following checklist to decide what type of sparse matrix you want to use:

  • If you plan to fill the matrix with numbers one by one, pick a coo_matrix() or dok_matrix() to create your matrix.
  • If you want to initialize the matrix with an array as the diagonal, pick dia_matrix() to initialize your matrix.
  • For slice-based matrices, use lil_matrix().
  • If you’re constructing the matrix from blocks of smaller matrices, consider using bsr_matrix().
  • If you want to have fast access to your rows and columns, convert your matrices by using the csr_matrix() and csc_matrix() functions, respectively. The last two functions are not great to pick when you need to initialize your matrices, but when you’re multiplying, you’ll definitely notice the difference in speed.
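The checklist above can be sketched in code; the shapes and values below are my own, and only a few of the formats are shown (incremental DOK construction, triplet-based COO construction, diagonal DIA construction, and conversion to CSR for fast arithmetic):

```python
import numpy as np
from scipy import sparse

# Fill the matrix one entry at a time with a DOK (dictionary of keys)...
m = sparse.dok_matrix((3, 3))
m[0, 0] = 1
m[2, 1] = 5

# ...then convert to CSR for fast row access and multiplication
csr = m.tocsr()
print(csr.toarray())

# COO format: build from (data, (row, col)) triplets in one go
rows = np.array([0, 1, 2])
cols = np.array([2, 0, 1])
data = np.array([4., 7., 5.])
coo = sparse.coo_matrix((data, (rows, cols)), shape=(3, 3))
print(coo.toarray())

# DIA format: initialize from an array placed on a diagonal (offset 0)
dia = sparse.dia_matrix((np.array([[1., 2., 3.]]), [0]), shape=(3, 3))
print(dia.toarray())        # 1, 2, 3 on the main diagonal
```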

Don't worry! says the prof 😊 Sometimes an instruction is missing, as in the previous screenshot, sometimes an import is missing, but it can be done (cit.). And Karlijn rockz! 🚀

:mrgreen:

SciPy – 4 – indexing and slicing arrays

I continue from here, copying from here.

With indexing, you basically use square brackets [] to index the array values. In other words, you can use indexing if you want to gain access to selected elements -a subset- of an array. Slicing your data is very similar to subsetting it, but you can consider it to be a little bit more advanced. When you slice an array, you don’t consider just particular values, but you work with “regions” of data instead of pure “locations”.

I prepare the arrays

and here we go

Now that your mind is fresh with slicing and indexing, you might also be interested in some index tricks that can make your work more efficient when you’re going into scientific computing together with SciPy. There are four functions that will definitely come up and these are np.mgrid(), np.ogrid(), np.r_ and np.c_.

You might already know the last two functions if you already have some experience with NumPy. np.r_ and np.c_ are often used when you need to stack arrays row-wise or column-wise, respectively. With these functions, you can quickly construct arrays instead of using the np.concatenate() function.

and

By looking at the first two of these four functions, you might ask yourself why you would need a meshgrid. You can use meshgrids to generate two arrays containing the x- and y-coordinates at each position in a rectilinear grid. The np.meshgrid() function takes two 1D arrays and produces two 2D matrices corresponding to all pairs of (x, y) in the two arrays.

  • The np.mgrid() function is an implementation of MATLAB’s meshgrid and returns arrays that have the same shape. That means that the dimensions and number of the output arrays are equal to the number of indexing dimensions.
  • The np.ogrid() function, on the other hand, gives an open meshgrid and isn’t as dense as the result that the np.mgrid() function gives. You can see the visual difference between the two in the code chunk above.

You can also read up on the differences between these two functions here.
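A small sketch of the four index tricks (the values are my own; note that mgrid and ogrid are indexed with slice syntax rather than called like ordinary functions):

```python
import numpy as np

# np.r_ and np.c_ stack row-wise / column-wise without np.concatenate()
print(np.r_[np.array([1, 2, 3]), 0, np.array([4, 5])])  # [1 2 3 0 4 5]
print(np.c_[np.array([1, 2]), np.array([3, 4])])        # 2x2, columns side by side

# mgrid gives a dense meshgrid: both outputs are full 3x3 arrays
X, Y = np.mgrid[0:3, 0:3]
print(X.shape, Y.shape)       # (3, 3) (3, 3)

# ogrid gives an open (not dense) meshgrid: shapes (3, 1) and (1, 3);
# broadcasting fills in the rest on demand
x, y = np.ogrid[0:3, 0:3]
print(x.shape, y.shape)
```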

Another function that you might be able to use for indexing/slicing purposes is the np.select() function. You can use it to return values from a list of arrays depending on conditions, which you can specify yourself in the first argument of the function. In the second argument, you pass the array that you want to consider for this selection process.
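A sketch of np.select() (the conditions and choices below are the standard textbook-style example, my own choice, not the tutorial's chunk):

```python
import numpy as np

x = np.arange(10)

# First argument: a list of boolean conditions;
# second argument: the arrays to pick values from where each holds
condlist = [x < 3, x > 5]
choicelist = [x, x**2]
print(np.select(condlist, choicelist))
# elements where neither condition holds fall back to the default 0:
# [ 0  1  2  0  0  0 36 49 64 81]
```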

Awesome! Now that you have selected the right values of your original array, you can still select the shape and manipulate your new array.

Shape selection and manipulation
NumPy offers a lot of ways to select and manipulate the shape of your arrays and you’ll probably already know a lot of them. The following section will only give a short overview of the functions that might come in handy, so the post won’t cover all of them.

Now, is there such a thing as functions that are handy when you’re working with SciPy?

Well, the ones that are most useful are the ones that can help you to flatten arrays, stack and split arrays. You have already seen the np.c_ and np.r_ functions that you’ll often prefer instead of np.concatenate(), but there are many more that you want to know!

Like np.hstack() to horizontally stack your arrays or np.vstack() to vertically stack your arrays. Similarly, you can use np.vsplit() and np.hsplit() to split your arrays vertically and horizontally. But you’ll probably know all of this already.

Remember that the np.eye() function creates a 2X2 identity array, which is perfect to stack with the 2-D array that has been loaded in for you in the code chunk above.
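Since the loaded 2-D array isn't shown here, the following sketch assumes one of my own, stacked with np.eye(2) and then split again:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
I = np.eye(2)                  # 2x2 identity array

v = np.vstack((a, I))          # stacked on top of each other: shape (4, 2)
h = np.hstack((a, I))          # side by side: shape (2, 4)
print(v.shape, h.shape)

# Splitting undoes the stacking; the index says where to cut
top, bottom = np.vsplit(v, 2)
print(np.array_equal(top, a))      # True
left, right = np.hsplit(h, 2)
print(np.array_equal(right, I))    # True
```

The shapes show the rule at work: vstack needs matching column counts, hstack matching row counts.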

If you want to know more about the conditions that you need to take into account if you want to stack arrays, go here. However, you can also just look at the arrays and at what the functions do to gather an intuition of which 'rules' you need to respect if you want to join the two arrays by row or column.

The most important thing that you need to take into account when splitting arrays is probably the shape of your array, because you want to select the correct index at which you want the split to occur.

Besides functions to stack and split arrays, keep in mind that functions which help ensure you're working with arrays of a certain dimension are indispensable when you're diving deeper into scientific computing.

Consider the following functions:

Note: I followed Karlijn's tips, but the transpose makes sense for multidimensional arrays

Note the difference between reshaping and resizing your array. With the first, you change the shape of the data but you don’t change the data itself. When you resize, there is the possibility that the data that is contained within the array will change, depending on the shape that you select, of course.
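The reshape-versus-resize difference can be shown on a tiny array of my own:

```python
import numpy as np

a = np.arange(6)

# reshape(): same data, new shape; the total size must match
print(a.reshape(2, 3))

# np.resize(): the requested shape wins, so the content can change;
# 8 slots get filled from 6 values, wrapping around
print(np.resize(a, (2, 4)))   # [[0 1 2 3] [4 5 0 1]]

# ...or the data gets truncated to fit
print(np.resize(a, (1, 4)))   # [[0 1 2 3]]
```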

Besides the splitting and stacking routines and the functions that allow you to further manipulate your arrays, there is also something like “vectorization” that you need to consider. When you apply a function to an array, you usually apply it to each element of the array. Consider, for example, applying np.cos(), np.sin() or np.tan() to an array. You’ll see that it works on all array elements. Now, when you see this, you know that the function is vectorized.

But, when you define functions by yourself, as you will most likely do when you’re getting into scientific computing, you might also want to vectorize them. In those cases, you can call np.vectorize():
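A sketch of np.vectorize() on a home-made function (mine, not the tutorial's) that would otherwise fail on arrays because of its `if` branch:

```python
import numpy as np

# A plain Python function with a branch: it works on scalars only,
# because `if` can't decide on a whole array at once
def step(x):
    return x + 1 if x < 0 else x - 1

vstep = np.vectorize(step)    # now it maps over every element
print(vstep(np.array([-2, -1, 0, 3])))   # [-1  0 -1  2]
```

np.vectorize() is a convenience, not a speed-up: under the hood it is still a Python-level loop.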

When it comes to other vectorized mathematical functions that you might want to know, you should consider np.angle() to provide the angle of the elements of complex array elements, but also basic trigonometric, exponential or logarithmic functions will come in handy.

:mrgreen:

SciPy – 3 – creating arrays

I continue from here, copying from here.

You have now seen how to inspect your array and to make adjustments in the data type of it, but you haven't explicitly seen how to create arrays. You should already know that you can use np.array() to do this, but there are other routines for array creation that you should know about: np.eye() and np.identity().

The np.eye() function allows you to create a square matrix with dimensions that are equal to the positive integer that you give as an argument to the function. The entries are generally filled with zeros; only the matrix diagonal is filled with ones. The np.identity() function works the same way and also returns an identity array.

However, note that np.eye() can take an additional argument k that you can specify to pick the index of the diagonal that you want to populate with ones.

A note for me, since sometimes my memory… 😉: here you can find the indexes of NumPy's and SciPy's functions, to bookmark right away 😊

Other array creation functions that will most definitely come in handy when you’re working with the matrices for linear algebra are the following:

  • The np.arange() function creates an array with uniformly spaced values between two numbers. You can specify the spacing between the elements,
  • The latter also holds for np.linspace(), but with this function you specify the number of elements that you want in your array.
  • Lastly, the np.logspace() function also creates arrays with uniformly spaced values, but this time on a logarithmic scale. This means that the spacing is now logarithmic: the values are evenly spaced between the base-10 logarithms of the two endpoints.
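The creation routines above, side by side (the endpoints and counts are my own choices, picked so each output is easy to read):

```python
import numpy as np

print(np.eye(3))          # 3x3, ones on the main diagonal
print(np.eye(3, k=1))     # k shifts the diagonal of ones upward
print(np.identity(3))     # same result as np.eye(3)

print(np.arange(0, 1, 0.25))   # you fix the SPACING: [0. 0.25 0.5 0.75]
print(np.linspace(0, 1, 5))    # you fix the COUNT: [0. 0.25 0.5 0.75 1.]
print(np.logspace(0, 2, 3))    # evenly spaced exponents: [1. 10. 100.]
```

Note that arange() excludes the endpoint while linspace() includes it by default.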

Now prof Karlijn says: Now that you have refreshed your memory and you know how to handle the data types of your arrays, it's time to also tackle the topic of indexing and slicing. Coming soon.

:mrgreen:

SciPy – 2 – essential NumPy objects

I continue from here, copying from here.

A quick review of things already seen.

An array is, structurally speaking, nothing but pointers. It’s a combination of a memory address, a data type, a shape and strides. It contains information about the raw data, how to locate an element and how to interpret an element.

The memory address and strides are important when you dive deeper into the lower-level details of arrays, while the data type and shape are things that beginners should surely know and understand. Two other attributes that you might want to consider are the data and size, which allow you to gather even more information on your array.

You’ll see in the results of the code that is included in the code chunk above that the data type of myArray is int64. When you’re intensively working with arrays, you will definitely remember that there are ways to convert your arrays from one data type to another with the astype() method.

Nevertheless, when you’re using SciPy and NumPy together, you might also find the following type handling NumPy functions very useful, especially when you’re working with complex numbers:

Try to add print() calls to see the results of the code that is given above. Then, you’ll see that complex numbers have a real and an imaginary part to them. The np.real() and np.imag() functions are designed to return these parts to the user, respectively.

Alternatively, you might also be able to use np.cast to cast an array object to a different data type, such as float in the example above.

The only thing that really stands out in difficulty in the above code chunk is the np.real_if_close() function. When you give it a complex input, such as myArray, you’ll get a real array back if the complex parts are close to zero. This last part, “close to 0”, can be adjusted by yourself with the tol argument that you can pass to the function.
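Since the original code chunk isn't reproduced, here is a sketch of the type-handling functions (the arrays, including the stand-in for myArray, are my own assumption; the tol of real_if_close() is measured in machine epsilons):

```python
import numpy as np

z = np.array([1 + 2j, 3 - 1j])
print(np.real(z))                  # real parts: [1. 3.]
print(np.imag(z))                  # imaginary parts: [ 2. -1.]

# real_if_close() drops imaginary parts that are "close to 0"
almost = np.array([1 + 1e-14j, 3 + 1e-14j])
r = np.real_if_close(almost, tol=1000)
print(r, r.dtype)                  # [1. 3.] float64

# with an imaginary part of 2 nothing is close to zero:
# the array stays complex
print(np.real_if_close(z).dtype)   # complex128
```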

OK? 😎 to be continued

:mrgreen:

SciPy – 1 – linear algebra

I'm still stuck at the beginning because well begun is half done… 😉 and then I still have to decide where to copy from 😊; I'd need someone like Jake, he explains things well, rockz 🚀

I could start from the SciPy Reference Guide but it seems a bit too documentation-heavy to me; maybe the Scipy Tutorial: Vectors and Arrays (Linear Algebra) by Karlijn Willems is better.

Karlijn's tutorial seems too short to me, and it covers only a part of SciPy (judging by the Reference's table of contents), but it can be a start. So off I go; we'll see later 😉 Meanwhile, the usual mantra.

I continue from here, copying from here.

Much of what you need to know to really dive into machine learning is linear algebra, and that is exactly what this tutorial tackles. Today’s post goes over the linear algebra topics that you need to know and understand to improve your intuition for how and when machine learning methods work by looking at the level of vectors and matrices.

By the end of the tutorial, you’ll hopefully feel more confident to take a closer look at an algorithm!

Introduction
With Jake I went through a whole notebook on NumPy, one of the core libraries for scientific computing in Python. This library contains a collection of tools and techniques that can be used to solve mathematical models of problems in Science and Engineering on a computer. But there's SciPy, a package that gives us better performance: it's a powerful data structure that allows you to efficiently compute arrays and matrices.

Now, SciPy is basically NumPy.

It’s also one of the core packages for scientific computing that provides mathematical algorithms and convenience functions, but it’s built on the NumPy extension of Python. This means that SciPy and NumPy are often used together.

Later on in this tutorial, it will become clear to you how the collaboration between these two libraries has become self-evident.

Interacting with NumPy and SciPy
To interact efficiently with both packages, you first need to know some of the basics of this library and its powerful data structure. To work with these arrays, there's a huge amount of high-level mathematical functions that operate on these matrices and arrays.

We'll now see what's needed to use SciPy efficiently. In essence, you have to know about the array structure, how you can handle data types, and how you can manipulate the shape of arrays. Ah! there's a cheat sheet both for NumPy and for SciPy.

A break 😊 after all, we're still getting ready for departure 😎

:mrgreen:

SciPy – the beginning – 0

This post –and those that will follow– can be considered a continuation of the series on NumPy (and pandas and Matplotlib and Scikit-Learn). Or maybe not, that is, I don't know. Really 😯

From the Scipy site (by the way, for Italians: it's pronounced "scipai") it turns out that the ties with Numpy are very close. So the first thing to look into will be precisely how the two packages differ, how they integrate, whether I did well to tell the NumPy telenovela (almost a hundred posts), and whether I should start following this brand-new one.
One note: I know the young folks don't use the word "telenovela", but I'm of a certain age, a senior as Jakob Nielsen would say, bear with me.
Ah! yet another note, this one for me: who knows whether I'll find Jake VanderPlas again, or someone like him, a master who rockz 🚀 a lot, who tells you how to do things, so clearly that even I can manage to do it 😉

I don't know whether you too have what for me is an instinctive reaction (a Pavlovian reflex, as someone who knows what that means would say (probably, or at least it sounds good)): when you hear about something new you can't help saying: "what does the Wiki say?". How would we manage without it (self-cit.).

The Wiki says, indeed:

SciPy contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and other tasks common in science and engineering.

SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.

And then:

In the 1990s, Python was extended to include an array type for numerical computing called Numeric (This package was eventually replaced by Travis Oliphant who wrote NumPy in 2006 as a blending of Numeric and Numarray which had been started in 2001). As of 2000, there was a growing number of extension modules and increasing interest in creating a complete environment for scientific and technical computing. In 2001, Travis Oliphant, Eric Jones, and Pearu Peterson merged code they had written and called the resulting package SciPy. The newly created package provided a standard collection of common numerical operations on top of the Numeric array data structure. Shortly thereafter, Fernando Pérez released IPython, an enhanced interactive shell widely used in the technical computing community, and John Hunter released the first version of Matplotlib, the 2D plotting library for technical computing. Since then the SciPy environment has continued to grow with more packages and tools for technical computing.

But it doesn't end here; in fact I have another Pavlovian reflex (a higher-order one?): Quora. And –talk about coincidence– here it is: What is the difference between NumPy and SciPy?
Two answers; here's the first, short but dense:

NumPy is part of SciPy, and basically just defines arrays along with some basic numerical functions. The rest of SciPy implements stuff like numerical integration and optimization and machine learning using NumPy’s functionality.

OK, I think I'm onto something sexy, something to do. Coming soon. Without delay. Right away. After the break 😉

:mrgreen: