SICP – ch. 2 – Hierarchical structures – 33 – exercises

I continue from here, copying here.

Exercise 2.27: Modify your reverse procedure of Exercise 2.18 [here] to produce a deep-reverse procedure that takes a list as argument and returns as its value the list with its elements reversed and with all sublists deep-reversed as well. For example,

(define x 
  (list (list 1 2) (list 3 4)))

x
((1 2) (3 4))

(reverse x)
((3 4) (1 2))

(deep-reverse x)
((4 3) (2 1))

It's not like I tackled Exercise 2.18 well, quite the opposite 😡
But the (copied) solution is beautiful:

Meanwhile, here it is

it works for flat lists, obviously 😎
Does the predicate list? exist, and can it be used here? I don't think it has been mentioned, but I'm also distracted because, unfortunately, I only work on SICP every now and then. There is pair?, and that one was covered in the lectures.

Yes, pair? can (must?) be used 😁
Also (just for me): does pair? differ from list?, and if so, how? Something to investigate 😊

Bill the Lizard's solution is exemplary; I'm making it mine 😜 (file dep-rev.rkt):

(define (reverse items)
  (if (null? items)
      items
      (append (reverse (cdr items)) (list (car items)))))

(define (deep-reverse items)
  (cond ((null? items) null)
        ((pair? (car items))
         (append (deep-reverse (cdr items))
                 (list (deep-reverse (car items)))))
        (else
         (append (deep-reverse (cdr items))
                 (list (car items))))))

I used Racket's append procedure, as an alternative to the one defined by the profs.

sicp-ex has several variants, including a very short one, to be tested:


OK 🚀 excellent, even if it takes paper and pencil (and eraser) to understand how it works.
Not all the solutions proposed there work; always check with cases other than the ones given by the profs.

DreWiki's version is good too.

:mrgreen:

NumPy – 66 – high-performance Pandas: eval() and query() – 2

I continue from here, copying here.

DataFrame.eval() for column-wise operations
Just as Pandas has a top-level pd.eval() function, DataFrames have an eval() method that works in similar ways. The benefit of the eval() method is that columns can be referred to by name. We’ll use this labeled array as an example:

Using pd.eval() as above, we can compute expressions with the three columns like this:

The DataFrame.eval() method allows much more succinct evaluation of expressions with the columns:

Notice here that we treat column names as variables within the evaluated expression, and the result is what we would wish.
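A minimal sketch of the difference (the DataFrame and its column names A, B, C are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical DataFrame with three named columns, standing in for the book's example
rng = np.random.RandomState(42)
df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])

# With pd.eval(), column access has to go through the DataFrame object
result1 = pd.eval("(df.A + df.B) / (df.C - 1)")

# With DataFrame.eval(), columns are referred to by bare name
result2 = df.eval('(A + B) / (C - 1)')

print(np.allclose(result1, result2))  # True
```

Referring to columns by bare name keeps longer expressions much more readable.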

Assignment in DataFrame.eval()
In addition to the options just discussed, DataFrame.eval() also allows assignment to any column. Let's use the DataFrame from before, which has columns 'A', 'B', and 'C':

We can use df.eval() to create a new column 'D' and assign to it a value computed from the other columns:

In the same way, any existing column can be modified:
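A sketch of both cases, on a made-up DataFrame with columns A, B, C:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame(rng.rand(5, 3), columns=['A', 'B', 'C'])

# Create a new column 'D' from the existing ones
df.eval('D = (A + B) / C', inplace=True)

# An existing column can be modified the same way
df.eval('D = (A - B) / C', inplace=True)

print(df.columns.tolist())  # ['A', 'B', 'C', 'D']
```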

Local variables in DataFrame.eval()
The DataFrame.eval() method supports an additional syntax that lets it work with local Python variables. Consider the following:

The @ character here marks a variable name rather than a column name, and lets you efficiently evaluate expressions involving the two “namespaces”: the namespace of columns, and the namespace of Python objects. Notice that this @ character is only supported by the DataFrame.eval() method, not by the pandas.eval() function, because the pandas.eval() function only has access to the one (Python) namespace.
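For example (the column names and the local variable name are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(1)
df = pd.DataFrame(rng.rand(100, 2), columns=['A', 'B'])

column_mean = df.mean(axis=1)  # an ordinary Python variable, not a column

# '@column_mean' refers to the local variable, bare 'A' to the column
result1 = df['A'] + column_mean
result2 = df.eval('A + @column_mean')

print(np.allclose(result1, result2))  # True
```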

The DataFrame.query() method
The DataFrame has another method based on evaluated strings, called the query() method. Consider the following:

As with the example used in our discussion of DataFrame.eval(), this is an expression involving columns of the DataFrame. It cannot be expressed using the DataFrame.eval() syntax, however! Instead, for this type of filtering operation, you can use the query() method:

In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand. Note that the query() method also accepts the @ flag to mark local variables:
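A sketch of the masking expression versus query(), with made-up data and a hypothetical local variable for the @ syntax:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(2)
df = pd.DataFrame(rng.rand(1000, 2), columns=['A', 'B'])

# Traditional masking expression
result1 = df[(df.A < 0.5) & (df.B < 0.5)]

# Equivalent query() call; '@' marks the local variable 'threshold'
threshold = 0.5
result2 = df.query('A < @threshold and B < @threshold')

print(result1.equals(result2))  # True
```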

Performance: when to use these functions
When considering whether to use these functions, there are two considerations: computation time and memory use. Memory use is the most predictable aspect. As already mentioned, every compound expression involving NumPy arrays or Pandas DataFrames will result in implicit creation of temporary arrays: For example, this:

Is roughly equivalent to this:

If the size of the temporary DataFrames is significant compared to your available system memory (typically several gigabytes) then it’s a good idea to use an eval() or query() expression. You can check the approximate size of your array in bytes using this:
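For example (the exact number depends on your DataFrame, of course):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 3), columns=['A', 'B', 'C'])

# Approximate size of the underlying data in bytes:
# 1000 rows * 3 columns * 8 bytes per float64
print(df.values.nbytes)  # 24000
```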

On the performance side, eval() can be faster even when you are not maxing out your system memory. The issue is how your temporary DataFrames compare to the size of the L1 or L2 CPU cache on your system (typically a few megabytes in 2016); if they are much bigger, then eval() can avoid some potentially slow movement of values between the different memory caches. In practice, I find that the difference in computation time between the traditional methods and the eval/query method is usually not significant; if anything, the traditional method is faster for smaller arrays! The benefit of eval/query is mainly in the saved memory, and the sometimes cleaner syntax they offer.

We’ve covered most of the details of eval() and query() here; for more information on these, you can refer to the Pandas documentation. In particular, different parsers and engines can be specified for running these queries; for details on this, see the discussion within the “Enhancing Performance” section.

:mrgreen:

cit. & loll – 41

Thursday is the day of the prestigious cit. & loll roundup; here it is 😎

I’m not actually good at computers
::: mjg59

A note to my younger self (and everyone who reads this)
::: ThePracticalDev

Software is
::: freeuniverser

Please don’t feed the e-mails
::: Brilliant_Ads

if you’re not filling gigabytes of memory with waste and duplicated objects
::: sigfig

The future is here!
::: WorldAndScience

Young programmers are better off not starting with the theoretical foundations of CS
::: paulg

I’m teaching my daughter the ABC and now bored with A is for Apple, X is for X-RAY
::: rebootuser

pensa che ancora nel 2017
::: mogui247

@nim_lang ‘s GC performance metrics
::: FredAtBootstrap

I should have majored in alternative computer science in college, it is WAY easier than normal computer science
::: Whoospy

This is one of the funniest sentences I’ve ever read
::: kateconger

I told folks at Deconstruct that Javascript programming seems way harder/scarier than C/C++ to me
::: sehurlburt

The Real Real Donald!
::: vardi

OH: the allure of regex
::: peterseibel

BREAKING @AjitPaiFCC is moving to kill #netneutrality +let Comcast censor the web
::: dcavedon

I found Jesus!
::: Whoospy

JavaScript 32 – the secret life of objects – 2

I continue from here, copying here.

Constructors
A more convenient way to create objects that derive from some shared prototype is to use a constructor. In JavaScript, calling a function with the new keyword in front of it causes it to be treated as a constructor. The constructor will have its this variable bound to a fresh object, and unless it explicitly returns another object value, this new object will be returned from the call.

An object created with new is said to be an instance of its constructor.

Here is a simple constructor for rabbits. It is a convention to capitalize the names of constructors so that they are easily distinguished from other functions (file co0.js).

function Rabbit(type) {
  this.type = type;
}

var killerRabbit = new Rabbit("killer");
var blackRabbit = new Rabbit("black");

console.log(blackRabbit.type);

Constructors (in fact, all functions) automatically get a property named prototype, which by default holds a plain, empty object that derives from Object.prototype. Every instance created with this constructor will have this object as its prototype. So to add a speak method to rabbits created with the Rabbit constructor, we can simply do this (co1.js):

function Rabbit(type) {
  this.type = type;
}
var blackRabbit = new Rabbit("black");

Rabbit.prototype.speak = function(line) {
  console.log("The " + this.type + " rabbit says '" +
              line + "'");
};

blackRabbit.speak("Doom...");

It is important to note the distinction between the way a prototype is associated with a constructor (through its prototype property) and the way objects have a prototype (which can be retrieved with Object.getPrototypeOf). The actual prototype of a constructor is Function.prototype since constructors are functions. Its prototype property will be the prototype of instances created through it but is not its own prototype.

Overriding derived properties
When you add a property to an object, whether it is present in the prototype or not, the property is added to the object itself, which will henceforth have it as its own property. If there is a property by the same name in the prototype, this property will no longer affect the object. The prototype itself is not changed (co2.js).

function Rabbit(type) {
  this.type = type;
}
var killerRabbit = new Rabbit("killer");
var blackRabbit = new Rabbit("black");

Rabbit.prototype.teeth = "small";
console.log(killerRabbit.teeth);

killerRabbit.teeth = "long, sharp, and bloody";
console.log(killerRabbit.teeth);
console.log(blackRabbit.teeth);
console.log(Rabbit.prototype.teeth);

The following diagram sketches the situation after this code has run. The Rabbit and Object prototypes lie behind killerRabbit as a kind of backdrop, where properties that are not found in the object itself can be looked up.

Overriding properties that exist in a prototype is often a useful thing to do. As the rabbit teeth example shows, it can be used to express exceptional properties in instances of a more generic class of objects, while letting the nonexceptional objects simply take a standard value from their prototype.

It is also used to give the standard function and array prototypes a different toString method than the basic object prototype (op0.js).

console.log(Array.prototype.toString ==
            Object.prototype.toString);
console.log([1, 2].toString());

Calling toString on an array gives a result similar to calling .join(",") on it: it puts commas between the values in the array. Directly calling Object.prototype.toString with an array produces a different string. That function doesn’t know about arrays, so it simply puts the word “object” and the name of the type between square brackets (op1.js).

console.log(Object.prototype.toString.call([1, 2]));

:mrgreen:

NumPy – 65 – high-performance Pandas: eval() and query() – 1

I continue from here, copying here.

As we’ve already seen in previous sections, the power of the PyData stack is built upon the ability of NumPy and Pandas to push basic operations into C via an intuitive syntax: examples are vectorized/broadcasted operations in NumPy, and grouping-type operations in Pandas. While these abstractions are efficient and effective for many common use cases, they often rely on the creation of temporary intermediate objects, which can cause undue overhead in computational time and memory use.

As of version 0.13 (released January 2014), Pandas includes some experimental tools that allow you to directly access C-speed operations without costly allocation of intermediate arrays. These are the eval() and query() functions, which rely on the Numexpr package. In this notebook we will walk through their use and give some rules-of-thumb about when you might think about using them.

Why query() and eval(): compound expressions
We’ve seen previously that NumPy and Pandas support fast vectorized operations; for example, when adding the elements of two arrays:

As discussed in Computation on NumPy Arrays: Universal Functions [here], this is much faster than doing the addition via a Python loop or comprehension:

But this abstraction can become less efficient when computing compound expressions. For example, consider the following expression:

mask = (x > 0.5) & (y < 0.5)

Because NumPy evaluates each subexpression, this is roughly equivalent to the following:

tmp1 = (x > 0.5)
tmp2 = (y < 0.5)
mask = tmp1 & tmp2

In other words, every intermediate step is explicitly allocated in memory. If the x and y arrays are very large, this can lead to significant memory and computational overhead. The Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays. The Numexpr documentation has more details, but for the time being it is sufficient to say that the library accepts a string giving the NumPy-style expression you’d like to compute:

The benefit here is that Numexpr evaluates the expression in a way that does not use full-sized temporary arrays, and thus can be much more efficient than NumPy, especially for large arrays. The Pandas eval() and query() tools that we will discuss here are conceptually similar, and depend on the Numexpr package.

pandas.eval() for efficient operations
The eval() function in Pandas uses string expressions to efficiently compute operations using DataFrames. For example, consider the following DataFrames:

To compute the sum of all four DataFrames using the typical Pandas approach, we can just write the sum:

The same result can be computed via pd.eval by constructing the expression as a string:

The eval() version of this expression is about 50% faster (and uses much less memory), while giving the same result:
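A sketch with smaller DataFrames than the book's (it uses 100,000 rows), just to show the two forms agree:

```python
import numpy as np
import pandas as pd

nrows, ncols = 1000, 10  # smaller than the book's example, same idea
rng = np.random.RandomState(42)
df1, df2, df3, df4 = (pd.DataFrame(rng.rand(nrows, ncols)) for i in range(4))

# Typical Pandas approach: a temporary is allocated for each pairwise sum
result1 = df1 + df2 + df3 + df4

# pd.eval(): the whole expression is evaluated at once as a string
result2 = pd.eval('df1 + df2 + df3 + df4')

print(np.allclose(result1, result2))  # True
```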

Operations supported by pd.eval()
As of Pandas v0.16, pd.eval() supports a wide range of operations. To demonstrate these, we’ll use the following integer DataFrames:

arithmetic operators
pd.eval() supports all arithmetic operators. For example:

comparison operators
pd.eval() supports all comparison operators, including chained expressions:

bitwise operators
pd.eval() supports the & and | bitwise operators:

In addition, it supports the use of the literal and and or in Boolean expressions:

object attributes and indices
pd.eval() supports access to object attributes via the obj.attr syntax, and indexes via the obj[index] syntax:
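A compact sketch of the operations listed above, on made-up integer DataFrames:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
df1, df2, df3 = (pd.DataFrame(rng.randint(0, 100, (100, 3))) for i in range(3))

# Arithmetic operators
r_arith = pd.eval('-df1 * df2 / (df3 + 1)')

# Comparison operators, including chained expressions
r_cmp = pd.eval('df1 < df2 <= df3')

# Bitwise operators; the literals 'and'/'or' behave the same way
r_bit = pd.eval('(df1 < 50) & (df2 < 50)')
r_bool = pd.eval('(df1 < 50) and (df2 < 50)')

# Object attributes (obj.attr) and indexes (obj[index])
r_attr = pd.eval('df2.T[0] + df3.iloc[1]')
```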

other operations
Other operations such as function calls, conditional statements, loops, and other more involved constructs are currently not implemented in pd.eval(). If you’d like to execute these more complicated types of expressions, you can use the Numexpr library itself.

:mrgreen:

JavaScript 31 – the secret life of objects – 1

I continue from here, copying here.

When a programmer says “object”, this is a loaded term. In my profession, objects are a way of life, the subject of holy wars, and a beloved buzzword that still hasn’t quite lost its power.

To an outsider, this is probably a little confusing. Let’s start with a brief history of objects as a programming construct.

A history of objects
ahemmm... I'm not going to retell it; if you want, you can read it over there. For what JavaScript needs, an object is any value that doesn't fall into the simpler types seen previously. That seems like enough to move on.

This chapter describes JavaScript’s rather eccentric take on objects and the way they relate to some classical object-oriented techniques.

Methods
Methods are simply properties that hold function values. This is a simple method (file rb.js):

var rabbit = {};
rabbit.speak = function(line) {
  console.log("The rabbit says '" + line + "'");
};

rabbit.speak("Ciao, eccomi!");

Usually a method needs to do something with the object it was called on. When a function is called as a method—looked up as a property and immediately called, as in object.method() —the special variable this in its body will point to the object that it was called on (rb1.js).

function speak(line) {
  console.log("The " + this.type + " rabbit says '" +
              line + "'");
}
var whiteRabbit = {type: "white", speak: speak};
var fatRabbit = {type: "fat", speak: speak};

whiteRabbit.speak("Oh my ears and whiskers, " +
                  "how late it's getting!");
fatRabbit.speak("I could sure use a carrot right now.");

The code uses the this keyword to output the type of rabbit that is speaking. Recall that the apply and bind methods both take a first argument that can be used to simulate method calls. This first argument is in fact used to give a value to this.

There is a method similar to apply, called call. It also calls the function it is a method of but takes its arguments normally, rather than as an array. Like apply and bind, call can be passed a specific this value (rb2.js).

var fatRabbit = {type: "fat", speak: speak};
function speak(line) {
  console.log("The " + this.type + " rabbit says '" +
              line + "'");
}

speak.apply(fatRabbit, ["Burp!"]);
speak.call({type: "old"}, "Oh my.");

Prototypes
Watch closely (pr0.js).

var empty = {};
console.log(empty.toString);
console.log(empty.toString());

I just pulled a property out of an empty object. Magic!

Well, not really. I have simply been withholding information about the way JavaScript objects work. In addition to their set of properties, almost all objects also have a prototype. A prototype is another object that is used as a fallback source of properties. When an object gets a request for a property that it does not have, its prototype will be searched for the property, then the prototype’s prototype, and so on.

So who is the prototype of that empty object? It is the great ancestral prototype, the entity behind almost all objects, Object.prototype (pr1.js).

console.log(Object.getPrototypeOf({}) ==
            Object.prototype);
console.log(Object.getPrototypeOf(Object.prototype));

As you might expect, the Object.getPrototypeOf function returns the prototype of an object.

The prototype relations of JavaScript objects form a tree-shaped structure, and at the root of this structure sits Object.prototype. It provides a few methods that show up in all objects, such as toString, which converts an object to a string representation.

Many objects don’t directly have Object.prototype as their prototype, but instead have another object, which provides its own default properties. Functions derive from Function.prototype, and arrays derive from Array.prototype (pr2.js).

console.log(Object.getPrototypeOf(isNaN) ==
            Function.prototype);
console.log(Object.getPrototypeOf([]) ==
            Array.prototype);

Such a prototype object will itself have a prototype, often Object.prototype, so that it still indirectly provides methods like toString.

The Object.getPrototypeOf function obviously returns the prototype of an object. You can use Object.create to create an object with a specific prototype (prr.js).

var protoRabbit = {
  speak: function(line) {
    console.log("The " + this.type + " rabbit says '" +
                line + "'");
  }
};
var killerRabbit = Object.create(protoRabbit);
killerRabbit.type = "killer";
killerRabbit.speak("SKREEEE!");

The “proto” rabbit acts as a container for the properties that are shared by all rabbits. An individual rabbit object, like the killer rabbit, contains properties that apply only to itself—in this case its type—and derives shared properties from its prototype.

Oh yes, we've gotten to the heart of it: we're javascripters, kwasy 😜
To be continued, soon 😎

:mrgreen:

NumPy – 64 – working with time Series – 5

I continue from here, copying here.

Example: Seattle bicycle counts
As a more involved example of working with some time series data, let’s take a look at bicycle counts on Seattle’s Fremont Bridge. This data comes from an automated bicycle counter, installed in late 2012, which has inductive sensors on the east and west sidewalks of the bridge. The hourly bicycle counts can be downloaded from data.seattle.gov; here is the direct link to the dataset.

As of summer 2016, the CSV can be downloaded as follows:

Once this dataset is downloaded, we can use Pandas to read the CSV output into a DataFrame. We will specify that we want the Date as an index, and we want these dates to be automatically parsed:

For convenience, we’ll further process this dataset by shortening the column names and adding a “Total” column:

Now let’s take a look at the summary statistics for this data:

Visualizing the data
We can gain some insight into the dataset by visualizing it. Let’s start by plotting the raw data:

OK, one error and a forgotten ; but here it is

The ~25,000 hourly samples are far too dense for us to make much sense of. We can gain more insight by resampling the data to a coarser grid. Let’s resample by week:
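Since the Seattle CSV isn't reproduced here, a sketch with synthetic hourly counts (the column names mimic the shortened ones above) shows the resampling step:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the hourly bicycle counts (the real data comes from data.seattle.gov)
index = pd.date_range('2012-10-01', periods=24 * 7 * 8, freq='h')  # 8 weeks, hourly
rng = np.random.RandomState(0)
data = pd.DataFrame({'West': rng.poisson(20, len(index)),
                     'East': rng.poisson(15, len(index))}, index=index)
data['Total'] = data['West'] + data['East']

# Resample the hourly counts to weekly sums, as in the book
weekly = data.resample('W').sum()
print(weekly.shape)  # one row per week, same three columns
```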


This shows us some interesting seasonal trends: as you might expect, people bicycle more in the summer than in the winter, and even within a particular season the bicycle use varies from week to week (likely dependent on weather; see In Depth: Linear Regression [coming soon] where we explore this further).

Another way that comes in handy for aggregating the data is to use a rolling mean, utilizing the pd.rolling_mean() function (in modern Pandas, the rolling() method). Here we'll do a 30-day rolling mean of our data, making sure to center the window:
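A sketch of the rolling mean on synthetic daily data; the Gaussian-window variant mentioned below (win_type='gaussian', which requires SciPy) is only noted in a comment:

```python
import numpy as np
import pandas as pd

index = pd.date_range('2012-10-01', periods=365, freq='D')
rng = np.random.RandomState(1)
daily = pd.Series(rng.poisson(100, len(index)), index=index)  # synthetic daily totals

# 30-day centered rolling mean; rolling() replaces the old pd.rolling_mean()
smooth = daily.rolling(30, center=True).mean()

# A Gaussian window would be:
#   daily.rolling(50, center=True, win_type='gaussian').sum(std=10)
# but that needs SciPy, so it is only noted here.
```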


The jaggedness of the result is due to the hard cutoff of the window. We can get a smoother version of a rolling mean using a window function–for example, a Gaussian window. The following code specifies both the width of the window (we chose 50 days) and the width of the Gaussian within the window (we chose 10 days):


Digging into the data
While these smoothed data views are useful to get an idea of the general trend in the data, they hide much of the interesting structure. For example, we might want to look at the average traffic as a function of the time of day. We can do this using the GroupBy functionality discussed in Aggregation and Grouping [here]:


The hourly traffic is a strongly bimodal distribution, with peaks around 8:00 in the morning and 5:00 in the evening. This is likely evidence of a strong component of commuter traffic crossing the bridge. This is further evidenced by the differences between the western sidewalk (generally used going toward downtown Seattle), which peaks more strongly in the morning, and the eastern sidewalk (generally used going away from downtown Seattle), which peaks more strongly in the evening.

We also might be curious about how things change based on the day of the week. Again, we can do this with a simple groupby:


This shows a strong distinction between weekday and weekend totals, with around twice as many average riders crossing the bridge on Monday through Friday than on Saturday and Sunday.

With this in mind, let’s do a compound GroupBy and look at the hourly trend on weekdays versus weekends. We’ll start by grouping by both a flag marking the weekend, and the time of day:
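Without the real dataset, the compound grouping can be sketched on synthetic hourly counts:

```python
import numpy as np
import pandas as pd

index = pd.date_range('2012-10-01', periods=24 * 7 * 4, freq='h')  # 4 weeks, hourly
rng = np.random.RandomState(2)
data = pd.Series(rng.poisson(50, len(index)), index=index)

# Flag marking weekends, then group by (flag, time of day)
weekend = np.where(index.weekday < 5, 'Weekday', 'Weekend')
by_time = data.groupby([weekend, index.time]).mean()

print(by_time.loc['Weekday'].shape)  # 24 hourly averages per group
```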

Now we’ll use some of the Matplotlib tools described in Multiple Subplots [coming soon] to plot two panels side by side:


The result is very interesting: we see a bimodal commute pattern during the work week, and a unimodal recreational pattern during the weekends. It would be interesting to dig through this data in more detail, and examine the effect of weather, temperature, time of year, and other factors on people’s commuting patterns; for further discussion, see my blog post “Is Seattle Really Seeing an Uptick In Cycling?”, which uses a subset of this data. We will also revisit this dataset in the context of modeling in In Depth: Linear Regression [coming soon].

:mrgreen:

JavaScript 30 – higher-order functions – 6

I continue from here, copying here.

Every and some
Arrays also come with the standard methods every and some. Both take a predicate function that, when called with an array element as argument, returns true or false. Just like && returns a true value only when the expressions on both sides are true, every returns true only when the predicate returns true for all elements of the array. Similarly, some returns true as soon as the predicate returns true for any of the elements. They do not process more elements than necessary—for example, if some finds that the predicate holds for the first element of the array, it will not look at the values after that.

Write two functions, every and some, that behave like these methods, except that they take the array as their first argument rather than being a method.

every.js

function every(arr, val) {
  var t = true;
  // declare the loop counter with var, so it doesn't leak as a global
  for (var c = 0; c < arr.length; c++) {
    t = t && val(arr[c]);
    if (!t) break;
  }
  return t;
}

console.log(every([NaN, NaN, NaN], isNaN));
console.log(every([NaN, NaN, 4], isNaN));

some.js

function some(arr, val) {
  var t = false;
  // declare the loop counter with var, so it doesn't leak as a global
  for (var c = 0; c < arr.length; c++) {
    t = val(arr[c]);
    if (t) break;
  }
  return t;
}

console.log(some([NaN, 3, 4], isNaN));
console.log(some([2, 3, 4], isNaN));

Yes, I know they could be written more concisely, but this way they seem clearer to me; also because I'm a noob (assay) 😜 And in a previous life I spent a lot of (too much) time debugging 😜

:mrgreen:

NumPy – 63 – working with time Series – 4

Continuo da qui, copio qui.

Resampling, shifting, and windowing
The ability to use dates and times as indices to intuitively organize and access data is an important piece of the Pandas time series tools. The benefits of indexed data in general (automatic alignment during operations, intuitive data slicing and access, etc.) still apply, and Pandas provides several additional time series-specific operations.

We will take a look at a few of those here, using some stock price data as an example. Because Pandas was developed largely in a finance context, it includes some very specific tools for financial data. For example, the accompanying pandas-datareader package (installable via conda install pandas-datareader), knows how to import financial data from a number of available sources, including Yahoo finance, Google Finance, and others. Here we will load Google’s closing price history:

Uh! I install it:

and here it is:

For simplicity, we’ll use just the closing price:

We can visualize this using the plot() method, after the normal Matplotlib setup boilerplate [coming soon]:


Resampling and converting frequencies
One common need for time series data is resampling at a higher or lower frequency. This can be done using the resample() method, or the much simpler asfreq() method. The primary difference between the two is that resample() is fundamentally a data aggregation, while asfreq() is fundamentally a data selection.

Taking a look at the Google closing price, let’s compare what the two return when we down-sample the data. Here we will resample the data at the end of the business year:


Notice the difference: at each point, resample reports the average of the previous year, while asfreq reports the value at the end of the year.

For up-sampling, resample() and asfreq() are largely equivalent, though resample has many more options available. In this case, the default for both methods is to leave the up-sampled points empty, that is, filled with NA values. Just as with the pd.fillna() function discussed previously, asfreq() accepts a method argument to specify how values are imputed. Here, we will resample the business day data at a daily frequency (i.e., including weekends):
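A sketch of asfreq() up-sampling on a made-up business-day series (standing in for the Google closing prices):

```python
import numpy as np
import pandas as pd

# Business-day series standing in for the closing prices
bdays = pd.date_range('2015-01-01', periods=10, freq='B')
prices = pd.Series(np.arange(10.0), index=bdays)

# Up-sample to daily frequency: weekends become NA...
daily_na = prices.asfreq('D')

# ...unless a fill method is given, just as with fillna()
daily_ffill = prices.asfreq('D', method='ffill')
daily_bfill = prices.asfreq('D', method='bfill')

print(daily_na.isna().sum(), daily_ffill.isna().sum())  # weekend gaps vs filled
```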


The top panel is the default: non-business days are left as NA values and do not appear on the plot. The bottom panel shows the differences between two strategies for filling the gaps: forward-filling and backward-filling.

Time shifts
Another common time series-specific operation is shifting of data in time. Pandas has two closely related methods for computing this: shift() and tshift(). In short, the difference between them is that shift() shifts the data, while tshift() shifts the index. In both cases, the shift is specified in multiples of the frequency.

Here we will both shift() and tshift() by 900 days:
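Since the screenshots aren't reproduced here, a sketch with a small series and a 2-day shift; note that tshift() has since been removed from Pandas, where shift(freq=...) now does the index-shifting job:

```python
import numpy as np
import pandas as pd

index = pd.date_range('2015-01-01', periods=10, freq='D')
s = pd.Series(np.arange(10), index=index)

# shift(2): the DATA moves; the index stays put and NAs appear at the start
shifted = s.shift(2)

# shift(freq='2D'): the INDEX moves instead (the replacement for tshift())
tshifted = s.shift(freq='2D')

print(shifted.index.equals(s.index))  # True: same index, shifted values
```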

In the IPython REPL I can check things step by step as I learn, something that gets lost when writing a script; this screenshot sure is oversized, though.

Yes, I forgot an import, but fixed it on the fly 😜

We see here that shift(900) shifts the data by 900 days, pushing some of it off the end of the graph (and leaving NA values at the other end), while tshift(900) shifts the index values by 900 days.

A common context for this type of shift is in computing differences over time. For example, we use shifted values to compute the one-year return on investment for Google stock over the course of the dataset:


This helps us to see the overall trend in Google stock: thus far, the most profitable times to invest in Google have been (unsurprisingly, in retrospect) shortly after its IPO, and in the middle of the 2009 recession.

Rolling windows
yep, I don't know how to translate that one
Rolling statistics are a third type of time series-specific operation implemented by Pandas. These can be accomplished via the rolling() attribute of Series and DataFrame objects, which returns a view similar to what we saw with the groupby operation (see Aggregation and Grouping [here]). This rolling view makes available a number of aggregation operations by default.

For example, here is the one-year centered rolling mean and standard deviation of the Google stock prices:
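A sketch on a synthetic random walk standing in for the stock prices:

```python
import numpy as np
import pandas as pd

index = pd.date_range('2010-01-01', periods=1000, freq='D')
rng = np.random.RandomState(3)
prices = pd.Series(np.cumsum(rng.randn(1000)), index=index)  # random-walk stand-in

# One-year centered rolling statistics, as in the book's Google-stock example
rolling = prices.rolling(365, center=True)
mean365 = rolling.mean()
std365 = rolling.std()

print(mean365.notna().sum())  # valid points: len(prices) - 365 + 1
```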


As with group-by operations, the aggregate() and apply() methods can be used for custom rolling computations.

Where to learn more
This section has provided only a brief summary of some of the most essential features of time series tools provided by Pandas; for a more complete discussion, you can refer to the “Time Series/Date” section of the Pandas online documentation.

Another excellent resource is the textbook Python for Data Analysis by Wes McKinney (O’Reilly, 2012). Although it is now a few years old, it is an invaluable resource on the use of Pandas. In particular, this book emphasizes time series tools in the context of business and finance, and focuses much more on particular details of business calendars, time zones, and related topics.

As always, you can also use the IPython help functionality to explore and try further options available to the functions and methods discussed here. I find this often is the best way to learn a new Python tool.

I don't know about you, but for me Pandas & Jake rockzs! 🚀 assay 💥

:mrgreen:

SICP – ch. 2 – Hierarchical structures – 32 – exercises

I continue from here, copying here.

Exercise 2.26: Suppose we define x and y to be two lists:

(define x (list 1 2 3))
(define y (list 4 5 6))

What result is printed by the interpreter in response to evaluating each of the following expressions?

(append x y)
(cons x y)
(list x y)

An atypical exercise: it checks whether we paid attention in the previous lessons. I really would have loved to follow this course in a classroom, when I was young. Well, back then it didn't exist, but that's not a good enough excuse 😜
Without opening the REPL (really, word of a Junior Woodchuck (emeritus)):

(1 2 3 4 5 6)
((1 2 3) 4 5 6)
((1 2 3) (4 5 6))

Let me verify

I look at my nerds.
Bill the Lizard explains it well:
The append procedure takes the elements from two lists and produces a new list. When given two lists as parameters, the cons procedure returns a list whose first element is the first parameter list and whose remaining elements are the elements of the second list (we saw this at the beginning of section 2.2.1 Representing Sequences [here]). Finally the list procedure simply wraps its parameters in a new list without doing any merge or append operations on them. The returned list just has two lists as its elements.
Bill rockz! 🚀 The other nerds, this time, less so, so I won't even cite them 😯

:mrgreen: