SICP – OOPS! correcting a blunder

cs212f

Ahemmm… I don't know how to begin… really 😦

You know how impulsiveness is sometimes…
Or I could take it the long way, cite precedents, Prometheus' brother for example. There: I had been warned, yet I acted exactly like Epimetheus. Now let me tell you.

SICP demands all the attention you are capable of, and then some. You get distracted for a moment (OK, I know it's silly but sometimes you need it) and you do things that you later realize are, how to put it, not exactly rational. And you had been told, a few lines earlier:

This procedure is instructive as a prototypical tree recursion, but it is a terrible way to compute Fibonacci numbers because it does so much redundant computation. […]  To get an idea of how bad this is, one can show that the value of Fib(n) grows exponentially with n.

But I did it, here: I launched the evaluation of (fib 100); since it was taking a while (I'll say how long further on) I wrote:

Provisional note: the computation is still running, it has been going for almost a whole day; I'll update and add the image as soon as possible. Also: the joy of having four CPUs, barely used; back in my day, instead… 😳

s22-1 – I'll post it as soon as it's ready

Where s22-1.png was supposed to be the screenshot that I don't think will ever exist (yes, I could GIMP one up, but I'm not able to, and it's against my ethics).

After more than a whole day, helped by the break of going out to an appointment, I found myself having second thoughts, exactly like Epimetheus. And yes, maybe, what if… In short, back home I ran a few tests:

ncalls

Yikes! how ncalls grows, the number of times fib gets called.
I worked out the series of ncalls for fib for the first few values, the computable ones, fed them into an exponential-fit program available online, ab-Exponential regression Calculator – High accuracy calculation, and got confirmation of my Epimetheus-ness.
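
Incidentally, ncalls need not be fitted at all: it satisfies the recurrence ncalls(n) = 1 + ncalls(n-1) + ncalls(n-2) with ncalls(0) = ncalls(1) = 1, so it can be computed exactly. A minimal Racket sketch (my own addition, not part of the original test; the iterative formulation makes it instant even for n = 100):

#lang racket
; number of calls fib makes for input n:
; ncalls(n) = 1 + ncalls(n-1) + ncalls(n-2), ncalls(0) = ncalls(1) = 1
(define (ncalls n)
  (define (iter a b count)   ; a = ncalls(k+1), b = ncalls(k)
    (if (= count 0)
        b
        (iter (+ a b 1) a (- count 1))))
  (iter 1 1 n))

(ncalls 100) ; => 1146295688027634168201, i.e. 2·Fib(101) - 1: about 1.1e21 calls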

reg

Anyway, long story short: computing (fib 100) would have lasted about 2/3 of the time elapsed since the Big Bang. Too long, for me; I killed the run. I also verified that the parameters obtained from the regression were correct, for small values of n:

s20-40-50

Now, if I weren't (at least temporarily) vaccinated against Epimetheus-ness, I'd be tempted to say: “but hang on a sec: 50 is half of 100…”.
Careful: exponential growth is at work, a nasty beast, catastrophic. And next time think about it first 😉 eh!

:mrgreen:

Rust – the language – 51

Sergio Gridelli

Today here: /usr/local/share/doc/rust/html/book/choosing-your-guarantees.html, continuing my review from here.

Choosing your guarantees

One important feature of Rust is that it lets us control the costs and guarantees of a program.

There are various “wrapper type” abstractions in the Rust standard library which embody a multitude of tradeoffs between cost, ergonomics, and guarantees. Many let one choose between run time and compile time enforcement. This section will explain a few selected abstractions in detail.

Before proceeding, it is highly recommended that one reads about ownership and borrowing in Rust.

The various pointer types, already met in the long series of previous posts, are examined next. I'll only report the list: Box<T>, &T, &mut T, *const T, *mut T and Rc<T>.

It then moves on to more specific types such as Cell<T>, RefCell<T> and the synchronized types Arc<T>, Mutex<T>, RwLock<T>, and their compositions.

If you use Rust, this page of the manual is essential; for me it's enough to know it's there, should I ever need it 😉

Next comes FFI, here: /usr/local/share/doc/rust/html/book/ffi.html.

Foreign Function Interface

Uhmmm… besides being very specialized, the topic has already been covered in a way understandable even to humans like me, copying TheKeng3r, here and here.

:mrgreen:

SICP – chap. 1 – Building Abstractions with Procedures – 13

20160131-topolino3140-stupore

Today here, continuing from here. More recursion, that is… –uhmmm, do I translate it or not? ehmmm no–

Tree recursion

Another common pattern of computation is called tree recursion. As an example, consider computing the sequence of Fibonacci numbers, in which each number is the sum of the preceding two:

In general, the Fibonacci numbers can be defined by the rule

Fib(n) = 0                      if n = 0
         1                      if n = 1
         Fib(n-1) + Fib(n-2)    if n > 1

We can immediately translate this definition into a recursive procedure for computing Fibonacci numbers:

(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

Here is the figure for n = 5:

ch1-Z-G-13

Consider the pattern of this computation. To compute (fib 5), we compute (fib 4) and (fib 3). To compute (fib 4), we compute (fib 3) and (fib 2). In general, the evolved process looks like a tree, as shown in figure. Notice that the branches split into two at each level (except at the bottom); this reflects the fact that the fib procedure calls itself twice each time it is invoked.

This procedure is instructive as a prototypical tree recursion, but it is a terrible way to compute Fibonacci numbers because it does so much redundant computation. Notice in the figure that the entire computation of (fib 3) — almost half the work — is duplicated. In fact, it is not hard to show that the number of times the procedure will compute (fib 1) or (fib 0) (the number of leaves in the above tree, in general) is precisely Fib(n + 1). To get an idea of how bad this is, one can show that the value of Fib(n) grows exponentially with n. More precisely (see exercise 1.13 [coming soon]), Fib(n) is the closest integer to ϕⁿ/√5, where

ϕ = (1 + √5) / 2 ≈ 1.618

is the golden ratio, which satisfies the equation

ϕ² = ϕ + 1

Note: the Unicode code point for ϕ is 03D5, which on Linux I enter with Shift+Ctrl-u followed by the code. The code for ≈ is 2248, the one for √ is 221A; see the list here.
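
Just to convince myself, the closed form can be checked in Racket for a small n (my own two-liner, not from the book):

#lang racket
; Fib(n) should be the closest integer to ϕ^n / √5; try n = 10
(define phi (/ (+ 1 (sqrt 5)) 2))
(round (/ (expt phi 10) (sqrt 5))) ; => 55.0, and indeed Fib(10) = 55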

Thus, the process uses a number of steps that grows exponentially with the input. On the other hand, the space required grows only linearly with the input, because we need keep track only of which nodes are above us in the tree at any point in the computation. In general, the number of steps required by a tree-recursive process will be proportional to the number of nodes in the tree, while the space required will be proportional to the maximum depth of the tree.

We can also formulate an iterative process for computing the Fibonacci numbers. The idea is to use a pair of integers a and b, initialized to Fib(1) = 1 and Fib(0) = 0, and to repeatedly apply the simultaneous transformations

a ← a + b
b ← a

Note: I get the ← symbol with AltGr-y; later I'll look up its Unicode code point and post a summary table.

It is not hard to show that, after applying this transformation n times, a and b will be equal, respectively, to Fib(n + 1) and Fib(n). Thus, we can compute Fibonacci numbers iteratively using the procedure

(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))

s21

This second method for computing Fib(n) is a linear iteration. The difference in number of steps required by the two methods — one linear in n, one growing as fast as Fib(n) itself — is enormous, even for small inputs.

I try both methods, computing (fib 100) with each. It's enough to put the #lang racket directive as the first line of the file and, of course, (fib 100) at the end. The result is shocking:

Provisional note: the computation is still running, it has been going for almost a whole day; I'll update and add the image as soon as possible. Also: the joy of having four CPUs, barely used; back in my day, instead… 😳

s22-1 – I'll post it as soon as it's ready

NO, a terrible blunder, see here: SICP – OOPS! correcting a blunder.

versus

s22-2

One should not conclude from this that tree-recursive processes are useless. When we consider processes that operate on hierarchically structured data rather than numbers, we will find that tree recursion is a natural and powerful tool. But even in numerical operations, tree-recursive processes can be useful in helping us to understand and design programs. For instance, although the first fib procedure is much less efficient than the second one, it is more straightforward, being little more than a translation into Lisp of the definition of the Fibonacci sequence. To formulate the iterative algorithm required noticing that the computation could be recast as an iteration with three state variables.

Example: Counting change

How would you translate “counting change”? I've seen it many times but always in English; I have linguistic gaps 😦

It takes only a bit of cleverness to come up with the iterative Fibonacci algorithm. In contrast, consider the following problem: How many different ways can we make change of $ 1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a procedure to compute the number of ways to change any given amount of money?

This problem has a simple solution as a recursive procedure. Suppose we think of the types of coins available as arranged in some order. Then the following relation holds:

  • the number of ways to change amount a using n kinds of coins equals
  • the number of ways to change amount a using all but the first kind of coin, plus
  • the number of ways to change amount a – d using all n kinds of coins, where d is the denomination of the first kind of coin.

To see why this is true, observe that the ways to make change can be divided into two groups: those that do not use any of the first kind of coin, and those that do. Therefore, the total number of ways to make change for some amount is equal to the number of ways to make change for the amount without using any of the first kind of coin, plus the number of ways to make change assuming that we do use the first kind of coin. But the latter number is equal to the number of ways to make change for the amount that remains after using a coin of the first kind.

Thus, we can recursively reduce the problem of changing a given amount to the problem of changing smaller amounts using fewer kinds of coins. Consider this reduction rule carefully, and convince yourself that we can use it to describe an algorithm if we specify the following degenerate cases:

  • If a is exactly 0, we should count that as 1 way to make change.
  • If a is less than 0, we should count that as 0 ways to make change.
  • If n is 0, we should count that as 0 ways to make change.

We can easily translate this description into a recursive procedure:

(define (count-change amount)
  (cc amount 5))
(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount
                        (first-denomination kinds-of-coins))
                     kinds-of-coins)))))
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

(The first-denomination procedure takes as input the number of kinds of coins available and returns the denomination of the first kind. Here we are thinking of the coins as arranged in order from largest to smallest, but any order would do as well.) We can now answer our original question about changing a dollar:

s23

Count-change generates a tree-recursive process with redundancies similar to those in our first implementation of fib. (It will take quite a while for that 292 to be computed.) On the other hand, it is not obvious how to design a better algorithm for computing the result, and we leave this problem as a challenge. The observation that a tree-recursive process may be highly inefficient but often easy to specify and understand has led people to propose that one could get the best of both worlds by designing a “smart compiler” that could transform tree-recursive procedures into more efficient procedures that compute the same result.
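
One way to do that transformation by hand (my addition, not the book's) is memoization: cache cc's results so each (amount, kinds-of-coins) pair is evaluated only once. A minimal Racket sketch, reusing the first-denomination procedure defined above:

; memoized cc: each (amount, kinds-of-coins) pair is computed only once
(define memo (make-hash))

(define (cc amount kinds-of-coins)
  (hash-ref! memo (cons amount kinds-of-coins)
             (lambda ()
               (cond ((= amount 0) 1)
                     ((or (< amount 0) (= kinds-of-coins 0)) 0)
                     (else (+ (cc amount (- kinds-of-coins 1))
                              (cc (- amount (first-denomination kinds-of-coins))
                                  kinds-of-coins)))))))

With this, the tree should collapse into a table fill, and that 292 arrives quickly.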

John D. Cook covered this problem in a post, Making change, where further clarifications and links can be found.

If instead of the dollar we use the euro, it's enough to pass 6 to cc and change the cond in first-denomination:
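
A minimal sketch of that change; the denominations are my assumption (the six euro coins below 1€, in cents: 1, 2, 5, 10, 20, 50), as is the name count-change-eur:

; euro version: 6 kinds of coins, denominations in cents
(define (count-change-eur amount)
  (cc amount 6))

(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 2)
        ((= kinds-of-coins 3) 5)
        ((= kinds-of-coins 4) 10)
        ((= kinds-of-coins 5) 20)
        ((= kinds-of-coins 6) 50)))

(count-change-eur 100) ; ways of changing 1 euro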

s24

:mrgreen:

Rust – the language – 50

FoLug

Moving on, today here: /usr/local/share/doc/rust/html/book/error-handling.html.

Error handling

Panic (mine, not panic!) because: This chapter is very long, mostly because we start at the very beginning with sum types and combinators, and try to motivate the way Rust does error handling incrementally. As such, programmers with experience in other expressive type systems may want to jump around.

A long, very detailed table of contents follows (Rust's documentation really is excellent) which I won't copy; it's all over there 😀
There is also, to finish, a practical example (a case study), very interesting, to work through if you decide that Rust is to be used.

And at the end, the chapter summary 😀

Summary

Since this chapter is long, it is useful to have a quick summary for error handling in Rust. These are some good “rules of thumb.” They are emphatically not commandments. There are probably good reasons to break every one of these heuristics!

  • If you’re writing short example code that would be overburdened by error handling, it’s probably just fine to use unwrap (whether that’s Result::unwrap [/usr/local/share/doc/rust/html/std/result/enum.Result.html#method.unwrap], Option::unwrap [/usr/local/share/doc/rust/html/std/option/enum.Option.html#method.unwrap] or preferably Option::expect [/usr/local/share/doc/rust/html/std/option/enum.Option.html#method.expect]). Consumers of your code should know to use proper error handling. (If they don’t, send them here!)
  • If you’re writing a quick ‘n’ dirty program, don’t feel ashamed if you use unwrap. Be warned: if it winds up in someone else’s hands, don’t be surprised if they are agitated by poor error messages!
  • If you’re writing a quick ‘n’ dirty program and feel ashamed about panicking anyway, then use either a String or a Box<Error + Send + Sync> for your error type (the Box<Error + Send + Sync> type is because of the available From impls [/usr/local/share/doc/rust/html/std/convert/trait.From.html]).
  • Otherwise, in a program, define your own error types with appropriate From and Error [/usr/local/share/doc/rust/html/std/error/trait.Error.html] impls to make the try! macro more ergonomic.
  • If you’re writing a library and your code can produce errors, define your own error type and implement the std::error::Error trait. Where appropriate, implement From to make both your library code and the caller’s code easier to write. (Because of Rust’s coherence rules, callers will not be able to impl From on your error type, so your library should do it.)
  • Learn the combinators defined on Option [/usr/local/share/doc/rust/html/std/option/enum.Option.html] and Result [/usr/local/share/doc/rust/html/std/result/enum.Result.html]. Using them exclusively can be a bit tiring at times, but I’ve personally found a healthy mix of try! and combinators to be quite appealing. and_then, map and unwrap_or are my favorites.

Rust is OK, but who knows whether I'll ever use it 😳
:mrgreen:

Seen on the Web – 231

Punctual as taxes, here's another installment of seen on the Web.

CgMgQmcXEAAoqtt

Spry Image Model
a non-standard approach to programming
#:programming languages
::: Roads Less Taken

Quesito con la Susi n. 932
worth digging into, for those interested in mathematics, time permitting
#:algorithms, code
::: Muapna

Popular Firefox Add-Ons Open Millions To New Attack
#:security, espionage
::: Slashdot

How San Francisco Hazed a Tech Bro
#:economy
::: Slashdot

When one upvote is worth a thousand visitors
#:Web, Internet
::: The Next Web

CekKwYvXIAIaKhJ

In memoria di #DegliAntoni
#:protagonists
::: quinta

Panama Papers: ecco come sono stati hackerati i dati
#:Web, Internet #:security, espionage
::: Tech Economy

ranger is a command-line file manager with integrated preview
#:tools, software components
::: daw985

“Full Stack Lisp” has been published
This is a very early work in progress version of Full Stack Lisp. It is incomplete and possibly contains incorrect information; but who knows…
#:manuals, how to #:lisp and derivatives
::: lispcoder

Smartphone ownership rates skyrocket in many emerging countries but digital divide remains
#:mobile devices
::: pewinternet

caps

Worst practices should be hard
extremely interesting, even just the non-Haskell part
#:programming languages
::: Haskell for all

Introducing Keras 1.0
Keras is a Deep Learning library for Python, that is simple, modular, and extensible
#:algorithms, code
::: Keras

5 Ways Computer Hackers Remain Anonymous
#:security, espionage
::: Ian Sutherland

Windows 10, le app Linux con interfaccia grafica
#:operating systems
::: web news

The Best Algorithm No One Knows About
#:algorithms, code
::: Kerf

CeVXS3KXEAQDM8p

Be prepared, Vim 8.0 is coming
#:tools, software components
::: VimLinks

EU Unveils Plan To Force Facebook, Google and Amazon To Pay Their Fair Share of Tax
#:companies
::: Slashdot

I FOUND THE BUG!
#:humor
::: thek3nger

Firefox diventerà Chrome? Assolutamente no!
#:Web, Internet
::: Chimera Revo ::: Slashdot

Bash on Windows: The Good, the Bad and the Ugly
#:operating systems
::: Web Reflection

Umm al-Maa

Former Reuters journalist Matthew Keys sentenced to 2 years for a 40-minute web defacement
#:security, espionage
::: Boing Boing

Obama Is Threatening To Veto the GOP’s Latest Assault On Net Neutrality
#:Web, Internet
::: Slashdot

The secret rules of the internet
#:Web, Internet
::: The Verge

MicroPython 1.7 released! Now with cross-compiler
#:programming languages
::: gvanrossum

Man Deletes His Entire Company With One Line of Bad Code
can it be true? surely he has an up-to-date backup, right?
#:Web, Internet
::: Slashdot ::: Slashdot

muro_austria

Kite looks pretty cool: machine-learning applied to programming in Python
#:programming languages
::: gvanrossum

Voltron: teaching kids distributed computing since 1984
#:humor
::: idiosynchris

New edition of Harper’s Practical Foundations of Programming Languages is out
#:manuals, how to
::: lambda_calculus

Practical #Unix Manuals: Free ebook on writing man pages
#:manuals, how to
::: Donearm

David MacKay sadly passed away
#:protagonists
::: marcodelmastro ::: MSR_Future ::: mark_lynas ::: TimHarford

found on moon

Foia, oblio e reputazione
#:Web, Internet
::: manteblog

Obama Forms Commission To Bolster US Cyber Security
#:security, espionage
::: Slashdot ::: Slashdot

Conway’s Game of Life for Web and Android with ClojureScript
#:algorithms, code
::: Vladimir Iakovlev

Vivere altrove – Esino Lario
#:Web, Internet
::: la Stampa

BetaGo: not quite as good as AlphaGo (yet) but open source!
#:programming languages
::: alexjc

CgGkN6YVAAAZ3_1

Cicero pro domo sua
#:security, espionage #:companies
::: SIAMO GEEK

Announcing Rust 1.8
#:programming languages
::: The Rust Programming Language

Why functional programming matters
#:functional programming
::: CompSciFact

Togliete QuickTime da Windows ADESSO
#:security, espionage
::: SIAMO GEEK

A younger Richard Stallman, in Balkan folk costume, dancing with Lisp Machine
#:history #:humor
::: Google+

#OnThisDay in 1959, John McCarthy first unveiled the LISP programming language
#:lisp and derivatives
::: Atmel ::: dastels

celeste

Rust – the language – 49

CUrS1_gWcAAlgRM

From here, today I'm here: /usr/local/share/doc/rust/html/book/concurrency.html.

Concurrency

Concurrency and parallelism are incredibly important topics in computer science, and are also a hot topic in industry today. Computers are gaining more and more cores, yet many programmers aren’t prepared to fully utilize them.

Rust’s memory safety features also apply to its concurrency story too.

Before we talk about the concurrency features that come with Rust, it’s important to understand something: Rust is low-level enough that the vast majority of this is provided by the standard library, not by the language. This means that if you don’t like some aspect of the way Rust handles concurrency, you can implement an alternative way of doing things. mio is a real-world example of this principle in action.

Background: Send and Sync

Concurrency is difficult to reason about. In Rust, we have a strong, static type system to help us reason about our code. As such, Rust gives us two traits to help us make sense of code that can possibly be concurrent.

Send

The first trait we’re going to talk about is Send [/usr/local/share/doc/rust/html/std/marker/trait.Send.html]. When a type T implements Send, it indicates that something of this type is able to have ownership transferred safely between threads.

This is important to enforce certain restrictions. For example, if we have a channel connecting two threads, we would want to be able to send some data down the channel and to the other thread. Therefore, we’d ensure that Send was implemented for that type.

In the opposite way, if we were wrapping a library with FFI [/usr/local/share/doc/rust/html/book/ffi.html] that isn’t threadsafe, we wouldn’t want to implement Send, and so the compiler will help us enforce that it can’t leave the current thread.

Sync

The second of these traits is called Sync. When a type T implements Sync [/usr/local/share/doc/rust/html/std/marker/trait.Sync.html], it indicates that something of this type has no possibility of introducing memory unsafety when used from multiple threads concurrently through shared references. This implies that types which don’t have interior mutability [/usr/local/share/doc/rust/html/book/mutability.html] are inherently Sync, which includes simple primitive types (like u8) and aggregate types containing them.

For sharing references across threads, Rust provides a wrapper type called Arc<T>. Arc<T> implements Send and Sync if and only if T implements both Send and Sync. For example, an object of type Arc<RefCell<U>> cannot be transferred across threads because RefCell [/usr/local/share/doc/rust/html/book/choosing-your-guarantees.html#refcellt] does not implement Sync, consequently Arc<RefCell<U>> would not implement Send.

These two traits allow you to use the type system to make strong guarantees about the properties of your code under concurrency. Before we demonstrate why, we need to learn how to create a concurrent Rust program in the first place!

Threads

Rust’s standard library provides a library for threads, which allow you to run Rust code in parallel. Here’s a basic example of using std::thread:

use std::thread;

fn main() {
    thread::spawn(|| {
        println!("Hello from a thread!");
    });
}

The thread::spawn() method accepts a closure, which is executed in a new thread. It returns a handle to the thread, that can be used to wait for the child thread to finish and extract its result:

use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        "Hello from a thread!"
    });

    println!("{}", handle.join().unwrap());
}

rs49-0

Many languages have the ability to execute threads, but it’s wildly unsafe. There are entire books about how to prevent errors that occur from shared mutable state. Rust helps out with its type system here as well, by preventing data races at compile time. Let’s talk about how you actually share things between threads.

Safe Shared Mutable State

Come on, there's no way to translate this 😉
Due to Rust’s type system, we have a concept that sounds like a lie: “safe shared mutable state.” Many programmers agree that shared mutable state is very, very bad.

How does the saying go?

Shared mutable state is the root of all evil. Most languages attempt to deal with this problem through the ‘mutable’ part, but Rust deals with it by solving the ‘shared’ part.

(Rust's self-citation.)

The same ownership system [/usr/local/share/doc/rust/html/book/ownership.html] that helps prevent using pointers incorrectly also helps rule out data races, one of the worst kinds of concurrency bugs.

As an example, here is a Rust program that would have a data race in many languages. It will not compile:

use std::thread;
use std::time::Duration;

fn main() {
    let mut data = vec![1, 2, 3];

    for i in 0..3 {
        thread::spawn(move || {
            data[i] += 1;
        });
    }

    thread::sleep(Duration::from_millis(50));
}

rs49-1

Rust knows this wouldn’t be safe! If we had a reference to data in each thread, and the thread takes ownership of the reference, we’d have three owners!

So, we need some type that lets us have more than one reference to a value and that we can share between threads; that is, it must implement Sync.

We’ll use Arc<T>, Rust’s standard atomic reference count type, which wraps a value up with some extra runtime bookkeeping which allows us to share the ownership of the value between multiple references at the same time.

The bookkeeping consists of a count of how many of these references exist to the value, hence the reference count part of the name.

The Atomic part means Arc<T> can safely be accessed from multiple threads. To do this the compiler guarantees that mutations of the internal count use indivisible operations which can’t have data races.

use std::thread;
use std::sync::Arc;
use std::time::Duration;

fn main() {
    let mut data = Arc::new(vec![1, 2, 3]);

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            data[i] += 1;
        });
    }

    thread::sleep(Duration::from_millis(50));
}

rs49-2

We now call clone() on our Arc<T>, which increases the internal count. This handle is then moved into the new thread.

And… it still gives us an error 😦

Arc<T> assumes one more property about its contents to ensure that it is safe to share across threads: it assumes its contents are Sync. This is true for our value if it’s immutable, but we want to be able to mutate it, so we need something else to persuade the borrow checker we know what we’re doing.

It looks like we need some type that allows us to safely mutate a shared value, for example a type that can ensure only one thread at a time is able to mutate the value inside it at any one time.

For that, we can use the Mutex<T> type!

Here’s the working version:

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data[i] += 1;
        });
    }

    thread::sleep(Duration::from_millis(50));
}

rs49-3

Note that the value of i is bound (copied) to the closure and not shared among the threads.

Also note that the lock method of Mutex has this signature:

fn lock(&self) -> LockResult<MutexGuard<T>>

and because Send is not implemented for MutexGuard<T>, the guard cannot cross thread boundaries, ensuring thread-locality of lock acquire and release.

Let’s examine the body of the thread more closely:

thread::spawn(move || {
    let mut data = data.lock().unwrap();
    data[i] += 1;
});

First, we call lock(), which acquires the mutex’s lock. Because this may fail, it returns a Result<T, E>, and because this is just an example, we unwrap() it to get a reference to the data. Real code would have more robust error handling here. We’re then free to mutate it, since we have the lock.

Lastly, while the threads are running, we wait on a short timer. But this is not ideal: we may have picked a reasonable amount of time to wait but it’s more likely we’ll either be waiting longer than necessary or not long enough, depending on just how much time the threads actually take to finish computing when the program runs.

A more precise alternative to the timer would be to use one of the mechanisms provided by the Rust standard library for synchronizing threads with each other. Let’s talk about one of them: channels.

Channels

Here’s a version of our code that uses channels for synchronization, rather than waiting for a specific time:

use std::sync::{Arc, Mutex};
use std::thread;
use std::sync::mpsc;

fn main() {
    let data = Arc::new(Mutex::new(0));

    let (tx, rx) = mpsc::channel();

    for _ in 0..10 {
        let (data, tx) = (data.clone(), tx.clone());

        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            *data += 1;

            tx.send(()).unwrap();
        });
    }

    for _ in 0..10 {
        rx.recv().unwrap();
    }
}

rs49-4

We use the mpsc::channel() method to construct a new channel. We just send a simple () down the channel, and then wait for ten of them to come back.

While this channel is just sending a generic signal, we can send any data that is Send over the channel!

use std::thread;
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..10 {
        let tx = tx.clone();

        thread::spawn(move || {
            let answer = i * i;

            tx.send(answer).unwrap();
        });
    }

    for _ in 0..10 {
        println!("{}", rx.recv().unwrap());
    }
}

rs49-5

Here we create 10 threads, asking each to calculate the square of a number (i at the time of spawn()), and then send() back the answer over the channel.

Panics

A panic! will crash the currently executing thread. You can use Rust’s threads as a simple isolation mechanism:

fn main() {
    use std::thread;
    
    let handle = thread::spawn(move || {
        panic!("oops!");
    });
    
    let result = handle.join();
    
    println!("{}", result.is_err());
    
}

rs49-6

Thread.join() gives us a Result back, which allows us to check if the thread has panicked or not.

Nice 😀 one of the reasons to use Rust 😀
:mrgreen:

SICP – chap. 1 – Building Abstractions with Procedures – exercises – 12

sg0

Continuing from here, today exercises, here.

Exercise 1.9: Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its argument by 1.

(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))

(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))

Using the substitution model, illustrate the process generated by each procedure in evaluating (+ 4 5). Are these processes iterative or recursive?

The easy part is defining the two procedures inc and dec:

(define (inc n)
  (+ n 1)) 

(define (dec n)
  (- n 1))

then comes the demanding part. True, it only applies what was said before, but I hand over to Bill the Lizard, here.
Oh! the legendary Bill 😀 who also points us to an explanatory page; SICP, if you're not more than careful, gets you into trouble at every turn 😳

Ken Dyck, by contrast, is very concise; he reports what you get by developing the procedures step by step with pen and paper.
Likewise S-sol, but with a final explanatory note, very clear (in hindsight, at least for me; I hadn't gotten there on my own).
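
For the record, here is that pen-and-paper expansion under the substitution model (my own working of the exercise):

; first definition: a recursive process, the incs pile up deferred
(+ 4 5)
(inc (+ 3 5))
(inc (inc (+ 2 5)))
(inc (inc (inc (+ 1 5))))
(inc (inc (inc (inc (+ 0 5)))))
(inc (inc (inc (inc 5))))
(inc (inc (inc 6)))
(inc (inc 7))
(inc 8)
9

; second definition: an iterative process, constant shape
(+ 4 5)
(+ 3 6)
(+ 2 7)
(+ 1 8)
(+ 0 9)
9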

Exercise 1.10: The following procedure computes a mathematical function called Ackermann’s function.

s20
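
The screenshot holds the book's code; I retype it here so it stays searchable (this is the procedure as printed in exercise 1.10):

(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))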

but it doesn't count to just run it like that 😳 except as a check, eh!

Consider the following procedures, where A is the procedure defined above:

(define (f n) (A 0 n))

(define (g n) (A 1 n))

(define (h n) (A 2 n))

(define (k n) (* 5 n n))

Give concise mathematical definitions for the functions computed by the procedures f, g, and h for positive integer values of n. For example, (k n) computes 5n².

Well, I confess I peeked at Bill, and yes, you can (must) let the computer do the calculations. Since it's all over there I won't repeat it.
Ken is concise, well done 😀
S-sol is didactic, impressive 😀

SICP is, how to put it… and to think that a few years ago at the Poli it was an intro exam 😉

:mrgreen:

Rust – the language – 48

stella

Today here: /usr/local/share/doc/rust/html/book/iterators.html, continuing from here.

Iterators

Let’s talk about loops.

Remember Rust’s for loop? Here’s an example:

fn main() {
    for x in 0..10 {
        print!("{} ", x);
    }
    println!("");
}

rs48-0

Now that you know more Rust, we can talk in detail about how this works. Ranges (the 0..10) are ‘iterators’. An iterator is something that we can call the .next() method on repeatedly, and it gives us a sequence of things.

Like this:

fn main() {
    let mut range = 0..10;
    
    loop {
        match range.next() {
            Some(x) => {
                print!("{} ", x);
            },
            None => { break }
        }
    }
     println!("");
}

rs48-1

We make a mutable binding to the range, which is our iterator. We then loop, with an inner match. This match is used on the result of range.next(), which gives us a reference to the next value of the iterator. next returns an Option<i32>, in this case, which will be Some(i32) when we have a value and None once we run out. If we get Some(i32), we print it out, and if we get None, we break out of the loop.

This code sample is basically the same as our for loop version. The for loop is just a handy way to write this loop/match/break construct.

for loops aren’t the only thing that uses iterators, however. Writing your own iterator involves implementing the Iterator trait. While doing that is outside of the scope of this guide, Rust provides a number of useful iterators to accomplish various tasks. But first, a few notes about limitations of ranges.

Ranges are very primitive, and we often can use better alternatives. Consider the following Rust anti-pattern: using ranges to emulate a C-style for loop. Let’s suppose you needed to iterate over the contents of a vector. You may be tempted to write this:

fn main() {
    let nums = vec![1, 2, 3];
    
    for i in 0..nums.len() {
        println!("{}", nums[i]);
    }
}

rs48-2

This is strictly worse than using an actual iterator. You can iterate over vectors directly, so write this:

fn main() {
    let nums = vec![1, 2, 3];
    
    for num in &nums {
        println!("{}", num);
    }
}

rs48-3

There are two reasons for this. First, this more directly expresses what we mean. We iterate through the entire vector, rather than iterating through indexes, and then indexing the vector. Second, this version is more efficient: the first version will have extra bounds checking because it used indexing, nums[i]. But since we yield a reference to each element of the vector in turn with the iterator, there’s no bounds checking in the second example. This is very common with iterators: we can ignore unnecessary bounds checks, but still know that we’re safe.

There’s another detail here that’s not 100% clear because of how println! works. num is actually of type &i32. That is, it’s a reference to an i32, not an i32 itself. println! handles the dereferencing for us, so we don’t see it. This code works fine too:

fn main() {
    let nums = vec![1, 2, 3];
    
    for num in &nums {
        println!("{}", *num);
    }
}

rs48-4

Now we’re explicitly dereferencing num. Why does &nums give us references? Firstly, because we explicitly asked it to with &. Secondly, if it gave us the data itself, we would have to be its owner, which would involve making a copy of the data and giving us the copy. With references, we’re just borrowing a reference to the data, and so it’s just passing a reference, without needing to do the move.

So, now that we’ve established that ranges are often not what you want, let’s talk about what you do want instead.

There are three broad classes of things that are relevant here: iterators, iterator adaptors, and consumers. Here’s some definitions:

  • iterators give you a sequence of values.
  • iterator adaptors operate on an iterator, producing a new iterator with a different output sequence.
  • consumers operate on an iterator, producing some final set of values.

Let’s talk about consumers first, since you’ve already seen an iterator, ranges.

Consumers

No, I have no idea how to translate it 😉
A consumer operates on an iterator, returning some kind of value or values. The most common consumer is collect(). This code doesn’t quite compile, but it shows the intention:

let one_to_one_hundred = (1..101).collect();

As you can see, we call collect() on our iterator. collect() takes as many values as the iterator will give it, and returns a collection of the results. So why won’t this compile? Rust can’t determine what type of things you want to collect, and so you need to let it know. Here’s the version that does compile:

fn main() {
    let one_to_one_hundred = (1..101).collect::<Vec<i32>>();
    println!("{} {} {}", one_to_one_hundred[0], 
                         one_to_one_hundred[41], 
                         one_to_one_hundred[99]);
}

rs48-5

If you remember, the ::<> syntax allows us to give a type hint, and so we tell it that we want a vector of integers. You don’t always need to use the whole type, though. Using a _ will let you provide a partial hint:

fn main() {
    let one_to_one_hundred = (1..101).collect::<Vec<_>>();
    println!("{} {} {}", one_to_one_hundred[0], 
                         one_to_one_hundred[41], 
                         one_to_one_hundred[99]);
}

rs48-6

This says “Collect into a Vec<T>, please, but infer what the T is for me.” _ is sometimes called a “type placeholder” for this reason.

collect() is the most common consumer, but there are others too. find() is one:

fn main() {
    let greater_than_forty_two = (0..100)
                                 .find(|x| *x > 42);
    
    match greater_than_forty_two {
        Some(_) => println!("Found a match!"),
        None => println!("No match found :("),
    }
}

rs48-7

find takes a closure, and works on a reference to each element of an iterator. This closure returns true if the element is the element we’re looking for, and false otherwise. find returns the first element satisfying the specified predicate. Because we might not find a matching element, find returns an Option rather than the element itself.

Another important consumer is fold. Here’s what it looks like:

fn main() {
    let sum = (1..4).fold(0, |sum, x| sum + x);
    println!("{}", sum);
}

rs48-8

fold() is a consumer that looks like this: fold(base, |accumulator, element| ...). It takes two arguments: the first is an element called the base. The second is a closure that itself takes two arguments: the first is called the accumulator, and the second is an element. Upon each iteration, the closure is called, and the result is the value of the accumulator on the next iteration. On the first iteration, the base is the value of the accumulator.

Okay, that’s a bit confusing. Let’s examine the values of all of these things in this iterator:

tabella
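
(The screenshot shows the book's table of values; reconstructed here from the walkthrough below, so it stays searchable:)

base | accumulator | element | closure result
  0  |      0      |    1    |       1
  0  |      1      |    2    |       3
  0  |      3      |    3    |       6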

We called fold() with these arguments:

.fold(0, |sum, x| sum + x);

So, 0 is our base, sum is our accumulator, and x is our element. On the first iteration, we set sum to 0, and x is the first element of nums, 1. We then add sum and x, which gives us 0 + 1 = 1. On the second iteration, that value becomes our accumulator, sum, and the element is the second element of the array, 2. 1 + 2 = 3, and so that becomes the value of the accumulator for the last iteration. On that iteration, x is the last element, 3, and 3 + 3 = 6, which is our final result for our sum. 1 + 2 + 3 = 6, and that’s the result we got.

Whew. fold can be a bit strange the first few times you see it, but once it clicks, you can use it all over the place. Any time you have a list of things, and you want a single result, fold is appropriate.

Consumers are important due to one additional property of iterators we haven’t talked about yet: laziness. Let’s talk some more about iterators, and you’ll see why consumers matter.

Iterators

As we’ve said before, an iterator is something that we can call the .next() method on repeatedly, and it gives us a sequence of things. Because you need to call the method, this means that iterators can be lazy and not generate all of the values upfront. This code, for example, does not actually generate the numbers 1-99, instead creating a value that merely represents the sequence:

let nums = 1..100;

Since we didn’t do anything with the range, it didn’t generate the sequence. Let’s add the consumer:

let nums = (1..100).collect::<Vec<i32>>();

Now, collect() will require that the range gives it some numbers, and so it will do the work of generating the sequence.

Ranges are one of two basic iterators that you’ll see. The other is iter(). iter() can turn a vector into a simple iterator that gives you each element in turn:

let nums = vec![1, 2, 3];

for num in nums.iter() {
   println!("{}", num);
}

(already run, see above, third screenshot).

These two basic iterators should serve you well. There are some more advanced iterators, including ones that are infinite.

That’s enough about iterators. Iterator adaptors are the last concept we need to talk about with regards to iterators. Let’s get to it!

Iterator adaptors

Iterator adaptors take an iterator and modify it somehow, producing a new iterator. The simplest one is called map:

(1..100).map(|x| x + 1);

map is called upon another iterator, and produces a new iterator where each element reference has the closure it’s been given as an argument called on it. So this would give us the numbers from 2-100. Well, almost! If you compile the example, you’ll get a warning:

fn main() {
    (1..100).map(|x| x + 1);
}

rs48-9

Laziness strikes again! That closure will never execute. This example doesn’t print any numbers:

fn main() {
    (1..100).map(|x| println!("{}", x));
}

rs48-10

If you are trying to execute a closure on an iterator for its side effects, just use for instead.

There are tons of interesting iterator adaptors. take(n) will return an iterator over the next n elements of the original iterator. Let’s try it out with an infinite iterator:

fn main() {
    for i in (1..).take(5) {
        print!("{} ", i);
    }
    println!("");
}

rs48-11

filter() is an adapter that takes a closure as an argument. This closure returns true or false. The new iterator filter() produces only the elements that the closure returns true for:

fn main() {
    for i in (1..100).filter(|&x| x % 2 == 0) {
        print!("{} ", i);
    }
    println!("");
}

rs48-12

This will print all of the even numbers between one and a hundred. (Note that because filter doesn’t consume the elements that are being iterated over, it is passed a reference to each element, and thus the filter predicate uses the &x pattern to extract the integer itself.)

You can chain all three things together: start with an iterator, adapt it a few times, and then consume the result. Check it out:

fn main() {
    for i in (1..)
        .filter(|&x| x % 2 == 0)
        .filter(|&x| x % 3 == 0)
        .take(5)
        .collect::<Vec<i32>>() {
        print!("{} ", i);
    }
    println!("");
}

rs48-13

This is just a small taste of what iterators, iterator adaptors, and consumers can help you with. There are a number of really useful iterators, and you can write your own as well. Iterators provide a safe, efficient way to manipulate all kinds of lists. They’re a little unusual at first, but if you play with them, you’ll get hooked. For a full list of the different iterators and consumers, check out the iterator module documentation [/usr/local/share/doc/rust/html/std/iter/index.html].

:mrgreen:

Rust – the language – 47

florence

Continuing from here, today here I am: /usr/local/share/doc/rust/html/book/documentation.html.

Documentation

Documentation is an important part of any software project, and it’s first-class in Rust. Let’s talk about the tooling Rust gives you to document your project. True 😀 lately I only consider things that are well documented (and up to date), better still if bug-free.

rustdoc

The Rust distribution includes a tool, rustdoc, that generates documentation. rustdoc is also used by Cargo through cargo doc.
Documentation can be generated in two ways: from source code, and from standalone Markdown files.

Documenting source code

The primary way of documenting a Rust project is through annotating the source code. You can use documentation comments for this purpose:

/// Constructs a new `Rc<T>`.
///
/// # Examples
///
/// ```
/// use std::rc::Rc;
///
/// let five = Rc::new(5);
/// ```
pub fn new(value: T) -> Rc<T> {
    // implementation goes here
}

It produces an .html file like the ones I'm going through, in fact exactly one of those 😀

The first thing to notice about this annotation is that it uses /// instead of //. The triple slash indicates a documentation comment.

Documentation comments are written in Markdown.

Rust keeps track of these comments, and uses them when generating documentation. This is important when documenting things like enums:

/// The `Option` type. See [the module level documentation](index.html) for more.
enum Option<T> {
    /// No value
    None,
    /// Some value `T`
    Some(T),
}

The above works, but this does not:

/// The `Option` type. See [the module level documentation](index.html) for more.
enum Option<T> {
    None, /// No value
    Some(T), /// Some value `T`
}

You’ll get an error:

hello.rs:4:1: 4:2 error: expected ident, found `}`
hello.rs:4 }
           ^

This unfortunate error is correct: documentation comments apply to the thing after them, and there’s nothing after that last comment.

A series of tips follows that I won't copy, even though they're important, eh! 😀
Nor do I copy how to produce the documentation with rustdoc or cargo test.

Documenting modules

Rust has another kind of doc comment, //!. This comment doesn’t document the next item, but the enclosing item. In other words:

mod foo {
    //! This is documentation for the `foo` module.
    //!
    //! # Examples

    // ...

This is where you’ll see //! used most often: for module documentation. If you have a module in foo.rs, you’ll often open its code and see this:

//! A module for using `foo`s.
//!
//! The `foo` module contains a lot of useful functionality blah blah blah

Check out RFC 505 for full conventions around the style and format of documentation.

Other documentation

All of this behavior works in non-Rust source files too. Because comments are written in Markdown, they’re often .md files.

When you write documentation in Markdown files, you don’t need to prefix the documentation with comments. For example:

# Examples

```
use std::rc::Rc;

let five = Rc::new(5);
```

A whole series of further tips follows, too specific for my review; they're all over there anyway 😀
In conclusion: the tools for producing documentation with Rust rockz! 😀

:mrgreen:

SICP – chap. 1 – Procedures and the Processes They Generate – 11

Marco Bruno at Ischia

I carry on with SICP; continuing from here, today here I am.

We have now considered the elements of programming: We have used primitive arithmetic operations, we have combined these operations, and we have abstracted these composite operations by defining them as compound procedures. But that is not enough to enable us to say that we know how to program. […] We lack the experience to predict the consequences of making a move (executing a procedure).

I can't really copy everything, even though it's said really well; SICP rockz! 😀

The ability to visualize the consequences of the actions under consideration is crucial to becoming an expert programmer, just as it is in any synthetic, creative activity.
To become experts, we must learn to visualize the processes generated by various types of procedures. Only after we have developed such a skill can we learn to reliably construct programs that exhibit the desired behavior.

In this section we will examine some common “shapes” for processes generated by simple procedures. We will also investigate the rates at which these processes consume the important computational resources of time and space. The procedures we will consider are very simple.

Linear recursion and iteration

ch1-Z-G-7

Let's start by considering the factorial function, defined as

n! = n * (n - 1) * (n - 2) * ... 3 * 2 * 1

There are many ways to compute factorials. One is to use the observation that n! is equal to n times (n - 1)! for any positive integer n:

n! = n * [(n - 1) * (n - 2) * ... 3 * 2 * 1] = n * (n - 1)!

Then we can… you know what: I'll use the English, no point translating 😉

Thus, we can compute n! by computing (n - 1)! and multiplying the result by n. If we add the stipulation that 1! is equal to 1, this observation translates directly into a procedure:

(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

which for n = 6 is illustrated in the figure above.

Now let’s take a different perspective on computing factorials. We could describe a rule for computing n! by specifying that we first multiply 1 by 2, then multiply the result by 3, then by 4, and so on until we reach n. More formally, we maintain a running product, together with a counter that counts from 1 up to n. We can describe the computation by saying that the counter and the product simultaneously change from one step to the next according to the rule

product ← counter · product

counter ← counter + 1

and stipulating that n! is the value of the product when the counter exceeds n.

ch1-Z-G-10

Once again, we can recast our description as a procedure for computing factorials:

(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))

Note: see footnote 29 (which I don't copy), which simplifies the code using the blocks introduced in the previous post; they're not used here, to keep the focus on the current procedure.

The figure shows the case n = 6.

Compare the two processes. From one point of view, they seem hardly different at all. Both compute the same mathematical function on the same domain, and each requires a number of steps proportional to n to compute n!. Indeed, both processes even carry out the same sequence of multiplications, obtaining the same sequence of partial products. On the other hand, when we consider the “shapes” of the two processes, we find that they evolve quite differently.

Consider the first process. The substitution model reveals a shape of expansion followed by contraction, indicated by the arrow in figure [la prima]. The expansion occurs as the process builds up a chain of deferred operations (in this case, a chain of multiplications). The contraction occurs as the operations are actually performed. This type of process, characterized by a chain of deferred operations, is called a recursive process. Carrying out this process requires that the interpreter keep track of the operations to be performed later on. In the computation of n!, the length of the chain of deferred multiplications, and hence the amount of information needed to keep track of it, grows linearly with n (is proportional to n), just like the number of steps. Such a process is called a linear recursive process.

By contrast, the second process does not grow and shrink. At each step, all we need to keep track of, for any n, are the current values of the variables product, counter, and max-count. We call this an iterative process. In general, an iterative process is one whose state can be summarized by a fixed number of state variables, together with a fixed rule that describes how the state variables should be updated as the process moves from state to state and an (optional) end test that specifies conditions under which the process should terminate. In computing n!, the number of steps required grows linearly with n. Such a process is called a linear iterative process.

The contrast between the two processes can be seen in another way. In the iterative case, the program variables provide a complete description of the state of the process at any point. If we stopped the computation between steps, all we would need to do to resume the computation is to supply the interpreter with the values of the three program variables. Not so with the recursive process. In this case there is some additional “hidden” information, maintained by the interpreter and not contained in the program variables, which indicates “where the process is” in negotiating the chain of deferred operations. The longer the chain, the more information must be maintained. We'll come back to this in the future.

In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written. It may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an iterative process. However, the process really is iterative: Its state is captured completely by its three state variables, and an interpreter need keep track of only three variables in order to execute the process.

One reason that the distinction between process and procedure may be confusing is that most implementations of common languages (including Ada, Pascal, and C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose “looping constructs” such as do, repeat, until, for, and while. The implementation of Scheme we shall consider in chapter 5 does not share this defect. It will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. An implementation with this property is called tail-recursive. With a tail-recursive implementation, iteration can be expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful only as syntactic sugar.

That's why Scheme (Lisp, Racket, Clojure & co. are different, more beautiful) 😀

OK 😀 break time; then come the exercises. :mrgreen:
