Category Archives: Racket

Global or local?

Jake 💥 rockz, and today he tells us about a case that not even Dirk Gently could have untangled; this one:

I rewrite it with one small change (a print statement to see the result), file e0.py:

def g(x):
  def f():
    x += 1
  f()

print(g(0))

Err! 😡, and as Jake says: That was a fun one to explain 😀. Let me say right away that I didn't get there on my own; then I told myself I should think it through, but…

Let me try to do what I should have done. Instead of operating inside f() on the enclosing variable x, I return the modified value of x, like this (file e1.py):

def g(x):
  def f():
    return x + 1
  f()

print(g(0))

OOPS! Python is fine with it, but I'm not: g() never returns the value computed by f(), so it prints None. Let me fix it (e2.py):

def g(x):
  def f():
    return x + 1
  return f()

print(g(0))

Clear? Back to the tweet: the thread has the tip, a link to –guess what– the official Python documentation, here: Why am I getting an UnboundLocalError when the variable has a value?

The explanation is different from mine (and more thorough); in short:

This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope. Since the last statement in foo assigns a new value to x, the compiler recognizes it as a local variable. Consequently when the earlier print(x) attempts to print the uninitialized local variable and an error results.

Logical. And with certain languages –new, fashionable ones– variables, when they exist at all, are what they are and stay that way: no modifications. For example Haskell –left as an exercise. I've wanted to say that sentence for a long time 😊.

In another family of languages instead –sexy, my favorites– there's no need to write “return”: the function is a value (file e3.rkt):

(define (g x)
  (define (f)
    (+ 1 x))
  (f))

(displayln (g 0))

But no lisper would define the function f() here: it gets called only once, and that's exactly what lambda is for (e4.rkt):

(define (g x)
  ((lambda (t) (+ 1 t)) x))

(displayln (g 0))

I don't like it, needlessly convoluted; enough (e5.rkt):

(define (g x)
  (+ 1 x))

(displayln (g 0))

Here I've simplified the code (too much? in a real program f and g would contain other statements). But it was redundant; lambda is for when you actually need it.

Then I thought about it some more (all night long) and came to a conclusion that perhaps holds only for me, but I think the code in the tweet is wrong as a design. If x is meant to be global, it must be declared as such, like this (e6.py):

def g(x):
  def f():
    global x
    x += 1
  f()

# main
x = 0
g(0)
print(x)

Obviously the parameter of g() is useless, and harmful, as will be discovered while debugging the first revision (e7.py), where g(42) still leaves x at 1 because the argument is never used:

def g(x):
  def f():
    global x
    x += 1
  f()

# main
x = 0
g(42) # <- ====
print(x)

Sweet, Lisp and parentheses

I love Twitter, big time ❤️, and when I find stuff that inspires me I then write about it here on the blog. Like the recent post Recursion and specialties.

I need to look at Sweet, I concluded there; so here's what I found.

Sweet's home page on SourceForge presents it as Readable Lisp/S-expressions with infix, functions, and indentation, brought to you by dwheeler.

Uh! who's the author? Here he is: David A. Wheeler, found via Twitter, where he doesn't post much but rockz! 💥

The project's features are:

  • Infix notation, e.g., {3 + 4} in addition to (+ 3 4)
  • Optional traditional function notation, e.g., f(...) in addition to (f ...)
  • Optional indentation support greatly reduces the need for parentheses
  • Works on any data or program (it is general and homoiconic)

Then there's the Wiki page, which says it all (ahemmm…) and gives an example very similar to Michael's, the one I started from in the previous post.

The orthodox version of the factorial function:

(define (factorial n)
  (if (<= n 1)
    1
    (* n (factorial (- n 1)))))

and the Sweet one:

define factorial(n)
  if {n <= 1}
    1
    {n * factorial{n - 1}}

All the round parentheses are gone, except the ones used to pass arguments. Curly braces are used for infix notation.
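
Out of curiosity, here's what a complete, runnable file might look like; this is my sketch, assuming the sweet-exp package is installed so that the #lang sweet-exp meta-language is available (the final call uses the neoteric f(x) form as well):

#lang sweet-exp racket

define factorial(n)
  if {n <= 1}
    1
    {n * factorial{n - 1}}

; should print 120
displayln(factorial(5))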

Is it more readable? Maybe; it depends on what you're used to.
Personally I think about how I build a Lisp (in practice, for me, Racket) function from the inside out: I always open parentheses in pairs, like this (), even on paper. Then the DrRacket environment, like many other editors, highlights the matching open/close parenthesis.

I'm also undecided about infix notation versus prefix:

here's the thing, prefix isn't the notation we're used to, the one they teach us at school; then again 😉 it's really just a matter of convention.

I admit that bringing up dc today is only a provocation. Or is it? Think of the HP calculators, the legendary 11c.

I agree with Jesse Alama, here (I haven't found the tweet he's referring to), and here, with a suitable adaptation: sed 's/God/FSM/'.

Recursion and specialties

I don't know much about the excellent Michael Burge 💥; you can find him on Twitter, and he has a blog full of things far too nerdy for me. He's new, young I imagine, and for me a master. Here's one of his tweets, wordless, that is, straight to the code.

But will it work? It looks too simple to me; maybe it's meta…

All that's left is to try it out

* rec $ ghci
GHCi, version 8.0.2: http://www.haskell.org/ghc/  :? for help
Prelude> fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Prelude> fibs !! 8
21
Prelude> fibs !! 12
144
Prelude> fibs !! 0
0

The first line defines fibs, which builds the infinite list of Fibonacci numbers: it supplies the first two values and defines each following value as the (+) of the list with its own tail. !! extracts the element at the given index of the list. Simple, right? Even I understood it. And since Haskell is lazy –OK, I don't want to repeat myself.

OK, but I prefer Racket, so here we go. It's convenient to run it in its IDE, DrRacket, because I need to load the lazy language:

But I'm old school, the REPL school, so here goes…

* rec $ racket
Welcome to Racket v6.11.
> (require lazy)
> (define fibs (cons 0 (cons 1 (map + fibs (rest fibs)))))
> (displayln (list-ref fibs 8))
21
> ^D
* rec $

OK. It's the literal translation of the Haskell program. More verbose. But Michael (did you know he built himself a language of his own, Pyramid?) doesn't stop here, he goes further. To proceed I need to get the sweet-exp module, and here DrRacket really is handy, working from the menu: File/Install Package...

Done, and so (this time the REPL isn't enough):

Note: the tweet contains an error; the open parenthesis before + is missing.

Note to self: I need to read up on Sweet; I'm not the only one, Jesse too 💥

As if that weren't enough, here's William 💥

Yes, time permitting, Cur would also be worth a look.

SICP – what now?

I continue from here.

For a while now I've felt less engaged with the (excellent) SICP. Even with the exercises, lately I've limited myself to following Bill the Lizard and other schemers better than me: SICP-Solutions and Drewiki.

Well, Bill hasn't gone any further with the exercises, and soon Drewiki will stop as well. sicp-ex remains, as excellent as ever. Then there are others too; I found these:

AbdulFattaah Popoola, SICPBook – Solutions to SICP book exercises. From what I've seen it only gives the code, with no comments.

Phoenine, collins562, SICP-solutions. Here too, only code, no comments.

Ivan Jovanovic, sicp. Again only the code; it looks OK to me.

Then there's Gwern, whom I still have to examine, because of Haskell. He abandoned SICP and moved on to other things. But I'll come back to that.

So –at least for now– I won't continue with SICP. It's excellent, but it should be tackled when you're young, when you're starting out. I'm not abandoning Scheme, that is Racket: I like it a lot, and then some.

And then, who knows… 🤩

SICP – chapter 2 – sets and information retrieval – 88 – exercise

I continue from here, copying from here.

Exercise 2.66: Implement the lookup procedure for the case where the set of records is structured as a binary tree, ordered by the numerical values of the keys.

When there are exercises I call in Bill the Lizard 😋

The final part of [the] section asks us to consider a database that contains records, each of which has a key and some data. If the database is represented as an unordered list, a record can be looked up by its key using the following procedure.

(define (lookup given-key set-of-records)
  (cond ((null? set-of-records) false)
        ((equal? given-key (key (car set-of-records)))
         (car set-of-records))
        (else (lookup given-key (cdr set-of-records)))))

We can define simple procedures for making a record out of a key and its data, and for extracting the key and data from an existing record in order to test the procedure above.

(define (key record) (car record))
(define (data record) (cdr record))
(define (make-record key data) (cons key data))

(define database
  (list (make-record 1 'Bill)
        (make-record 2 'Joe)
        (make-record 3 'Frank)
        (make-record 4 'John)))
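
Just to see it at work, here's a quick check I added (it isn't in Bill's post); with the unordered-list lookup above, a record is found by a linear scan of the database:

(lookup 3 database)   ; → '(3 . Frank)
(lookup 12 database)  ; → #f, no record with that key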

We can start by including the list->tree and partial-tree procedures given for exercise 2.64 [here], along with a few required supporting procedures.

(define (entry tree) (car tree))
(define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree))
(define (make-tree entry left right)
  (list entry left right))
  
(define (list->tree elements)
  (car (partial-tree elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))

I collect all the code in es-2.66.rkt.

This makes it easier to convert the existing database to one structured as a binary tree.

Finally, we can write the new implementation of lookup using element-of-set? as a guide.

(define (lookup given-key set-of-records)
  (cond ((null? set-of-records) #f)
        ((= given-key (key (car set-of-records)))
         (car set-of-records))
        ((< given-key (key (car set-of-records))) 
         (lookup given-key (left-branch set-of-records))) 
        ((> given-key (key (car set-of-records)))
         (lookup given-key (right-branch set-of-records)))))

sicp-ex proposes the same code as Bill, without explanation.
Drewiki's version is more elaborate.
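
To convince myself, I added this little check (mine, not Bill's): the database above is already ordered by key, so list->tree turns it into a balanced tree and the tree-based lookup (the last one defined in es-2.66.rkt) can search it:

(define tree-db (list->tree database)) ; records already ordered by key
(lookup 3 tree-db)  ; → '(3 . Frank)
(lookup 5 tree-db)  ; → #f, no record with key 5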

Following Bill, as well as some new interests of mine, I need to rethink my SICP project 😯

🤩

SICP – chapter 2 – sets and information retrieval – 87

I continue from here, copying from here.

We have examined options for using lists to represent sets and have seen how the choice of representation for a data object can have a large impact on the performance of the programs that use the data. Another reason for concentrating on sets is that the techniques discussed here appear again and again in applications involving information retrieval.

Consider a data base containing a large number of individual records, such as the personnel files for a company or the transactions in an accounting system. A typical data-management system spends a large amount of time accessing or modifying the data in the records and therefore requires an efficient method for accessing records. This is done by identifying a part of each record to serve as an identifying key. A key can be anything that uniquely identifies the record. For a personnel file, it might be an employee’s ID number. For an accounting system, it might be a transaction number. Whatever the key is, when we define the record as a data structure we should include a key selector procedure that retrieves the key associated with a given record.

Now we represent the data base as a set of records. To locate the record with a given key we use a procedure lookup, which takes as arguments a key and a data base and which returns the record that has that key, or false if there is no such record. lookup is implemented in almost the same way as element-of-set? [previous posts]. For example, if the set of records is implemented as an unordered list, we could use

(define (lookup given-key set-of-records)
  (cond ((null? set-of-records) false)
        ((equal? given-key 
                 (key (car set-of-records)))
         (car set-of-records))
        (else 
         (lookup given-key 
                 (cdr set-of-records)))))
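
A minimal example of how this would be used, with a hypothetical key selector and toy records built as (key . data) pairs, just to make the quoted code concrete:

(define (key record) (car record)) ; hypothetical key selector
(define employees                  ; toy unordered database
  (list (cons 22 'Alice) (cons 7 'Bob) (cons 15 'Carol)))

(lookup 7 employees)   ; → '(7 . Bob)
(lookup 99 employees)  ; → #f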

Of course, there are better ways to represent large sets than as unordered lists. Information-retrieval systems in which records have to be “randomly accessed” are typically implemented by a tree-based method, such as the binary-tree representation discussed previously. In designing such a system the methodology of data abstraction can be a great help. The designer can create an initial implementation using a simple, straightforward representation such as unordered lists. This will be unsuitable for the eventual system, but it can be useful in providing a “quick and dirty” data base with which to test the rest of the system. Later on, the data representation can be modified to be more sophisticated. If the data base is accessed in terms of abstract selectors and constructors, this change in representation will not require any changes to the rest of the system.

🤢

SICP – chapter 2 – sets as binary trees – 86

I continue from here, copying from here.

Exercise 2.65: Use the results of Exercise 2.63 [here] and Exercise 2.64 [here] to give Θ(n) implementations of union-set and intersection-set for sets implemented as (balanced) binary trees.

As usual, I hand things over to Bill the Lizard 💥

We implemented union-set for the unordered list representation of sets back in exercise 2.59 [here]. This implementation had to check all elements of one set for each element of the other, so its complexity was O(n²), quite poor. We improved on this in exercise 2.62 [here] when we wrote an implementation of union-set for the ordered list representation of sets, which was O(n). The text supplied a similar implementation of intersection-set that was also O(n). We could use these ordered set implementations as a guide to writing efficient implementations of union-set and intersection-set for balanced binary trees, but that wouldn’t require the results of the previous two exercises. Instead, we can use the O(n) implementations of all of the procedures we’ve built so far to perform the following steps:

  • Convert the balanced binary trees to ordered lists.
  • Perform the desired operation (union-set or intersection-set).
  • Convert the resulting ordered set back to a balanced binary tree.

(define (union-set tree1 tree2)
  (define (union-list set1 set2)
    (cond ((null? set1) set2)
          ((null? set2) set1)
          ((= (car set1) (car set2))
           (cons (car set1) (union-list (cdr set1) (cdr set2))))
          ((< (car set1) (car set2)) 
           (cons (car set1) (union-list (cdr set1) set2))) 
          (else (cons (car set2) (union-list set1 (cdr set2))))))
  (list->tree (union-list (tree->list-2 tree1)
                          (tree->list-2 tree2))))

(define (intersection-set tree1 tree2)
  (define (intersection-list set1 set2)
    (if (or (null? set1) (null? set2))
        '()    
        (let ((x1 (car set1)) (x2 (car set2)))
          (cond ((= x1 x2)
                 (cons x1
                       (intersection-list (cdr set1)
                                          (cdr set2))))
                ((< x1 x2)
                 (intersection-list (cdr set1) set2))
                ((< x2 x1) 
                 (intersection-list set1 (cdr set2)))))))
  (list->tree (intersection-list (tree->list-2 tree1)
                                 (tree->list-2 tree2))))

In the implementations above, I’ve just defined the earlier ordered set implementations of union-set and intersection-set as helper functions named union-list and intersection-list. With these helper functions, all union-set and intersection-set need to do is convert from tree to list and back from list to tree. We can define a few balanced trees to test that these new implementations work as expected.

I collect the code in the file ex-2-65.rkt. Also needed: list->tree, partial-tree, make-tree, tree->list-1 and tree->list-2.
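
As Bill says, a couple of small balanced trees are enough for a test; this is my quick check, assuming all the procedures listed above are loaded:

(define t1 (list->tree '(1 3 5 7)))
(define t2 (list->tree '(3 4 5 6)))

(tree->list-2 (union-set t1 t2))        ; → '(1 3 4 5 6 7)
(tree->list-2 (intersection-set t1 t2)) ; → '(3 5)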

sicp-ex and Drewiki are OK too.

🤢

SICP – chapter 2 – sets as binary trees – 85 – exercise

I continue from here, copying from here.

Exercise 2.64: The following procedure list->tree converts an ordered list to a balanced binary tree. The helper procedure partial-tree takes as arguments an integer n and list of at least n elements and constructs a balanced tree containing the first n elements of the list. The result returned by partial-tree is a pair (formed with cons) whose car is the constructed tree and whose cdr is the list of elements not included in the tree.

(define (list->tree elements)
  (car (partial-tree 
        elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size 
             (quotient (- n 1) 2)))
        (let ((left-result 
               (partial-tree 
                elts left-size)))
          (let ((left-tree 
                 (car left-result))
                (non-left-elts 
                 (cdr left-result))
                (right-size 
                 (- n (+ left-size 1))))
            (let ((this-entry 
                   (car non-left-elts))
                  (right-result 
                   (partial-tree 
                    (cdr non-left-elts)
                    right-size)))
              (let ((right-tree 
                     (car right-result))
                    (remaining-elts 
                     (cdr right-result)))
                (cons (make-tree this-entry 
                                 left-tree 
                                 right-tree)
                      remaining-elts))))))))

Write a short paragraph explaining as clearly as you can how partial-tree works. Draw the tree produced by list->tree for the list (1 3 5 7 9 11).

What is the order of growth in the number of steps required by list->tree to convert a list of n elements?

I won't even attempt this one; I call in Bill the Lizard 💥

The partial-tree procedure works by dividing the list into three parts, a center element (the root node of the tree), everything before the center element, and everything after the center element. All the elements before the center element are then passed to a recursive call to partial-tree to create the left branch of the tree, and all the elements after the center element are passed recursively to partial-tree to create the right branch. These recursive calls continue until no elements remain, and the balanced binary tree is assembled.

The tree produced by list->tree for the list (1 3 5 7 9 11) is:

To verify this, we can simply call the procedure.
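
Since the figure isn't reproduced here, this is the call and the value I'd expect back (5 at the root, 1 and 3 on the left, 7, 9 and 11 on the right), assuming make-tree from the previous posts is loaded along with the code above:

(list->tree '(1 3 5 7 9 11))
; → '(5 (1 () (3 () ())) (9 (7 () ()) (11 () ())))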

Next we’re asked what is the order of growth in the number of steps required by list->tree to convert a list of n elements? The procedure only needs to visit each element of the list once, and it only performs a cons for each element it visits, so the number of steps is proportional to the size of the list, or O(n).

sicp-ex answers similarly, more concisely.
Similar result for Drewiki.
In short, I'm the only one who needs to apply himself more 😐

🤢

SICP – chapter 2 – sets as binary trees – 84 – exercise

I continue from here, copying from here.

Exercise 2.63: Each of the following two procedures converts a binary tree to a list.

(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append 
       (tree->list-1 
        (left-branch tree))
       (cons (entry tree)
             (tree->list-1 
              (right-branch tree))))))

(define (tree->list-2 tree)
  (define (copy-to-list tree result-list)
    (if (null? tree)
        result-list
        (copy-to-list 
         (left-branch tree)
         (cons (entry tree)
               (copy-to-list 
                (right-branch tree)
                result-list)))))
  (copy-to-list tree '()))

  • Do the two procedures produce the same result for every tree? If not, how do the results differ? What lists do the two procedures produce for the trees in Figure 2.16 [previous post]?
  • Do the two procedures have the same order of growth in the number of steps required to convert a balanced tree with n elements to a list? If not, which one grows more slowly?

As usual I hand it over to Bill the Lizard.

The tree->list-1 procedure checks to see if the tree passed in is null, and if so returns an empty list. If the tree is not null, it creates a list by appending the left branch of the tree, the element at the root node, and the right branch of the tree. Elements of the left and right branches are flattened into lists using recursive calls to tree->list-1. The tree->list-2 procedure defines a helper function copy-to-list that takes the tree and a result-list as arguments. When the tree is null, it returns the result-list that was passed in. The copy-to-list helper function also uses recursive calls to the left and right branches of the tree while building the final result list. These two procedures will produce the same results for every tree.

We’re asked to test the two procedures on the trees in figure 2.16.

I put the code, together with the code from the previous post, in es-2.63.rkt.
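
Bill runs both procedures on all three trees of figure 2.16; here is just the first one as a sketch of mine, assuming entry, left-branch and right-branch from the previous post are loaded:

(define tree-a '(7 (3 (1 () ()) (5 () ())) (9 () (11 () ())))) ; first tree of figure 2.16

(tree->list-1 tree-a) ; → '(1 3 5 7 9 11)
(tree->list-2 tree-a) ; → '(1 3 5 7 9 11)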

We can see from these results that both procedures return an in-order traversal for every tree.

We’re also asked if the two procedures have the same order of growth for a balanced tree, and if not, which one grows more slowly?

We can see from the results above and from inspecting the two procedures that each node of the tree is visited one time by each algorithm. What happens at each of those n steps is subtly different though. The second procedure simply calls cons at each step, which we’ll assume is a constant-time operation, so the tree->list-2 procedure has a time complexity of O(n). The first procedure calls append at each step, which we saw in section 2.2.1 has the following definition:

(define (append list1 list2)
  (if (null? list1)
      list2
      (cons (car list1) (append (cdr list1) list2))))

From this definition we can see that the order of growth of append is proportional to the first list argument that’s passed in. In the case of tree->list-1, the first list argument is the left branch of the tree, which is about half of a node’s elements for a balanced tree. This means that for each recursive call, approximately half of the number of nodes will be in the first list argument as in the previous call. Since the number of elements is cut in half on each of the n calls to append, the tree->list-1 procedure has a complexity of O(n log n) for a balanced tree.

Have I already mentioned that 👽 Bill 💥 rockz?

sicp-ex's solution is similar but less didactic.
Drewiki's is more descriptive.

🤢

SICP – chapter 2 – sets as binary trees – 83

I continue from here, copying from here.

We can do better than the ordered-list representation by arranging the set elements in the form of a tree. Each node of the tree holds one element of the set, called the “entry” at that node, and a link to each of two other (possibly empty) nodes. The “left” link points to elements smaller than the one at the node, and the “right” link to elements greater than the one at the node. Figure 2.16 shows some trees that represent the set {1, 3, 5, 7, 9, 11}. The same set may be represented by a tree in a number of different ways. The only thing we require for a valid representation is that all elements in the left subtree be smaller than the node entry and that all elements in the right subtree be larger.

Figure 2.16: Various binary trees that represent the set {1, 3, 5, 7, 9, 11}.

The advantage of the tree representation is this: Suppose we want to check whether a number x is contained in a set. We begin by comparing x with the entry in the top node. If x is less than this, we know that we need only search the left subtree; if x is greater, we need only search the right subtree. Now, if the tree is “balanced,” each of these subtrees will be about half the size of the original. Thus, in one step we have reduced the problem of searching a tree of size n to searching a tree of size n/2. Since the size of the tree is halved at each step, we should expect that the number of steps needed to search a tree of size n grows as Θ(log n). (Halving the size of the problem at each step is the distinguishing characteristic of logarithmic growth, as we saw previously.) For large sets, this will be a significant speedup over the previous representations.

We can represent trees by using lists. Each node will be a list of three items: the entry at the node, the left subtree, and the right subtree. A left or a right subtree of the empty list will indicate that there is no subtree connected there. We can describe this representation by the following procedures:

Note: We are representing sets in terms of trees, and trees in terms of lists—in effect, a data abstraction built upon a data abstraction. We can regard the procedures entry, left-branch, right-branch, and make-tree as a way of isolating the abstraction of a “binary tree” from the particular way we might wish to represent such a tree in terms of list structure.

(define (entry tree) (car tree))
(define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree))
(define (make-tree entry left right)
  (list entry left right))

Now we can write the element-of-set? procedure using the strategy described above:

(define (element-of-set? x set)
  (cond ((null? set) false)
        ((= x (entry set)) true)
        ((< x (entry set))
         (element-of-set? 
          x 
          (left-branch set))) 
        ((> x (entry set))
         (element-of-set? 
          x 
          (right-branch set)))))
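
For instance, a tiny example of mine using the constructors above:

(define t (make-tree 7 (make-tree 3 '() '()) (make-tree 9 '() '())))

(element-of-set? 3 t) ; → #t: 3 < 7, so only the left branch is searched
(element-of-set? 8 t) ; → #f: 8 > 7, then 8 < 9 and 9 has an empty left branch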

Adjoining an item to a set is implemented similarly and also requires Θ(log n) steps. To adjoin an item x, we compare x with the node entry to determine whether x should be added to the right or to the left branch, and having adjoined x to the appropriate branch we piece this newly constructed branch together with the original entry and the other branch. If x is equal to the entry, we just return the node. If we are asked to adjoin x to an empty tree, we generate a tree that has x as the entry and empty right and left branches. Here is the procedure:

(define (adjoin-set x set)
  (cond ((null? set) (make-tree x '() '()))
        ((= x (entry set)) set)
        ((< x (entry set))
         (make-tree
          (entry set)
          (adjoin-set x (left-branch set))
          (right-branch set)))
        ((> x (entry set))
         (make-tree
          (entry set)
          (left-branch set)
          (adjoin-set x (right-branch set))))))

The above claim that searching the tree can be performed in a logarithmic number of steps rests on the assumption that the tree is “balanced,” i.e., that the left and the right subtree of every tree have approximately the same number of elements, so that each subtree contains about half the elements of its parent. But how can we be certain that the trees we construct will be balanced? Even if we start with a balanced tree, adding elements with adjoin-set may produce an unbalanced result. Since the position of a newly adjoined element depends on how the element compares with the items already in the set, we can expect that if we add elements “randomly” the tree will tend to be balanced on the average. But this is not a guarantee. For example, if we start with an empty set and adjoin the numbers 1 through 7 in sequence we end up with the highly unbalanced tree shown in Figure 2.17. In this tree all the left subtrees are empty, so it has no advantage over a simple ordered list. One way to solve this problem is to define an operation that transforms an arbitrary tree into a balanced tree with the same elements. Then we can perform this transformation after every few adjoin-set operations to keep our set in balance. There are also other ways to solve this problem, most of which involve designing new data structures for which searching and insertion both can be done in Θ(log n) steps. (Examples of such structures include B-trees and red-black trees. There is a large literature on data structures devoted to this problem. See Cormen et al. 1990.)

Figure 2.17: Unbalanced tree produced by adjoining 1 through 7 in sequence.
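
The degenerate case is easy to reproduce; this little fold of mine (it assumes Racket's foldl) adjoins 1 through 7 in order and yields exactly the all-right-branches tree of figure 2.17:

(foldl adjoin-set '() '(1 2 3 4 5 6 7))
; → '(1 () (2 () (3 () (4 () (5 () (6 () (7 () ())))))))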

🤢