NumPy – 59 – vectorized string operations – 2

Continuing from here, copying from here.

Example: a recipe database
These vectorized string operations become most useful in the process of cleaning up messy, real-world data. Here I’ll walk through an example of that, using an open recipe database compiled from various sources on the Web. Our goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand.

The scripts used to compile this can be found [on GitHub], and the link to the current version of the database is found there as well.

As of Spring 2016, this database is about 30 MB, and can be downloaded and unzipped with these commands:

Oops! That block was empty; by googling around I found it here.
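As a minimal sketch of what those commands would do, here is a Python version of the download-and-unzip step; the URL below is only a placeholder for the link found above, not the real address of the dump:

import gzip
import shutil
import urllib.request

# Placeholder URL: substitute the actual link to the gzipped recipe dump.
RECIPE_URL = "https://example.com/recipeitems-latest.json.gz"

# Download the archive and decompress it to a plain JSON file.
urllib.request.urlretrieve(RECIPE_URL, "recipeitems-latest.json.gz")
with gzip.open("recipeitems-latest.json.gz", "rb") as f_in, \
     open("recipeitems-latest.json", "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)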


The database is in JSON format, so we will try pd.read_json to read it:
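Something along these lines, assuming the file was saved as recipeitems-latest.json:

import pandas as pd

# A naive attempt to read the whole file at once.
try:
    recipes = pd.read_json("recipeitems-latest.json")
except ValueError as e:
    print("ValueError:", e)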

Oops! We get a ValueError mentioning that there is “trailing data.” Searching for the text of this error on the Internet, it seems that it’s due to using a file in which each line is itself a valid JSON, but the full file is not. Let’s check if this interpretation is true:
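One quick way to check, as a sketch, is to read a single line and parse it on its own:

import io

# Grab the first line of the file and try to parse just that one record.
with open("recipeitems-latest.json") as f:
    line = f.readline()
pd.read_json(io.StringIO(line)).shape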

Yes, apparently each line is a valid JSON, so we’ll need to string them together. One way we can do this is to actually construct a string representation containing all these JSON entries, and then load the whole thing with pd.read_json:
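A sketch of that approach: strip each line, join the lines with commas, wrap the result in brackets so it becomes one JSON array, and read that:

import io

# Join the per-line JSON objects into a single JSON array.
with open("recipeitems-latest.json") as f:
    data = (line.strip() for line in f)
    data_json = "[{0}]".format(",".join(data))

recipes = pd.read_json(io.StringIO(data_json))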

We see there are nearly 200,000 recipes, and 17 columns. Let’s take a look at one row to see what we have:
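Concretely, something like:

recipes.shape      # (number of recipes, 17) – nearly 200,000 rows
recipes.iloc[0]    # inspect a single recipe record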

There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web. In particular, the ingredient list is in string format; we’re going to have to carefully extract the information we’re interested in. Let’s start by taking a closer look at the ingredients:
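A quick way to size them up is the character length of each ingredient string:

# Summary statistics of the ingredient-string lengths.
recipes.ingredients.str.len().describe()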

The ingredient lists average 250 characters long, with a minimum of 0 and a maximum of nearly 10,000 characters!

Just out of curiosity, let’s see which recipe has the longest ingredient list:
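For instance, using idxmax on the string lengths (a sketch):

# Name of the recipe whose ingredient string is the longest.
recipes.name[recipes.ingredients.str.len().idxmax()]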

That certainly looks like an involved recipe.

We can do other aggregate explorations; for example, let’s see how many of the recipes are for breakfast food:
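For example, by searching the description column for the word “breakfast” (na=False treats missing descriptions as non-matches):

recipes.description.str.contains('[Bb]reakfast', na=False).sum()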

Or how many of the recipes list cinnamon as an ingredient:
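Same pattern, this time on the ingredients column:

recipes.ingredients.str.contains('[Cc]innamon', na=False).sum()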

This is the type of essential data exploration that is possible with Pandas string tools. It is data munging like this that Python really excels at.

A simple recipe recommender
Let’s go a bit further, and start working on a simple recipe recommendation system: given a list of ingredients, find a recipe that uses all those ingredients. While conceptually straightforward, the task is complicated by the heterogeneity of the data: there is no easy operation, for example, to extract a clean list of ingredients from each row. So we will cheat a bit: we’ll start with a list of common ingredients, and simply search to see whether they are in each recipe’s ingredient list. For simplicity, let’s just stick with herbs and spices for the time being:
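One possible hand-picked list (the exact choice of herbs and spices here is just an assumption):

spice_list = ['salt', 'pepper', 'oregano', 'sage', 'parsley',
              'rosemary', 'tarragon', 'thyme', 'paprika', 'cumin']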

We can then build a Boolean DataFrame consisting of True and False values, indicating whether this ingredient appears in the list:
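A sketch of that construction, with a case-insensitive substring search per spice:

# One Boolean column per spice: does it appear in the ingredient string?
spice_df = pd.DataFrame({
    spice: recipes.ingredients.str.contains(spice, case=False, na=False)
    for spice in spice_list})
spice_df.head()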

Now, as an example, let’s say we’d like to find a recipe that uses parsley, paprika, and tarragon. We can compute this very quickly using the query() method of DataFrames, discussed in High-Performance Pandas: eval() and query() [coming up later]:
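With the Boolean columns built above, this is a one-liner:

selection = spice_df.query('parsley & paprika & tarragon')
len(selection)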

We find only 10 recipes with this combination; let’s use the index returned by this selection to discover the names of the recipes that have this combination:
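For example:

# Look up the recipe names for the rows selected by the query.
recipes.name[selection.index]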

Now that we have narrowed down our recipe selection by a factor of almost 20,000, we are in a position to make a more informed decision about what we’d like to cook for dinner.

On with the recipes!
Hopefully this example has given you a bit of a flavor (ba-dum!) for the types of data cleaning operations that are efficiently enabled by Pandas string methods. Of course, building a very robust recipe recommendation system would require a lot more work! Extracting full ingredient lists from each recipe would be an important piece of the task; unfortunately, the wide variety of formats used makes this a relatively time-consuming process. This points to the truism that in data science, cleaning and munging of real-world data often comprises the majority of the work, and Pandas provides the tools that can help you do this efficiently.

:mrgreen:
