Various methods of importing data in Python

pandas is the Python library/package of choice to import, wrangle, and manipulate datasets. The datasets come in various forms; the most frequent being in the .csv format. The delimiter (a special character that separates the values in a dataset) in a CSV file is a comma. Now we will look at the various methods in which you can read a dataset in Python.

Case 1 – reading a dataset using the read_csv method

Open an IPython Notebook by typing ipython notebook in the command line.

Download the Titanic dataset from the shared Google Drive folder (any of .xls or .xlsx would do). Save this file in a CSV format and we are good to go. This is a very popular dataset that contains information about the passengers travelling on the famous ship Titanic on the fateful sail that saw it sinking. If you wish to know more about this dataset, you can go to the Google Drive folder and look for it.

A common practice is to share a variable description file with the dataset, describing the context and significance of each variable. Since this is the first dataset we are encountering in this book, here is the data description of this dataset to get a feel for what data description files actually look like:

Note

VARIABLE DESCRIPTIONS:
pclass          Passenger Class
                (1 = 1st; 2 = 2nd; 3 = 3rd)
survival        Survival
                (0 = No; 1 = Yes)
name            Name
sex             Sex
age             Age
sibsp           Number of Siblings/Spouses Aboard
parch           Number of Parents/Children Aboard
ticket          Ticket Number
fare            Passenger Fare
cabin           Cabin
embarked        Port of Embarkation
                (C = Cherbourg; Q = Queenstown; S = Southampton)
boat            Lifeboat
body            Body Identification Number
home.dest       Home/Destination

The following code snippet is enough to import the dataset and get you started:

 import pandas as pd
 data = pd.read_csv('E:/Personal/Learning/Datasets/Book/titanic3.csv')

The read_csv method

The name of the method doesn't unveil its full might. It is a kind of misnomer in the sense that it makes us think that it can be used to read only CSV files, which is not the case. Various kinds of files, including .txt files with delimiters of various kinds, can be read using this method.

Let's learn a little bit more about the various arguments of this method in order to assess its true potential. Although the read_csv method has close to 30 arguments, the ones listed in the next section are the ones that are most commonly used.

The general form of a read_csv statement is something similar to:

pd.read_csv(filepath, sep=',', dtype=None, header=None, skiprows=None, index_col=None, skip_blank_lines=True, na_filter=True)

Now, let us understand the significance and usage of each of these arguments one by one (a short sketch combining several of them follows the list):

  • filepath: filepath is the complete address of the dataset or file that you are trying to read. The complete address includes the address of the directory in which the file is stored and the full name of the file with its extension. Remember to use a forward slash (/) in the directory address. Later in this chapter, we will see that the filepath can be a URL as well.
  • sep: sep allows us to specify the delimiter for the dataset to be read. By default, the method assumes that the delimiter is a comma (,). Other commonly used delimiters are the blank space ( ) and the tab ('\t'); the corresponding datasets are called space-delimited or tab-delimited datasets. This argument of the method also takes regular expressions as a value.
  • dtype: Sometimes certain columns of the dataset need to be formatted to some other type in order to apply certain operations successfully. One example is date variables. Very often, they have a string type, which needs to be converted to a date type before we can apply date-related operations to them. The dtype argument is used to specify the data type of the columns of the dataset. Suppose two columns, a and b, of the dataset need to be formatted to the types float64 and int32; this can be achieved by passing {'a': np.float64, 'b': np.int32} as the value of dtype. If not specified, the columns are left in the format originally inferred.
  • header: The value of a header argument can be an integer or a list. Most of the time, datasets have a header containing the column names. The header argument is used to specify which row is to be used as the header. By default, the first row is the header, which can be represented as header=0. If one doesn't specify the header argument, it is as good as specifying header=0. If one specifies header=None, the method will read the data without a header containing the column names.
  • names: The column names of a dataset can be passed as a list using this argument. This argument takes lists or arrays as its values. It is very helpful in cases where there are many columns and the column names are available separately as a list. We can pass the list of column names as the value of this argument, and the column names in the list will be applied.
  • skiprows: The value of a skiprows argument can be an integer or a list. Using this argument, one can skip a certain number of rows at the top of the file; for example, skiprows=10 will read in the data from the 11th row, and the rows before that will be ignored.
  • index_col: The value of an index_col argument can be an integer or a sequence. By default, no row labels will be applied. This argument allows one to use a column as the row labels for the rows in a dataset.
  • skip_blank_lines: The value of a skip_blank_lines argument takes Boolean values only. If its value is specified as True, the blank lines are skipped rather than being interpreted as NaN (Not a Number, used to represent missing values; we shall discuss them in detail soon) values. By default, its value is set to True.
  • na_filter: The value of an na_filter argument takes Boolean values only. It detects markers for missing values (empty strings and NA values) and converts them to NaN; setting it to False turns this detection off, which can make a significant difference in performance while importing large datasets.
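The following is a minimal sketch combining several of these arguments while reading the Titanic dataset; the file path is the same one used earlier in this chapter, and the dtype mapping simply forces the fare column (present in the data description above) to a floating-point type:

import pandas as pd
import numpy as np
data = pd.read_csv('E:/Personal/Learning/Datasets/Book/titanic3.csv',
                   sep=',',                      # comma is the default delimiter
                   header=0,                     # the first row holds the column names
                   dtype={'fare': np.float64},   # force the fare column to float64
                   index_col=None,               # do not use any column as row labels
                   skip_blank_lines=True,        # skip blank lines instead of reading NaN rows
                   na_filter=True)               # keep the default missing-value detection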

Use cases of the read_csv method

The read_csv method can be put to a variety of uses. Let us look at some such use cases.

Passing the directory address and filename as variables

Sometimes it is easier and more practical to pass the directory address and filename as variables to avoid hard-coding, especially when one intends to use the full address of the file many times. Let us see how we can do so while importing a dataset.

import pandas as pd
path = 'E:/Personal/Learning/Datasets/Book'
filename = 'titanic3.csv'
fullpath = path+'/'+filename
data = pd.read_csv(fullpath)

For such cases, alternatively, one can use the following snippet, which uses the os.path.join method of the os module:

import pandas as pd
import os
path = 'E:/Personal/Learning/Datasets/Book'
filename = 'titanic3.csv'
fullpath = os.path.join(path,filename)
data = pd.read_csv(fullpath)

One advantage of using the latter approach is that os.path.join inserts the correct path separator for the operating system, so one doesn't have to worry about adding (or accidentally duplicating) the slash between the directory address and the filename.

Reading a .txt dataset with a comma delimiter

Download the Customer Churn Model.txt dataset from the Google Drive folder and save it on your local drive. To read this dataset, the following code snippet will do:

import pandas as pd
data = pd.read_csv('E:/Personal/Learning/Datasets/Book/Customer Churn Model.txt')

As you can see, although it's a text file, it can be read easily using the read_csv method without even specifying any other argument of the method.

Specifying the column names of a dataset from a list

We just read the Customer Churn Model.txt file in the last segment with the default column names. But what if we want to rename some or all of the column names? Or what if the column names are not there already and we want to assign names to the columns from a list (let's say, available in a CSV file)?

Look for a CSV file called Customer Churn Columns.csv in the Google Drive folder and download it. I have put English letters as placeholders for the column names in this file. We shall use this file to create a list of column names to be passed on to the dataset. You can change the names in the CSV file, if you like, and see how they are incorporated as column names.

The following code snippet will give the column names of the dataset we just read:

import pandas as pd
data = pd.read_csv('E:/Personal/Learning/Datasets/Book/Customer Churn Model.txt')
data.columns.values

If you run it on one of the IDEs, you should get the following screenshot as the output:

Fig. 2.2: The column names in the Customer Churn Model.txt dataset

This basically lists all the column names of the dataset. Let us now go ahead and change the column names to the names we have in the Customer Churn Columns.csv file.

data_columns = pd.read_csv('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Columns.csv')
data_column_list = data_columns['Column_Names'].tolist()
data=pd.read_csv('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.txt',header=None,names=data_column_list)
data.columns.values

The output after running this snippet should look like the following screenshot (if you haven't made any changes to the values in the Customer Churn Columns.csv file):

Fig. 2.3: The column names from the Customer Churn Columns.csv file, which have been passed to the data frame data

The key steps in this process are:

  • Sub-setting the particular column (containing the column names) and converting it to a list—done in the second line
  • Passing header=None and names=<name of the list containing the column names> (data_column_list in this case) in the read_csv method

If some of the terms, such as sub-setting, don't make sense now, just remember that sub-setting is the act of selecting a combination of particular rows or columns of the dataset. We will discuss this in detail in the next chapter.
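As a quick, hedged preview (assuming the data frame data read above), each of the following lines selects a piece of the dataset; the column names 'A' and 'B' are hypothetical placeholders and should be replaced with names actually present in data.columns.values:

one_column = data['A']          # select a single column as a Series
two_columns = data[['A', 'B']]  # select two columns as a data frame
first_rows = data[0:10]         # select the first ten rows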

Case 2 – reading a dataset using the open method of Python

pandas is a very robust and comprehensive library to read, explore, and manipulate a dataset. But it might not give optimal performance with very big datasets, as it reads the entire dataset all at once and occupies a large chunk of the computer's memory. Instead, you can try one of Python's file-handling methods, open. One can read the dataset line by line or in chunks by running a for loop over the rows and delete the chunks from memory once they have been processed. Let us look at some use case examples of the open method.

Reading a dataset line by line

As you might be aware, while reading a file using the open method, we can specify a particular mode, that is, read, write, and so on. By default, the method opens a file in the read mode. This approach can be useful while reading a big dataset, as the data is read line by line (not all at once, unlike what pandas does). You can also read datasets in chunks using this method.

Let us now go ahead and open a file using the open method and count the number of rows and columns in the dataset:

data=open('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.txt','r')
cols=data.next().strip().split(',')   # read the header line and split it into column names
no_cols=len(cols)                     # the number of columns

A couple of points about this snippet:

  • 'r' has been explicitly mentioned and hence the file will be opened in the read mode. To open it in the write mode, one needs to pass 'w' in place of 'r'.
  • The next method reads one line from the file each time it is called; the first call here reads the header line. The strip method is used to remove all the trailing and leading blank spaces from the line. The split method breaks down a line into chunks separated by the argument provided to it; in this case, it is ','.

Finding the number of rows is a bit tedious, but here lies the key trick to reading a huge file in chunks:

counter=0

main_dict={}
for col in cols:
    main_dict[col]=[]

Basically, we are doing the following two tasks in the preceding code snippet:

  • Defining a counter variable that will increment its value by 1 on passing each line and hence will count the number of rows/lines at the end of the loop
  • Defining a dictionary called main_dict with column names as the keys and the values in the columns as the values of the dictionary

Now, we are all set to run a for loop over the lines in the dataset to determine the number of rows in the dataset:

for line in data:
    values = line.strip().split(',')
    for i in range(len(cols)):
        main_dict[cols[i]].append(values[i])
    counter += 1

print "The dataset has %d rows and %d columns" % (counter,no_cols)

The explanation of the code-snippet is as follows:

  1. Running a for loop over the lines in the dataset and splitting the lines in the values by ','. These values are nothing but the values contained in each column for that line (row).
  2. Running a second for loop over the columns for each line and appending the column values to the main_dict dictionary, which we defined in the previous step. So, for each key of the main_dict dictionary, all the column values are appended together. Each key of the main_dict becomes the column name of the dataset, while the values of each key in the dictionary are the values in each column.
  3. Printing the number of rows and columns of the dataset that are contained in counter and no_cols respectively.

The main_dict dictionary, in a way, contains all the information in the dataset; hence, it can be converted to a data frame because, as we have already read in this chapter, a dictionary can be converted to a data frame using the DataFrame method in pandas. Let us do that:

import pandas as pd
df=pd.DataFrame(main_dict)
print df.head(5)

For a large file, this process can be repeated after a certain number of lines, say 10,000; in this way, the file can be read in and processed in chunks.
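A hedged sketch of such chunk-wise processing, using the same open-method pattern as above, is shown next; the chunk size of 10,000 lines and the process_chunk function are illustrative placeholders for whatever processing one actually needs to do:

import pandas as pd

chunk_size = 10000
filepath = 'E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.txt'

data = open(filepath, 'r')
cols = data.next().strip().split(',')      # read the header line once
chunk = {col: [] for col in cols}
rows_in_chunk = 0
for line in data:
    values = line.strip().split(',')
    for i in range(len(cols)):
        chunk[cols[i]].append(values[i])
    rows_in_chunk += 1
    if rows_in_chunk == chunk_size:
        df_chunk = pd.DataFrame(chunk)
        # process_chunk(df_chunk)          # placeholder for the actual processing
        chunk = {col: [] for col in cols}  # let the processed chunk go out of memory
        rows_in_chunk = 0
if rows_in_chunk > 0:                      # process whatever is left over
    df_chunk = pd.DataFrame(chunk)
    # process_chunk(df_chunk)
data.close()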

Changing the delimiter of a dataset

Earlier in this chapter, we said that juggling and managing delimiters is a great skill to master. Let us see one example of how we can change the delimiter of a dataset.

The Customer Churn Model.txt has comma (',') as a delimiter. It looks something similar to the following screenshot:

Fig. 2.4: A chunk of Customer Churn Model.txt dataset with default delimiter comma (',')

Note that any special character can be a delimiter. Let us change the delimiter from a comma to a tab ('\t'):

infile='E:/Personal/Learning/Datasets/Book/Customer Churn Model.txt'
outfile='E:/Personal/Learning/Datasets/Book/Tab Customer Churn Model.txt'
with open(infile) as infile1:
  with open(outfile,'w') as outfile1:
    for line in infile1:
      fields=line.split(',')
      outfile1.write('\t'.join(fields))

This code snippet will generate a file called Tab Customer Churn Model.txt in the specified directory. The file will have a tab ('\t') delimiter and will look something similar to the following screenshot:

Fig. 2.5: A chunk of the Tab Customer Churn Model.txt dataset with the changed delimiter ('\t')

The code snippet can be explained as follows:

  1. Creating two variables called infile and outfile. The infile variable is the one whose delimiter we wish to change and outfile is the one in which we will write the results after changing the delimiter.
  2. The infile is opened in the read mode, while outfile is opened in the write mode.
  3. The lines in the infile are split based on the existing delimiter that is ',' and the chunks are called fields. Each line will have several fields (equal to the number of columns).
  4. The lines in the outfile are created by joining the fields of each line, separated by the new delimiter of our choice, that is, '\t'.
  5. The file is written into the directory specified in the definition of the outfile.

To demonstrate that the read_csv method, as described earlier, can be used to read datasets with a delimiter other than a comma, we will try to read the tab-delimited dataset we just created:

import pandas as pd
data=pd.read_csv('E:/Personal/Learning/Datasets/Book/Tab Customer Churn Model.txt',sep='\t')

Case 3 – reading data from a URL

Often, we need to read data directly from a web URL. This URL might contain the data written in it or might contain a file which has the data. For example, navigate to this website, http://winterolympicsmedals.com/, which lists the medals won by various countries in different sports during the Winter Olympics. Now type the following address in the URL address bar: http://winterolympicsmedals.com/medals.csv.

A CSV file will be downloaded automatically. If you choose to download it manually, saving it and then specifying the directory path for the read_csv method is a time-consuming process. Instead, Python allows us to read such files directly from the URL. Apart from the significant saving in time, it is also beneficial to loop over the files when there are many such files to be downloaded and read in, as the short sketch after the next example shows.

A simple read_csv statement is required to read the data directly from the URL:

import pandas as pd
medal_data=pd.read_csv('http://winterolympicsmedals.com/medals.csv')
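If there are several such files hosted at different URLs, it is easy to loop over them and read each one directly. Here is a hedged sketch in which the second URL is a purely hypothetical placeholder:

import pandas as pd

urls=['http://winterolympicsmedals.com/medals.csv',
      'http://example.com/another_dataset.csv']    # hypothetical placeholder URL

frames=[]
for url in urls:
    frames.append(pd.read_csv(url))    # read each file directly from its URL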

Alternatively, to work with URLs to get data, one can use a couple of Python packages which we have not used till now, that is, csv and urllib2. The readers can go to the documentation of these packages to learn more about them. It is sufficient to know that csv provides a range of methods to handle CSV files, while urllib2 is used to navigate and access information from a URL. Here is how it can be done:

import csv
import urllib2

url='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
response=urllib2.urlopen(url)
cr=csv.reader(response)

for rows in cr:
  print rows

The working of the preceding code snippet can be explained in the following two points:

  1. The urlopen method of the urllib2 library creates a response that can be read in using the reader method of the csv library.
  2. This instance is an iterator and can be iterated over its rows.

The csv module is very helpful in dealing with CSV files. It can be used to read the dataset row by row, or in other words, iterate over the dataset among other things. It can be used to write to CSV files as well.
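As a hedged sketch of the writing side, the following snippet copies the rows read from the same URL into a local CSV file using the writer method of the csv module; the output path is just an example and should point to a directory that exists on your machine:

import csv
import urllib2

url='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
response=urllib2.urlopen(url)
cr=csv.reader(response)

# Open the output file in binary write mode, as the csv module in Python 2 expects
outfile=open('E:/Personal/Learning/Datasets/Book/iris_copy.csv','wb')
cw=csv.writer(outfile)
for row in cr:
    cw.writerow(row)    # write each row of the remote file into the local copy
outfile.close()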

Case 4 – miscellaneous cases

Apart from the standard cases described previously, there are certain less frequent cases of data file handling that might need to be taken care of. Let's have a look at two of them.

Reading from an .xls or .xlsx file

Go to the Google Drive folder and look for the .xls and .xlsx versions of the Titanic dataset. They will be named titanic3.xls and titanic3.xlsx. Download both of them and save them on your computer. The ability to read Excel files, with all their sheets, is a very powerful feature available in pandas. It is done using the read_excel method, as shown in the following code:

import pandas as pd
data=pd.read_excel('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/titanic3.xls','titanic3')

import pandas as pd
data=pd.read_excel('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/titanic3.xlsx','titanic3')

It works with both .xls and .xlsx files. The second argument of the read_excel method is the name of the sheet that you want to read in.
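If the sheet names are not known beforehand, the ExcelFile class of pandas can be used to list them and read each sheet in turn. Here is a hedged sketch, assuming the same titanic3.xls file as above:

import pandas as pd
xls = pd.ExcelFile('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/titanic3.xls')
print xls.sheet_names                                           # the names of all sheets in the workbook
sheets = {name: xls.parse(name) for name in xls.sheet_names}    # one data frame per sheet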

Another available method to read delimited data is read_table. read_table is very similar to read_csv, differing only in some of its default arguments; most notably, its default delimiter is a tab ('\t'). In some sense, read_table is a more generic form of read_csv.
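For instance, the tab-delimited file created earlier in this chapter can be read with read_table without specifying the separator at all; a hedged one-liner, assuming the file exists at the path where we wrote it:

import pandas as pd
data = pd.read_table('E:/Personal/Learning/Datasets/Book/Tab Customer Churn Model.txt')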

Writing to a CSV or Excel file

A data frame can be written to a CSV or an Excel file using the to_csv or to_excel method in pandas, respectively. Let's go back to the df data frame that we created in Case 2 – reading a dataset using the open method of Python. This data frame can be exported to a CSV file in a directory, as shown in the following code:

df.to_csv('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.csv')

Or to an Excel file, as follows:

df.to_excel('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.xlsx')
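By default, both methods also write the data frame's row index as an extra column in the output file. If that is not desired, the index argument can be set to False, as in this hedged one-liner:

df.to_csv('E:/Personal/Learning/Predictive Modeling Book/Book Datasets/Customer Churn Model.csv', index=False)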