How to EFFICIENTLY print an entire DataFrame? [SOLVED]



Default Printing Behavior of Pandas DataFrame

When working with DataFrames in libraries such as pandas in Python, the default behavior when printing is to display a truncated view of the DataFrame. This is especially true for large DataFrames. The truncation ensures that the displayed data is concise and fits within a specific screen or console width and height, making it easier for users to get a quick overview without being overwhelmed by an extensive amount of data.

By default, when you print a DataFrame, pandas will:

  • Display only the first 5 and last 5 rows once the DataFrame exceeds the display.max_rows limit (60 by default).
  • Display a limited number of columns based on the width of your console.
  • Indicate the number of rows and columns that are hidden due to truncation.
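
For illustration, here is a minimal sketch (the exact values are arbitrary) that builds a 100-row, 50-column frame like the one behind the output below:

import string
import pandas as pd

# Build Excel-style column labels: A..Z, then AA..AX (50 in total)
labels = list(string.ascii_uppercase)
labels += ['A' + c for c in string.ascii_uppercase]
labels = labels[:50]

# Each column holds 100 consecutive integers
df = pd.DataFrame({lab: range(k * 100 + 1, k * 100 + 101)
                   for k, lab in enumerate(labels)})
print(df)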

Output:

      A    B  ...   AW    AX
0     1  101  ...  ...  5001
1     2  102  ...  ...  5002
2     3  103  ...  ...  5003
3     4  104  ...  ...  5004
4     5  105  ...  ...  5005
..  ...  ...  ...  ...   ...
95   96  196  ...  ...  5096
96   97  197  ...  ...  5097
97   98  198  ...  ...  5098
98   99  199  ...  ...  5099
99  100  200  ...  ...  5100

[100 rows x 50 columns]

 

Changing Display Options

pandas provides a plethora of options to adjust the display settings of DataFrames. This ensures you can tailor the displayed output to your specific needs, especially when working within various environments, such as Jupyter notebooks, IDEs, or terminal-based interfaces.

 

Adjusting Row and Column Display Limits:

When you have a large DataFrame, pandas truncates the output by default. However, you can change the maximum number of rows and columns displayed with the pd.set_option function.

pd.set_option('display.max_rows', 10)  # Display up to 10 rows

After this, when you print a DataFrame, it will show up to 10 rows. If the DataFrame has more than that, it will truncate the view as per the new settings.

pd.set_option('display.max_columns', 5)  # Display up to 5 columns

If a DataFrame has more than 5 columns, pandas truncates the view, showing only the first and last few columns and replacing the middle ones with an ellipsis.
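
If you only need the full view once, pd.option_context applies display settings temporarily and restores the previous values when the block exits, which is often safer than changing global options. A minimal sketch:

# Show every row and column for this one print, then revert automatically
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(df)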

 

Displaying Maximum Width of Columns:

Sometimes, columns with string values can have very long entries. By default, pandas might truncate these strings to ensure they fit within the display. However, you can adjust this setting to display strings to their maximum length.

pd.set_option('display.max_colwidth', None)  # Display full string without truncation

Alternatively, you can set it to a specific value, like 100, to display up to 100 characters in each column.

pd.set_option('display.max_colwidth', 100)  # Display up to 100 characters per column

 

Handling Large DataFrames

When working with vast amounts of data, effectively managing and displaying the contents of large DataFrames becomes paramount. This is not just about ensuring the data is presented in a clear and concise manner but also about ensuring that the underlying operations remain performant.

Considerations for Printing Extensive DataFrames:

  • Truncated Views: One of the default behaviors in libraries like pandas is to provide a truncated view of large DataFrames. While this is useful to get a quick glimpse, it's essential to be aware that not all data is being shown.
  • Memory Usage: Printing a large DataFrame can take up a significant amount of memory, especially if every value in the DataFrame is rendered at once. It's advisable to be cautious when trying to display the entire content of huge DataFrames.
  • Relevance of Data: When dealing with vast datasets, it's often the case that not all data points are relevant for a given analysis or debugging session. Before printing or displaying, filter the data to what's necessary.
  • Using Sampling Methods: Instead of printing the entire DataFrame, consider using methods like head(), tail(), or sample() to view a small, representative subset of the data.

Imagine you have a large DataFrame generated from a dataset of e-commerce transactions, with columns like TransactionID, Product, Customer, Amount, Date, and Review.

import pandas as pd
import numpy as np

# Generate a sample DataFrame
np.random.seed(0)
df = pd.DataFrame({
    'TransactionID': range(1, 100001),
    'Product': np.random.choice(['Laptop', 'Mobile', 'Headphones', 'Monitor'], 100000),
    'Customer': np.random.choice(['Alice', 'Bob', 'Charlie', 'David', 'Eve'], 100000),
    'Amount': np.random.randint(50, 2000, 100000),
    'Date': pd.date_range(start='2021-01-01', periods=100000, freq='H'),  # hourly ('H' is spelled 'h' from pandas 2.2)
    'Review': np.random.choice([f'Review {i}' for i in range(1, 501)], 100000)
})

Applying these considerations to this example:

  • Truncated Views: If you run print(df), pandas will display only a truncated view, showing the top and bottom rows.
  • Memory Usage: Displaying this DataFrame in a UI-intensive environment like Jupyter Notebook might be taxing and can consume considerable memory.
  • Relevance of Data: If you're only interested in 'Laptop' transactions, it's better to filter: laptop_df = df[df['Product'] == 'Laptop'].
  • Using Sampling Methods: Instead of printing the entire DataFrame, use:
print(df.head())
print(df.sample(5))

Potential Performance Implications:

  • Rendering Lag: If you attempt to display the entire DataFrame in Jupyter (df), it may cause lag or even freeze the interface momentarily.
  • Memory Spikes: Operations that expand this DataFrame's size (like certain joins or pivots) can cause significant memory usage spikes; a quick way to check the frame's current footprint is shown after this list.
  • Computational Overhead: Suppose you want to highlight all transactions above $1000. Running the snippet below on the large df takes noticeable time because the styling function is evaluated for every cell. (Styler.map was named applymap before pandas 2.1.)
df.style.map(lambda x: 'background-color: yellow' if isinstance(x, (int, float)) and x > 1000 else '')
  • Resource Allocation: If you were to keep multiple copies of this DataFrame or continually render it, you'd tie up more system resources.
  • Storage Considerations: Writing this DataFrame to a CSV or Excel might result in a relatively large file, quickly consuming storage space.
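
Before rendering or copying a large frame, it helps to check its footprint first. A quick sketch using memory_usage (deep=True accounts for the true size of object columns):

# Total memory consumed by the frame, in megabytes
mem_mb = df.memory_usage(deep=True).sum() / 1_048_576
print(f'DataFrame uses about {mem_mb:.1f} MB')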

 

Output Formatting

Presenting data in a more readable or desired format is often crucial when analyzing or sharing the data. pandas offers options to customize the display of DataFrames for better readability and precision.

 

Setting Precision for Numerical Data

For numerical data, controlling the number of decimal places can aid in presenting the data more succinctly without overwhelming the reader with too many digits.

You can use the display.precision option in pandas to set the desired precision for floating-point numbers in the DataFrame.

pd.set_option('display.precision', 2)

Now, any floating-point number in the DataFrame will be displayed with two decimal places.

For instance, if your DataFrame contains a column with values [3.14159, 2.71828], they will be displayed as [3.14, 2.72] after setting the precision to 2.
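
A minimal demonstration (the option affects display only; the stored values keep full precision):

pd.set_option('display.precision', 2)
demo = pd.DataFrame({'value': [3.14159, 2.71828]})
print(demo)                    # displays 3.14 and 2.72
print(demo['value'].iloc[0])   # the underlying value is still 3.14159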

 

Displaying Non-Truncated Strings:

As you work with text data, you might encounter columns with long strings. By default, pandas might truncate these strings for concise display. However, sometimes you might need to view the full string.

You can use the display.max_colwidth option to adjust the maximum width of columns. Setting it to None will display the full string without truncation.

pd.set_option('display.max_colwidth', None)

For instance, if you have a DataFrame column with a value "This is a long string that might get truncated in default display settings.", after setting the above option, the full string will be displayed without being cut off.
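
A quick sketch of the effect, first forcing truncation and then lifting it:

text_df = pd.DataFrame({'comment': [
    'This is a long string that might get truncated in default display settings.'
]})
pd.set_option('display.max_colwidth', 20)    # Force truncation for the demo
print(text_df)                               # String is cut off with an ellipsis
pd.set_option('display.max_colwidth', None)  # Remove the limit
print(text_df)                               # Full string is shown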

 

Custom Styling for Better Visibility

The visualization of data plays a pivotal role in understanding and interpreting the information it contains. While pandas is primarily known for data manipulation, it also offers a rich set of tools for customizing how data is displayed, making it easier to highlight important aspects and make insights more evident.

 

Using the style Property to Customize Appearance:

pandas provides the style property on a DataFrame, which returns a Styler object. This object has useful methods to apply styles on the data.

You can use the background_gradient method to apply a gradient color to the data based on its value. This is particularly useful for numerical data to easily visualize high and low values.

df.style.background_gradient(cmap='Blues')

This will apply a blue color gradient based on the values in the DataFrame. Note that background_gradient relies on matplotlib colormaps, so matplotlib must be installed, and the styled output renders as HTML (for example, in Jupyter notebooks).

 

Highlighting Specific Cells or Rows:

You can use various methods provided by the Styler object to highlight specific cells or rows based on certain conditions.

The highlight_max method can be used to highlight the highest values in each column.

df.style.highlight_max(color='yellow')

Similarly, you can use the highlight_min method to focus on the lowest values.

df.style.highlight_min(color='lightblue')

The map method (named applymap before pandas 2.1) can be used for more complex, element-wise styling based on custom conditions. In this example, cells with numeric values greater than 1000 get a red background.

def highlight_large_amounts(val):
    # Called once per cell; guard against non-numeric values
    return 'background-color: red' if isinstance(val, (int, float)) and val > 1000 else ''

df.style.map(highlight_large_amounts)

While map works element-wise, the apply method styles entire rows or columns at once. This example highlights the maximum value in each row with a yellow background.

def highlight_max_in_row(row):
    # Boolean mask marking where the row attains its maximum
    is_max = row == row.max()
    return ['background-color: yellow' if cell else '' for cell in is_max]

df.style.apply(highlight_max_in_row, axis=1)
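
Styled output is HTML, so it won't show in a plain terminal; one option is to write it to a file and open it in a browser. A sketch (the file name is arbitrary, and the frame is assumed numeric as above):

styled = df.style.apply(highlight_max_in_row, axis=1)
with open('styled_output.html', 'w') as f:
    f.write(styled.to_html())  # Styler.to_html is available from pandas 1.3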

 

Printing Specific Sections of a DataFrame

Often, when working with large datasets, it's neither feasible nor desirable to view the entire DataFrame. Instead, you might want to focus on specific sections to get a quick snapshot or to validate certain data points. Here's how you can achieve this using pandas:

 

1. Previewing the Top and Bottom Rows:

By default, the head() method displays the top 5 rows of the DataFrame. However, you can specify the number of rows you want to view.

print(df.head())       # Shows top 5 rows
print(df.head(10))     # Shows top 10 rows

Similar to head(), the tail() method displays the last 5 rows by default. You can specify the number of rows to view.

print(df.tail())       # Shows last 5 rows
print(df.tail(10))     # Shows last 10 rows

 

2. Random Sampling:

The sample() method lets you randomly select rows from the DataFrame.

print(df.sample())         # Randomly picks 1 row
print(df.sample(5))        # Randomly picks 5 rows

 

3. Conditional Display:

Printing specific parts of a DataFrame based on conditions can be incredibly useful for focused data analysis.

Let's say you want to view rows where the Age is above 30.

print(df[df['Age'] > 30])

To view rows where the Age is above 30 and the Department is 'HR':

print(df[(df['Age'] > 30) & (df['Department'] == 'HR')])
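
The query method expresses the same filters as strings, which many find more readable for compound conditions:

print(df.query('Age > 30'))
print(df.query("Age > 30 and Department == 'HR'"))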

 

Efficient ways to view or print large DataFrames without lag or system crashes

Handling large DataFrames can be challenging, especially if you attempt to display them directly, as it can slow down or even crash your Python environment due to the sheer volume of data. Here are some efficient methods to preview or inspect parts of a large DataFrame without facing such issues:

 

1. Chunked Reading:

Instead of reading a large file all at once, consider reading it in chunks.

chunk_size = 50000  # Number of rows per chunk
chunks = pd.read_csv('large_dataset.csv', chunksize=chunk_size)
for chunk in chunks:
    print(chunk.head())  # Preview the first few rows of each chunk
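
A common pattern is to reduce each chunk as it arrives and combine only what you need. The sketch below assumes a Product column as in the earlier transactions example:

pieces = []
for chunk in pd.read_csv('large_dataset.csv', chunksize=50000):
    # Keep only the rows of interest from each chunk
    pieces.append(chunk[chunk['Product'] == 'Laptop'])
laptop_df = pd.concat(pieces, ignore_index=True)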

 

2. Memory Optimization:

When reading from a file, using the dtype parameter can help optimize memory usage.

column_types = {
    'column1': 'int32',
    'column2': 'float32',
    'column3': 'category'
}
df = pd.read_csv('large_dataset.csv', dtype=column_types)

 

3. Sparse Data Structures:

If your DataFrame contains a lot of missing or zero values, converting it to a sparse format can save memory. This works on numeric columns; choose a fill_value matching the value that dominates (NaN by default, or 0 for zero-heavy data).

sparse_df = df.astype(pd.SparseDtype('float'))                # NaN-heavy numeric data
sparse_df = df.astype(pd.SparseDtype('float', fill_value=0))  # mostly-zero numeric data

 

4. Selective Column Loading:

If you only need specific columns, don't load the entire DataFrame.

required_columns = ['column1', 'column3']
df = pd.read_csv('large_dataset.csv', usecols=required_columns)

 

5. Use .info() for an Overview:

The .info() method provides a concise summary of the DataFrame, showing column dtypes, non-null counts, and memory usage.

df.info()                     # Quick structural summary
df.info(memory_usage='deep')  # Include the true size of object columns

 

6. Random Sampling:

Instead of displaying the entire DataFrame, inspect a random sample.

print(df.sample(1000))  # Randomly selects 1000 rows

 

7. Utilize Dask:

Dask is a parallel computing library that integrates with pandas, allowing you to work with large DataFrames in a way that utilizes multiple cores of your machine.

import dask.dataframe as dd
dask_df = dd.read_csv('large_dataset.csv')
print(dask_df.head())
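
Dask evaluates lazily: operations build a task graph, and nothing runs until you call .compute(). A sketch assuming an Amount column as in the earlier example:

# Define the aggregation lazily, then trigger the parallel computation
mean_amount = dask_df['Amount'].mean().compute()
print(mean_amount)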

 

8. Limit Output Display in Notebooks:

If you're working in Jupyter or another notebook environment, set display options to limit the number of rows/columns displayed.

pd.set_option('display.max_rows', 50)
pd.set_option('display.max_columns', 20)

 

Exporting the DataFrame

While in-depth analysis and visualization often happen within your chosen programming environment, you will frequently need to share results or data with people who aren't using the same tools. This is where exporting your DataFrame to various file formats becomes essential. With pandas, you can seamlessly export your data to a range of commonly used formats.

Writing to CSV:

CSV (Comma Separated Values) is one of the most popular formats for data interchange. It's simple, widely supported, and often used for data import/export in various tools.

df.to_csv('filename.csv', index=False)

The index=False parameter ensures that the DataFrame's index isn't written to the file. If you want the index, you can omit this parameter.
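
For large DataFrames, writing compressed output can shrink the file considerably. pandas infers the codec from the file extension, or you can pass it explicitly:

df.to_csv('filename.csv.gz', index=False, compression='gzip')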

 

Writing to Excel:

For stakeholders or colleagues who primarily work with spreadsheets, exporting your DataFrame to an Excel format can be extremely helpful.

Before using the Excel functions in pandas, you might need to install the openpyxl package:

pip install openpyxl

Then:

df.to_excel('employees.xlsx', sheet_name='Employees', index=False)

This will create an Excel file named employees.xlsx with your data in a sheet named 'Employees'.
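
If you need several DataFrames in one workbook, pd.ExcelWriter writes multiple sheets; a sketch with a hypothetical second frame:

with pd.ExcelWriter('report.xlsx') as writer:
    df.to_excel(writer, sheet_name='Employees', index=False)
    summary_df.to_excel(writer, sheet_name='Summary', index=False)  # summary_df is hypothetical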

 

Writing to SQL Database:

If you're integrating your analysis with a database system, pandas allows you to write your DataFrame directly to a SQL database. First, install the necessary package:

pip install sqlalchemy

Now, you can use the following code to write to an SQLite database:

from sqlalchemy import create_engine

engine = create_engine('sqlite:///db.sqlite3')
df.to_sql('table_name', con=engine, index=False, if_exists='replace')

This example uses SQLite as a database, but pandas supports various database systems through SQLAlchemy.
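
To verify the round trip, you can read the table straight back into a DataFrame:

# Read the table back from the database
df_check = pd.read_sql('table_name', con=engine)
print(df_check.head())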

 

Using the set_option() Method:

The set_option() method in pandas allows you to customize various display options for your DataFrame. The most common use case is to adjust the maximum number of rows and columns displayed.

import pandas as pd

# Display all rows
pd.set_option('display.max_rows', None)

# Display all columns
pd.set_option('display.max_columns', None)

# Set the total display width to 100 characters
pd.set_option('display.width', 100)

# Left-justify column headers
pd.set_option('display.colheader_justify', 'left')

# Show floating-point numbers with 5 decimal places
pd.set_option('display.precision', 5)
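
These settings are global and persist for the rest of the session; pd.reset_option restores the defaults when you are done:

pd.reset_option('display.max_rows')  # Restore a single option
pd.reset_option('all')               # Restore every option (may emit a warning)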

 

Display entire DataFrame with Tabulate:

One of the challenges faced when working with DataFrames is presenting the data in a manner that is both legible and aesthetically pleasing. The tabulate library bridges this gap by rendering pandas DataFrames in various text formats. Because tabulate formats every row you pass it, it also sidesteps pandas' truncation and prints the entire DataFrame. Here, we'll explore some of the popular formats provided by tabulate.

To begin, ensure you have tabulate installed:

pip install tabulate

Next, create a sample DataFrame for demonstration:

import pandas as pd

data = {
    'Name': ['Alice', 'Bob', 'Charlie', 'David'],
    'Age': [25, 30, 35, 40],
    'Department': ['HR', 'Finance', 'Engineering', 'Marketing']
}
df = pd.DataFrame(data)

Display in Plain Text Format with Tabulate:

from tabulate import tabulate
print(tabulate(df, headers='keys', tablefmt='plain'))

Display in PostgreSQL (psql) Style with Tabulate:

print(tabulate(df, headers='keys', tablefmt='psql'))

Display in GitHub Markdown Format with Tabulate:

print(tabulate(df, headers='keys', tablefmt='github'))

Display in reStructuredText (RST) Format with Tabulate:

print(tabulate(df, headers='keys', tablefmt='rst'))

Display in TSV (Tab-Separated Values) Format with Tabulate:

print(tabulate(df, headers='keys', tablefmt='tsv'))

Display in HTML Format with Tabulate:

print(tabulate(df, headers='keys', tablefmt='html'))
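
By default, tabulate prints the DataFrame index as the first column; pass showindex=False to drop it:

print(tabulate(df, headers='keys', tablefmt='psql', showindex=False))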

 

Summary

DataFrames, an integral part of pandas, serve as one of the most powerful tools in the realm of data analysis. The flexibility they offer in representing, manipulating, and displaying data is unparalleled. However, working with DataFrames, especially large ones, necessitates a nuanced understanding of various display and management techniques.

This article delved deep into the world of DataFrames, guiding the reader from understanding the default printing behavior of pandas, where DataFrames are often truncated for brevity, to various techniques to modify this behavior, such as adjusting display options and employing the tabulate library for improved formatting.

Further, the piece shed light on strategies to view specific sections of a DataFrame, be it the top, bottom, a random sample, or conditional slices. These techniques are vital for focused data inspection without being overwhelmed by voluminous information.

A significant emphasis was on managing and efficiently displaying large DataFrames, a challenge many data analysts face. The article addressed this with techniques ranging from chunked reading, selective column loading, memory optimization, to the introduction of parallel computing with Dask.

 
