
Pandas for Data Engineers
by 💡Mike Shakhomirov | Feb 2024


Advanced techniques to process and load data efficiently

AI-generated image using Kandinsky

In this story, I would like to talk about the Pandas features I rely on most in the ETL applications I write to process data. We will touch on exploratory data analysis, data cleansing and data frame transformations, and I will demonstrate some of my favourite techniques for optimizing memory usage and processing large amounts of data efficiently with this library.

Working with relatively small datasets in Pandas is rarely a problem: it handles data frames with ease and provides a very convenient set of commands to process them. For transformations on much bigger data frames (1 GB and more) I would normally reach for Spark and a distributed compute cluster. Spark can handle terabytes and petabytes of data, but running all that hardware costs money. That is why Pandas is often a better choice when we have to deal with medium-sized datasets in environments with limited memory resources.
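To give a flavour of the kind of memory optimization I mean, here is a minimal sketch; the column names, sizes and values are made up for illustration. Inspecting the per-column footprint and then downcasting numeric dtypes and switching low-cardinality strings to `category` can shrink a frame considerably:

```python
import numpy as np
import pandas as pd

# Illustrative only: a small frame standing in for a "medium-sized" dataset.
n = 1_000_000
df = pd.DataFrame({
    "user_id": np.arange(n, dtype="int64"),
    "country": np.random.choice(["GB", "US", "DE"], size=n),
    "amount": np.random.rand(n),
})

# Per-column memory footprint before optimization
print(df.memory_usage(deep=True))

# Downcast numeric columns and use 'category' for low-cardinality strings
df["user_id"] = pd.to_numeric(df["user_id"], downcast="unsigned")
df["amount"] = pd.to_numeric(df["amount"], downcast="float")
df["country"] = df["country"].astype("category")

print(df.memory_usage(deep=True).sum() / 1024**2, "MB after downcasting")
```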

Pandas and Python generators

In one of my previous stories I wrote about how to process data efficiently using generators in Python [1].

It's a simple trick to optimize memory usage. Imagine we have a huge dataset in external storage: it could be a database or just a large CSV file. Say we need to process a 2–3 TB file and apply some transformation to each row of data, and the service performing this task has only 32 GB of memory. That rules out loading the whole file into memory and splitting it line by line with a simple Python split('\n'). The solution is to process the file row by row, yielding each one and freeing the memory before moving on to the next. This lets us create a constantly streaming flow of ETL data into the final destination of our data pipeline, which can be anything: a cloud storage bucket, another database, a data warehouse solution (DWH), a streaming topic or another…
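Here is a minimal sketch of that pattern in Pandas, assuming a CSV with a numeric amount column (the file name, column and destination are hypothetical). `read_csv` with `chunksize` returns an iterator of DataFrames, so each chunk can be transformed and yielded without ever holding the whole file in memory:

```python
import pandas as pd

def stream_chunks(path: str, chunksize: int = 100_000):
    """Yield transformed chunks one at a time instead of loading the whole file.

    read_csv with chunksize returns an iterator of DataFrames, so only a single
    chunk has to live in memory at any given moment.
    """
    for chunk in pd.read_csv(path, chunksize=chunksize):
        # Example transformation applied to each chunk of rows
        chunk["amount"] = chunk["amount"].astype("float32")
        yield chunk

# Consume the generator and push each chunk to its destination as it arrives
# (appending to a local CSV here; in practice this could be a bucket, a
# database table or a streaming topic).
for i, chunk in enumerate(stream_chunks("huge_file.csv")):
    chunk.to_csv(
        "processed.csv",
        mode="w" if i == 0 else "a",
        header=(i == 0),
        index=False,
    )
```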


