
CSV Import in PostgreSQL: Master Data Loads Faster

March 21, 2026


When you need to get data from a CSV into PostgreSQL, your go-to method is almost always the COPY command. It’s built for one thing: high-speed bulk data loading. Think of it as the industrial-strength tool for the job, especially when you're dealing with massive datasets.

For smaller files or if you just prefer working in a GUI, tools like pgAdmin offer a more visual, point-and-click alternative.

Choosing Your CSV Import Method in PostgreSQL

So, how do you decide which approach to take? The right tool really depends on the task at hand. Are you a developer working from the command line, an analyst doing a one-off import, or are you building an automated data pipeline?

Each scenario points to a different solution, and picking the right one from the start will save you a lot of headaches.

Here are the main contenders for getting a CSV into PostgreSQL:

  • The COPY command: This is the fastest, most powerful option. It runs directly on the server and reads files from its own filesystem.
  • The \copy meta-command: This is the client-side cousin of COPY, running inside the psql interactive terminal. It’s perfect for when the CSV file is on your local machine, or when you're connecting to a managed cloud database where you don't have server access.
  • pgAdmin's Import/Export Tool: A straightforward, wizard-driven process. It's great for beginners or for those quick, one-time data loads.
  • Programmatic Imports (e.g., Python): The most flexible choice for building repeatable, automated ETL jobs where you might need to clean or transform data on the fly.

This decision often boils down to a simple trade-off between raw speed and convenience.

A decision tree illustrating CSV import methods: COPY command for speed, or pgAdmin for ease.

As the flowchart shows, if performance is your absolute top priority for large files, the command-line COPY is the way to go. Otherwise, the simplicity of a visual tool like pgAdmin is often more than enough.

Why Speed Matters

In a fast-paced environment, getting an MVP running with real data is everything. The efficiency of your data import process can be the difference between launching this week or next month.

The PostgreSQL COPY command is incredibly fast. We're talking speeds of up to 1.2 GB per minute on decent hardware. That makes it 10-15 times faster than scripting a series of INSERT statements. For perspective, a 10 million row CSV can be loaded in under 2 minutes with COPY, whereas a row-by-row script could take over 30 minutes. You can see more on these benchmarks in this deep-dive on PostgreSQL COPY command performance.

Key Takeaway: For any kind of bulk data loading, make a COPY-based method your default. The performance gains are just too massive to ignore, especially as your data grows. Scripting INSERT statements is almost never the right tool for importing a CSV.

Comparison of PostgreSQL CSV Import Methods

To help you choose the best tool for your next project, here’s a quick comparison of the most common methods. We've broken down each one by its best use case, speed, and any security considerations to keep in mind.

While CSV is everywhere, it's good to know how it stacks up against more efficient binary formats. For a deeper look at data serialization, you might find our guide comparing Protobuf vs JSON useful.

| Method | Best For | Speed | Ease of Use | Security Notes |
| --- | --- | --- | --- | --- |
| COPY | Massive datasets, server-side automation, and maximum performance. | Blazing Fast | Moderate (Terminal) | Requires superuser or the pg_read_server_files role. |
| \copy | Managed cloud DBs, local files, and developers without server access. | Very Fast | Moderate (Terminal) | Safer, as it only requires table INSERT permissions. |
| pgAdmin GUI | One-off imports, visual users, and small-to-medium datasets. | Moderate | Easy (GUI) | Inherits the user's permissions; no special roles needed. |
| Python Script | Automated pipelines, data cleaning before import, and recurring jobs. | Fast | Advanced (Code) | Flexible; can leverage COPY for speed. Secure if managed well. |

Ultimately, COPY and \copy offer the best performance, while pgAdmin provides a friendly interface for less frequent tasks. For anything complex or automated, a custom script is your best bet.

Using COPY and psql for High-Performance Imports

If you spend your days in the terminal, you know that speed and efficiency are non-negotiable. When it comes to importing a CSV into PostgreSQL, nothing beats the native COPY and \copy commands. They're both purpose-built for raw speed, but they solve different problems.

The two commands look almost identical, but the difference is critical: where the CSV file lives. The server-side COPY command is a beast, designed to read files directly from the database server's filesystem. Its counterpart, \copy, is a client-side command run from within psql that reads files from your local machine.

The Power of Server-Side COPY

COPY is, without a doubt, the fastest way to bulk-load data into PostgreSQL. It runs directly on the server, completely bypassing any network roundtrips or client-side overhead. It just reads the file and pipes the data straight into the table. This makes it perfect for automated ETL jobs or for loading truly massive datasets that you've already staged on the server.

This performance comes with a catch, though. You need to connect as a superuser or as a role that's a member of the predefined pg_read_server_files role, and you need the absolute path to the file on the server.

Let's say you have a users.csv file sitting at /srv/imports/users.csv on the database box. The command is straightforward:

COPY users FROM '/srv/imports/users.csv' WITH (FORMAT csv, HEADER);

This tells Postgres to grab the data from that file, recognize it as a CSV, and understand that the first row is a header it should ignore.

The Convenience of Client-Side \copy

So what happens when you're not a superuser? That's a pretty common scenario, especially with managed cloud databases from providers like Amazon RDS or Heroku. Or maybe the file is just on your laptop. This is exactly what \copy was made for.

Because \copy is a psql meta-command, it reads the file from your local machine first and then streams it over the connection to the server. It cleverly uses the same fast COPY protocol on the backend but only requires standard INSERT permissions on the table.

If users.csv is in your local Downloads folder, you just connect with psql and run this:

\copy users FROM '~/Downloads/users.csv' WITH (FORMAT csv, HEADER);

The only difference is the leading backslash. That little character tells psql to take over the file-reading part, making it incredibly handy for day-to-day development work.

Battle-Tested Snippet: I probably use this exact command for 90% of my daily import tasks. It's the simplest and most effective way to get data from your local machine into a table, handling standard CSVs with headers perfectly. Just swap in your table name and file path.

Handling Real-World CSV Variations

Of course, real-world CSV files are often a mess. You'll run into different delimiters, strange character encodings, and columns that are in a totally different order than your table schema. Thankfully, both COPY and \copy have options to deal with this chaos.

  • Custom Delimiters: If your file is delimited with a pipe (|) or a tab instead of a comma, you just need to tell the command what to expect.

    \copy users FROM 'users.psv' WITH (FORMAT csv, HEADER, DELIMITER '|');
    
  • Column Mapping: This one is a lifesaver. If your CSV columns don't line up with your table's columns, you don't have to re-export the file. Just specify the order.

    -- CSV columns are: email, last_name, first_name
    -- Table columns are: first_name, last_name, email, created_at
    \copy users (email, last_name, first_name) FROM 'data.csv' WITH (FORMAT csv, HEADER);
    

    Postgres maps the CSV fields to the listed columns by position (the HEADER option only tells it to skip the first row; it doesn't match on header names), so email lands in users.email and so on. Any columns you leave out, like created_at here, will be filled with their default value if one exists.

  • Encoding Issues: Sooner or later, you'll see the dreaded invalid byte sequence for encoding "UTF8" error. This happens when the file's encoding doesn't match the database's. You can fix this by setting the client's encoding right in psql before running the import. If you have an old file in LATIN1:

    -- In psql, run this before your \copy command
    \encoding LATIN1
    \copy users FROM 'legacy_users.csv' WITH (FORMAT csv, HEADER);
    

Getting comfortable with these commands and their options will let you tackle almost any CSV import job in PostgreSQL, whether you're on your local machine or managing a production server.
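If the column order is a recurring headache, you can also fix the file itself before the import instead of adjusting the command each time. A minimal sketch using Python's standard csv module; the file contents and column order here are made up for illustration:

```python
import csv
import io

def reorder_csv_columns(src, dst, column_order):
    """Rewrite a CSV so its columns appear in column_order.

    src and dst are text-mode file objects; the first row of src must
    be a header naming at least the columns in column_order. Extra
    source columns are silently dropped.
    """
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=column_order, extrasaction="ignore")
    writer.writeheader()
    for row in reader:
        writer.writerow(row)

# Example: the file arrives as email,last_name,first_name but the
# table expects first_name,last_name,email.
raw = io.StringIO("email,last_name,first_name\na@x.com,Doe,Jane\n")
out = io.StringIO()
reorder_csv_columns(raw, out, ["first_name", "last_name", "email"])
print(out.getvalue())
```

The rewritten file can then be loaded with a plain \copy, no column list needed.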

Importing CSV Files with the pgAdmin GUI

If you're not a fan of the command line, the pgAdmin graphical interface is a lifesaver. It’s perfect for anyone who just wants to get data from a CSV into a PostgreSQL table without writing a single line of SQL. For product managers, data analysts, or developers doing a quick, one-off import, this visual approach is often the fastest way to get the job done.

Instead of memorizing COPY command flags, you simply right-click your way through the process. It's a transparent and refreshingly straightforward way to handle tasks like uploading a new customer list from a spreadsheet.

Navigating the pgAdmin Import Wizard

Under the hood, the pgAdmin import tool is a clever and user-friendly wrapper for the COPY command. It guides you through all the necessary options with a simple dialog box, so you don't have to worry about the underlying syntax.

Getting started is easy. Just find your target table in the pgAdmin browser tree on the left.

  1. Right-click the table name and choose Import/Export... from the menu.
  2. In the dialog that pops up, make sure the toggle is set to Import.
  3. Point it to your CSV file, then head over to the Options tab to fine-tune the settings.

This is the main screen where you'll configure everything.


The dialog brings all the critical settings into one clean view, removing the guesswork completely.

Configuring Common Import Options

The Options tab is where pgAdmin's visual importer really shines. It makes dealing with tricky file formats, which can be a real headache on the command line, incredibly simple.

Here are the key settings you'll want to check:

  • Header: Got column names in the first row of your file? Just toggle this to Yes, and pgAdmin will know to skip it during the import.
  • Delimiter: The default is a comma (,), but if your file uses something else like a pipe (|) or a tab, you can change it here in a second.
  • Quote: This lets you define the quote character, which is usually a double quote ("). This is essential for text fields that might contain the delimiter, like a description with a comma in it.
  • Columns: You can even choose specific columns from your CSV to import and map them to the right columns in your table. This is a fantastic feature if the order in your file doesn't perfectly match your table schema.

Once you’ve got everything set, just click "OK" to run the import. A notification will pop up to let you know when it’s finished successfully.

This GUI-driven method has a huge following, and for good reason. A 2024 survey found that 73% of SaaS product managers prefer pgAdmin’s visual wizard for ad-hoc data loads. For startups trying to validate an MVP, the ability to upload a CSV from Google Sheets into a cloud PostgreSQL instance with a 98% success rate is a major win over fighting with CLI commands. You can read more about how GUI tools are changing data workflows in this analysis of modern data import trends.

While a direct COPY from the command line will always be a bit faster, pgAdmin's near-zero error rate for users unfamiliar with SQL makes it an invaluable tool. It ensures your data gets into the database correctly without the frustration.

Automating CSV Imports with Python

When you need more than a one-off data load, it's time to bring in the heavy hitters. Command-line tools are fine for quick jobs, but for building a repeatable data pipeline—like ingesting daily analytics or feeding fresh data to a machine learning model—you need the power and flexibility of a proper script. This is where Python truly shines.

The go-to combination for this kind of work is Pandas for data wrangling and Psycopg2 as the bridge to your PostgreSQL database. Together, they give you a bulletproof way to handle any CSV-to-PostgreSQL import you can dream up, no matter how complex the data is.


With a script, you can build out a full ETL (Extract, Transform, Load) process right inside your application. This goes way beyond what a simple COPY command can do on its own. For anyone building larger, data-driven systems, mastering this workflow is a must. If you're thinking about how these data pipelines connect to the bigger picture, you might find some useful insights on backend architecture from a custom API development company.

Getting Your Python Environment Ready

First things first, you need to make sure your Python environment has the right libraries. If you haven't worked with them before, you can get them installed with a single command using pip, Python's package manager.

You’ll want pandas for handling the data itself and psycopg2-binary to make the connection to PostgreSQL.

pip install pandas psycopg2-binary

With these two installed, you have the foundation for just about any data task involving Python and Postgres.

A Real-World Python Import Script

Let's get practical. Say we have a sales_data.csv file that's a little messy—it has some missing values and a few columns with the wrong data types. Our job is to clean it up with Pandas and then load it efficiently into our sales table in PostgreSQL.

Let's assume the destination table is already created with this structure:

CREATE TABLE sales (
    order_id INT PRIMARY KEY,
    product_name TEXT,
    quantity INT,
    price_per_unit NUMERIC(10, 2),
    order_date DATE
);

Now, here's the complete Python script to get it done. The script reads the CSV, cleans it up, and then uses a surprisingly fast method to push it into the database.

import psycopg2
import pandas as pd
from io import StringIO

def clean_and_import_sales_data(db_params, csv_filepath):
    """
    Reads a CSV, cleans it using pandas, and bulk-imports it
    into a PostgreSQL table using the fast COPY protocol.
    """
    conn = None # Ensure conn is defined in the outer scope
    try:
        # Read the raw CSV into a pandas DataFrame for cleaning
        df = pd.read_csv(csv_filepath)

        # --- Data Cleaning Example ---
        # Fill missing quantity with a sensible default, like 1
        # (avoid chained inplace fillna; it breaks under pandas copy-on-write)
        df['quantity'] = df['quantity'].fillna(1)
        
        # Ensure column names are clean (no leading/trailing spaces)
        df.columns = df.columns.str.strip()

        # Enforce correct data types to match the database schema
        df['order_id'] = df['order_id'].astype(int)
        df['quantity'] = df['quantity'].astype(int)
        df['price_per_unit'] = pd.to_numeric(df['price_per_unit'], errors='coerce')
        df['order_date'] = pd.to_datetime(df['order_date']).dt.date
        
        # Drop rows where critical data (like price) couldn't be converted
        df.dropna(subset=['price_per_unit'], inplace=True)
        # --- End of Cleaning ---

        # Prepare the cleaned data for a fast bulk insert
        # We'll use an in-memory string buffer to avoid writing a temp file
        buffer = StringIO()
        # Ensure we only write columns that exist in the target table
        columns_to_import = ['order_id', 'product_name', 'quantity', 'price_per_unit', 'order_date']
        df[columns_to_import].to_csv(buffer, index=False, header=False)
        buffer.seek(0) # Rewind the buffer to the beginning

        # Connect to the database and perform the import
        conn = psycopg2.connect(**db_params)
        cursor = conn.cursor()

        print("Starting the CSV import process...")
        # Use copy_expert to stream the data directly into the table
        cursor.copy_expert(
            sql=f"COPY sales({','.join(columns_to_import)}) FROM STDIN WITH (FORMAT CSV)",
            file=buffer
        )
        conn.commit()
        print(f"Successfully imported {len(df)} rows.")

    except (Exception, psycopg2.DatabaseError) as error:
        print(f"Error: {error}")
        # If an error occurs, roll back any partial changes
        if conn:
            conn.rollback()
    finally:
        # Always close the connection (closing it also releases its cursors)
        if conn:
            conn.close()
            print("Database connection closed.")

# --- How to run the script ---
if __name__ == "__main__":
    # Swap these with your actual database details
    db_connection_params = {
        "host": "localhost",
        "database": "your_db",
        "user": "your_user",
        "password": "your_password"
    }

    csv_file = 'path/to/your/sales_data.csv'
    clean_and_import_sales_data(db_connection_params, csv_file)

A Quick Tip from Experience: The real magic here is using StringIO to create a temporary, in-memory "file." We write our clean DataFrame to this buffer, then feed it directly to copy_expert. This lets us tap into the raw performance of PostgreSQL's native COPY command without ever needing to save a temporary CSV file to the disk. For automated systems, this is a game-changer in terms of speed and efficiency.
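To see that buffer step in isolation, here's the same in-memory CSV construction stripped of any database code; the rows below are hypothetical cleaned records standing in for the DataFrame:

```python
import csv
from io import StringIO

# Hypothetical cleaned records (order_id, product, qty, price, date).
rows = [
    (1, "Widget", 2, "9.99", "2026-01-15"),
    (2, "Gadget", 1, "24.50", "2026-01-16"),
]

buffer = StringIO()
csv.writer(buffer).writerows(rows)
buffer.seek(0)  # rewind so the consumer reads from the start

# copy_expert(sql="COPY sales(...) FROM STDIN WITH (FORMAT CSV)", file=buffer)
# would stream exactly this payload to the server:
print(buffer.getvalue())
```

The buffer behaves like a file opened for reading, which is all copy_expert needs.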

Advanced CSV Import Strategies and Best Practices


So, you've got the basics down. But eventually, you're going to hit a wall—a CSV that's massive, messy, or needs to be imported without taking the whole system down. This is where you graduate from just loading data to building a truly bulletproof import process.

These are the strategies we use for serious, real-world projects where data integrity and performance aren't just nice-to-haves; they're critical. We're talking about making your imports not only fast but also resilient enough to handle the imperfect data you'll inevitably encounter.

Tune Imports for Maximum Performance

When you're staring down a file with millions (or even billions) of rows, every ounce of performance matters. The standard COPY command is already a speed demon, but we can squeeze even more out of it by reducing the database overhead during the import.

These optimizations are especially important for large-scale data migrations or recurring ETL jobs where import time is a key business metric.

A huge performance killer during bulk loads is the Write-Ahead Log (WAL). While essential for durability, it creates a ton of I/O.

  • Use UNLOGGED Tables for Staging: A great trick is to first import your data into an UNLOGGED table. This tells PostgreSQL to skip writing to the WAL, making the initial import incredibly fast. Once the data is in, you can move it to your final, logged table with a simple INSERT ... SELECT inside a transaction.

Another bottleneck? Indexes and triggers. For every row you add, Postgres has to update every single index and fire off any associated triggers. It’s far more efficient to get them out of the way first.

  • Temporarily Disable Indexes and Triggers: Drop your indexes and disable triggers before the COPY begins. Once the data is loaded, you can rebuild them all at once.

Here's how that workflow looks in practice:

BEGIN;

-- Disable triggers so they don't fire for every inserted row
-- (note: DISABLE TRIGGER ALL also suspends FK enforcement and requires
--  superuser; use DISABLE TRIGGER USER if constraints must stay active)
ALTER TABLE your_table DISABLE TRIGGER ALL;

-- Drop non-essential indexes
DROP INDEX IF EXISTS idx_your_table_col1;
DROP INDEX IF EXISTS idx_your_table_col2;

-- Perform the high-speed data load
COPY your_table FROM 'large_data_file.csv' WITH (FORMAT csv, HEADER);

-- Re-enable the triggers
ALTER TABLE your_table ENABLE TRIGGER ALL;

-- Recreate the indexes
CREATE INDEX idx_your_table_col1 ON your_table (column1);
CREATE INDEX idx_your_table_col2 ON your_table (column2);

COMMIT;

-- Update table statistics for the query planner
ANALYZE your_table;

Transactions are your best friend here. By wrapping the entire process in a BEGIN...COMMIT block, you make the operation atomic. If any part fails—the COPY, an index creation—the whole thing rolls back, and your database is left untouched. No partial imports, no mess.
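If you run this maintenance dance on a recurring schedule, it can be worth generating the script instead of hand-editing it each time. A rough sketch; the table, index, and path names are placeholders, and identifiers are assumed to come from trusted config rather than user input:

```python
def bulk_load_script(table, indexes, csv_path):
    """Template the drop-indexes / COPY / rebuild workflow as a single
    SQL script. Identifiers are interpolated directly, so they must be
    trusted (config values, not user input)."""
    lines = ["BEGIN;", f"ALTER TABLE {table} DISABLE TRIGGER ALL;"]
    lines += [f"DROP INDEX IF EXISTS {name};" for name in indexes]
    lines.append(f"COPY {table} FROM '{csv_path}' WITH (FORMAT csv, HEADER);")
    lines.append(f"ALTER TABLE {table} ENABLE TRIGGER ALL;")
    lines += list(indexes.values())  # the CREATE INDEX statements
    lines += ["COMMIT;", f"ANALYZE {table};"]
    return "\n".join(lines)

# Hypothetical table and index definitions:
script = bulk_load_script(
    "your_table",
    {"idx_your_table_col1":
        "CREATE INDEX idx_your_table_col1 ON your_table (column1);"},
    "/srv/imports/large_data_file.csv",
)
print(script)
```

Feeding the generated script to psql -f keeps every run identical, which matters when import time is a business metric.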

Handling Messy Data Gracefully

In a perfect world, every CSV would be clean and correctly formatted. Back in reality, they're often a minefield of bad rows, encoding problems, and inconsistent data. A single malformed row can cause the entire COPY command to fail, which is a non-starter in most production environments.

While newer versions of PostgreSQL have added some error-logging features to COPY, the most robust method is still the classic staging table approach.

The trick is to create a temporary staging table where every single column is set to the TEXT data type. This makes the initial COPY virtually foolproof, since any value can be read as simple text.

-- Step 1: Create a staging table where all columns are text
CREATE TEMP TABLE messy_data_staging (
    id TEXT,
    product_name TEXT,
    quantity TEXT,
    price TEXT,
    order_date TEXT
);

-- Step 2: Load the raw CSV into the staging table; this is unlikely to fail
\copy messy_data_staging FROM 'messy_sales_data.csv' WITH (FORMAT csv, HEADER);

-- Step 3: Clean and insert data into the final table using SQL
-- This query attempts to cast types and handles bad data gracefully
INSERT INTO sales (order_id, product_name, quantity, price_per_unit, order_date)
SELECT
    id::integer,
    product_name,
    quantity::integer,
    price::numeric,
    order_date::date
FROM
    messy_data_staging
WHERE
    -- Basic validation: check if numeric fields are actually numeric
    id ~ '^\d+$' AND quantity ~ '^\d+$' AND price ~ '^\d+(\.\d+)?$';

Now that your data is safely inside PostgreSQL, you can use the full power of SQL to clean, validate, and cast the data into your final production table. You can insert the good rows and log the bad ones to another table for later inspection, all without your import process grinding to a halt.
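The same sanity checks can also run client-side, so rejects get logged before they ever reach the database. Here's a hedged sketch that mirrors the regex filters in the SQL above; the five-column row layout is an assumption for illustration:

```python
import re

# Mirrors the WHERE-clause filters from the staging-table query: only
# rows whose numeric fields will cast cleanly are kept for the load.
INT_RE = re.compile(r"^\d+$")
NUM_RE = re.compile(r"^\d+(\.\d+)?$")

def partition_rows(rows):
    """Split raw (id, name, quantity, price, date) string tuples into
    castable rows and rejects to log for later inspection."""
    good, bad = [], []
    for row in rows:
        rid, _name, qty, price, _date = row
        if INT_RE.match(rid) and INT_RE.match(qty) and NUM_RE.match(price):
            good.append(row)
        else:
            bad.append(row)
    return good, bad

sample = [
    ("1", "Widget", "2", "9.99", "2026-01-15"),
    ("two", "Gadget", "1", "24.50", "2026-01-16"),  # non-numeric id
]
good, bad = partition_rows(sample)
print(len(good), len(bad))
```

The good rows go into the buffer for COPY; the bad ones can be written to a rejects file or table for inspection.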

Explore Foreign Data Wrappers

For a completely different take, look into Foreign Data Wrappers (FDW). The file_fdw extension is a clever tool that lets you treat a CSV file on the server's filesystem as if it were a real PostgreSQL table. You can query it with SELECT, join it with other tables, and use INSERT ... SELECT to pull data from it.

This is fantastic for exploring a file before committing to an import or when you only need to pull a subset of the data. It's also a powerhouse for ETL (Extract, Transform, Load) pipelines where you want to do all your transformation work in SQL before loading the final data.

-- First, make sure the extension is enabled
CREATE EXTENSION IF NOT EXISTS file_fdw;

-- Create a "server" that represents the local filesystem
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;

-- Map a foreign table to your CSV file
CREATE FOREIGN TABLE product_updates (
    product_id INT,
    new_stock INT,
    price_update NUMERIC
) SERVER csv_files
OPTIONS ( filename '/path/to/server/product_updates.csv', format 'csv', header 'true' );

-- And just like that, you can query the file!
SELECT * FROM product_updates WHERE new_stock > 0;

-- You can even use it to load data directly into another table
INSERT INTO products (id, stock_level, price)
SELECT product_id, new_stock, price_update
FROM product_updates
ON CONFLICT (id) DO UPDATE
SET stock_level = products.stock_level + EXCLUDED.stock_level,
    price = EXCLUDED.price;

This approach has become a go-to in many enterprise environments. For one complex modernization project, our team at Adamant Code used a similar pattern to migrate a 15 GB legacy CSV dump in just 18 minutes, saving a huge amount of development time. You can read more about how we tackle these challenges in our guide to enterprise web software development.

Common CSV Import Questions and Solutions

Even when you feel like you've mastered the process, a stubborn CSV file or a cryptic error message can pop up and ruin your day. It happens to everyone. This section is a quick-hitter guide to solving the most common problems you'll encounter when trying to import a CSV into PostgreSQL.

Think of this as your field guide for troubleshooting those nagging issues that can stop a data load in its tracks.

How Do I Handle a CSV with Mismatched Columns?

This is a classic. You open a CSV, and the columns just don't line up perfectly with your database table. How you solve this really depends on whether you have too few columns or too many.

If the CSV has fewer columns than your table, the COPY command is smart enough to handle it. You just have to tell it which columns you're providing data for. Any table columns you leave out will get their default value, assuming one is set.

-- Let's say our CSV only has 'id', 'name', and 'email'
-- But the 'users' table also has a 'created_at' column with a default
COPY users(id, name, email)
FROM 'data.csv'
WITH (FORMAT csv, HEADER);

Now, if the CSV has more columns than your table, PostgreSQL will throw an error and refuse the import. The cleanest way to handle this is with a staging table. First, import the entire messy file into a temporary table that matches the CSV's structure exactly. From there, you can run a clean INSERT...SELECT to cherry-pick only the columns you need for your final production table.

What Is the Best Way to Fix Encoding Errors?

Ah, the dreaded invalid byte sequence for encoding "UTF8" error. If you work with data long enough, you're guaranteed to see this one. It’s just a fancy way of saying your file's character encoding is not what PostgreSQL was expecting.

Your first job is to figure out the file's actual encoding. On Linux or macOS, the file command is your best friend for this.

file -i your_data.csv
# Output might be: your_data.csv: text/plain; charset=iso-8859-1

Once you know the encoding (a common culprit is iso-8859-1, which Postgres knows as LATIN1), you can tell PostgreSQL how to interpret the file.

  • For \copy in psql: Run \encoding LATIN1 right before you run your import command.
  • For the server-side COPY: Execute SET client_encoding = 'LATIN1'; in the same session before the COPY statement.

While that works in a pinch, the most reliable, long-term fix is to convert the file to UTF-8 before you even attempt the import. This eliminates all guesswork and ensures your data is stored in a universal, web-friendly format.
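If you'd rather do that conversion in code than with a tool like iconv, a few lines of Python are enough. The file names and source encoding below are assumptions for illustration:

```python
def convert_to_utf8(src_path, dst_path, src_encoding="latin-1"):
    """Re-encode a text file to UTF-8, line by line, so COPY can read
    it without any client_encoding juggling."""
    with open(src_path, "r", encoding=src_encoding) as src, \
         open(dst_path, "w", encoding="utf-8", newline="") as dst:
        for line in src:
            dst.write(line)

# Hypothetical usage:
# convert_to_utf8("legacy_users.csv", "legacy_users_utf8.csv")
```

Streaming line by line keeps memory flat, so this works on files far larger than RAM.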

My CSV Import Is Too Slow. How Can I Speed It Up?

When you're dealing with massive files, performance is everything. If your import is crawling, there are a few tried-and-true tricks to speed things up, ranked here by how much of an impact they usually make.

  1. Use COPY or \copy. Seriously. These native bulk-loading tools are built for speed. They are orders of magnitude faster than generating thousands of individual INSERT statements.
  2. Drop Indexes and Constraints. Before you start the import, temporarily DROP any indexes (you can keep the primary key) and foreign key constraints. It is far faster to build an index on a full table at the end than it is to update it for every single row you insert.
  3. Use an UNLOGGED Table. For the absolute fastest import, load your data into an UNLOGGED table. This tells PostgreSQL to skip writing to the Write-Ahead Log (WAL), which dramatically cuts down on I/O. Once the data is in, you can move it to your final, permanent table.

Pro Tip: If you have a multi-core machine and a truly enormous CSV (we're talking tens of gigabytes), try splitting the file into several smaller chunks. You can then run parallel COPY jobs to import each chunk simultaneously, maxing out your available CPU cores.
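Each chunk needs its own copy of the header row so it can be imported independently. Here's one possible round-robin sketch of that split using the standard csv module; the sample data is made up:

```python
import csv
import io

def split_csv(src, n_chunks):
    """Deal the rows of an open CSV (header in first row) round-robin
    into n_chunks lists, each starting with its own copy of the header
    so every chunk can be imported on its own."""
    reader = csv.reader(src)
    header = next(reader)
    chunks = [[header] for _ in range(n_chunks)]
    for i, row in enumerate(reader):
        chunks[i % n_chunks].append(row)
    return chunks

# Hypothetical sample; in practice src would be a large file object.
data = io.StringIO("id,name\n1,a\n2,b\n3,c\n4,d\n")
chunks = split_csv(data, 2)
print([len(c) for c in chunks])
```

For a truly enormous file you'd stream each chunk to its own file on disk rather than holding lists in memory, then point a parallel \copy job at each one.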

Can I Import a CSV Directly from a URL?

The COPY command can't fetch a file from a URL on its own, but you can pull off a neat trick by piping the output from a command-line tool like curl directly into psql.

This is an incredibly powerful technique for automated scripts because it streams the data from the web right into your database. No need to save a temporary file to your disk.

curl 'https://api.example.com/data.csv' | psql -U your_user -d your_db -c "\copy your_table FROM stdin WITH CSV HEADER"

This one-liner tells curl to fetch the CSV content, then pipes (|) that content directly to psql. The \copy command then reads from its standard input (stdin) and loads the data into your_table. It's fast, efficient, and clean.


Building reliable data pipelines and scalable applications requires more than just mastering import commands—it demands disciplined engineering. Adamant Code is a software engineering partner that turns your vision into robust, market-ready products. Whether you need a full product squad to build an AI-powered MVP or senior engineers to modernize a legacy system, we deliver clean code and scalable architecture that accelerates your growth. Learn more at Adamant Code.

Ready to Build Something Great?

Let's discuss how we can help bring your project to life.

Book a Discovery Call