Loading CSV Data in Rails: Then vs Now


Note (2025): This post was originally written in the Rails 2.3.8 era. While the overall idea of loading CSV data into Rails models remains relevant, the code shown is outdated. FasterCSV has been merged into Ruby’s built-in CSV class, and Rails now provides bulk insert methods (insert_all, upsert_all) that make the process far more efficient. What follows is both the legacy approach and the modern Rails way.


The Legacy Approach (Rails 2.x)

Back in the Rails 2.3.8 days, we often needed to load data from multiple files into our models. Each file represented a dataset, and every row had to be mapped to attributes on the model.

Example workflow:

  1. Map file names to models.
  2. Iterate over rows in each CSV.
  3. Save records row by row.

files = {
  "users.csv" => User,
  "products.csv" => Product
}

files.each do |file, model|
  FasterCSV.foreach(file, :headers => true) do |row|
    model.create!(row.to_hash)
  end
end

This worked, but it was:

  - Slow: model.create! issued one INSERT per row, so N rows meant N queries.
  - Dependent on the external FasterCSV gem.
  - Heavy: every row ran the full validation and callback chain, even for trusted bulk data.

The Modern Rails Way (Rails 7/8)

In modern Rails (7+), you can rely on built-in CSV and bulk insert methods:

require "csv"

files = {
  "users.csv" => User,
  "products.csv" => Product
}

files.each do |file, model|
  rows = CSV.read(file, headers: true).map(&:to_h)
  model.insert_all(rows)
end
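The row-to-hash step can be tried in plain Ruby, outside Rails. The sketch below uses made-up sample data to show the array-of-hashes shape that insert_all expects:

```ruby
require "csv"

# Hypothetical sample data, parsed with headers so each row maps
# column names to values.
data = <<~CSV
  name,email
  Alice,alice@example.com
  Bob,bob@example.com
CSV

rows = CSV.parse(data, headers: true).map(&:to_h)
# rows is an array of string-keyed hashes, one per data row:
# [{"name"=>"Alice", "email"=>"alice@example.com"},
#  {"name"=>"Bob",   "email"=>"bob@example.com"}]
```

Because the keys come straight from the CSV header row, the header names must match your column names (or be remapped) before handing the hashes to insert_all.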

Why this is better:

  - No external gem: Ruby's built-in CSV class handles the parsing.
  - row.to_h converts each row to an attribute hash with no custom helper.
  - insert_all writes all rows in a single bulk INSERT instead of N queries.
Quick Comparison

| Feature | Rails 2.x (Legacy) | Rails 7/8 (Modern) |
| --- | --- | --- |
| CSV Parsing | FasterCSV gem | Ruby's built-in CSV class |
| Row → Hash Conversion | Custom helper | row.to_h built-in |
| Saving Data | model.create! per row | insert_all / upsert_all |
| Performance | Slow (N queries) | Fast (bulk insert in 1 query) |
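For files too large to slurp with CSV.read, the same bulk write can be batched. This sketch is not from the original post: it streams rows with CSV.foreach and groups them with each_slice; the tiny batch size and the collected batches array stand in for the model.insert_all call you would make in Rails:

```ruby
require "csv"
require "tempfile"

# Tiny batch size so the demo produces more than one batch; in practice
# a few hundred to a few thousand rows per insert_all call is typical.
BATCH_SIZE = 2

# Self-contained stand-in for users.csv (hypothetical data).
file = Tempfile.new(["users", ".csv"])
file.write("name,email\nAlice,a@x.com\nBob,b@x.com\nCarol,c@x.com\n")
file.rewind

batches = []
CSV.foreach(file.path, headers: true).each_slice(BATCH_SIZE) do |slice|
  rows = slice.map(&:to_h)
  batches << rows            # in Rails: model.insert_all(rows)
end
file.close!

# Three data rows with BATCH_SIZE = 2 yield two batches.
```

CSV.foreach reads one line at a time, so memory use stays flat regardless of file size, while each insert_all call still writes a whole batch in one query.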

Takeaway

If you are still loading CSV data row by row, switch to Ruby's built-in CSV class and insert_all / upsert_all. The code is shorter, drops the gem dependency, and turns N INSERT queries into a single bulk write.