If you are running a Rails application that is more than a few years old, there is a very high probability that you are using ActiveModel::Serializers (AMS).
You also probably know that AMS is “dead”. The repository has been archived, maintainers have moved on, and 0.10.x is effectively the final version.
Yet, despite this well-known fact, AMS remains one of the most deployed gems in the ecosystem. Why? Because migration is terrifying. Serialization is the final mile of your API; getting it wrong breaks mobile apps, frontends, and third-party integrations. Furthermore, AMS, specifically when configured with caching, is surprisingly fast. It’s hard to justify a rewrite when the “dead” code is still serving requests in sub-millisecond time.
We recently faced this dilemma. Our AMS implementation was working, but it was a dead end. We wanted to move to something modern, maintained, and standards-compliant, but we couldn’t afford to regress on performance.
Here is the story of how we pitted the incumbents against the challengers and why we ultimately chose Alba.
The Contenders
We narrowed our search to three primary options:
ActiveModel::Serializers (The Incumbent)
- Pros: Deeply integrated, supports view caching, “it just works.”
- Cons: Unmaintained, memory-heavy, implicit behavior (“magic”).
- Variants Tested: We tested both “Standard” (instantiating new serializers per request) and “Cached” (fetching pre-computed JSON strings from Memcached).
Blueprinter
- Pros: Declarative, widely used, good documentation.
- Cons: Syntax is slightly different from the “Rails way,” performance is good but generally trails Alba.
Alba
- Pros: Modern, explicitly designed for speed, highly compatible with the Oj JSON library.
- Cons: Newer ecosystem, requires strict definition of resources.
The Benchmark: Real-World Data, Not “Hello World”
Micro-benchmarks are often misleading. Serializing a simple User object with a name and email is easy for any library.
To make an informed decision, we created a benchmark representing our heaviest real-world scenario: a full E-commerce Order.
Our test setup involved:
- 1 Order
- 100 Fulfillment Groups (shipments)
- 100 Line Items (spread across groups)
- Associated Data: Adjustments (taxes/promotions), Payments, Customers, and Addresses.
This is a heavy payload, and it thoroughly exercises the serializer’s ability to handle associations (has_many, belongs_to) and nested logic.
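Assuming plain-Ruby stand-ins for the models (the real setup used ActiveRecord; all names here are illustrative), the shape of that object graph can be sketched as:

```ruby
# Plain-Ruby stand-ins for the ActiveRecord models (illustrative only).
Order            = Struct.new(:id, :state, :total, :fulfillment_groups, :line_items)
FulfillmentGroup = Struct.new(:id, :shipment_state, :line_items)
LineItem         = Struct.new(:id, :quantity, :price, :adjustments)
Adjustment       = Struct.new(:label, :amount)

# Build the benchmark graph: 100 groups, 100 line items spread across them.
line_items = Array.new(100) do |i|
  LineItem.new(i, 1, "10.00", [Adjustment.new("tax", "0.80")])
end

groups = Array.new(100) do |i|
  FulfillmentGroup.new(i, "shipped", [line_items[i]])
end

order = Order.new(1, "complete", "1080.00", groups, line_items)

puts order.fulfillment_groups.size
puts order.line_items.size
```

Every serializer under test had to walk this same graph, so the comparison measured association traversal, not just attribute copying.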
The Code
We implemented the exact same JSON structure across all three libraries.
The Alba Implementation: Notice how Alba feels familiar to AMS users but with more explicit control.
class OrderAlbaSerializer
  include Alba::Resource

  attributes :id, :state, :total, :subtotal, :created_at

  # Relationships are explicit
  many :fulfillment_groups, resource: FulfillmentGroupAlbaSerializer
  many :line_items, resource: LineItemAlbaSerializer
  one :venue

  # Computed attributes use a clean block syntax
  attribute :seat_description do |order|
    "#{order.section.name} - #{order.row.name}"
  end
end
The Results
We ran the benchmarks using benchmark-ips (iterations per second). The results were illuminating.
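benchmark-ips is a gem, not part of the standard library, but the core measurement it performs (iterations per second) can be approximated with the stdlib Benchmark module. A rough sketch of the harness, with a made-up payload standing in for our order graph:

```ruby
require "benchmark"
require "json"

# Approximate an iterations-per-second measurement with the stdlib Benchmark
# module. benchmark-ips adds warmup phases and statistical confidence on top
# of essentially this calculation.
def ips(iterations: 1_000)
  elapsed = Benchmark.realtime do
    iterations.times { yield }
  end
  (iterations / elapsed).round
end

# Hypothetical payload standing in for the serialized order graph.
payload = { id: 1, line_items: Array.new(100) { |i| { id: i, price: "10.00" } } }

rate = ips { JSON.generate(payload) }
puts "#{rate} i/s"
```

The real runs pitted each serializer’s full render path against the same order graph, so the numbers below reflect end-to-end serialization, not raw JSON encoding.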
(Note: Relative performance figures based on our internal testing)
- AMS (Cached): ~3,500 i/s 🏆
- Alba (with Oj): ~2,800 i/s 🥈
- Blueprinter: ~2,100 i/s
- AMS (Uncached): ~600 i/s 🐢
The “Caching” Elephant in the Room
The most striking data point is that AMS with Caching was the fastest.
This explains why so many apps are stuck on AMS. When you cache the entire JSON string of a serializer, you are effectively bypassing the serialization work entirely on subsequent hits. It’s hard to beat “fetching a string from RAM.”
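The effect is easy to demonstrate: once the JSON string is stored, subsequent fetches skip serialization entirely. A minimal sketch with a Hash-backed stand-in for Rails.cache (none of this is the actual AMS machinery):

```ruby
require "json"

# A minimal in-memory stand-in for Rails.cache (illustrative, not thread-safe).
class MemoryCache
  def initialize
    @store = {}
  end

  def fetch(key)
    @store.fetch(key) { @store[key] = yield }
  end
end

cache = MemoryCache.new
calls = 0

# Stand-in for a full serialization pass; `calls` counts how often it runs.
serialize = lambda do |order|
  calls += 1
  JSON.generate({ id: order[:id], total: order[:total] })
end

order = { id: 1, total: "99.00" }

# First fetch does the work; the second returns the stored string untouched.
first  = cache.fetch("orders/1") { serialize.call(order) }
second = cache.fetch("orders/1") { serialize.call(order) }

puts calls # => 1
```

On a cache hit, the only cost is the key lookup; the serializer never runs.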
However, Alba (backed by the Oj gem) came dangerously close without any caching at all.
Alba + Oj was roughly 4-5x faster than standard, uncached AMS (~2,800 vs. ~600 i/s). It was so fast that it fundamentally changed our engineering calculus.
The Decision: Why We Chose Alba
If AMS Cached is faster, why switch?
1. Speed Without Complexity
Caching is hard. Cache invalidation is one of the two hardest problems in computer science (along with naming things and off-by-one errors).
With AMS, we relied on caching to get acceptable performance. If the cache missed, the user felt it (dropping to 600 i/s).
With Alba, the baseline performance is exceptional. We get ~2,800 i/s every single time. We don’t need to manage complex cache keys. We can simplify our architecture by removing the view caching layer entirely for most endpoints.
2. The Oj Factor
Alba is designed to leverage Oj (Optimized JSON), a C-extension for Ruby that is incredibly fast. By setting:
# config/initializers/alba.rb
Alba.backend = :oj
Alba bypasses much of the Ruby object allocation overhead that slows down other serializers.
3. Safety and Maintenance
AMS 0.10.x is a ghostly dependency. It might break with the next major Rails upgrade. Alba is active, standards-compliant, and its codebase is easy to read and understand.
The Caching Gap (and How We Fixed It)
There was one catch: Alba does not support caching out of the box.
This is by design. The creators of Alba (and indeed, many in the API community) believe that serialization logic should remain pure, and caching should be handled at the HTTP layer or application boundary. (See Rails Issue #41784).
For us, however, fragment caching is critical. We have complex objects (like venues with thousands of seats) where re-serializing everything, even with Alba’s speed, is wasteful.
The Solution: A Lightweight Concern
Since Alba is just Ruby, adding caching back in was surprisingly trivial. We didn’t need the heavy “Cache Adapter” machinery of AMS. We just needed a decorator.
We wrote a simple AlbaCaching concern that wraps the serialize method:
module AlbaCaching
  extend ActiveSupport::Concern

  class_methods do
    def cache(options = {})
      @cache_options = options
    end

    def cache_options
      @cache_options
    end
  end

  def serialize(root_key: nil, meta: {})
    return super unless self.class.cache_options

    # Construct a cache key from the object and serializer name
    key_base = object.respond_to?(:cache_key) ? object.cache_key : "#{object.class.name.underscore}/#{object.id}"
    cache_key = "#{key_base}/#{self.class.name.underscore}"
    ttl = self.class.cache_options[:expires_in] || 1.hour

    Rails.cache.fetch(cache_key, expires_in: ttl) do
      super
    end
  end
end
Now, we can just mix this into any serializer that needs it:
class HeavyVenueSerializer
include Alba::Resource
include AlbaCaching
cache expires_in: 30.minutes
attributes :id, :name, :capacity
end
This gave us the best of both worlds: the raw speed of Alba/Oj for 90% of our requests, and the ability to selectively cache heavy fragments where needed, without the bloat of AMS.
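The concern works because of Ruby’s method lookup rules: a module included later sits earlier in the ancestor chain, so its serialize can intercept the call and reach the original via super. A stdlib-only sketch of the same pattern (module and class names are illustrative; no Alba or Rails required):

```ruby
require "json"

# Stand-in for Alba::Resource's serialize method (illustrative only).
module Resource
  def serialize
    JSON.generate(@data)
  end
end

# Stand-in for the AlbaCaching concern: wraps serialize with a memo cache.
module Caching
  CACHE = {}

  def serialize
    CACHE.fetch(cache_key) { CACHE[cache_key] = super }
  end

  def cache_key
    "#{self.class.name}/#{@data[:id]}"
  end
end

class VenueSerializer
  include Resource
  include Caching # included last, so it sits earlier in method lookup

  def initialize(data)
    @data = data
  end
end

s = VenueSerializer.new(id: 1, name: "Arena")
puts s.serialize # Resource#serialize runs once; repeat calls hit the cache
```

Because the wrapping happens through plain module inclusion, any serializer that skips the mixin keeps Alba’s untouched, cache-free path.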
Automating the Migration
With the decision made and the caching gap bridged, we faced one final hurdle: the sheer volume of code. We had dozens of serializers to convert.
Rewriting them by hand was too much work.
(Note: If you are on Ruby 3.1+, you should check out the excellent alba_migration gem, which handles much of this automatically. Since our application was still running on an older Ruby version, we had to roll our own solution.)
We wrote a simple “bootstrapping” script (bin/alba_bootstrap.rb) to handle the syntax-swapping drudgery.
It wasn’t an AST parser or a complex transpiler. It was just simple Ruby string manipulation:
# bin/alba_bootstrap.rb
def convert_file(file_path)
  content = File.read(file_path)

  # 1. Swap Inheritance for Include
  content.gsub!(/ < ActiveModel::Serializer/, '')
  content.sub!(/class (.*?)(\n|$)/) { "class #{$1}\n  include Alba::Resource\n" }

  # 2. Convert Associations
  content.gsub!(/\bhas_many\b/, 'many')
  content.gsub!(/\bhas_one\b/, 'one')

  # 3. Prompt for Manual Logic Review
  content.gsub!(/def (.*?)\n(.*?)\n\s*end/m) do |match|
    "# TODO: Convert method '#{$1}' to attribute block\n#{match.gsub(/^/, '# ')}"
  end

  File.write(file_path, content)
end
This script didn’t produce perfect code, but it got us 80% of the way there. It handled the boilerplate, leaving us to focus on the interesting parts: converting complex custom methods into Alba’s clean attribute blocks.
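As a before/after illustration, here is the core substitution applied to a hypothetical AMS serializer (the input is made up, and the regexes mirror the script in miniature):

```ruby
# Hypothetical input: a small AMS serializer (names are illustrative).
input = <<~RUBY
  class OrderSerializer < ActiveModel::Serializer
    attributes :id, :total
    has_many :line_items
    has_one :customer
  end
RUBY

content = input.dup

# The same substitutions as the bootstrap script, in miniature.
content.gsub!(/ < ActiveModel::Serializer/, '')
content.sub!(/class (\S+)\n/) { "class #{$1}\n  include Alba::Resource\n" }
content.gsub!(/\bhas_many\b/, 'many')
content.gsub!(/\bhas_one\b/, 'one')

puts content
```

The output is a valid Alba resource skeleton: the superclass is gone, the include is in place, and the association macros use Alba’s names.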
Conclusion
Leaving ActiveModel::Serializers feels like leaving an old apartment. It had its quirks, but it was home.
However, the move to Alba has been a breath of fresh air. We traded a complex, unmaintained caching strategy for raw, highly optimized serialization throughput. The code is cleaner, the benchmarks are solid, and we sleep better knowing our API layer is future-proof.
If you are still holding onto AMS because “it works,” I highly recommend creating a benchmark with your heaviest model and giving Alba a spin. You might find that you don’t need that cache as much as you thought.