Finding banks that rank well on both return on assets and efficiency is one of the fastest ways to narrow a large list of institutions into a smaller set worth deeper review. If a bank generates solid earnings from its asset base while also keeping operating costs under control, that combination usually signals a business model with discipline, pricing power, or unusually good execution.
This article is a methodology explainer, not a live ranking of current winners. The goal is to show how to define the universe, compare banks against true peers, use one efficiency definition, and test whether the result persists across time. That keeps the analysis from turning into a simple leaderboard that changes with every quarterly filing cycle.
For consistency, this workflow uses the UBPR / FFIEC efficiency ratio: noninterest expense / (net interest income + noninterest income – securities gains/losses). Lower is better. Setting that definition before you rank banks avoids mixing unlike ratios later.
Methodology note: The approach below uses public UBPR ratio data and peer-group comparisons from the FFIEC, with an example persistence window of Q4 2023-Q4 2025.[1] Because rankings change each quarter, this post explains the method rather than naming current winners.
How this screen works
The useful version is a peer-group composite, not a raw bank ranking. Start with each bank’s UBPR peer group, standardize ROA and efficiency within that group, and combine the two scores. A high combined score means the bank is above peers on profitability and cost control at the same time.
- Define the peer set: Compare community banks with community banks, regionals with regionals, and banks with similar lending and revenue profiles.
- Standardize ROA: Convert ROA to a Z-score within the peer group so the result reflects relative profitability, not just raw size or business mix.
- Standardize efficiency: Multiply the efficiency ratio by -1 before standardizing, so that a lower expense burden becomes a higher score.
- Combine the scores: Add the ROA Z-score and the adjusted efficiency Z-score to create a simple composite.
- Apply persistence: Require the composite to clear the chosen threshold across multiple quarters, not just one period.
For a current-cycle review, a nine-quarter window such as Q4 2023 through Q4 2025 is stricter than a one-quarter snapshot. You can set a threshold, such as a combined Z-score above +1.5 in each quarter, but the exact cutoff should be disclosed along with the universe, peer groups, ratio definition, and time period.
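The standardize-and-combine steps above can be sketched in a few lines of Python. This is an illustrative sketch with made-up function names and hypothetical figures, not the UBPR's own computation:

```python
import statistics

def zscores(values):
    """Standardize one peer group's values to Z-scores."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def composite_scores(roa, efficiency):
    """Combine ROA and efficiency Z-scores within one peer group.

    Efficiency ratios are multiplied by -1 before standardizing,
    so a lower expense burden earns a higher score.
    """
    roa_z = zscores(roa)
    eff_z = zscores([-e for e in efficiency])
    return [r + e for r, e in zip(roa_z, eff_z)]

def persists(quarterly_composites, threshold=1.5):
    """True only if the composite clears the threshold in every quarter."""
    return all(score > threshold for score in quarterly_composites)
```

In practice the Z-scores would be computed separately for each UBPR peer group and each quarter before the persistence test is applied.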
After the first pass, decompose each bank’s result into spread economics, earning-asset utilization, fee income contribution, and expense burden. That helps show whether the edge is spread-driven, fee-driven, cost-driven, or simply a temporary accounting effect.
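One simplified way to run that decomposition is to express each driver as a share of average assets, so the pieces sum back to ROA. The function name and component choices below are illustrative assumptions, not a UBPR schedule:

```python
def decompose_roa(net_interest_income, noninterest_income,
                  noninterest_expense, provisions, taxes, avg_assets):
    """Split ROA into additive drivers, each as a share of average assets.

    A spread-driven bank shows a large `spread` term; a fee-driven
    bank a large `fees` term; a cost-driven result shows up as a
    small (less negative) `expense_burden`.
    """
    return {
        "spread": net_interest_income / avg_assets,
        "fees": noninterest_income / avg_assets,
        "expense_burden": -noninterest_expense / avg_assets,
        "provisions": -provisions / avg_assets,
        "taxes": -taxes / avg_assets,
    }
```

Because the components are additive, a quarter-over-quarter change in any one term shows which driver actually moved.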
Why ROA and efficiency work well together
ROA and efficiency measure two different but related parts of bank performance. ROA asks how much profit a bank produces from its asset base. The efficiency ratio asks how much it spends to generate revenue. Looking at only one of these numbers creates blind spots. Looking at both gives you a more balanced first screen.
- High ROA with weak efficiency can mean a bank is benefiting from favorable spreads, credit timing, or business mix, but its cost structure may still be mediocre.
- Good efficiency with weak ROA can mean costs are controlled, but the bank lacks pricing power, growth quality, or asset productivity.
- High ROA and good efficiency together often point to repeatable execution, especially when the bank is not relying on one unusual segment or a single accounting period.
That is why these two metrics often belong in the same first-pass review. They are not sufficient by themselves, but they are a credible starting point for separating operating strength from average performance.
What counts as an outlier in a bank screen
The word outlier should be used carefully. In banking, the best performers are often those that rank unusually well relative to their true peer group, not necessarily those with the most extreme raw numbers. A disciplined screen should include the following filters.
- Use peer groups first: Compare banks with similar size, lending mix, geography, and revenue profile. Otherwise you may reward business-model differences instead of genuine operating strength.
- Look for combined ranking: A useful screen identifies banks that rank well on both ROA and the chosen efficiency ratio rather than leading only one metric.
- Check persistence: One good period does not make a bank exceptional. Durable results matter more than a temporary spike.
- Remove obvious distortions: One-time gains, unusual reserve releases, merger integration effects, or major balance-sheet changes can make a bank look stronger or weaker than it really is.
- Keep asset mix in view: Mortgage-heavy, commercial-heavy, fee-heavy, and specialty lenders often deserve separate interpretation even when the reported metrics look familiar.
The result is a method that is still simple enough to use at scale but strict enough to avoid low-quality winners.
Why the efficiency definition matters
Before comparing anything, keep the ratio definition fixed. Three versions are common:
- Industry-standard: noninterest expense / (net interest income + noninterest income).
- UBPR / FFIEC: noninterest expense / (NII + noninterest income – securities gains/losses).
- Rating-agency variants: tax-adjusted versions of the same basic idea.
This article uses the UBPR / FFIEC version because it lines up with public UBPR peer comparisons. The industry-standard version can be useful for internal work, and rating-agency versions may matter for credit analysis, but mixing them in one ranking will distort the result.
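As a sketch, the UBPR / FFIEC version can be computed directly from the income-statement components named above. The figures here are hypothetical and the function name is my own:

```python
def ubpr_efficiency_ratio(noninterest_expense,
                          net_interest_income,
                          noninterest_income,
                          securities_gains=0.0):
    """UBPR / FFIEC efficiency ratio as defined above (lower is better).

    Securities gains are subtracted from the revenue denominator;
    a securities loss enters as a negative value and lowers the ratio.
    """
    revenue = net_interest_income + noninterest_income - securities_gains
    return noninterest_expense / revenue

# Hypothetical figures in $ millions: a one-time securities gain
# flatters the industry-standard version but is excluded from the
# UBPR denominator.
industry = 60.0 / (80.0 + 25.0)                                        # ~0.571
ubpr = ubpr_efficiency_ratio(60.0, 80.0, 25.0, securities_gains=5.0)   # 0.600
```

The gap between the two numbers for the same bank is exactly why one definition has to be fixed before any ranking is built.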
Use peer percentiles instead of fixed cutoffs
Quarterly medians and quartiles are useful for orientation, but they are not stable enough to carry the analysis alone. The public starting point is UBPR Ratio Analysis Page 1, which provides peer percentile rankings for US banks.[1] Use those percentiles to see whether a bank is merely above average or consistently near the top of its true peer set.
Berger and Humphrey’s 1997 meta-analysis predates current conditions and should not be treated as a benchmark for today’s ratios. It remains relevant because it documents why cost inefficiency and persistent operating differences matter in banking; in this article it supports the logic of persistence testing, not any current-quarter cutoff.[2]
What strong ROA usually tells you
A high ROA can reflect several different strengths, and you need to know which one you are actually seeing.
- Asset pricing strength: The bank may be generating better yields or better spread economics without sacrificing underwriting discipline.
- Revenue mix advantage: Strong fee businesses can support returns even when pure lending spreads are average.
- Balance-sheet discipline: Management may be deploying assets selectively instead of chasing low-quality growth.
- Credit steadiness: Stable loss experience can make profitability look better, but you still need to test whether that stability is structural or cyclical.
ROA is especially useful because it is difficult to maintain without something in the model working. The analytical job is figuring out whether that strength is durable, scalable, and appropriately priced for risk.
What strong efficiency usually tells you
Efficiency can be even more revealing when used properly. A favorable efficiency ratio often signals that management has made deliberate choices about branch footprint, staffing, technology, channel mix, and product focus.
- Operating discipline: Expenses are being controlled relative to revenue generation.
- Scale benefits: The bank may have spread fixed costs across a larger base of assets, deposits, or fee businesses.
- Distribution advantage: Digital acquisition, targeted branch strategy, or a focused business line can reduce the cost to serve customers.
- Revenue quality: In some cases the ratio improves because revenue is more diversified and less dependent on one lending segment.
Efficiency should never be read in isolation. A bank can cut too deep, underinvest, or temporarily improve the metric through short-term actions. Still, when good efficiency and high ROA show up together, the odds increase that you are looking at a genuinely well-run institution.
How to build a credible outlier screen
A practical bank screen should move from broad filtering to closer comparison in a few clear steps.
- Define your bank universe by geography, size, or charter type.
- Exclude institutions with obviously different business models if you want a like-for-like comparison.
- Select the UBPR / FFIEC efficiency ratio and keep that definition fixed.
- Rank the remaining banks on ROA within their peer group.
- Rank the same group on the adjusted efficiency score.
- Identify the overlap where banks score well on both measures.
- Review those names for capital, funding profile, credit mix, and revenue concentration before calling them true outliers.
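The ranking and overlap steps above can be strung together in a short pure-Python sketch. The record fields, names, and cutoff are assumptions for illustration; a real screen would pull UBPR ratios and peer-group assignments directly:

```python
from collections import defaultdict

def percentile_within_peers(value, peer_values, lower_is_better=False):
    """Share of peers this bank beats on one metric (0.0 to 1.0)."""
    if lower_is_better:
        beaten = sum(1 for v in peer_values if v > value)
    else:
        beaten = sum(1 for v in peer_values if v < value)
    return beaten / len(peer_values)

def screen(banks, cutoff=0.75):
    """Keep banks at or above `cutoff` on BOTH ROA and efficiency
    within their own peer group (efficiency: lower is better).

    `banks` is a list of dicts with keys: name, peer_group, roa,
    efficiency -- hypothetical field names for this sketch.
    """
    by_group = defaultdict(list)
    for bank in banks:
        by_group[bank["peer_group"]].append(bank)

    winners = []
    for group in by_group.values():
        roas = [b["roa"] for b in group]
        effs = [b["efficiency"] for b in group]
        for b in group:
            roa_pct = percentile_within_peers(b["roa"], roas)
            eff_pct = percentile_within_peers(b["efficiency"], effs,
                                              lower_is_better=True)
            if roa_pct >= cutoff and eff_pct >= cutoff:
                winners.append(b["name"])
    return winners
```

Ranking happens inside each peer group, never across the whole universe, which is what keeps business-model differences from masquerading as operating strength.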
This keeps the workflow grounded. The goal is not to find one magic number. It is to identify banks that hold up when profitability, expense discipline, peer context, and persistence are reviewed together.
What can create false performance outliers
Many banks look exceptional in a simple ranking but lose that status once you add context. Common false positives include:
- Temporary earnings support: One-off gains, unusual reserve behavior, or favorable timing in a single period.
- Unusually favorable business mix: A specialist model may not be directly comparable to a more diversified peer set.
- Post-merger noise: Revenue and cost metrics can look distorted before integration fully settles.
- Understated risk: Strong current returns can hide a more cyclical or concentrated risk profile.
- Expense deferral: Efficiency can improve temporarily if the bank delays investments that later need to be made.
A good screening process does not eliminate all of these risks, but it helps you spot them before they become part of your conclusion.
What to review after you find the screen winners
Once a bank stands out on both ROA and efficiency, the next question is whether the result is investable, durable, or strategically meaningful. Review these areas next:
| Follow-up area | Why it matters | What to look for |
|---|---|---|
| Capital | Strong returns should be judged alongside resilience | Whether the bank is generating profits with a prudent capital posture |
| Funding | Deposit quality affects durability of earnings | Mix of retail, business, and more rate-sensitive funding sources |
| Credit concentration | High performance can sometimes ride on narrow exposures | Reliance on one geography, sector, or loan category |
| Revenue diversity | Broader income sources often improve resilience | Balance between spread income and fee income |
| Operating model | Efficiency is more durable when tied to structure | Evidence of sustainable cost control, not just temporary cuts |
Who should use this kind of screen
This approach is useful for more than one audience.
- Investors can use it to narrow a large coverage list into banks that deserve deeper valuation and risk work.
- Executives and strategy teams can use it to identify operational peers and spot where competitors are translating structure into stronger results.
- Researchers and analysts can use it to frame a sharper discussion around what high-quality bank performance actually looks like.
The shared benefit is focus. Instead of starting with anecdotal reputations or headline size, you start with two metrics that together reveal whether a bank is converting assets into profits efficiently.
Using Banking as the workflow bridge
A lot of bank screening breaks down because the process is too manual. Analysts build one spreadsheet for ROA, another for cost metrics, then spend time cleaning data instead of interpreting it. A dedicated tool solves the operational problem first.
With the Banking comparison view, you can apply this kind of public-data workflow without rebuilding every comparison manually. Use it after the method is clear: define the universe, keep the efficiency ratio consistent, compare true peers, and inspect the banks that still look credible after the persistence test.
Worked example: a likely false positive
Assume Bank A has a high ROA in the latest quarter and a low efficiency ratio. In a simple ranking, it looks like a leader. The peer-group method asks whether that result survives three checks.
| Check | What might explain the result | Better follow-up |
|---|---|---|
| Peer group | Bank A may have a specialty lending model that naturally produces different spreads and costs | Compare it with banks that share a similar asset and revenue profile |
| Persistence | The latest quarter may include a temporary fee gain, reserve release, or expense timing benefit | Review the same composite score across the full multi-quarter persistence window |
| Risk profile | Higher returns may come with concentration in a geography, loan type, or funding source | Review capital, credit mix, deposit quality, and revenue concentration before drawing a conclusion |
If Bank A still ranks well after those checks, it may deserve deeper work. If the score fades after peer adjustment or disappears across time, it was probably a one-period artifact rather than a durable operator.
Bottom line
Banks that rank well on both ROA and efficiency deserve attention because they often combine earnings quality with operating discipline. But the best screens do not stop at the ranking. They use ROA and efficiency as a first pass, then test whether the apparent strength is supported by funding quality, credit mix, capital strength, and a durable operating model.
If you want to find real bank quality instead of spreadsheet artifacts, use a structured comparison workflow. That makes it easier to screen broadly, compare fairly, and investigate the names that are strong for the right reasons.
Sources
- FFIEC, Uniform Bank Performance Report – public UBPR ratios and peer-group comparison framework.
- Berger and Humphrey, Efficiency of Financial Institutions – foundational review of financial-institution efficiency research, cited here for persistence logic rather than current benchmarks.