Spotlight Exclusives

New Analysis Finds Early, Positive Outcomes for Opportunity Zones

John Lettieri and Kenan Fikri

Created by the 2017 Tax Cuts and Jobs Act, Opportunity Zones were designed to offer a new federal incentive to spur investment in low-income and undercapitalized communities. While there has been disagreement about the success of the policy, or even how best to measure success, a new analysis by the Economic Innovation Group of two recent papers on Opportunity Zones finds positive initial outcomes. EIG President and CEO John Lettieri and Director for Research Kenan Fikri spoke with Spotlight recently about the findings. The transcript has been lightly edited for length and clarity.

Let’s start with why you did this analysis.

Lettieri:  It was a good time to review the evidence. For years we’ve been cautioning the policy and research community that we need time to evaluate a policy like Opportunity Zones, given what it’s trying to do and how it’s trying to do it. We’re not going to see overnight changes in capital flows, capital formation, investment activity, business formation, or job creation. All those things are downstream from the early activity that we would expect to see in 2019—which would be the first full year of policy—or in 2020, which would be the first year after regulations were actually implemented. A lot of the early perception of Opportunity Zones was shaped by preliminary evidence and very small sample sizes.

What’s important about the data we have today, through the two studies that we highlight in our brief, is that we’re getting really high-quality, multi-year evidence. We’re getting big sample sizes. In the Treasury and IRS data, for example, we’re talking about almost 4,000 tracts across the country that have seen investment, and that stretches over two and a half years. That’s still early in the lifespan of the policy, but you’re talking about a very large sample of activity—you’re talking about almost $50 billion in investment, and about 25,000 investors. Now we’re dealing with scale that really starts to yield meaningful insights that should prove to be more reliable as a picture of what’s actually happening in Opportunity Zones. The Treasury data takes us through the end of 2020, and we’re here at the end of the first quarter of 2023, so the picture has undoubtedly moved from where that data leaves off. But this gives us the first really clear and meaningful snapshot of what that implementation period of Opportunity Zones looked like. The data from Coyne and Johnson’s study is for 2020, but we now feel confident that we can draw some preliminary conclusions from that data that should be more durable and more useful in projecting where things are going and learning some lessons about how the policy was implemented.

The results of the Wheeler study segue into what happened as a result of the activity we see in Coyne and Johnson. What’s so important there is that it’s looking at what we think is the most tangible and meaningful indicator for this stage of the policy, which is permitting. Looking at permitting data across 47 cities and 12,000 census tracts over four years, that’s a large sample—that’s a really meaningful scope. Wheeler looks at permitting data in 47 cities four years before designation and then four years afterwards, which is the upstream signal of what’s going to happen later on. The building doesn’t come out of the ground the day after you decide to seek the permit; that kicks off a years-long process. So, it should spark some thinking in the research community: When should we start to expect the results of that showing up in commercial activity, employment, or residential activity? It’s not the day the permit is granted; it’s often years after that happens.

How do you put that all together? This was an experiment, and what we’re looking for early on is: Can we answer threshold questions about the experiment? Here’s how I would lay that out—first, the Opportunity Zones incentive was clearly designed to be more of a broad, participatory structure, which is trying to draw in a lot of previously inactivated investors and to get broader geographic reach and scale. Did it do those things? Coyne and Johnson tell us yes—25,000 investors, 21,000 individual and 4,000 corporate. That’s an order of magnitude larger than you see with New Markets [Tax Credits] or other types of programs. The $50 billion in equity capital—again, an order of magnitude larger than what we see with other programs. We’re approaching 4,000 tracts spread across every state and every commuting zone, and it took New Markets [Tax Credits] 18 years to reach that kind of geographic spread.

So, on the questions of broad participation, geographic reach, and large scale of capital, we can now say definitively—regardless of what we think about the policy or the downstream effects—that it did actually motivate that kind of activity in the way Congress intended. That tells us something about the design in the long term, and also how we can apply certain lessons to future place-based policy efforts.

On the effect side, we now have answers to the biggest threshold question you want to know about a place-based incentive, which is: Is it rewarding activity that would’ve happened anyway, or is it generating net new activity? And what Wheeler’s data really strongly suggests is a step change in activity—what he calls a “large and immediate effect on development”—such that you observe a 20% increase in the likelihood of development in an OZ tract in any given month. That’s a very large effect. That very strongly implies this is net new activity. But you can corroborate that further by the spillover effects that he finds from the zones into neighboring communities, and that even at a citywide scale, you’re seeing a boost in development and in housing values.

The third piece is: What does that mean for the local residents or homeowners? It means that they’re seeing a boost in their home values of three and a half percent as well as a boost in housing supply—which right now is particularly important—but no change in rents. All of that is promising, but it’s not dispositive of the final result. We still can’t answer the most important questions.

What surprised you the most about the data you’re seeing?

Fikri: I was genuinely and pleasantly surprised at the reach of the incentive already by 2020 as shown in the Coyne and Johnson paper. The fact that you had 64% of census tracts in Mississippi having received investment in that year—the first year the policy was really active and live and fully regulated—was stronger than I would’ve guessed at the outset. The fact that in Washington, D.C. it gets all the way up to 80% of census tracts reflects a really, really deep penetration of the incentive, and it shows how seamlessly it’s grafting onto the map of local investment and local community development.

To be honest, I’d say that what was surprising was how much it corroborated our instincts that the policy was large. It was changing behavior; it was significant for communities, and if you were patient and looked at the right indicators, you would be able to see it. To me, the most surprising thing was that the results in the end were so clear and so intuitive to us, because we’ve been very close to the policy for a long time. I think EIG has a better understanding—at least in the policy and research community—of how Opportunity Zones work and how the capital incentive is being deployed into economic activity in communities than almost anyone else just by nature of us being so close to it. We have maintained that there is something here, and it’s going to make a difference and you’re going to see it—you just need to be patient.

Lettieri: I assumed that the financial scale was going to be large, but my expectations on geographic reach were blown out of the water. I thought we might get to about 50-plus percent of OZs by the end of 2026 or beyond, so to see it happening in less than three years is surprising to me. When you look state by state and how distributed investment is across a state like Vermont, Rhode Island, South Dakota, or South Carolina, it answers another really important question: Will this incentive be malleable enough to be relevant in local contexts that look dramatically different from one another? That’s also been an elusive thing about economic development policy and place-based incentives—it’s hard to “right-size” for everybody.

I think we have a three-pronged, major finding in Wheeler’s study of large effects, positive spillovers, and no increase in rents simultaneous to an increase in housing value. I wouldn’t have been surprised in the long run that that was the case—that’s what I would’ve expected. But to see it happening again in such a relatively short timeframe and at such a compelling scale was encouraging.

And is there anything in this data that would suggest tweaking the model? Or do you need to give it more time before really getting that sort of dispositive evidence?

Lettieri: OZ 1.0 is an experiment, and I think the more that we reiterate that, the better the lessons will be that we draw and apply to the future. Again, Wheeler’s big contribution here is that he doesn’t just stop with the effects. He then models the optimal OZ for generating a development effect—What are the characteristics of a local community that make it most responsive to this type of incentive? One could have an intuition about that as we did before the selection process, but until you actually see it play out, you can’t know for sure what the answer’s going to be. The fact that he finds such a large effect with actual OZ designations, and he also models that it could be even larger in future designations (if you skew towards the kind of places with the characteristics that he finds are most responsive), that’s the most exciting part to us. You now have the potential to apply learnings from OZ 1.0 into the second iteration and future iterations of the policy so that the effects can be even stronger and even better targeted. We can use that same knowledge to influence how we think about a broader toolkit of place-based policies. That allows us, with the next iteration, to be perhaps more targeted, more specific, and provide more stringent guidance to governors to consider factors in a way that just was not possible with the first iteration.

Our focus since Opportunity Zones passed has been heavily on: How do we improve the current iteration so it’s better targeted and more transparent? More transparent means we need more publicly available, fine-grained data on OZ activities so that all researchers—not just ones with privileged access to IRS tax returns—can develop really in-depth longitudinal studies about the policy. That’s not possible right now, and that requires Congress to act. Second, we know that we could make OZs better targeted towards the types of communities that the policy was intended to reach. Overall, the data confirms very strongly that the targeting was strong, but there are a few places that slipped in under the eligibility guidelines that should not be Opportunity Zones in our view—we don’t have to wait for the second iteration to sunset those that shouldn’t be part of the first iteration. There are things like that Congress can do to strengthen the policy’s transparency and targeting so that those benefits are really hitting the bullseye. We’ve been working very hard to make that case to Congress, and there is a bipartisan, bicameral bill that would do just that.

What were some of the characteristics that Wheeler found in terms of changing the designation? What did you learn there?

Fikri: He found that, generally, census tracts that were closer to downtowns and had more vacant parcels are the types of communities where you see a strong development response. He also advocated strongly for clustering OZs together to maximize the spillover effects. He only looks at large cities, however, so there’s no similar optimization data for rural areas, which down the road could be very useful.

What’s the timeline for the next tranche of data that you’re looking for? And I know some of this data was based on pre-pandemic findings. Does the pandemic scramble the results for a few years, or how does it impact them?

Lettieri: The nice thing about Wheeler’s paper is that it does go through mid-2022, and he’s still finding large effects. The caution there would be if we were to look at employment within the zones, for example: How much would COVID—and especially the peak-COVID period—skew whether people could actually work in those areas? We don’t quite know what to expect on that yet, nor to what extent COVID will affect capital flows and geographic spread on the one side and the downstream output of jobs and establishments on the other side. I’m actually more worried about high interest rates and what effect that’s having on the commercial and residential real estate markets, and how that’s going to flow through into OZ activity. I think that could be more consequential than the pandemic’s economic impacts for OZ investment and development.

When we can expect to see more IRS data is in the hands of Treasury—they will certainly by now have all of the 2021 tax data available. Once you have that, you really start to have three full years of samples to work with on the IRS side (really three and a half years). That’s a lot. On the other hand, there’s other research that’s been done on jobs and establishments in OZs that only carries through 2019. If those researchers re-ran and expanded their data sample into 2020 and 2021 and found similar results, they’d be able to analyze trends not covered by Wheeler or Coyne and Johnson. That would be very significant, as well.
