My React2Shell Story

A chair with arm rests roughly carved into a giant log, with the words "story time" carved into the front.

Gather ’round, friends! It’s time to hear the story of how I led the charge to mitigate React2Shell: a dangerous remote code execution vulnerability which was patched and announced by the React project on 3rd December, 2025.

This story occurred while I was working for [FORMER EMPLOYER REDACTED] as a Principal Architect on the security team. I got a lead from a colleague, Brad, that there was a Wiz write-up on a new vulnerability that was a real doozy in react-server, but with impact for Next.js as well. I spent a good amount of time digging into the write-ups on the vulnerability because there was clearly more to it than just making sure the packages were updated; there was nuance.

React2Shell has been written about a bunch, and I’m not going to do a full write-up on the vulnerability since there are so many good ones out there. What I will do, though, is share my nuance! React2Shell depends on two things being true: 1) you’re running React Server Components in some form or fashion, and 2) they’re impacted by the CVE. This might not seem like a lot of nuance, but if you’ve worked with React and its many, many encapsulating frameworks, you know that this is tricky if you’re not already familiar with the project. Even Next.js, a very popular encapsulating framework, supports running without server components (you can perform a static export, or you could limit your use to client-side rendering, though you miss out on a lot of the framework’s value-add), so it’s important to look past the surface level.
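As an aside, the static-export mode mentioned above is a one-line config switch. Here’s a minimal sketch (assuming Next.js 13.3+, where the `output: 'export'` option replaced the older `next export` command); the comments reflect my own understanding, not anything from the advisory:

```javascript
// next.config.js — a hedged sketch, not a recommendation.
// With output: 'export', the build emits purely static HTML/CSS/JS, so no
// React server runtime handles requests in production. Note that the
// build-time dependencies should still be patched eventually.
module.exports = {
  output: 'export',
};
```

A site configured this way is a good example of why version matching alone overstates exposure: the vulnerable package may be present, but the server-component code path never runs in production.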

When you have a vulnerability like this you certainly want to patch everything over time; the risk of leaving it in place if someone later adds server components shouldn’t be underestimated. However, only the repositories actually using server components needed to be patched on an emergency basis. Anyway, now that I had a good idea of what to look for, it was time for the code dive.

When performing a code dive like this I use two main tools: GitHub’s Security Insights (specifically its dependency info) and Datadog’s SCA tooling. I also lean on GitHub’s code search a good amount to zero in on usage patterns. There was a challenge here, though: I was faster than my tools. When I went looking for this vulnerability, I couldn’t find it in either tool because it was too new. I had beaten them to the punch, and now I had the chance to race them; how exciting!

Using GitHub’s code search I looked for the packages impacted by this vulnerability, which Wiz had done a really good job of identifying (including downstream dependencies!):

  • react-server-dom versions 19.0.1, 19.1.2, and 19.2.1
  • next versions 14.x stable, 15.0.5, 15.1.9, 15.2.6, 15.3.6, 15.4.8, 15.5.7, and 16.0.7
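To make that check concrete, here’s a rough sketch of the kind of exact-match triage I mean. The package names and version lists come straight from the bullet list above; everything else (the function name, the exact-match approach) is my own illustration. Real triage should resolve lockfiles and full semver ranges, and the “14.x stable” range isn’t handled here:

```javascript
// A hedged sketch: flag dependencies in a parsed package.json that pin one
// of the advisory's affected versions. Exact string matching only — it
// ignores semver range resolution, lockfiles, and the "14.x" range, so
// treat it as a first-pass filter, not an authoritative scanner.
const AFFECTED = {
  "react-server-dom": ["19.0.1", "19.1.2", "19.2.1"],
  "next": ["15.0.5", "15.1.9", "15.2.6", "15.3.6", "15.4.8", "15.5.7", "16.0.7"],
};

function findAffected(pkg) {
  // Merge prod and dev dependencies; dev-only hits still matter at build time.
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const hits = [];
  for (const [name, versions] of Object.entries(AFFECTED)) {
    // Strip a leading ^ or ~ so "^15.2.6" still matches "15.2.6".
    const pinned = (deps[name] || "").replace(/^[\^~]/, "");
    if (versions.includes(pinned)) hits.push(`${name}@${pinned}`);
  }
  return hits;
}
```

Pointing this at each candidate’s package.json (e.g. via `JSON.parse(fs.readFileSync(...))`) narrows things down quickly; a code search query along the lines of `org:your-org path:package.json "15.2.6"` (org name hypothetical) is how I surfaced candidates in the first place.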

You’d think you could just go to GitHub’s security insights and search for these versions, but you’d be disappointed. At time of writing, GitHub’s tooling in this space A) lacks the ability to simply list a bunch of versions, and B) is buggy as hell, often introducing more confusion than clarity. It was up to me to identify the affected packages in my employer’s ecosystem; no problem, I love a code dive.

I quickly identified four repositories whose package.json files pinned one of the affected versions. One was archived, which reduced the list to three, and from there I reached out to another colleague, Blake, who was much more familiar with how we used these modules. He and I briefly connected over Slack and figured out that only one repository had real impact. It was time to ticket and mitigate!

I made three tickets: one Critical and two Highs that were later reduced to Medium. The Critical, however, was for a repository Blake had worked on himself. Within 49 minutes he had it patched and deployed. At this point the vulnerability had become more widely known and there was chatter on our internal security channel, so I wrote a quick update so that teams could remain focused on their work. I really enjoy letting folks know it’s safe to breathe after high-profile issues like this; it might be my favorite part of working in security: helping people be safe and feel safe.

Anyway, now we were fully mitigated and I started getting curious: would my tools show the vulnerability now? Conveniently, the Critical had already been mitigated and released, while the other two tickets were for repositories that didn’t use server components, so we hadn’t escalated their fixes (though they were fixed and released the same day). I had the opportunity to test my tools; what fun! I pulled up GitHub’s security insights and checked the two repositories I expected to show up. Unfortunately, GitHub didn’t have this CVE in their system yet, and I found nothing. Datadog’s SCA tool, though, totally had the vulnerability! It correctly identified both repositories that remained unpatched, and showed the patched repository as clean, which is nice.

A lot of the stories I hear from security professionals center on the deficiencies, challenges, and problems: the unaddressed risk, the seemingly obvious nature of so much of the risk that exists. Security professionals talk about risk a great deal, but I really believe the primary goal should be to celebrate security excellence while promoting continuous improvement. Yes, the risk does need to be discovered and addressed, but it’s important that we own our successes, or all we’re left with is the failures.

Anyway, that’s my React2Shell story. When all was said and done, the tickets I created beat my tooling by six minutes. It’s always fun to win the race, but my tools absolutely had my back. The alerts from Datadog about the vulnerabilities showed up in my inbox well before my friend IanB told me that log files were showing failed exploitation attempts (about two hours after we finished patching).

Have a great day, and don’t forget to celebrate your success today.