Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

The Nvidia GeForce RTX 5090 Founders Edition has arrived — or at least, the reviews have arrived. It’s the fastest GPU we’ve ever tested, most of the time, and we expect things will continue to improve as drivers mature in the coming weeks. When it lands on retail shelves, the RTX 5090 will undoubtedly reign as one of the best graphics cards around for the next several years.

The card itself — as well as AIB (add-in board) partner cards using the RTX 5090 GPU — won’t go on sale until January 30. Once they do, good luck acquiring one. It’s an extreme GPU with a $1,999 price tag, though there will certainly be some well-funded gamers looking to upgrade. It also adds new AI-centric features, including native FP4 support, which will very likely generate a lot of interest outside of the gaming realm. With 32GB of VRAM and 3.4 petaFLOPS of FP4 compute, it should easily eclipse any other consumer-centric GPU in AI workloads.

Review in Progress…

It’s been an extremely busy month, so this review is currently a work in progress. Our current score is a tentative 4.5 out of 5, subject to adjustment in the next week or so as we fill in more blanks. There are tests we wanted to run that failed, and several of the games in our new test suite showed anomalous behavior. We’ve also revamped our test suite and our test PC, wiping the slate clean and requiring new benchmarks for every graphics card in our GPU benchmarks hierarchy. While we have reviewed the Intel Arc B580 and Arc B570 and tested some comparable offerings, there are a lot of GPUs that we still need to retest. That takes a lot of time, even without any driver oddities, and we still have additional work to do.

The Nvidia Blackwell RTX 50-series GPUs also bring some new technologies, which require separate testing. Chief among these (for gamers) is the new DLSS 4 with Multi Frame Generation (MFG). That requires new benchmarking methods, and more importantly, we need to spend time with the various DLSS 4 enabled games to get a better idea of how they look and feel.

We already know from experience that DLSS 3 frame generation isn’t a magic bullet that makes everything faster and better. It adds latency, and the experience also depends on the GPU, game, settings, and monitor you’re using. With MFG generating up to three AI frames per rendered frame (DLSS 4 can generate 1, 2, or 3, depending on the setting you select), things become even more confusing. As an example, MFG 4X mode running at 240 FPS would mean user input only gets sampled at 60 FPS, so while MFG could make games smoother, it might also feel laggy. We’ll be testing some of the early DLSS 4 examples and updating our review in the coming days.
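To illustrate the arithmetic behind that concern, here’s a minimal sketch (not a measurement, and ignoring things like Reflex and render-queue effects) of how the displayed frame rate maps back to the rendered, input-sampled rate for each frame generation mode:

```python
# Rough sketch of how displayed frame rate relates to the rendered
# (and therefore input-sampled) frame rate for frame generation modes.
# Illustrative arithmetic only; real latency also depends on Reflex,
# the game, and the display.

def rendered_fps(displayed_fps: float, generated_per_rendered: int) -> float:
    """Rendered frames per second for a given displayed rate.

    generated_per_rendered: 1 for DLSS 3 frame generation (2X mode),
    2 or 3 for DLSS 4 Multi Frame Generation (3X / 4X modes).
    """
    return displayed_fps / (1 + generated_per_rendered)

for mode, gen in [("2X (DLSS 3 FG)", 1), ("3X (MFG)", 2), ("4X (MFG)", 3)]:
    fps = rendered_fps(240, gen)
    print(f"{mode}: 240 FPS displayed -> {fps:.0f} FPS rendered / input-sampled")
# 2X -> 120, 3X -> 80, 4X -> 60
```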

Here are the specifications for the RTX 5090 and its predecessors — the top Nvidia GPUs of the past several generations.

Graphics Card RTX 5090 RTX 4090 RTX 3090 RTX 2080 Ti
Architecture GB202 AD102 GA102 TU102
Process Technology TSMC 4N TSMC 4N Samsung 8N TSMC 12FFN
Transistors (Billion) 92.2 76.3 28.3 18.6
Die size (mm^2) 750 608.4 628.4 754
SMs / CUs / Xe-Cores 170 128 82 68
GPU Shaders (ALUs) 21760 16384 10496 4352
Tensor / AI Cores 680 512 328 544
Ray Tracing Cores 170 128 82 68
Boost Clock (MHz) 2407 2520 1695 1545
VRAM Speed (Gbps) 28 21 19.5 14
VRAM (GB) 32 24 24 11
VRAM Bus Width 512 384 384 352
L2 / Infinity Cache (MB) 96 72 6 5.5
Render Output Units 176 176 112 88
Texture Mapping Units 680 512 328 272
TFLOPS FP32 (Boost) 104.8 82.6 35.6 13.4
TFLOPS FP16 (FP4/FP8 TFLOPS) 838 (3352) 661 (1321) 285 108
Bandwidth (GB/s) 1792 1008 936 616
TBP (watts) 575 450 350 260
Launch Date Jan 2025 Oct 2022 Sep 2020 Sep 2018
Launch Price $1,999 $1,599 $1,499 $1,199

The raw specs alone give a hint at the performance potential of the RTX 5090. It has 33% more Streaming Multiprocessors (SMs) than the previous generation RTX 4090, just over double the SMs of the 3090, and 2.5 times as many SMs as the RTX 2080 Ti that kicked off the ray tracing and AI GPU brouhaha. Just as important, it has 33% more VRAM than the 4090, and the GDDR7 runs 33% faster than the GDDR6X memory used on the 4090, yielding a 78% increase in memory bandwidth.
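For the curious, the headline memory bandwidth and FP32 compute numbers in the table fall out of simple arithmetic on the raw specs. Here’s a quick sketch using the table’s values, just for illustration:

```python
# Sanity-check the table's derived figures from the raw specs.
# Bandwidth (GB/s) = memory speed (Gbps per pin) * bus width (bits) / 8
# FP32 TFLOPS (boost) = shader ALUs * 2 FMA ops per clock * boost clock (MHz) / 1e6

def bandwidth_gbs(speed_gbps: float, bus_bits: int) -> float:
    return speed_gbps * bus_bits / 8

def fp32_tflops(shaders: int, boost_mhz: int) -> float:
    return shaders * 2 * boost_mhz / 1e6

print(bandwidth_gbs(28, 512))    # RTX 5090: 1792 GB/s
print(bandwidth_gbs(21, 384))    # RTX 4090: 1008 GB/s
print(fp32_tflops(21760, 2407))  # RTX 5090: ~104.8 TFLOPS
print(fp32_tflops(16384, 2520))  # RTX 4090: ~82.6 TFLOPS
```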

The rated boost clocks on the RTX 5090 have dropped compared to the 4090, but Nvidia’s boost clocks have always been rather conservative. Depending on the game and settings used, in some cases the real-world clocks were even higher than before. Except, that’s mostly at 1080p and 1440p, where CPU bottlenecks are definitely a factor and the 5090 wasn’t hitting anywhere close to the maximum 575W of power use. Typical clocks ranged from 2.5 GHz to 2.85 GHz in our testing (more details on page eight).

But it’s not just performance and specs that have increased. The RTX 5090 has an official base MSRP of $1,999 — $400 more than the RTX 4090’s base MSRP. That’s probably thanks to the demand that Nvidia saw for the 4090, with cards frequently going for over $2,000 during the past two years. We suspect much of that was thanks to businesses buying the cards for AI use and research (not to mention people reportedly trying to smuggle 4090 cards into China). Those same factors will undoubtedly affect the RTX 5090.

Things shouldn’t be as bad as the cryptomining shortages of the RTX 30-series era, but we would be shocked if the 5090 isn’t difficult to buy in the coming months at anywhere close to $2,000. Nvidia’s top GPUs have traditionally been hard to acquire in the first month or two after launch, and that pattern will no doubt continue — and perhaps be even worse than the 4090 launch.

Nvidia GeForce RTX 5090 Founders Edition card photos and unboxing

RTX 5090 Founders Edition has slimmed down to two slots but has the same length and height as the 4090 FE. (Image credit: Tom’s Hardware)

We’ve already noted that testing is ongoing and that we haven’t been able to run all the benchmarks that we’d like. But really, what’s the competition for the RTX 5090? Most people will want to see how much faster it is than the RTX 4090, plus a few other high-end / extreme offerings. It’s not like someone looking at an RTX 4070, RX 7700 XT, or Arc B580 will instead decide to spend 4–8 times as much on a 5090.

AMD’s RX 7900 XTX didn’t really compete with the 4090, and it certainly won’t beat the 5090 — at least, not at 4K. And you really should be running a 4K or higher resolution display if you’re thinking about using a 5090 for gaming. We’re still running 7900 XTX benchmarks on the new test PC, but we’ll have complete results in the next day or so. We also want to add the 3090 to show what a two-generation upgrade will deliver, for those using the top Ampere RTX 30-series GPUs.

Other than that? We don’t really see anything that will keep up with the 5090 arriving any time soon. And that’s just for gaming. For AI workloads that can use the new FP4 number format, Nvidia claims the 5090 can be up to three times as fast as the 4090. It’s set to dominate the GPU landscape until the inevitable RTX 6090 or whatever arrives in a couple of years — or perhaps Nvidia will do an RTX 5090 Ti or Titan Blackwell this generation.

Nvidia GeForce RTX 5090 Founders Edition

Nvidia GeForce RTX 5090 Founders Edition card photos and unboxing

(Image credit: Tom’s Hardware)

Nvidia provided its RTX 5090 Founders Edition for this review, the reference model that everything else needs to try to beat. It’s a very different beast compared to prior top-tier models, in both packaging and form factor. Our 5090 unboxing covered this already, but let’s recap.

Assuming the box we received is the same for all 5090 Founders Edition cards, it’s a major change from the RTX 4090 Founders Edition. Nvidia is “going green” on the packaging, with no inks or plastics and a box made of recycled paper fibers. And yes, we get the irony of a potential 575W graphics card trying to be green. It’s certainly a different look for a GPU package.

The 5090 Founders Edition card weighs 1826g, a sizeable weight loss compared to the 4090 Founders Edition that weighed 2186g. That’s 360g lighter, or about 0.8 pounds. The card also qualifies for Nvidia’s own SFF-ready guidelines, though the length and height are the same as before: it measures 304x137x40 mm, versus a 61mm thickness for the 4090 card.

Nvidia continues to use a 16-pin power connector, but this time it’s the updated and improved 12V-2×6 rather than the 12VHPWR initially used with the 40-series (and later replaced after numerous 4090 cards suffered meltdowns). Nvidia says it’s confident the adapter problems have been solved, even when pulling 575W through the 16-pin connector. We’ll have to wait and see if that proves correct.

One interesting change is that the provided quad-8-pin to 16-pin adapter this time has rather flexible individually sheathed cables. Combined with the angled 16-pin socket on the card and the 2-slot width, the 5090 Founders Edition should be much easier to fit into a variety of cases. Whether a mini-ITX case can handle the heat and power output is a different matter.

Nvidia’s 5090 Founders Edition — and all 5090 cards — come with a PCIe 5.0 x16 slot connector, which you can tell by the “T” markings on the gold fingers. It also features DisplayPort 2.1b UHBR20 support on all three DP outputs, which allows up to 4K 480Hz or 8K 165Hz (using DSC — Display Stream Compression). There’s also a single HDMI 2.1 output that can do up to 4K 240Hz / 8K 120Hz.
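As a rough back-of-the-envelope illustration of why DSC comes into play at those refresh rates, here’s a sketch that ignores blanking overhead and assumes 10-bit color; the uncompressed data rate of 4K 480Hz comfortably exceeds what four UHBR20 lanes can carry:

```python
# Why 4K 480Hz needs Display Stream Compression even on UHBR20.
# Ignores blanking intervals; assumes 10 bits per color channel (30 bpp).

def uncompressed_gbps(width: int, height: int, refresh_hz: int, bpp: int = 30) -> float:
    return width * height * refresh_hz * bpp / 1e9

UHBR20_EFFECTIVE_GBPS = 77.37  # 4 lanes x 20 Gbps, minus 128b/132b encoding overhead

needed = uncompressed_gbps(3840, 2160, 480)
print(f"4K 480Hz needs ~{needed:.0f} Gbps uncompressed vs ~{UHBR20_EFFECTIVE_GBPS:.0f} Gbps available")
# Roughly 119 Gbps required vs ~77 Gbps available, hence DSC
```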

The 5090 FE uses liquid metal as a thermal interface material (TIM) between the GPU and the heatsink, and as such it should not be disassembled. That’s one aspect of the card design that helps with keeping a dual-slot 575W part cool.

It also has a double flow-through design, enabled by using one larger PCB for the GPU and memory, which links to a separate PCB that houses the PCIe x16 connector and a third PCB that contains the video outputs. The radiator fins also feature an indentation that’s supposed to help optimize airflow.

Two custom 115mm fans are present, using the same design as the previous generation. The fans run reasonably quiet in our experience, even at higher power loads. Ventilation slots on the top and bottom of the card help direct the exhaust away from the fan intakes to minimize the recycling of warmed-up air.

RGB lighting is present, in a relatively minimalistic form. Like the 4090 Founders Edition, the “X” shape has lighting, on both sides of the card. The GeForce logo on top of the card also lights up. By default, all the lighting is white, but it can be controlled through various software utilities.


For the coming year (starting with the Arc B580 review), we’ve upgraded our GPU test PC and modernized our gaming test suite. The new system has an AMD Ryzen 7 9800X3D processor, the fastest current CPU for gaming purposes. We also tested the RTX 5090 on our old 13900K test bed, with at times interesting results. Some games still seem to run faster on Raptor Lake, though overall the 9800X3D delivers higher performance. The margins are of course quite small at 4K ultra.

We’re running Windows 11 24H2, with the latest drivers at the time of testing. We used AMD’s 24.12.1 drivers, and Nvidia’s preview 571.86 drivers for the RTX cards. Note that these changes mean all the results from our GPU benchmarks hierarchy, while still valid for when they were run, need to be refreshed. We’ll be working on a revised GPU hierarchy in the coming weeks, but it will be a bit before that’s fully ready — we want at least all the current generation cards to be included, and there are plenty of new GPUs coming soon.

Our PC is hooked up to an MSI MPG 272URX QD-OLED display, which supports G-Sync and Adaptive-Sync, allowing us to potentially experience some of the higher frame rates that GPUs like the 5090 can deliver (especially with MFG). Most games won’t get anywhere close to the 240Hz limit of the monitor at 4K when rendering at native resolution, which is where framegen and MFG can be useful.

The new GPU test suite consists of 22 games. We’re still looking at some potential changes and additions, but this is where we’re at for now. Six of the games in our standard test suite have RT support enabled. The remaining 16 games are run in pure rasterization mode. However, with cards like the RTX 5090, we’ll be looking at supplemental testing using some of the most demanding games that feature full RT / “path tracing” support.

All 22 games are tested without any upscaling or frame generation. Again, we plan to do additional investigations into things like DLSS 2/3/4 and framegen/MFG, but that’s separate from the primary testing. (It’s also part of why this is a review in progress.) There are noticeable differences between the image quality of DLSS, FSR, and XeSS, as well as differences in how much they can affect performance, which is why we’re not using any of them for our baseline measurements.

All games are tested using 1080p ‘medium’ settings (the specifics vary by game and are noted in the chart headers), along with 1080p, 1440p, and 4K ‘ultra’ settings. This provides a good overview of performance in a variety of situations. Depending on the GPU, some of those settings don’t make as much sense as others, but seeing how a card like the 5090 runs at 1080p can be enlightening. (In this case, 1080p performance is sometimes lower than on a 4090, indicating there’s still driver tuning that needs to happen.)

Our test PCs are now running Windows 11 24H2, with all the updates applied. We’re also using Nvidia’s PCAT v2 (Power Capture Analysis Tool) hardware, which means we can grab real power use, GPU clocks, and more during our gaming benchmarks. We’ll cover those results on page seven.

Finally, because GPUs aren’t purely for gaming these days, we run some professional and AI application tests. We’ve previously tested Stable Diffusion, using various custom scripts, but to level the playing field we’re turning to standardized benchmarks. We use Procyon, and run the AI Vision test as well as the Stable Diffusion 1.5 and XL tests; MLPerf Client 0.5 preview for AI text generation; SPECworkstation 4.0 for Handbrake transcoding, AI inference, and professional applications; 3DMark DXR Feature Test to check raw hardware RT performance; and finally Blender Benchmark 4.3.0 for professional 3D rendering.

We’re breaking down gaming performance into two categories: traditional rasterization games and ray tracing games. Each game has four test settings, though for the RTX 5090 we’re mostly interested in the 4K ultra results, less so in 1440p ultra, while 1080p ultra/medium are mostly just for reference (and to check for any odd behavior). We also have the overall performance geomean, the rasterization geomean, and the ray tracing geomean.
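For clarity on how those geomeans work: a geometric mean weights every game equally regardless of its absolute frame rate. Here’s a minimal sketch of the calculation, using made-up FPS values rather than our measured results:

```python
# Minimal sketch of how per-game results roll up into geometric means.
# The FPS values below are placeholders, not our measured results.
from math import prod

def geomean(values: list[float]) -> float:
    return prod(values) ** (1 / len(values))

raster_fps = [142.0, 98.5, 121.3, 77.9]  # hypothetical rasterization results
rt_fps = [88.2, 64.7]                    # hypothetical ray tracing results

print(f"Raster geomean:  {geomean(raster_fps):.1f} FPS")
print(f"RT geomean:      {geomean(rt_fps):.1f} FPS")
print(f"Overall geomean: {geomean(raster_fps + rt_fps):.1f} FPS")
```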

We’ll start with the rasterization suite of 16 games, as that’s arguably still the most useful measurement of gaming performance. Granted, an extreme GPU like the 5090 can be used with full RT enabled even at 4K, but that’s something we’re still working to test (on multiple GPUs so that we can actually compare the 5090 with something else).

We’re providing the initial RTX 5090 charts now with limited to no commentary, as testing is still ongoing. RX 7900 XTX testing is in progress and some of the games have results (but it’s not in the geomean yet due to missing tests). The RTX 3090 will be added in the next couple of days. Let us know what other cards you most want to see — though the 5090 mostly stands alone.

More analysis to come as we finish testing other GPUs…

Our overall rasterization results help provide our baseline impressions of any new GPU. Sure, the 5090 has even better ray tracing, AI, and frame generation technology compared to the previous generation, but rasterization is used in virtually every game, while RT and DLSS 3/4 only appear in a subset of games. Of course, with a GPU this fast, any game that doesn’t use RT will probably just run as fast as your CPU allows, at least for now.

We tested the RTX 5090 Founders Edition on our Ryzen 7 9800X3D and Core i9-13900K PCs. The results are at times interesting and/or strange. The 13900K often wins at 4K ultra, but there are also games where that CPU performs very poorly. This is part of what gives us pause in declaring a final score, as it seems like either our test platforms or the drivers aren’t quite playing nicely yet.

Given the $2,000 price tag, 4K ultra is clearly the target for the RTX 5090 — or maybe even higher resolutions if you have an appropriate display. We’ll put the 4K charts first, then 1440p, and 1080p will be at the end. Below are the individual rasterization results, in alphabetical order with limited commentary.

Assassin’s Creed Mirage uses the Ubisoft Anvil engine and DirectX 12. It’s an AMD-promoted game as well, though these days that doesn’t necessarily mean it always runs better on AMD GPUs. It could be CPU optimizations for Ryzen, or more often it just means a game has FSR2 or FSR3 support — FSR2 in this case. It also supports DLSS and XeSS upscaling.

Baldur’s Gate 3 is our sole DirectX 11 holdout — it also supports Vulkan, but that performed worse on the GPUs we checked, so we opted to stick with DX11. Built on Larian Studios’ Divinity Engine, it’s a top-down perspective game, which is a nice change of pace from the many first person games in our test suite.

Black Myth: Wukong is one of the newer games in our test suite. Built on Unreal Engine 5, with support for full ray tracing as a high-end option, we opted to test using pure rasterization mode. Full RT may look a bit nicer, but the performance hit is quite severe. (Check our linked article for our initial launch benchmarks if you want to see how it runs with full RT enabled. We’ll do supplemental testing on the 5090 as soon as we’re able to find the time!)

Dragon Age: The Veilguard uses the Frostbite engine and runs via the DX12 API. It’s one of the newest games in my test suite, having launched this past Halloween. It’s been received quite well, though, and in terms of visuals I’d put it right up there with Unreal Engine 5 games — without some of the LOD pop-in that happens so frequently with UE5.

Final Fantasy XVI came out for the PS5 last year, but it only recently saw a Windows release. It’s also either incredibly demanding or quite poorly optimized, but it does tend to be very GPU limited. Our test sequence consists of running a path around the town of Lost Wing.

We’ve been using Flight Simulator 2020 for several years, and there’s a new release below. But it’s so new that we also wanted to keep the original around a bit longer as a point of reference. We’ve switched to using the ‘beta’ (eternal beta) DX12 path for our testing now, as it’s required for DLSS frame generation even if it runs a bit slower on Nvidia GPUs.

Flight Simulator 2024 is the latest release of the storied franchise, and it’s even more demanding than the above 2020 release — with some differences in what sort of hardware it seems to like best. Where the 2020 version really appreciated AMD’s X3D processors, the 2024 release tends to be more forgiving to Intel CPUs, thanks to improved DirectX 12 code (DX11 is no longer supported).

God of War Ragnarök released for the PlayStation two years ago and only recently saw a Windows version. It’s AMD promoted, but it also supports DLSS and XeSS alongside FSR3. We ran around the village of Svartalfheim, which is one of the most demanding areas in the game that we’ve encountered.

Hogwarts Legacy came out in early 2023, and it uses Unreal Engine 4. Like so many Unreal Engine games, it can look quite nice but also has some performance issues with certain settings. Ray tracing in particular can bloat memory use and tank framerates, and also causes hitching, so we’ve opted to test without ray tracing. (We’ll do some supplemental RT tests with this one on page six.)

Horizon Forbidden West is another two-year-old PlayStation port, using the Decima engine. The graphics are good, though I’ve heard from at least a few people who think it looks worse than its predecessor — excessive blurriness being a key complaint. But after using Horizon Zero Dawn for a few years, it felt like a good time to replace it.

The Last of Us, Part 1 is another PlayStation port, though it’s been out on PC for about 20 months now. It’s also an AMD promoted game, and really hits the VRAM hard at higher quality settings. And if you have 32GB like the RTX 5090, it’s not a problem.

A Plague Tale: Requiem uses the Zouna engine and runs on the DirectX 12 API. It’s an Nvidia promoted game that supports DLSS 3, but neither FSR nor XeSS. (It was one of the first DLSS 3 enabled games as well.) It has RT effects, but only for shadows, so it doesn’t do much to improve the look of the game while still tanking performance.

Stalker 2 is another Unreal Engine 5 game, but without any hardware ray tracing support — the Lumen engine also does “software RT” that’s basically just fancy rasterization as far as the visuals are concerned, though it’s still quite taxing. VRAM can also be a serious problem when trying to run the epic preset, with 8GB cards struggling at most resolutions.

Star Wars Outlaws uses the Snowdrop engine, and we wanted to include a mix of engines and options in our suite. It also has a bunch of RT options, though our baseline tests don’t enable ray tracing. We’ll look at RT performance in our supplemental testing.

Starfield uses Creation Engine 2, an updated version of the Bethesda engine that previously powered the Fallout and Elder Scrolls games. It’s another fairly demanding game, and we run around the city of Akila, which is one of the more taxing locations in the game.

Wrapping things up, Warhammer 40,000: Space Marine 2 is yet another AMD promoted game. It runs on the Swarm engine and uses DirectX 12, without any support for ray tracing hardware. We use a sequence from the introduction, which is generally less demanding than the various missions you get to later in the game but has the advantage of being repeatable and not having enemies everywhere.

Nvidia was the driving force behind the creation of DirectX Raytracing (DXR) and related APIs like Vulkan Ray Tracing. It all started with the Turing RTX 20-series GPUs, with each subsequent generation doubling the ray/triangle intersection calculation rates (per RT core).

Not surprisingly, most RT games end up being better optimized for Nvidia GPUs, because Nvidia has been pushing the tech far more than AMD or Intel. We’ve selected six reasonably demanding RT games for our testing, and we’ll have additional supplemental RT / full RT / DLSS 4 testing on the next page.

The RTX 5090 can dominate when it comes to ray tracing performance. The RTX 4090 was already far ahead of the competition, at times more than doubling the performance of the closest non-Nvidia GPU. With the RTX 5090, Nvidia delivers up to 42% higher performance than the incumbent… except when things go wrong.

To be clear, we think any performance regressions shown in our charts on the 5090 are the result of having a new architecture, which requires different driver tuning to extract maximum performance. Blackwell isn’t just Ada Lovelace with FP4 support, in other words, and the drivers at times feel a bit raw. We expect this will improve in the coming days, particularly in games where the 5090 falls far behind the 4090 (e.g., Minecraft on the 9800X3D).

We also have our big picture overview that combines the rasterization and ray tracing results. These charts use the geomean of all 22 games that we’ve tested, with RT accounting for a bit more than a quarter of the overall score.

We think it’s fair to say that there are a lot of RT games where the tech doesn’t really do much other than tank performance, but there are also a select few games that definitely benefit visually from RT. So, we have far more rasterization games in our suite but still include a handful of RT games to give a more balanced overall view of how the GPUs stack up.

Due to the performance anomalies we’ve already mentioned a few times, the overall scores are subject to change. The 1080p results of the RTX 5090 in particular look like there’s plenty of room for improvement. Below are the individual game charts.

Avatar: Frontiers of Pandora uses ray tracing, but it’s not particularly forthcoming on when and where it’s used. Reflections in general don’t appear to use RT, which is one of the most noticeable upgrades RT can provide. Instead, it’s used for shadows and possibly global illumination and some other effects. What I can say for sure is that nothing in the menus (other than “BVH Quality”) directly mentions ray tracing, and the performance hit doesn’t seem to be as severe as in some games. Still, since there’s supposed to be RT of some form, this one gets lumped into our DXR suite.

If you want a game where ray tracing is both clearly visible and actually makes the game look better, without totally destroying performance, look no further than Control. It’s now five years old, and we’re using the Ultimate version, but it’s still arguably the best example of using RT well. And probably a lot of that is because you’re running around the Federal Bureau of Control, an office space of sorts that has good reasons to have plenty of glass windows that reflect the scenery.

The RTX 5090 has some rendering errors in Control right now, so these results could change quite a bit with updated drivers. There’s also a hard 240 FPS cap, and the closer a GPU gets to that mark (or could exceed it), the worse the 1% lows become. That’s why the 5090 looks somewhat bad at 1080p.

Possibly the most hyped up use of RT in a game, Cyberpunk 2077 launched with more RT effects than other games of its era, and later the 2.0 version added full path tracing and DLSS 3.5 ray reconstruction. Ray reconstruction ends up looking the best but only works on Nvidia GPUs, so as with upscaling it can be a case of trying to compare apples and oranges.

We’re using medium settings with RT lighting at medium and RT reflections enabled, and then the step up uses the RT-Ultra preset. In all cases, any form of upscaling or frame generation gets turned off.

F1 24 enables several RT effects on the ultra preset but leaves them off on medium. But then 1080p medium runs at hundreds of frames per second, so we went ahead and turned all the RT effects on for our testing. We use the Great Britain track for testing, in the rain naturally.

Minecraft supports full path tracing, as well as DLSS 2 upscaling on RTX cards. We don’t enable DLSS, and on the 5090 the game currently won’t let us enable it anyway (a game or driver bug of some form).

The 5090 really underperformed in this game when running on the 9800X3D, but did better on the 13900K; Nvidia is looking into the situation.

Last on our list of RT-enabled games, Spider-Man: Miles Morales doesn’t look as nice with RT turned on as the previous Spider-Man Remastered. The reflections are less obvious, and perhaps performance is better as a result. But beyond the RT effects, maxed-out settings in Miles Morales definitely need more than 8GB of VRAM. The 5090 shrugs nonchalantly as it flexes its 32GB of memory.

Nvidia GeForce RTX 5090 Founders Edition charts

(Image credit: Tom’s Hardware)

One final ray tracing benchmark we have is the 3DMark DXR Feature Test, where we report the average FPS rather than the calculated score. This is similar to full RT in a game, only done via a standalone benchmark and perhaps in a more vendor agnostic fashion. The RTX 5090 worked in this test, but it was slower than the 4090. Clearly, either some driver tuning or an application update are in order.

We’re not finished testing the RTX 5090 in games that support DLSS 4 — either natively or via the Nvidia App override method. It’s a topic that deserves a lot of attention, so we’ll be working on fleshing things out before finalizing our review. Stay tuned! In the meantime, we’re including links to some Nvidia content (that can be a bit heavy on the marketing speak) that covers some of the new features and technologies.

DLSS 4 | New Multi Frame Gen & Everything Enhanced – YouTube

Designing the Founders Edition | GeForce RTX 5090 – YouTube

GeForce RTX 5090 Founders Edition Overview – YouTube

There are over 75 games and applications that are currently DLSS 4 enabled (via the Nvidia App overrides). You can also use the new DLSS transformer model to get improved image quality, with a slight hit to performance relative to the previously existing DLSS CNN (Convolutional Neural Network) models. We’ll be looking into these areas of the RTX 5090 in the coming days.

Modern GPUs aren’t just about gaming. They’re used for video encoding, professional applications, and increasingly they’re being used for AI. We’ve revamped our professional and AI test suite to give a more detailed look at the various GPUs. We’ll start with the AI benchmarks, as those tend to be more important for a wider range of users.

Note that we had some issues getting some of these tests to work on the RTX 5090. Procyon needs to be updated for the tests we normally run.

Procyon has several different AI tests, and for now we’ve run the AI Vision benchmark along with two different Stable Diffusion image generation tests. The tests have several variants available that are all determined to be roughly equivalent by UL: OpenVINO (Intel), TensorRT (Nvidia), and DirectML (for everything, but mostly AMD). There are also options for FP32, FP16, and INT8 data types, which can give different results. We tested the available options and used the best result for each GPU.

Right now, the RTX 5090 fails to run any of these TensorRT workloads.

ML Commons’ MLPerf Client 0.5 test suite does AI text generation in response to a variety of inputs. There are four different tests, all using the LLaMa 2 7B model, and the benchmark measures the time to first token (how fast a response starts appearing) and the tokens per second after the first token. These are combined using a geometric mean for the overall scores, which we report here.

While AMD, Intel, and Nvidia are all ML Commons partners and were involved with creating and validating the benchmark, it doesn’t seem to be quite as vendor agnostic as we would like. AMD and Nvidia GPUs only currently have a DirectML execution path, while Intel has both DirectML and OpenVINO as options. Intel’s Arc GPUs score quite a bit higher with OpenVINO than with DirectML.

Nvidia GeForce RTX 5090 Founders Edition charts

(Image credit: Tom’s Hardware)

We’ll have some additional SPECworkstation 4.0 results below, but there’s an AI inference test composed of ResNet50 and SuperResolution workloads that runs on GPUs (and potentially NPUs, though we haven’t tested that). We calculate the geometric mean of the four results given in inferences per second, which isn’t an official SPEC score but it’s more useful for our purposes.

For our professional application tests, we’ll start with Blender Benchmark 4.3.0, which has support for Nvidia Optix, Intel OneAPI, and AMD HIP libraries. Those aren’t necessarily equivalent in terms of the level of optimizations, but each represents the fastest way to run Blender on a particular GPU at present.

Nvidia GeForce RTX 5090 Founders Edition charts

(Image credit: Tom’s Hardware)

SPECworkstation 4.0 has two other test suites that are of interest in terms of GPU performance. The first is the video transcoding test using HandBrake, a measure of the video engines on the different GPUs and something that can be useful for content creation work. We use the average of the 4K to 4K and 4K to 1080p scores. Note that this only evaluates speed of encoding, not image fidelity.

Our final professional app tests consist of SPECworkstation 4.0’s viewport graphics suite. These are basically the same tests as SPECviewperf 2020, only updated to the latest application versions. (Also, Siemens NX isn’t part of the suite now.) There are seven individual application tests, and we’ve combined the scores from each into an unofficial overall score using a geometric mean.


All our gaming tests are conducted using an Nvidia PCAT v2 device, which allows us to capture total graphics card power, GPU clocks, GPU temperatures, and some other data as we run each gaming benchmark. We have separate 1080p, 1440p, and 4K results for each area.

Despite the 575W TGP (Total Graphics Power, the power used by the entire graphics card) rating, the RTX 5090 Founders Edition doesn’t often come anywhere near that limit. Technically, a few games did slightly exceed 575W, but most were far below that mark. Check the table at the bottom of this page for more details.

Clock speeds among the different GPUs and architectures aren’t particularly important, but it’s interesting to see where things land. The 5090 clocks lower at 4K, where power use is higher and so it has to keep things in check. At 1080p, though, it ends up with higher clocks than the 4090.

Like the clock speeds, comparing GPU temperatures without considering other aspects of the cards doesn’t make much sense. One card could run its fans at higher RPMs, generating more noise while being “cooler.” So these graphs should be used alongside the noise and performance results.

Considering the reduction in size combined with the increased power draw, it’s impressive that the 5090 Founders Edition does this well. It doesn’t beat the 4090 on thermals, but it’s not too far behind.

We check noise levels using an SPL (sound pressure level) meter placed 10cm from the card, with the mic aimed right at the center of one fan: the center fan if there are three fans, or the right fan for two fans. This helps minimize the impact of other noise sources like the fans on the CPU cooler. The noise floor of our test environment and equipment is around 31–32 dB(A).

[Charts to come, sorry! Still testing…]

Here’s the full table of testing results, with FPS/$ calculated using MSRP for both the 5090 and 4090 cards. (Neither is available at MSRP right now.)
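The FPS/$ metric itself is just the geomean frame rate divided by launch MSRP. Here’s a quick sketch with placeholder FPS values (see the table for the measured geomeans):

```python
# FPS per dollar at launch MSRP. The FPS values below are placeholders
# for illustration; see the table for the measured geomeans.
cards = {
    "RTX 5090": {"msrp": 1999, "fps_4k_ultra": 120.0},  # hypothetical FPS
    "RTX 4090": {"msrp": 1599, "fps_4k_ultra": 96.0},   # hypothetical FPS
}

for name, card in cards.items():
    value = card["fps_4k_ultra"] / card["msrp"]
    print(f"{name}: {value:.3f} FPS per dollar at ${card['msrp']} MSRP")
```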


What’s there to say about a $2,000 graphics card? Yes, it’s fast — usually. Yes, it can use a lot of power — sometimes. It’s also really expensive, and depending on supply and demand, retail pricing could go even higher. The RTX 5090 ‘only’ costs $400 more than its 4090 predecessor, a 25% price increase. It’s also about 25% faster (at 4K) in our testing, though these should very much be considered preliminary results.

The RTX 5090 is a lot like this initial review: It’s a bit of a messy situation — a work in progress. We’re not done testing, and Nvidia isn’t done either. Certain games and apps need updates and/or driver work. Nvidia usually does pretty well with drivers, but new architectures can change requirements in somewhat unexpected ways, and Nvidia needs to keep tuning and optimizing. We’re also sure Nvidia doesn’t need us to tell it that.

Gaming performance is very much about running 4K and maxed-out settings. If you only have a 1440p or 1080p display, you’re better off saving your pennies and upgrading your monitor — and probably the rest of your PC as well! — before spending a couple grand on a gaming GPU.

Unless you’re also interested in non-gaming applications and tasks, particularly AI workloads. If that’s what you’re after, the RTX 5090 could be a perfect fit.


The RTX 5090 is the sort of GPU that every gamer would love to have, but few can actually afford. If we’re right and the AI industry starts picking up 5090 cards, prices could end up being even higher. Even if you have the spare change and can find one in stock (next week), it still feels like drivers and software could use a bit more time baking before they’re fully ready.

Due to time constraints, we haven’t been able to fully test everything we want to look at with the RTX 5090. We’ll be investigating the other areas in the coming days, and we’ll update the text, charts, and the score as appropriate. For now, the score stands as it is until our tests are complete.

And there’s more to come. We should be getting a few AIB partner 5090 cards in the coming days, and it will be interesting to see how those stack up against the 5090 Founders Edition. I’ll admit to being skeptical of the cooling abilities of the 5090 FE when it was first revealed, but while it did get warmer than the 4090 FE, in general it performed admirably. It’s definitely not the scorcher that the RTX 3080 Ti FE was.

For those that can’t afford or justify spending two grand, we also have the RTX 5080 coming next week. Thankfully, a lot of the testing overlaps with what we’ve already done for this article. Stay tuned to see how it stacks up, and if MFG works just as well on a $999 card as on a $1,999 GPU.
