Uncategorized

AAA gaming comes to Apple M1 thanks to the latest Asahi Linux build — Control, Cyberpunk 2077, and The Witcher 3 are playable with respectable frame rates

You rarely hear of Apple users playing AAA games on their MacBooks, much less wanting to run AAA games on Linux on Apple hardware. However, the developers behind Asahi Linux have announced alpha driver support for x86-based Windows games in Linux on Apple's Arm-based M1 and M2 silicon, making Asahi Linux the world's first Linux distro to accomplish such a feat.

Asahi Linux's gaming toolkit now supports x86 emulation and Windows compatibility on top of its Vulkan 1.3 drivers. Asahi Linux is the only distro that ships conformant OpenGL, OpenCL, and Vulkan drivers for Apple Arm-based hardware, making x86 AAA gaming possible through Linux.

Asahi's translation stack comprises a whopping four layers to get x86 Windows games to work: FEX emulates x86 instructions on Arm hardware, Wine translates Windows calls to Linux, and DXVK and Proton translate DirectX API calls to Vulkan.

Adding further complexity is a workaround for page sizes: Apple systems use 16K pages, while Windows x86 games expect 4K pages. To get around this restriction, the Asahi devs virtualize a secondary Arm Linux kernel with a different page size. The process involves running an x86 game inside a tiny virtual machine using muvm (a micro virtual machine service), then passing through the devices required to play the game, such as the GPU and peripheral inputs.

Asahi's x86 Windows compatibility is currently in the alpha stage, and the developers are working toward a 1.0 release. Remaining blockers include missing sparse texturing support in Asahi's Vulkan 1.3-based Honeykrisp driver, which is required to unlock broader DX12 game support. Honeykrisp is the first conformant Vulkan driver for M1 silicon (for any operating system) and the only rendering driver that can run these games in Linux.

Multiple games, however, including Cyberpunk 2077, Hollow Knight, Portal 2, Fallout 4, and Control, were shown running on Asahi's compatibility layer for x86 Windows games. […]
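The page-size mismatch is easy to observe from user space. Below is a minimal illustrative Python sketch (our own, not part of the Asahi toolchain) that queries the running kernel's page size the way any program's allocator would; on an Asahi host kernel it should report 16384, while inside the 4K-page guest kernel that muvm boots it should report 4096:

```python
import os

# Ask the kernel for its page size, as any program's memory allocator would.
# On an Asahi Linux host kernel this is expected to return 16384 (16K pages);
# inside the 4K-page guest kernel the game actually runs in, it returns 4096,
# matching the assumption baked into x86 Windows games.
page_size = os.sysconf("SC_PAGE_SIZE")
print(f"Kernel page size: {page_size} bytes")

if page_size != 4096:
    print("x86 Windows games that hard-code 4K pages would misbehave here;")
    print("Asahi sidesteps this by running them inside a 4K-page VM via muvm.")
```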

Uncategorized

New motherboards for Intel Arrow Lake CPUs are announced

We’ve put together a list of all the Z890 motherboard models coming from Gigabyte, Asus, ASRock, and MSI. For full details about Arrow Lake and Intel’s new Core Ultra 200S series CPUs, check out our previous coverage. In a nutshell, Z890 is Intel’s new flagship chipset powering the LGA 1851 platform, and it comes with updated connectivity features, including Thunderbolt 4 support and built-in Wi-Fi 6E.

All four motherboard makers are going the extra mile by adding features beyond what the chipset supports, primarily on higher-tier board models. These additions include Thunderbolt 5, Wi-Fi 7, 2.5GbE, and Bluetooth 5.4 support. A handful of board models should also come with 10GbE, just like the previous generation of Intel boards. Z890 motherboards also see a drastic improvement in memory support, with many boards rated for memory speeds of 9,000 MT/s or faster.

Beyond connectivity, all the board makers have refreshed most of their board models from the past generation, adding brand-specific aesthetic touches and new features. Asus, for instance, has incorporated a new screwless M.2 mounting solution in its motherboards, as well as a new PCIe x16 slot that doesn’t require unlocking a latch to release the GPU. […]

Uncategorized

OBS cuts the cord on Kepler GPU NVENC support — version 31.0.0 Beta 1 no longer works with GTX 600 and GTX 700 GPU hardware encoders

OBS Studio is dropping support for Nvidia’s first generation of GPUs with hardware video encoding support. The latest version of the streaming and recording software, 31.0.0 Beta 1, no longer works with GTX 600 and 700 series Kepler GPUs.

The developers behind OBS did not give a reason for dropping Kepler support, but it was inevitable that this would happen at some point. Three years ago, Nvidia dropped driver support for the Kepler-powered GTX 700 series lineup, making software compatibility with the aging GPU architecture rather pointless for modern-day applications.

Kepler was Nvidia’s first GPU architecture to ship with a dedicated hardware video encoder: the first-generation NVENC engine, which could only encode H.264. It wasn’t known for high quality and was often outperformed by software encoders running on the CPU. It was also very limited in features, supporting only 4:2:0 chroma subsampling and lacking a lossless encoding option. Regardless, it laid the groundwork for future NVENC encoders to compete with software encoders in both quality and features.

If you are one of the few people still using a GTX 600 or 700 series card for encoding, prior versions of OBS Studio still support NVENC on Kepler-based GPUs.

OBS Studio 31.0.0 Beta 1 adds many new features and changes to the streaming/recording application, including new Nvidia blur and background filters and first-party YouTube chat features. Nvidia-specific changes include a refactored NVENC implementation with various improvements: SDK 12.2 features such as split encoding are now supported, alongside B-frame references and Target Quality VBR mode from older SDKs. This NVENC refactoring could be why Kepler support was stripped, but we can’t be sure. […]
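If you want to check whether your system still exposes NVENC before upgrading OBS, one quick sanity check is to ask FFmpeg, which registers NVENC encoders independently of OBS. Here is a small Python sketch (assuming FFmpeg is installed and on PATH; note that a listed encoder does not guarantee the installed driver still supports it on a Kepler card):

```python
import subprocess

# List FFmpeg's registered encoders ("-encoders" is a standard FFmpeg flag)
# and filter for NVENC-backed entries such as h264_nvenc.
result = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True, check=True,
)

nvenc_lines = [line for line in result.stdout.splitlines() if "nvenc" in line]
if nvenc_lines:
    print("NVENC encoders registered with FFmpeg:")
    print("\n".join(nvenc_lines))
else:
    print("No NVENC encoders found; hardware encoding looks unavailable.")
```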

Uncategorized

Intel Itanium gets a new lease on life, sort of — GCC 15 “un-deprecates” Linux compiler support

In a strange twist of fate, the deprecated Itanium IA-64 architecture is getting a new lease on life, at least sort of. Phoronix reports that the upcoming GCC 15 compiler for Linux will continue to support Itanium processors.

Initially, the plan was to remove Itanium IA-64 support from GCC 15; however, an open-source developer named René Rebe stepped in to keep Itanium compiler support alive. The developer notes, “The following un-deprecates ia64*-*-linux for GCC 15. Since we plan to support this for some years to come.”

It is a mystery why there are still users in the open-source community who want to keep Itanium support alive, but apparently, the user base is large enough to warrant continued support.

Remember that this “un-deprecation” involves only the GCC 15 compiler and nothing more. Itanium IA-64 support was dropped with Linux 6.7 last year, so these niche chips can’t run the latest Linux kernels. As the name suggests, this change affects only the compiler; it benefits users who run older, still-supported Linux kernels but want the improvements GCC 15 brings over previous compiler versions.

Itanium was one of the most exotic CPU architectures Intel ever built. Developed in conjunction with HP, it was Intel’s first 64-bit CPU architecture. Itanium came out before AMD’s x86-64 instruction set and did not rely on x86 at all. Instead, it used the IA-64 architecture, a Very Long Instruction Word (VLIW) design in which the software compiler determines, ahead of execution, which instructions can run in parallel. This was a unique way of scheduling work: x86 CPUs rely on a hardware scheduler to dispatch instructions to their execution units at run time, whereas IA-64 shifted that burden to the compiler.

Unfortunately for Intel and HP, IA-64 struggled to gain adoption in the broader market, primarily due to its incompatibility with native x86 applications. When AMD released the x86-64 instruction set a few years after IA-64 debuted, the market immediately gravitated toward AMD’s implementation because it was built on top of the x86 instruction set and therefore ran x86 apps natively.

Intel officially discontinued Itanium in 2021, the year it shipped the last batch of 9700-series Kittson CPUs. Hewlett Packard Enterprise is now the last Itanium customer left and is expected to support its Itanium-based servers until late 2025. […]
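To make the scheduling difference described above concrete, here is a toy Python sketch (purely illustrative; real IA-64 bundles are 128 bits wide and far more constrained) of the compiler-side idea: independent operations are packed into fixed-width bundles ahead of time, whereas an x86-style core would discover the same parallelism in hardware at run time:

```python
# Toy illustration of VLIW-style static scheduling. The "compiler" groups
# operations with no outstanding data dependencies into 3-slot "bundles",
# so the CPU can issue each bundle's slots in parallel with no hardware
# dependency checking.

ops = [
    ("a", set()),        # a = load ...
    ("b", set()),        # b = load ...
    ("c", {"a", "b"}),   # c = a + b   (depends on a and b)
    ("d", set()),        # d = load ...
    ("e", {"c", "d"}),   # e = c * d   (depends on c and d)
]

bundles, done = [], set()
pending = list(ops)
while pending:
    bundle = []
    for op in list(pending):
        name, deps = op
        # An op may issue only if its dependencies completed in earlier bundles.
        if deps <= done and len(bundle) < 3:
            bundle.append(name)
            pending.remove(op)
    done.update(bundle)
    bundles.append(bundle)

print(bundles)  # [['a', 'b', 'd'], ['c'], ['e']]
```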

Uncategorized

Nvidia’s Jensen Huang will be CES 2025’s keynote speaker as RTX 50 rumors abound

Nvidia CEO Jensen Huang will deliver the main keynote at the Consumer Electronics Show in 2025, according to a press release distributed via PR Newswire. Huang will speak on Monday, January 6, a day before the show officially starts on January 7 in Las Vegas, Nevada.

”We are thrilled to welcome Jensen Huang as a keynote speaker at CES 2025,” said Gary Shapiro, CEO of the Consumer Technology Association (CTA). “Jensen is a true visionary in the tech industry. His insights and innovations improve the world, enhance the economy, and will inspire our CES audience.”

With Nvidia’s CEO leading the main keynote, there is a good chance that the RTX 50 series could be announced at the show. Reputable leakers recently lit up social media with detailed specifications for the RTX 5090 and RTX 5080, as well as a claimed release date during CES 2025.

CES is not traditionally the show where Nvidia announces its upcoming desktop gaming GPUs. However, that strategy shifted when the company announced its RTX 40-series Super refresh at CES earlier this year. Apparently, the increased awareness and hype around AI at recent CES events is changing Nvidia’s timeline for launching its gaming GPUs. You can assume that Nvidia will beat the AI drum hard for the RTX 50 series when it arrives, just as it has for prior RTX generations, thanks to its modern GPUs’ AI-focused Tensor cores.

Speaking of AI, we can also expect Huang to talk extensively about AI-focused products during the keynote. Since this is a consumer-focused trade show, we can expect updates to ACE, Nvidia’s gaming-focused NPC platform powered by AI-generated dialogue. We can also expect Nvidia to discuss consumer-facing AI topics, such as new AI-powered applications that have yet to be announced. Depending on how deep Huang goes into AI, we could also hear updates on Nvidia’s business-focused software, such as Omniverse and its robotics platform, on its enterprise AI GPUs, such as new Blackwell-based server models, and on supply.

If Nvidia does unveil the RTX 50 series during the keynote, expect a big portion of it to focus on ray-traced graphics, as with previous RTX-focused keynotes from the company. Nvidia will likely discuss the performance and fidelity improvements the RTX 50 series brings, along with potentially new, as-yet-unannounced ray-tracing and path-tracing technologies designed to speed up performance. We wouldn’t be surprised if Nvidia also talks about new DLSS updates during the keynote, possibly even a DLSS 4.0 announcement.

Nvidia CEO Jensen Huang will deliver his CES 2025 keynote at 6:30 PM on January 6, so mark your calendars. […]

Uncategorized

GTA IV footage generated with AI shows the power of AI for future PC game remasters

Gaming is one area where AI has not yet made significant advances (upscaling aside), but that could change soon. A user named Indiegameplus on the r/aivideo subreddit shared a video they reportedly made of GTA IV remastered with photorealistic graphics, using only AI text prompts.

The video portrays a GTA character roaming an ultra-realistic-looking city from a third-person viewpoint. The first scene shows a red Ferrari-style sports car (though the model itself looks custom) driving through a sprawling metropolis at night, with LED advertising displays on most of the surrounding buildings. The next scene shifts to a third-person view of the main character walking toward an NPC bystander on the sidewalk of a busy city at sunset or dawn.

Skipping ahead, the main gameplay highlights include the protagonist walking up to a stopped vehicle and pulling a classic GTA-style carjacking, dragging the driver out of the driver’s seat and taking the vehicle. Hilariously, the victim simply walks away after being thrown out of his car, as if nothing happened. The next scene returns to the earlier bystander, with the main character pulling a bat on the NPC and taking them down with it.

The video is an AI-generated take on a photorealistic GTA IV remaster gameplay trailer. The creator purportedly made it entirely with Runway’s latest Gen-3 Alpha model. Runway is an AI research company that builds content-generation tools focused on video generation guided by user-supplied text prompts.

The video shows the power of modern-day AI and offers a glimpse at what it could mean for gamers in the future. With video-generation tools already this good, it’s logical that we will see this kind of text-prompt generation make its way into game engines, or into AI tools optimized for building video games rather than video content.

Before fully AI-made games arrive, we could see the graphics portion of the game-development pipeline handed to AI first, with graphical effects such as object rendering, lighting, and anti-aliasing processed through machine-learning hardware rather than classic 3D rasterization and ray-tracing hardware. Nvidia itself believes this is the future, noting that something like a DLSS version “10” could perform full neural rendering. […]

Uncategorized

Core Ultra 5 235 performance is comparable to the Core i5-14500 in early benchmark — Non-K Arrow Lake chip surfaces with 14 cores and 5 GHz boost clock

Intel’s upcoming Core Ultra 5 235 Arrow Lake CPU has been benchmarked in Geekbench 6. Spotted by BenchLeaks on X, the Ultra 5 SKU produced multi-core and single-core results similar to those of its Core i5-14500 and Core i5-14600 counterparts.

The Core Ultra 5 235 scored 2,634 points in the single-core benchmark and 13,293 points in the multi-core benchmark. These results are barely any better (if at all) than those of Intel’s outgoing 14th-Generation Core i5 parts. According to the Geekbench database, the Core i5-14500 scored 2,545 points and 13,288 points in Geekbench 6’s single-core and multi-core benchmarks, respectively, while the Core i5-14600 scored 2,600 points and 13,765 points.

This makes the Arrow Lake chip a mere 1% faster than the Core i5-14600 and 3% faster than the Core i5-14500 in single-core performance. In multi-core performance, the Arrow Lake CPU loses out, with the Core i5-14600 being 4% faster and the Core i5-14500 virtually equaling the 235’s score.

The only previous-generation Core i5 that the Core Ultra 5 235 beats by any noticeable margin is the Core i5-14400: the Arrow Lake chip is 10% faster in single-core and 18% faster in multi-core than the vanilla Core i5-14400.

CPU | Single-Core | Multi-Core
Core Ultra 5 235 | 2,634 | 13,293
Core i5-14600 | 2,600 | 13,765
Core i5-14500 | 2,545 | 13,288
Core i5-14400 | 2,395 | 11,288

These results aren’t fantastic, but as with all Geekbench results, take them with a grain of salt. Geekbench is just one benchmark and won’t reveal the CPU’s real-world performance, and the Core Ultra 5 235 could be an engineering or pre-production sample.

Another aspect worth mentioning is that the power limits and cooling used for all of these chips are unknown. This is arguably the most significant factor, since varying power limits (or TDPs) can drastically change CPU performance, even on desktops.

The Core Ultra 5 235 shares the same six P-core and eight E-core configuration as its Raptor Lake predecessors. However, the Lion Cove P-cores in Arrow Lake are an entirely new design compared to the Raptor Cove and Golden Cove cores used in previous Intel hybrid architectures, with noteworthy IPC improvements. Theoretically, the Core Ultra 5 235 should perform better than its predecessors.

Arrow Lake K-series processors are rumored to launch on October 24. However, non-K parts such as the Core Ultra 5 235 will arrive later, so we may have to wait a bit longer to grasp what the Core Ultra 5 235 can do. […]
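The percentage deltas quoted above are easy to verify from the table. A quick Python sketch, using the Geekbench 6 scores listed:

```python
# Geekbench 6 scores from the table above: (single-core, multi-core).
scores = {
    "Core Ultra 5 235": (2634, 13293),
    "Core i5-14600":    (2600, 13765),
    "Core i5-14500":    (2545, 13288),
    "Core i5-14400":    (2395, 11288),
}

ultra_sc, ultra_mc = scores["Core Ultra 5 235"]
for cpu, (sc, mc) in scores.items():
    if cpu == "Core Ultra 5 235":
        continue
    # Positive = the Core Ultra 5 235 is faster; negative = it is slower.
    print(f"{cpu}: single-core {100 * (ultra_sc / sc - 1):+.1f}%, "
          f"multi-core {100 * (ultra_mc / mc - 1):+.1f}%")
```

Running this reproduces the article's figures: roughly +1% and +3% single-core over the i5-14600 and i5-14500, about 3-4% behind the i5-14600 in multi-core, and +10%/+18% over the i5-14400.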

Uncategorized

Lunar Lake allegedly smokes Z1 Extreme handheld gaming champ in early gaming benchmarks

New benchmarks of Intel’s Lunar Lake CPUs reveal that the efficiency-optimized architecture has some serious chops when it comes to gaming. YouTuber Geekerwan (极客湾) found that the Core Ultra 7 258V, with its Xe2 integrated graphics, surpasses AMD’s Ryzen AI 9 HX 370 and the Z1 Extreme handheld gaming champ in several gaming benchmarks.

In Black Myth: Wukong at 30W and 1080p, the Ultra 7 258V was 10% faster than the Ryzen AI 9 HX 370 and 20% faster than the Core Ultra 9 185H. Cyberpunk 2077 showed even greater gains for the Lunar Lake chip, with the Ultra 7 258V outpacing both the Ryzen AI 9 HX 370 and the Ultra 9 185H by 38%. In Red Dead Redemption 2, the 258V was 50% faster than its Ryzen counterpart and 37% faster than its Meteor Lake Ultra 9 predecessor.

Video: 英特尔Lunar Lake深度评测:轻薄本有救了! (“Intel Lunar Lake In-Depth Review: Thin-and-Lights Are Saved!”) – YouTube
This behavior was similar in the other games the reviewer tested, including CS2 and Genshin Impact. The only game where the Lunar Lake chip struggled was Elden Ring, where it was just 8% faster than the HX 370. The 258V also lost to its Meteor Lake predecessor in that game, with the Ultra 9 185H being 25% quicker.

When constrained to 15W, Lunar Lake’s performance shifts to an entirely new level. For the 15W comparisons, the reviewer added AMD’s Van Gogh APU from the Steam Deck and the Z1 Extreme to the Ryzen AI 9 HX 370 and Ultra 9 185H.

In Cyberpunk 2077, the 258V was roughly twice as fast as all four of its competitors, with a frame rate of 28 fps at 720p. The Z1 Extreme, HX 370, 185H, and Van Gogh APU all posted frame rates in the low teens, with the fastest AMD chip managing only 13 fps. The 1% lows were even more in Lunar Lake’s favor, with the 258V pulling a 3x to 9x advantage over the rest of the chips.

The review also compared the 258V to the Snapdragon X Elite, Ryzen AI 9 HX 370, and Core Ultra 9 185H at different TDPs to highlight Lunar Lake’s efficiency, deliberately giving the more power-hungry chips significantly higher power limits: the Ultra 7 258V was benchmarked at 30W, the HX 370 at 80W, and the 185H at 90W.

In this comparison, Black Myth: Wukong performed best on the HX 370 at 44 fps, but the Lunar Lake 258V wasn’t far behind at 35 fps, and the Meteor Lake chip was just behind its successor at 34 fps. The Snapdragon X Elite (unsurprisingly) failed to run the game. The Ryzen HX 370 was 25% faster than the Core Ultra 7 258V; however, it consumed nearly three times the power to get there.

Cyberpunk 2077 showed similar results. The Ryzen HX 370 led at 45 fps, Lunar Lake was the runner-up at 33 fps, and the Meteor Lake chip ran at 32 fps. The Snapdragon X Elite didn’t crash in this particular title but ran slowest at 21 fps. The Ryzen AI 9 HX 370 was 36% faster than the Lunar Lake Core Ultra 7 258V, but again, while consuming significantly more power.

Red Dead Redemption 2 was one of the few titles where the Lunar Lake Core Ultra 7 258V actually beat the whole lot, even with its severe power disadvantage. The 258V delivered 61 fps, the Ryzen AI 9 HX 370 56 fps, the Core Ultra 9 185H 48 fps, and the Snapdragon X Elite 35 fps. The Core Ultra 7 258V was 9% faster than the HX 370 while drawing roughly a third of the power.

The rest of the games showed similar results, with the Lunar Lake chip staying competitive with the rest of the pack despite consuming the least power. The Snapdragon X Elite might match or undercut Lunar Lake’s power draw, but the reviewer couldn’t get power readings for the Snapdragon chip. Regardless, the Arm-based processor was no match for Lunar Lake’s Xe2 graphics engine.

Geekerwan’s testing reveals that Lunar Lake packs a serious punch when it comes to power-efficient PC gaming. It is very power-efficient in regular applications and can outperform the best handheld gaming PCs on the market, including systems powered by AMD’s Z1 Extreme flagship. As a result, it is very possible we will see a Lunar Lake-powered handheld gaming PC in the future. […]
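The efficiency argument is clearest as performance per watt. A rough Python sketch using the configured power limits and frame rates quoted above (note these are TDP settings, not measured power draw, so the ratios are approximate):

```python
# (fps, configured TDP in watts) from the 30W-vs-80W-vs-90W comparison above.
results = {
    "Black Myth: Wukong": {
        "Core Ultra 7 258V": (35, 30),
        "Ryzen AI 9 HX 370": (44, 80),
        "Core Ultra 9 185H": (34, 90),
    },
    "Red Dead Redemption 2": {
        "Core Ultra 7 258V": (61, 30),
        "Ryzen AI 9 HX 370": (56, 80),
        "Core Ultra 9 185H": (48, 90),
    },
}

for game, chips in results.items():
    print(game)
    for chip, (fps, watts) in chips.items():
        # Higher fps/W = more gaming performance per unit of power budget.
        print(f"  {chip}: {fps} fps @ {watts}W -> {fps / watts:.2f} fps/W")
```

By this measure, the 258V's roughly 1.2 fps/W in Wukong is more than double the HX 370's 0.55 fps/W, even in the game where the Ryzen chip wins outright on raw frame rate.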

Uncategorized

AMD has introduced a new driver debugging tool, but it’s not for gamers — Driver Experiments for Adrenalin, aimed at developers for troubleshooting buggy code

AMD has introduced new functionality in its Adrenalin drivers aimed at developers and graphics programmers, called Driver Experiments. The new tool lets users turn parts of a game’s rendering pipeline on or off directly through the driver.

This powerful manipulation of game rendering is designed to help developers hunt down problems or bugs affecting their game or application. AMD gave several examples:

“If you are a graphics programmer, you may have encountered situations when you thought: – Why is my application crashing or working incorrectly on this particular GPU? If only I could disable some optimizations done by the graphics driver or enable some extra safety features, I would check if the bug goes away.

Or, if you don’t have the source code of your application on the machine where you are testing to be able to reconfigure it, you could say: – I wish I could pretend my GPU doesn’t support ray tracing, or force-off V-sync, and see if it helps.

Or, maybe you even thought: – I suspect this is a bug in the graphics driver! I am sure developers at AMD have some secret tools to control it that helps them with testing. If only such tool was available publicly…”

Driver Experiments lets developers troubleshoot these problems more quickly without changing any code in the application itself. Many features can be toggled through the Adrenalin driver alone, including mesh shaders, ray tracing, vector extensions, work graphs, low precision, and more.

Optimizations and safety features can also be manipulated, including floating-point optimizations, barrier optimizations, shader compiler optimizations, acceleration structure optimizations, mesh shader optimizations, ray tracing shader inlining, the shader cache, depth-stencil texture compression, V-Sync, and more.

To use Driver Experiments, you will need Radeon Developer Panel version 3.2 or later. Support is also limited to RX 5000, RX 6000, and RX 7000 series GPUs, Windows 10 or Windows 11, and Adrenalin driver version 24.9.1 or newer.

This new tool is aimed specifically at developers and programmers. Still, nothing stops diehard enthusiasts from experimenting with it to try to improve their gaming experience, even if that’s not what the software is intended for. Regardless, AMD warns that Driver Experiments’ level of control runs so deep that it can affect the stability of the application it is used on. […]

Uncategorized

Fujitsu, Supermicro working on Arm-based liquid-cooled servers for 2027

Fujitsu is collaborating with Supermicro to build liquid-cooled servers by 2027, according to a report by The Register. These liquid-cooled servers will be based on Fujitsu’s upcoming Arm-based Monaka processor, which is slated for release in the same timeframe.

The combination of liquid cooling and Fujitsu’s energy-efficient Monaka chips is aimed at combating sky-high demand for data center capacity, which has outgrown what can be supplied thanks to various factors, including AI. One of the biggest obstacles to accelerating data center buildout is meeting the growing power consumption of modern data center chips. By combining the efficiency of the Arm architecture with liquid cooling, Fujitsu and Supermicro hope to offer a market-leading server portfolio for their customers.

Monaka is Fujitsu’s next-generation Arm-based data center processor. The new chip is aimed at AI, HPC, and data center deployments, featuring 150 Armv9-A cores with SVE2. Monaka is designed to take full advantage of the power efficiency of the Arm architecture, and Fujitsu has set an ambitious goal: Monaka should be twice as power-efficient as competing chips, and not today’s competitors, but those that will ship in 2026 and 2027. Monaka will be built on TSMC’s 2nm fabrication process.

Fujitsu apparently originally designed these CPUs with air cooling in mind, but the manufacturer is now shifting gears through this partnership with Supermicro. The main goal is to reduce the size of Monaka-based servers; liquid cooling paired with highly power-efficient processors lets designers build very compact systems. Liquid cooling Monaka could also yield further efficiency gains compared to air cooling: Supermicro’s testing has shown that Nvidia GPU servers are 50% more power-efficient with immersion liquid cooling than with air cooling.

We don’t know exactly how these servers will be set up, but Fujitsu and Supermicro have an opportunity to build some of the densest and most power-efficient servers in the world by 2027, if Fujitsu can deliver on its goals for Monaka. […]