Benchmarking Your PC: A Guide to Best Practices
Like millions of people around the world, you probably use your PC to play video games. You may well have lots of experience building and configuring computer systems, as PC gamers are often knowledgeable and enthusiastic when it comes to hardware. So how about combining all three? Take the computer know-how, the love of games, and the interest in components, and mix them all together. It's a perfect recipe for diving into the workings of a PC and seeing how well it performs.
In this article, we'll explain how you can use games to benchmark your PC and what you can do to analyze the results -- either to check the overall performance or to see which part of your computer is doing most of the work. We'll cover the best way to go about collecting the data and how to use the tools that we run with to check out the latest hardware. Time to get a-testin'!
It's more than just a game
When we took a look at 20 programs to analyze and benchmark your hardware, some of those that specifically test 3D graphics performance provide in-depth performance charts and offer monitoring tools to log how the various parts are operating. But as good as these programs are, they're ultimately an artificial setup: we don't play benchmarks. We don't buy $2,000 machines just for them to run testing programs all day long (well, most of us don't).
But if you have a bunch of games installed on your system, you can easily use them instead -- the latest titles often push the CPU and the GPU to the absolute limits of what they can do, so they give your system just as good a workout as any benchmark program.
Quite a few titles also have their own built-in test mode. Ubisoft's latest titles in their popular franchises can run a set test, then display the results in quite a lot of detail. In Assassin's Creed Odyssey, for example, the tool is advanced enough that results can be compared to a previous run, so you can easily see what impact any changes to hardware or the game's settings have produced.
In the above image, we can see three graphs: the frame rate, how long it took the CPU to process each frame for rendering, and the time it took for the GPU to work through a frame. There are some basic statistics, too, with the average (the arithmetic mean) and the absolute maximum/minimum figures shown.
For games with benchmark modes, or ones that can display frame rates while playing, the simplicity of these statistics is the first problem with using such features. Absolute values might only occur for a single frame, or that figure might be hit multiple times, at regular intervals. If the results can be viewed in a graph, then this is easy to see, but if all you get are the numbers, then the figures themselves are nearly useless.
Another potential issue with using a built-in test is that the workload placed on the CPU, GPU, etc. might not be indicative of what you'll see during actual gameplay. The background processing of input variables, pathfinding, audio tracks, and so on will be missing, and the rendering might not include things like explosions or other particle effects, which can often bring a graphics card down to its silicon knees.
If you just want something to quickly check out your system with, though, these kinds of tests and the numbers they collect are good enough. On the other hand, if you want to test what the PC is doing when it's going through a real workload, then you're better off logging data in the game itself -- and for that, you need to use the right tool for the job.
Choose your weapon!
There's a reasonable number of programs, all freely available, that can be used to record how quickly a 3D game is being processed. For Windows systems, MSI's Afterburner and FRAPS are two of the most popular. The former is a comprehensive tool for adjusting the clock speeds and voltages of graphics cards, and works on AMD and Nvidia models; it can also display this data, along with temperatures and frame rates, in an overlay when playing games (all of this data can be logged for later analysis).
FRAPS can also display and log frame rates, as well as capture video output and take screenshots, but it hasn't been updated since 2013. It's best used on older titles that use Direct3D 10 or earlier, but given that Afterburner covers these, too, as well as the latest games, it's probably only worth using if nothing else works.
We really prefer using a tool called OCAT (Open Capture and Analysis Tool): it's made by AMD, as part of their GPUOpen project, an initiative designed to give developers access to a range of free software tools and code samples. OCAT utilizes Intel's PresentMon, which is also open source.
Both tools are in constant development, so you will occasionally come across a glitch or two, but for displaying and logging frame rate information in Direct3D 11, 12, and Vulkan-based games, they're very good at what they do. Nvidia also offers FrameView, which does the same thing, but it's not open source. Lastly, there's CapFrameX, which also logs a wide range of technical data, but its best feature is the comprehensive toolkit for analyzing the information and presenting the statistics.
Like Afterburner and FRAPS, OCAT offers an overlay function, where the frame rate and frame times are displayed.
When running, the overlay displays which graphics API is being used and can show the changes in the frame rate via a little graph. This can be somewhat hard to see, so it's not really worth using.
Other tools, like Afterburner, offer a lot more detail in their overlay systems (clock speeds, temperatures, etc.), so if you just want to monitor what's going on with your PC while you're playing games, OCAT doesn't give you very much. However, we use it because it captures a lot of critical information, then processes and analyzes it for us.
To do this, set a hotkey for the capture process (make sure you pick one that the game doesn't use) and, if you need to, select where you want the data to be saved. It's a good idea to limit the capture time to a fixed value, but we'll look at this in a moment.
The results are stored as a .csv file (comma separated values), so you don't have to use OCAT to go through the data -- Microsoft's Excel, Google's Sheets, Apache OpenOffice, etc. can all easily handle this format. Just a word of warning, though: the file contains a lot of data and can easily grind up your PC when trying to work through it all.
Using it is very easy: start OCAT first, then fire up the game you're going to test with, and press the Capture hotkey once you're in gameplay. If you haven't set a capture time limit, you need to press the hotkey again to stop recording.
Note that when you start and end the capture, there is often a momentary drop in the game's performance; there's not much you can do about this, but at least it only affects the results for a handful of milliseconds.
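Since the capture is just a .csv file, a few lines of Python are enough for a first look at the data, if you'd rather not open it in a spreadsheet. The snippet below is only a sketch: it assumes the pandas library and a made-up file name, and reads the MsBetweenPresents column that we'll explain properly later on.

```python
# Minimal sketch: load an OCAT capture and get a quick average frame rate.
# The file name is hypothetical; OCAT generates its own names for captures.
import pandas as pd

capture = pd.read_csv("OCAT-ACOdyssey.exe-capture.csv")

frame_times = capture["MsBetweenPresents"]   # milliseconds between successive Present() calls
print(f"Frames captured:    {len(frame_times)}")
print(f"Mean frame time:    {frame_times.mean():.2f} ms")
print(f"Average frame rate: {1000 / frame_times.mean():.1f} fps")
```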
To use or not to use -- that is the question
So that's the logging of the frame rates covered, but what else is worth recording? A question you often see on forums about PC hardware is 'does my CPU bottleneck my graphics card?' (or vice versa), and the responses regularly talk about monitoring CPU and GPU utilization.
You can log such data using HWMonitor or Afterburner, but it's worth knowing exactly what the utilization % figure really refers to. Roughly speaking, it is a measure of how long threads in a kernel (sequences of instructions in a block of code) are currently being processed by the CPU/GPU, as a percentage of the time allocated for that thread (checked per second or other time interval).
Threads might be being actively worked on ('busy'), waiting for data from cache/memory ('stalled'), or finished ('idle'). Busy and stalled threads get classed in the same way, so a high utilization figure does not necessarily mean the processor is being worked very hard, and the % value doesn't tell you how much of the processor's capabilities are being used.
We can see this by running two different GPU benchmarks and logging everything with HWMonitor. The first test is with Geekbench 5, which runs through a series of compute routines, all done on the graphics card.
If we look at the Max column in HWMonitor, the GPU utilization peaked at 96% -- but pay close attention to the power value. This hit a maximum of 57.29%, which for this particular graphics card equates to around 140 W of electrical power. Also note that the temperature of the GPU rose just 9 degrees above the minimum recorded.
Now compare that to a program which is designed to really stress the graphics card. We used OCCT, but you could use a graphics benchmark or a game set to maximum detail levels.
This test resulted in a 100% utilization figure, only 4% more than in Geekbench 5, but the power and temperature values are much higher -- the former being over 100 W more than in the previous test. Together they show that the GPU was being worked far more thoroughly than in the previous benchmark, and this is something that's not obvious from just looking at the utilization value.
Logging power can be a useful way to analyze the workload the components are under. The image below displays the CPU, system memory, and GPU power values while running two sections of the Time Spy Extreme benchmark in 3DMark -- the one on the left is the CPU Test, the right is Graphics Test 2.
We can clearly see that the GPU is doing relatively little work during the CPU Test, whereas the central processor is being hit hard (as expected, but note that the RAM is being worked, too). Then, in the test designed to push the limits of the GPU, the power consumption of the graphics card is at full -- yes, the CPU is still pretty high, but the GPU power % tells us that it's doing a lot of work.
But is it doing all of the work? How can we be sure that the CPU isn't really affecting what's supposed to be a graphics-only test? The simple answer is: we can't tell, not by just looking at utilization and/or power figures.
So while there's nothing wrong with logging this data, there are better ways of determining how hard the various components are being worked. And while we're looking at what information is worth collecting, let's take a look at what OCAT is doing under the hood.
What is really being logged to get the frame rate?
To understand what data the likes of OCAT is collecting, you have to know a little bit about the rendering process in a 3D game. You can read a brief introduction on the subject if you're interested, but we'll let Nvidia give us the overview with this diagram, from the documentation for FrameView:
For each frame that's going to be created by the graphics card, the game's engine works out all of the data and instructions that need to be issued to the GPU. One particular instruction, in Direct3D, is called Present -- this tells the processor to display the frame once it's finished rendering it.
Any game or program that displays the frame rate measures the time interval between successive Present instructions. In this gap, several things take place:
- The graphics API (Direct3D, OpenGL, Vulkan, etc.) converts the broad instructions from the game engine into a more detailed, complex set of algorithms
- The graphics card driver then converts these into the various codes for the GPU
- Next, the GPU works through all of the code and then flags the completed frame as ready for displaying
- The output part of the GPU then sends the image to the monitor, which draws the frame during the next screen refresh
- And while all that's taking place, the game engine has already started, or even finished, preparing the next frame
So the time between Present() instructions isn't a measure of how fast the GPU is at rendering a frame, or not directly, at least. However, since this processing almost always takes much longer than everything else, it's a pretty close approximation.
PresentMon, OCAT, and FrameView measure lots of different time intervals, multiple times per second, and save them in the .csv file. We'll look at this when we analyze some results later on, but these are the main times recorded:
| CSV column header | What the time interval is |
|---|---|
| MsInPresentAPI | The number of milliseconds that the code spent going through the Present() instruction |
| MsUntilRenderComplete | The gap, in milliseconds, from when the Present() instruction was issued to when the GPU finished rendering the frame |
| MsUntilDisplayed | The number of milliseconds from when the Present() instruction was issued to when the frame was displayed |
| MsBetweenPresents | How many milliseconds there were between the last Present() issued and the current one |
| MsBetweenDisplayChange | The time gap between the last frame being displayed and the current one getting displayed, in milliseconds |
When we show frame rates in hardware reviews, we use the MsBetweenPresents data; OCAT defaults to this automatically, and it's the same figure used by other logging tools and games when they show frame rates.
But notice how these are all times: how does this get turned into a frame rate (fps = frames per second)? The calculation is simple: as there are 1,000 milliseconds in 1 second, you divide 1,000 by the Present time.
For example, if the value for MsBetweenPresents was a constant 50 milliseconds, then the displayed frame rate would be 1000/50 = 20 fps. So if you're aiming for 60 fps or 144 fps, the time interval will need to be 17 or 7 milliseconds, respectively.
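If you prefer code to mental arithmetic, here's the same conversion as a pair of trivial Python helpers (nothing game-specific is assumed here):

```python
def frame_time_to_fps(ms_between_presents: float) -> float:
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / ms_between_presents

def fps_to_frame_time(fps: float) -> float:
    """Convert a target frame rate to the frame time it requires."""
    return 1000.0 / fps

print(frame_time_to_fps(50.0))   # 20.0 fps
print(fps_to_frame_time(60.0))   # ~16.7 ms
print(fps_to_frame_time(144.0))  # ~6.9 ms
```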
Take the scientific approach to benchmarking
While benchmarking your PC isn't quite the same as conducting particle physics research at CERN, we can still employ some key aspects of the scientific method for data collection and analysis. The first part of this is to minimize the number of variables that can change and have an impact on the outcome of the test.
In an ideal world, you'd want the computer to be doing absolutely nothing more than running the game and the logging software -- this is what we do when we test the latest hardware, as our test machines are used only for benchmarking. This might not be possible for most home computers, but there are some things you can do to help:
- Only have the minimum number of programs open for testing and monitoring
- Exit non-essential background programs, such as chat software or cloud services, like Discord and OneDrive
- Pause anti-virus programs or set them to block all network traffic
- Configure software updates or backup systems to time their actions to take place outside of when you're benchmarking
The next thing to do is to ensure the testing environment is the same as that experienced during normal gameplay. This might seem like we're contradicting ourselves, given what we've just said about reducing variables, but we're referring to the fact that modern hardware actively changes clock speeds and operating voltages depending on the temperature it's running at.
Graphics cards are especially prone to this, as the GPU can get very hot when working. Once they reach their predefined heat limits, the hardware will start to lower clocks to ensure the chip doesn't overheat. We can see this clearly in the above image: as the temperature of the chip has risen, the clock speed has decreased to keep the heat levels under control.
Of course, this means the performance will decrease, too, but by pre-warming all of the key components in the PC (CPU, RAM, GPU), the clocks should be a little more consistent. The simplest way to do this is by running the game you're going to test for at least 5 minutes before you start logging any data.
Another thing to bear in mind is that, even with all the above precautions in place, test results will always vary. It might be down to the game you're using, or it could be some normally dormant background process popping up to say hello. This is why it's important to collect multiple sets of data -- do several test runs, at least 3, so that an average can be calculated.
This is something that we always do in our hardware testing. Doing more than 3 runs is better, but doing something like 10 is unlikely to provide any benefit. This is because the variations themselves are usually quite small, provided the test environment is controlled, and once you have several thousand data points, the odd rogue value isn't going to have much of an impact on the statistics.
The final thing to consider is how much data to collect with each test run. The time allocated for logging needs to be long enough to be representative of what's normally going on, but not so long that you're just wasting valuable testing time. To demonstrate this, we took 3 samples of frame rate data using Ubisoft's Assassin's Creed Syndicate.
We picked a location in the game that allowed us to easily repeat the test and set OCAT to capture data for 10, 100, and 1,000 seconds. We'll show you how we got the following numbers, and what they all mean, in a moment, but for now here are the results:
| Length of data collection | 10 seconds | 100 seconds | 1,000 seconds |
|---|---|---|---|
| Mean frame rate (fps) | 59.8 | 60.0 | 60.0 |
| 1% Low (fps) | 33.8 | 54.0 | 54.0 |
| 99% High (fps) | 120.4 | 66.9 | 66.5 |
| Frame time standard deviation (ms) | 2.97 | 0.62 | 0.58 |
We can see that there is virtually no difference in the average frame rates, but the 10 second run seemingly gives a much wider range in the rates (from 33.8 up to 120.4 fps). That same variation was probably present in the other runs, but because they contain so much more data, its statistical impact is greatly reduced. This is what you'd get when playing games anyway -- after all, who plays for only 10 seconds?
However, notice that the 100 and 1,000 second numbers are almost carbon copies of each other. So for this particular test, collecting data for over 16 minutes produced statistics no different from those of the run 10 times shorter.
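If you want to check this on your own captures, you can approximate the comparison by slicing one long capture into shorter windows and summarizing each one. This is only a rough sketch: it reuses the frame_times column from the earlier pandas example and assumes NumPy is installed.

```python
# Rough sketch: compare statistics over different capture lengths by taking
# successively longer slices from the start of a single long capture.
import numpy as np

def window_stats(frame_times_ms, seconds):
    times = np.asarray(frame_times_ms, dtype=float)
    elapsed = np.cumsum(times) / 1000            # running total of frame times, in seconds
    window = times[elapsed <= seconds]
    return {
        "mean fps": 1000 / window.mean(),
        "1% low (fps)": 1000 / np.percentile(window, 99),
        "99% high (fps)": 1000 / np.percentile(window, 1),
        "frame time std dev (ms)": window.std(ddof=1),
    }

for length in (10, 100, 1000):
    print(f"{length:>4} s:", window_stats(frame_times, length))
```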
Brought to you by the letter S and the number 1
We've mentioned the word statistics a few times now, so we need to move on to getting some data to analyze with math.
Let's assume that we've already selected a game to test and captured all the data we need. We can now use OCAT to analyze the results for us, and to do this, head for the 'Visualize' tab:
Just click where it says 'Select capture file to visualize', select the .csv file needed, and then hit the 'Visualize' button. We did a quick test in Assassin's Creed Odyssey, recording the various frame times from the game's built-in benchmark; the test was set to 4K resolution, with the graphics settings on Ultra High.
By default, OCAT shows the MsBetweenPresents numbers over the duration of the logging in the form of a smoothed graph.
It might not look like it is smoothed, but 2,629 data points were collected, and the graph would be much messier if they were all shown. The lower the frame times, the better the performance, so we can see that the benchmark starts at around 18 milliseconds (which equates to 56 fps) before dropping to a reasonably consistent 26 milliseconds (38 fps) for the remainder of the test.
You can select more than one capture .csv file to analyze: just load up one to start with, then use the 'select capture file' button again. This makes it easy to compare frame times across different scenarios or games -- for example, the image below shows readings from Milestone's latest MotoGP 20 (the green line) and Shadow of the Tomb Raider (orange line).
Both games were run at 4K resolution and with every graphics option set to its highest, which included using DLSS and ray traced shadows for Tomb Raider. We can see from the frame times that this game is running slowly, but also note how much the times bounce about. Compare that to MotoGP 20, where the frames take a very consistent 14 milliseconds.
As well as plotting the results, OCAT can do some basic statistical analysis for us. By clicking on the 'Capture statistics' button, we can see a variety of options. The two we're after are 'Average FPS' and '99th-percentile'. OCAT calculates the average frames per second (FPS) by working out the arithmetic mean of the MsBetweenPresents times. This is done by adding all of the collected times together, then dividing the sum by the number of data points collected.
The conversion into the frame rate is the same as we described before: divide 1,000 by the time. So in this example, the mean MsBetweenPresents was 24.88 milliseconds, giving the following average frame rate:
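1,000 / 24.88 ≈ 40.2 fps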
The average FPS, by itself, paints a very poor picture of the data. This is because the average (in this case, the arithmetic mean) is only one statistic -- specifically, it's a measure of something called central tendency. This is a value that the sample of numbers tends to cluster around.
There are other averages, such as the geometric mean, the median, modal values, and so on. However, OCAT doesn't calculate these, so if you're interested in looking at other central tendency measures, you'll need to examine the data with another piece of software.
AMD's program does work out the 99th-percentile of the frame times. Percentiles are values that tell you about the distribution of the numbers within the sample. In the case of the 99th-percentile, the value is saying that 99% of all the frame times are lower than this time -- only 1% of frame times were higher than this.
In our Assassin's Creed test, the 99th-percentile was 31.65 milliseconds. Now, remember that the bigger the frame time, the slower the frame rate? So if we turn this into an fps value, we get the 1st percentile for the frame rates, and this comes to 1000/31.65 = 31.60 fps (coincidence, honest!).
In our hardware reviews, we call this the '1% Low' value, and it tells you that 99% of the frame rates are higher than this number.
The average fps and 1% Low are two quick statistics that can give you a good insight into what's going on behind the scenes in your computer. While the absolute minimum frame rate could be a lot less than the 1% Low value, it doesn't occur very often -- just 1% of the time! If large drops in frame rate were taking place more often, then the 1% number would be lower.
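As a rough sketch of how these two numbers fall out of the raw capture data, here's a short Python example (again assuming pandas and NumPy, and a hypothetical capture file name):

```python
import numpy as np
import pandas as pd

capture = pd.read_csv("OCAT-ACOdyssey.exe-capture.csv")   # hypothetical file name
frame_times = capture["MsBetweenPresents"].to_numpy()

average_fps = 1000 / frame_times.mean()
p99_frame_time = np.percentile(frame_times, 99)   # 99% of frame times are below this
one_percent_low = 1000 / p99_frame_time           # ...so 99% of frame rates are above this

print(f"Average FPS: {average_fps:.1f}")
print(f"1% Low:      {one_percent_low:.1f} fps")
```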
But what if we want to tease out more statistics, or just do our own analysis? Well, CapFrameX can do a vast array of statistics for you, or you could write your own program in Python or R to do this. You've also got the option of using a spreadsheet program (such as Excel or Google Sheets), and for those, here are the functions you'll need:
| Function | What it calculates |
|---|---|
| =min(array) | Finds the absolute minimum in the array of data (the lowest number) |
| =max(array) | Finds the absolute maximum in the array of data (the highest number) |
| =average(array) | Calculates the arithmetic mean of the values selected (central tendency measure) |
| =geomean(array) | Calculates the geometric mean of the values selected (central tendency measure) |
| =median(array) | Calculates the median of the values selected -- the value that lies exactly in the middle of the numbers when ranked lowest to highest (central tendency measure) |
| =percentile.exc(array,k) | Calculates the kth-percentile of the array selected (distribution measure) |
| =stdev.s(array) | Works out the standard deviation of the array, as a sample of the population (dispersion measure) |
The geometric mean and median just provide a different view of the average of the frame times -- the former is best used where there is a large difference in the times, and the latter is good for when the times tend to fall into several groups. For most people, the good ol' arithmetic mean does the job.
We've already talked about percentiles, but use the exclusive version, rather than inclusive, to ignore the very first and last data points. The capture process can frequently cause these to be lower than they should be, due to the game pausing for a fraction of a second as the system enables the logging and then stores the recorded data.
Another useful statistic is the standard deviation. This value gives you a good idea about how consistent the frame times were, as it is a measure of the average gap between the individual times and the overall mean. The larger this value is, the greater the variation in the frame rate, so for smooth gaming, you'd want this to be as small as possible.
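If a spreadsheet isn't your thing, Python's standard library covers much the same ground. The sketch below is a rough equivalent of the table above (the default 'exclusive' method of statistics.quantiles behaves like percentile.exc); frame_times is assumed to be the MsBetweenPresents column loaded as a list of numbers:

```python
# Rough Python equivalents of the spreadsheet functions, standard library only.
import statistics

def frame_time_stats(frame_times):
    cut_points = statistics.quantiles(frame_times, n=100)   # exclusive method by default
    return {
        "min (ms)":      min(frame_times),
        "max (ms)":      max(frame_times),
        "mean (ms)":     statistics.fmean(frame_times),
        "geomean (ms)":  statistics.geometric_mean(frame_times),
        "median (ms)":   statistics.median(frame_times),
        "99th pct (ms)": cut_points[98],                     # 99% of frame times are below this
        "stdev (ms)":    statistics.stdev(frame_times),      # sample standard deviation
    }

# Example with a handful of made-up frame times (a real capture has thousands):
for name, value in frame_time_stats([16.5, 17.1, 16.8, 25.3, 16.9, 17.0]).items():
    print(f"{name:>14}: {value:.2f}")
```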
You don't need all this mathematical data, though, to be able to dig into the PC's ruminations during a game -- the average frame rate and the 1% Low value are good statistics to work with. It's all about how you use the results!
Know Thy Enemy!
Time to put all of this data and knowledge into practice. The results from benchmarking your PC can tell you something about which part of the computer is having the most impact on the game's frame rates. Our test guinea pig had an Intel Core i7-9700K, 16 GB of DDR4-3000, and a GeForce RTX 2080 Super as its main components -- so it's fairly powerful, although there are faster/more capable parts out there.
To demonstrate a detailed analysis process, we used Assassin's Creed Odyssey again, to see how its own benchmark tool is handled on the above system. We're looking to make a judgment as to what kind of a test it is: does it push the CPU hard or is it all about the GPU? We'll also compare these findings to figures collected from playing the game directly, which will give us an idea of how representative the benchmark tool is of actual performance behavior.
With the game set to a resolution of 1080p (1920 x 1080 pixels) and the graphics quality at Ultra High, a total of 5 runs were recorded. Using a spreadsheet package, rather than OCAT, the frame times were averaged (with some other statistics calculated), then converted into frame rates, and finally plotted in a scatter graph.
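We did this in a spreadsheet, but for the curious, here's a rough Python sketch of the same idea (pandas and matplotlib assumed, with made-up file names for the five captures; this is not the exact workbook we used):

```python
# Sketch: average several OCAT runs and plot one of them as a scatter graph.
import pandas as pd
import matplotlib.pyplot as plt

run_files = [f"ACOdyssey-1080p-ultra-run{i}.csv" for i in range(1, 6)]   # 5 captures
runs = [pd.read_csv(f)["MsBetweenPresents"] for f in run_files]

# Per-run averages, then the overall mean across the five runs
per_run_fps = [1000 / r.mean() for r in runs]
print("Per-run average fps:", [f"{fps:.1f}" for fps in per_run_fps])
print(f"Overall average fps: {sum(per_run_fps) / len(per_run_fps):.1f}")

# Scatter plot of instantaneous frame rate over the course of the first run
frame_rate = 1000 / runs[0]
elapsed_s = runs[0].cumsum() / 1000          # running total of frame times, in seconds
plt.scatter(elapsed_s, frame_rate, s=2)
plt.xlabel("Time (s)")
plt.ylabel("Frame rate (fps)")
plt.title("Assassin's Creed Odyssey benchmark - 1080p Ultra High")
plt.show()
```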
Now this might make it seem like the frame rate is bouncing about all over the place, and the test must have been a stuttering mess. But these rapid changes are separated by milliseconds, and that's far too quick to be directly observed. The overall impression of the test was that it seemed quite smooth.
The average frame rate is pretty good, at just under 75 fps, but the gaps between the 1% Low, 99% High, and the average are fairly big, around 22 fps and 40 fps respectively. This strongly suggests that the workload is quite intensive at times, and there's one component within the PC that's struggling with it: is it the CPU or the GPU?
One way to examine this is to repeat the tests at different resolutions and graphics settings. Changing the former only really affects the GPU, whereas changing the latter will affect the CPU and GPU (although how much so depends on the game and on what effects are being run at the various quality levels). We picked 5 resolutions and used the lowest/highest possible detail levels.
We're just showing the average frame rates here, because the 1% Low values followed very similar patterns. At face value, these data wouldn't seem to be telling us anything we don't already know: having more pixels to shade or using extra rendering effects results in a lower frame rate.
However, changing the resolution produces a linear change in the frame rate (as indicated by the straight trend lines). To see the significance of this, let's compare this to the results we got running one of the graphics tests in 3DMark's Fire Strike Extreme benchmark:
This test has distinctly curved trend lines, which tells us that changing the resolution has a massive impact on the performance. In situations like this, the game/test is pixel bound -- i.e. the graphics card is easily capable of handling the shader calculations, but as the number of pixels increases, the frame rate becomes limited by the GPU's pixel output rate and memory bandwidth.
Think of it like a manufacturing line that produces a simple component at a fixed rate. If the production order wants 100 items, the line will get through these quickly, but if the order is for a few million, then it will take far longer to get it all done -- even though each individual item doesn't take long to make.
The straight lines seen in the Assassin's Creed runs indicate that this test is either compute bound or bandwidth limited. In other words, either there are so many long, complex calculations for the CPU or GPU that the extra pixels don't make a big difference to the workload, or there's so much data to move about that the system's memory bandwidth can't cope.
Going back to the factory analogy, a compute bound scenario is where the manufacturing of the part is affected not by the size of the production order, but by the complexity of it. A bandwidth limited scenario would be a factory constantly having to wait for raw materials to be delivered before it can get going.
We can figure out which situation we're in by altering one variable in the PC: the GPU's core clock speed. Using MSI's Afterburner, we locked the graphics card speed to a fixed rate and ran multiple tests over a wide range of clock values.
We picked 1080p for this test, simply because it was the resolution in the middle of the five we had checked previously. Looking at the Ultra High trend line first, we can see that doubling the GPU's clock almost produces a doubling in the average frame rate (the 1% Low behaved reasonably similarly).
That's not quite as big a jump as we saw in the 3DMark resolution checks, but it's enough to suggest that the game's benchmark is compute bound at these settings. Dropping the quality level down to Low gives us the same kind of pattern, but the fact that it's more curved and only flattens out at around 1,900 MHz is more evidence that the benchmark's workload is heavily loaded onto the GPU.
At this point we could have run further tests with the graphics card, altering its memory clocks, or done the same with the system memory, but with the evidence suggesting that it was a compute issue, and not a data one, we turned to confirming exactly where in the processing chain the load was.
Doing this requires altering the CPU's clock speeds, and we used Intel's Extreme Tuning Utility for it, forcing all of the cores in the processor to run at the same, constant rate. If your system is unable to do this, then unfortunately it's a check that's unavailable to you.
At 1080p, with Ultra High settings, altering the CPU speed over a range of 1.4 GHz barely made any difference. This clearly tells us that our test PC was indeed compute bound and that this roadblock was entirely at the GPU.
In fact, we had to go all the way down to 720p, with graphics details at their lowest, to see any significant change in the frame rate with CPU clock speed.
The fact that the trend line starts to flatten off at around 5 GHz, the same region as the CPU's default speed, tells us that the built-in benchmark in Assassin's Creed Odyssey is very much a graphics card test. Why? Because the CPU's performance only impacts the test result when the GPU is given the least amount of work possible.
So that's the test analysis done, and we have enough data to be confident in saying that the game's own benchmark pretty much just tests the graphics card, no matter what settings are used. But how does all of this compare to what happens when you're actually in the game? Let's repeat the resolution tests again:
To start with, the frame rates themselves are lower in the game than we found in the benchmark, but notice how different the trend lines are? At the lowest graphics settings, the performance is essentially the same at 1080p as it was at 720p -- the line is pretty flat between these resolutions. Only with more pixels than this do we see the fps decrease. The 1% Low results also followed this trend, just as we found in the benchmarking tool.
This tells us that the GPU easily copes with the work, so the performance of the game is being determined by the capabilities of the CPU. Switching to Ultra settings reverses this pattern, and we see a curved trend line, just like the one in the 3DMark test. It doesn't dip as much as in Fire Strike Extreme, but it's enough to indicate that, at these graphics levels, the game is somewhere between being compute and pixel bound.
We re-examined the effects of GPU and CPU clock speeds at Ultra and Low settings and essentially found the same patterns as before -- all of this strongly suggests that Assassin's Creed's benchmark is definitely a graphics card test, but it is a reasonably good indicator of what to expect in the game itself.
Big caveat time, though -- this is true for this PC, running this particular test, only. It cannot be stressed enough that with less or more capable hardware, the results would be different. A stronger GPU would cope with the compute load better, meaning the CPU would have more sway over the average frame rates, whereas a weaker GPU would fully dictate the performance.
But whatever system or game is used and checked, the test routine we've just gone through can be applied to any situation, be it in-game or in-benchmark. Let's summarize the overall process, so it's easier to follow and repeat with your own PC:
- Set the game's graphics details and resolution to the highest the PC will support
- Capture several sets of data and average the results
- Repeat the tests a few times, lowering the resolution for each set
- Plot the findings in a graph: straight lines indicate the game is compute bound (CPU and/or GPU), curves suggest pixel bound (GPU only)
- Pick a resolution and retest, but change the GPU clock speeds
- Plot these results: if the trend is constantly upward, then the GPU is limiting the frame rate; if the trend line flattens off, then it's the CPU
- Repeat again with CPU clock changes (if possible) to confirm the above
This is clearly a lot of work, and it's why testing hardware for reviews takes so much time and effort! However, you can do a simplified version, where the only thing you change is the GPU clocks -- set the game up how you normally have it, and use a tool to drop the speed of the graphics card in stages. Big changes in the average or 1% Low frame rates will indicate that it's the GPU that's the limiting factor in the game's performance; if slicing off, say, 25% of the graphics card's speed doesn't make much difference, then it'll be the CPU that's calling the shots.
Windows will now shut down
If PCs were like consoles, none of what we've been going through would be worth doing (or possible, for that matter), because the range of different hardware and software configurations out there would be very small. Game developers have a far easier job of ensuring their projects work properly on the Xbox, PlayStation, or Switch than on Windows-based computers.
And it's not hard to see why, when you look at all the different CPU and GPU models that can be purchased -- for example, Nvidia offers about 60 products that use their Turing processors, and AMD has over 50 CPUs sporting the Zen architecture. Not every combination of the two would be used for gaming, but the count still runs into the thousands, and that's without throwing other processor models, RAM, motherboards, storage, operating systems, and drivers into the mix.
It might seem like nothing short of a miracle that developers manage to get their games to work at all on PCs, but they do it by generalizing their approach to how their code is going to run and what hardware support is required. This means there's always some room for improving a game's performance, but it also means that there's a good chance a particular title might not run well on a specific setup.
This is why using games to benchmark hardware and sharing the results with the world can be so useful -- no game developer has direct access to all of the possible hardware configurations, but through our and your hard work, they can collate the information and use it to continually improve their work. Well, theoretically they can!
Of course, in-depth benchmarking and data analysis isn't everyone's favorite flavor of ice cream; it can often be tedious to do, and it rather misses the whole point of having PC games in the first place (i.e. playing them!). But we hope this article has given you some insight into how we test hardware and how you can do the same. And if you're wondering about which part of your PC to upgrade next, this is a great way of getting the numbers to help you make that decision.
If you have your own method of testing hardware or know about some great logging tricks, share them with everyone in the comments section below.
Tests done, stats calculated, data analyzed -- time to shut down Windows!
Download: 20 Programs to Analyze and Benchmark Your Hardware
Don't Miss: How We Test: CPU Gaming Benchmarks -- or: How I Learned to Stop Worrying and Benchmark Using a High-end GPU
Source: https://www.techspot.com/article/2013-benchmarking-your-pc/