Intel UHD Graphics 630 compared with discrete cards. Graphics: fast, slow, and integrated

Many gamers face dark days when either the video card dies, or the system was bought without a discrete adapter to save money, with a more powerful build planned for a little later. At such moments you can only count on the integrated graphics core. So we decided to check what the Intel UHD Graphics 630 is capable of in games.

To begin with, let us briefly recall the evolution of Intel's integrated GPUs.

  • Intel Haswell, 22 nm: Intel HD Graphics 4600. Supports DirectX 11.2, OpenGL 4.3, OpenCL 1.2, Shader Model 5.0, Intel Quick Sync Video, InTru 3D, Intel Insider, Intel Wireless Display, Intel Clear Video HD.
  • Intel Broadwell, 14 nm: Intel Iris Pro Graphics 6200. Supports DirectX 11.2, OpenGL 4.3, OpenCL 2.0, Shader Model 5.0, Intel Quick Sync Video, InTru 3D, Intel Insider, Intel Wireless Display, Intel Clear Video HD.
  • Intel Skylake, 14 nm: Intel HD Graphics 530. Supports DirectX 12, OpenGL 4.4, OpenCL 2.0, Shader Model 5.0, Intel Quick Sync Video, InTru 3D, Intel Insider, Intel Wireless Display, Intel Clear Video HD.
  • Intel Kaby Lake, 14 nm: Intel HD Graphics 630. Supports DirectX 12, OpenGL 4.4, Intel Quick Sync Video, InTru 3D, Intel Clear Video, Intel Clear Video HD.
  • Intel Coffee Lake, 14 nm: Intel UHD Graphics 630. Supports DirectX 12, OpenGL 4.5, Intel Quick Sync Video, Intel InTru 3D, Intel Clear Video HD, Intel Clear Video.

It is worth noting that Intel does not put its best graphics cores into processors aimed at desktop systems, saving them for the mobile segment. The only exception is the GT3-class Intel Iris Pro 6200 in the Intel Broadwell family. Otherwise, desktop chips get GT2-level graphics, while the junior models make do with the simplified GT1 configuration. Without going into the microarchitectural weeds: Intel uses a modular design, and by combining building blocks it can assemble iGPUs of different tiers. As a result, starting with the Intel Skylake generation, most desktop iGPUs have 24 execution units (EU, Execution Unit) at their disposal.

By contrast, the mobile Intel Core i7-6770HQ boasts integrated Intel Iris Pro Graphics 580 (GT4e), with 72 execution units and eDRAM. The replacement of Intel HD Graphics 530 with Intel HD Graphics 630 came with new hardware capabilities in the media engine, which learned to encode and decode video in the VP9 and H.265 formats and received full support for 4K content. With Intel Coffee Lake, the marketing department then renamed the integrated graphics from Intel HD Graphics 630 to the more impressive-sounding Intel UHD Graphics 630, as if to hint that it can handle even 4K Ultra HD resolution. In essence the two are identical, except that support for OpenGL 4.5 replaced version 4.4 in the predecessor.

For the practical tests of Intel UHD Graphics 630 we will use its least productive version, the one built into the Intel Core i3-8100. To briefly recap: this video core has 23 execution units, a base frequency of 350 MHz, and a dynamic frequency of up to 1100 MHz. Other variants of UHD Graphics 630 in more expensive processors may have 24 execution units or boost up to 1200 MHz, which adds a little performance.
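As a rough orientation, the peak FP32 throughput of such an iGPU can be estimated from the EU count and clock. This is a back-of-the-envelope sketch, assuming the commonly cited figure of 16 FP32 FLOPs per EU per clock for Intel Gen9 graphics (two SIMD-4 FPUs with fused multiply-add); real game performance depends on far more than this number.

```python
# Rough peak FP32 throughput estimate for an Intel Gen9-class iGPU.
# Assumption: 16 FP32 FLOPs per EU per clock (2 x SIMD-4 FPUs with FMA),
# the commonly cited Gen9 figure.
def peak_gflops(execution_units: int, clock_mhz: int,
                flops_per_eu_clock: int = 16) -> float:
    return execution_units * flops_per_eu_clock * clock_mhz / 1000.0

# UHD Graphics 630 as found in the Core i3-8100: 23 EUs at 1100 MHz boost
print(peak_gflops(23, 1100))  # 404.8
# The full 24-EU variant boosting to 1200 MHz
print(peak_gflops(24, 1200))  # 460.8
```

The gap between the two variants is around 14%, which matches the article's observation that the richer configurations add only "a little" performance.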

Testing was carried out on a GIGABYTE Z370 AORUS Ultra Gaming motherboard, with a Thermalright Archon SB-E X2 cooler handling the processor. The RAM is a dual-channel Patriot Viper 4 kit running in DDR4-2400 mode. The operating system and most of the games were installed on a GOODRAM Iridium PRO series SSD.

Test stand:

  • Intel Core i3-8100
  • GIGABYTE Z370 AORUS Ultra Gaming
  • Thermalright Archon SB-E X2
  • 2x8GB DDR4-3200 Patriot Viper 4
  • GOODRAM Iridium PRO 240GB
  • GOODRAM Iridium PRO 960GB
  • Seagate IronWolf 2TB
  • Seasonic PRIME 850 W Titanium
  • AOC U2879VF

Let's move on to the games. Dota 2 at Full HD resolution is best run at minimum settings. The resulting frame rate is quite high: an average of 103 FPS with drops to 76, so you might be tempted to raise the quality. But when several leveled-up heroes clash, the frame rate can drop significantly, so it's better not to risk it.
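The "average" and "drop" figures quoted throughout come from per-frame render times. A minimal sketch of how such metrics are typically derived (the sample data here is illustrative, not from the actual test logs):

```python
def fps_metrics(frame_times_ms):
    """Derive average FPS and a 'drop' figure (FPS at the
    99th-percentile frame time, i.e. roughly the worst 1% of frames)
    from a list of raw per-frame render times in milliseconds."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    worst_ms = sorted(frame_times_ms)[int(n * 0.99)]  # 99th-percentile frame time
    return avg_fps, 1000.0 / worst_ms

# Illustrative data: mostly ~10 ms frames with occasional 14 ms spikes
sample = [10.0] * 97 + [14.0] * 3
avg, low = fps_metrics(sample)
print(round(avg), round(low))  # 99 71
```

This is why a game can show a healthy average yet still feel rough: a few long frames barely move the average but dominate the percentile figure and the perceived smoothness of the frame-time graph.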

In Rocket League we also had to drop to minimum settings to keep Full HD resolution. In this mode everything is quite playable: the average frame rate hovers around 60 FPS and the minimum does not fall below 45. There are no problems with control, which the fairly smooth and even frame-time graph confirms.

The campaign in The Long Dark can be run at HD resolution with the low graphics preset. Indoors we get around 60 FPS. Out in the wild the average drops to 56 FPS, and the minimum does not fall below 43. A slight input lag is felt in the controls, but overall it is playable.

In Cuphead only the resolution can be selected; in this case, HD. Vsync is enabled by default, so the average does not rise above 60 FPS; the minimum was 30 FPS. The frame-time graph is quite smooth, so there were no complaints about control responsiveness. Playing it on a keyboard, however, is a separate story.

In Dungeons 3 the only graphics setting is the render resolution. Dropping it to 50%, we had no problems in the first mission, although the frame rate was not high: 33 FPS on average with drops to 30. If you have gone further into the game, please share in the comments whether the load on the system grows significantly.

Hob, warmly received by gamers and critics alike, can also run on the integrated graphics. True, you will have to reduce the resolution to HD and the settings to minimum. The picture quality is not particularly pleasing, but there are no problems with the frame rate or control responsiveness: on average we got 41 FPS with drops to 33.

In the 2D platformer Inside only the resolution can be selected. At Full HD you can count on an average of 37 FPS with drops to 31. The frame-time graph is unremarkable, but since pixel-perfect jumps are not required, no problems with the playthrough are expected.

At the beginning of What Remains of Edith Finch you need to reach the Finch house, and on this stretch, even at HD resolution with a lowered render scale, the frame rate dips to 20 FPS. Once inside the house, however, the average rises to 46 FPS and the minimum does not fall below 28. The game does not demand fast reactions, so the shape of the frame-time graph is not critical here.

Battle Chasers: Nightwar combines two modes. When exploring the world map at HD resolution and low graphics settings, you can count on 70 FPS. In the first battles the average rises to 154 FPS, but there are also unpleasant freezes down to 3 FPS. As you progress and opponents level up, the average frame rate will probably drop.

To run World of Tanks we used the SD client with simplified textures. As a result, at Full HD with the low preset we got quite comfortable gameplay: the average reached 95 FPS and the minimum was 58. There were no control problems either, even in a fast tank.

War Thunder, on the other hand, produces about 45 FPS with drops to 35 at similar graphics settings. Overall you can play, but the controls no longer feel as comfortable, which the ragged frame-time graph with its higher average level confirms. For more comfort it is best to drop to HD resolution.

Battlerite caps the frame rate at 60 FPS, so it is a pleasure to play at Full HD with the low graphics preset. The frame rate holds steadily at that level, the controls let you aim accurately at opponents and reposition quickly, and the picture quality is quite acceptable.

Competing with other players in Quake Champions, however, is not recommended. First, to get a playable average of 48 FPS we had to drop to HD resolution and the low preset. Even that does not save you from freezes down to 8 FPS. Input lag is also constantly felt, preventing you from hitting rivals, so instead of euphoria you will more often experience frustration.

CS:GO will be much safer for your nervous system and gaming peripherals. Even in the most difficult conditions, that is, on the Nuke map with bots, you can safely choose Full HD at low settings and get 79 FPS with drops to 38. Yes, there was a little input lag, but it did not really interfere.

Fans of Fortnite: Battle Royale are advised to use the low preset at Full HD, which raises the average to 56 FPS. True, the minimum can sink to 21, and the frame-time graph is not encouraging, given that this game is all about reaction speed and precise aim.

The parachute drop in PUBG, at the very low preset and HD resolution, happens at an impressive 5-6 FPS. After landing, the average rises to 15 FPS and the frame time decreases from 130 to 60 ms. In other words, there can be no talk of anything like acceptable gameplay. If there is a separate hell for gamers, this is one of the worst punishments in it.
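Frame time and frame rate are just two views of the same measurement: FPS is 1000 divided by the frame time in milliseconds. The PUBG figures above can be cross-checked this way:

```python
# FPS and frame time are reciprocals: fps = 1000 / frame_time_ms.
def to_fps(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

print(round(to_fps(130), 1))  # 7.7 -> roughly the parachute-drop slideshow
print(round(to_fps(60), 1))   # 16.7 -> close to the ~15 FPS seen on the ground
```

So a drop from 60 ms to 130 ms per frame is not a modest regression but more than a halving of the frame rate, which is exactly why the descent feels so much worse than the rest of the match.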

In GTA V we had to not only drop to HD resolution and turn all settings to minimum, but also enable 50% vertical sync, capping the frame rate at 30 FPS. In this mode there is a slight input lag, but overall it is playable.

In Rainbow Six Siege at HD resolution and the low preset you can try the training scenarios, but even there you should count on an average of only 38 FPS with drops to 30. The unpleasant frame-time graph indicates that the controls are not very comfortable, so it is better not to venture online, lest your teammates shoot you for playing like an enemy agent.

More demanding projects like Need for Speed Payback are unplayable even at HD with the low preset. Only in cutscenes do you get about 30 FPS; in actual gameplay the average is 23 FPS with drops to 16. Moreover, the long frame times make it hard to carve through sharp turns.

And finally, let's look at Middle-earth: Shadow of War at HD resolution with the very low preset. If the picture quality does not put you off, the low frame rate will make you quit as soon as possible: an average of 22 FPS with drops to 18. Of course, brave heroes with nerves of steel who can finish The Witcher 3 at 15 FPS are already preparing to throw dislikes at us, but we will take the risk and still advise against playing this game on the iGPU.

Outcomes

As a result, life on integrated graphics definitely exists, though only in its simplest form. We are talking about undemanding platformers, casual projects, strategies, and adventures that emphasize interesting plot twists rather than realistic graphical effects or a detailed world. You can also try your luck in popular online projects like Dota 2 or World of Tanks. For many, this is quite enough. Heavier games, from the level of Need for Speed Payback and up, will resemble slideshows, so it is better not to launch them at all and not spoil the experience.

The Intel Core i3-8100 processor was provided by the Oglyad UA channel.


All major video card manufacturers traditionally have two lines: mobile and desktop. Recently Nvidia has begun installing desktop video cards, somewhat reduced in clock frequency, into laptops, but for the most part the lines differ, and differ a lot (you can't just drop the letter M from the name).
I have no way to evaluate the performance of every video card, so I will take only the most modern and most popular ones: most laptops use just 15-20 video card models, which can be examined in detail. One more note: for convenience, all the video cards will be compared against desktop video cards from Nvidia.

  • Intel graphics cards.
Yes, you can play on them. Yes, with difficulty and in undemanding games, but you can. A few points here. First, games (with rare exceptions) are not optimized for Intel video cards, so even if, according to benchmarks, Intel's integrated GPU is more powerful than the minimum video card a game requires (we won't even mention the recommended one), that does not mean the game will run at comfortable performance. The reverse can also happen: the integrated GPU may simply fail to render some objects, which raises the FPS. In short, gaming on such video cards is a lottery, and you should not buy one specifically for games (unless every game you play explicitly lists Intel video cards as supported in its system requirements). Second, such video cards use part of the RAM as video memory, so the faster the RAM, the higher the FPS; if you do decide to take a laptop with integrated graphics only, it is advisable to make your first upgrade two RAM modules at the maximum supported frequency (where that is possible, of course).
The modern HD Graphics line is represented by three video cards: HD Graphics 515, 520 and 530. Physically they are all identical (24 execution units each), with maximum frequencies around 1 GHz. The differences come down to the thermal envelopes of the processors they sit in: the larger the TDP, the higher the GPU frequency, so the HD 515 installed in 4-watt processors performs significantly worse than the HD 530 in processors with a TDP of 35 watts or more. Approximate performance is as follows:
Intel HD Graphics 515 = Nvidia GeForce GT 210 (yes, it is still actively sold);
    Intel HD Graphics 520 = Nvidia GeForce GT 720;
    Intel HD Graphics 530 = Nvidia GeForce GT 630.
In general, the performance is on the level of basic office cards.
The Iris Graphics line looks livelier: these GPUs can use 64-128 MB of fast L4 cache, have 48 (rather than 24) execution units, and are installed in processors with TDPs of 15 W (Iris 540), 28 W (Iris 550) and 45 W (Iris Pro 580). The problems are the same, but the performance is much higher:
    Intel Iris 540 = Nvidia GeForce GT 640;
    Intel Iris 550 = Nvidia GeForce GT 740 (we have already reached the level "everything is playable at 800x600 at low");
    Intel Iris Pro 580 = Nvidia GeForce GTX 650.
Things get more cheerful here: on a GTX 650-class GPU you can play modern hits, albeit at HD resolution.
  • Video cards from AMD.
Quite rare guests in laptops (especially expensive ones), although AMD has made many different mobile video cards. In fact, they differ from desktop AMD cards only in performance and heat dissipation; standards support is not cut down. Also, the M4xx line is essentially a renaming of the M3xx line (which in turn is a renaming of the M2xx), so performance between corresponding video cards of these lines differs by no more than 5-10%. Alas, in laptops they often cannot compete with Nvidia in price-performance.
AMD Radeon R5 M320 = Nvidia GeForce GT 710 (how did this video card even come about? It is weaker than even the HD 520...);
AMD Radeon R5 M430 = Nvidia GeForce GT 720 (the humor is that this card is often installed in a laptop alongside an Intel processor with an HD 520 of the same performance, making it effectively redundant);
    AMD Radeon R7 M440 = Nvidia GeForce GT 730;
    AMD Radeon R7 M460 = Nvidia GeForce GTS 450;
AMD Radeon R6 M340DX = Nvidia GeForce GT 640 (a gloomy genius at AMD came up with the idea of running CrossFire, which does not work well at the best of times, across two video cards of different performance: the R6 integrated into the Carrizo processor and the discrete R5 M330. As a result, this bundle works very badly);
    AMD Radeon R7 M370 = Nvidia GeForce GTX 550 Ti;

    AMD Radeon R9 M370X = Nvidia GeForce GTX 650;
    AMD Radeon R9 M375 = Nvidia GeForce GTX 460;
AMD Radeon R9 M380 = Nvidia GeForce GTX 465 (you can probably find it only in the simplest iMac 5K model);
AMD Radeon Pro 450 = Nvidia GeForce GTX 560 Ti (video card in the base version of the new 15-inch MacBook);
AMD Radeon Pro 455 = Nvidia GeForce GTX 750 (video card in the middle version of the new 15-inch MacBook);
AMD Radeon Pro 460 = Nvidia GeForce GTX 750 Ti (video card in the top version of the new 15-inch MacBook);
    AMD Radeon R9 M390 = Nvidia GeForce GTX 750 Ti (iMac 5K, mid model)
    AMD RX 460M = Nvidia GeForce GTX 760;
    AMD Radeon R9 M395 = Nvidia GeForce GTX 590 (iMac 5K, top model);
    AMD RX 480M = Nvidia GeForce GTX 680;
    AMD Radeon R9 M395X = Nvidia GeForce GTX 680 (iMac 5K, selectable when ordering from Apple).
In general, I can only explain the appearance of the first three video cards in laptops by AMD paying manufacturers (their performance is barely above Intel's integrated graphics), a good half of the list is installed only in MacBooks and iMacs, and the RX models only in the new Alienware machines. So things are rather sad for AMD in the mobile segment.
  • Video cards from Nvidia.
In general, Nvidia runs the show: in the high-performance segment they are practically alone, and in the middle and low segments they offer better performance at the same price as AMD. As with the latter, no standards support is cut down. The GT 8xx and 9xx lines are essentially the same cards up to the 870M/970M (yes, Nvidia has also indulged in renaming).
Nvidia GeForce GT 920M / 920MX = Nvidia GeForce GT 730 (the same story as with AMD: the card is pointless because it is barely above Intel's integrated graphics);
    Nvidia GeForce GT 930M / 930MX = Nvidia GeForce GTS 450;
    Nvidia GeForce GT 940M / 940MX = Nvidia GeForce GTX 550 Ti;
    Nvidia GeForce GTX 950M = Nvidia GeForce GTX 560 Ti;
Nvidia GeForce GTX 960M = Nvidia GeForce GTX 750 Ti (a 100% match here, because the two cards are essentially the same chip);
    Nvidia GeForce GTX 965M = Nvidia GeForce GTX 950;
    Nvidia GeForce GTX 970M = Nvidia GeForce GTX 960;
    Nvidia GeForce GTX 980M = Nvidia GeForce GTX 770.
All the desktop video cards installed in laptops (GTX 980, 1050, 1050 Ti, 1060, 1070 and 1080) are 0-10% weaker than their reference desktop counterparts.

Part 24: Intel HD Graphics 3rd and 4th Generation

It so happened that we got acquainted with the performance of the current generation of Intel integrated graphics through its older modifications or in laptop form, while the last article studying Celeron, Pentium and Core i3 was published more than a year ago and was thus limited to Sandy Bridge and Ivy Bridge. From a potential buyer's point of view this is, of course, the wrong way around. The integrated graphics core in a top-end desktop processor is usually used by people who do not care about its characteristics, so by and large the HDG 2500 is enough for them; if not, a discrete video card is simply purchased, especially since owners of Core i5 or Core i7 systems can easily afford not to skimp on one. And in the higher-end laptop models, manufacturers often add a discrete GPU on the principle of "that's how it's done", even though it frequently turns out to be comparable in performance to the integrated one, and fending off such "care" is not always possible.

But in the budget segment everything is completely different. Of course, a Pentium (not to mention a Core i3) can be the basis of a decent gaming computer, and if we restrict ourselves to single-player games, not just a decent one but a genuinely good one (as we have already seen). However, serious performance requirements usually mean buying an expensive video card without skimping on the rest of the system, so the processor is not worth skimping on either (especially since, as we have written more than once, all consumer-segment processors are currently very inexpensive). Who needs the cheapest models, then? Mainly those who have to count every dollar (and more often every ruble or hryvnia), for whom a decent discrete video card is not even under consideration (or is, but somewhere in the future). As has been shown more than once, there is no point at all in buying an "indecent" one nowadays: it is wasted money that still will not give you a qualitative advantage over integrated graphics. In that case, the characteristics of the integrated graphics can begin to play a decisive role, simply because in interactive applications (which include games) quantitative characteristics translate into qualitative differences. It makes little difference how many minutes a database import or batch image processing takes: 15 minutes is better than 30, of course, but the work gets done either way (even if you have to drink an extra cup of coffee or find something else to do). Meanwhile, 15 (or even 20-25) versus 30 frames per second in a game is a qualitative difference: in the second case you can play at the chosen settings, in the first you cannot yet. So the question is one of principle, and the answer interests many. Today we will look for it.

Testing: goals and objectives, configurations, methodology

This relatively large section will be common and identical across all articles of the series: unfortunately, explaining something once is not enough for everyone :) Moreover, not every reader will carefully study all the articles in the cycle; the probability of "starting from the middle" or limiting oneself to one or two installments is extremely high, as we are well aware. So we apologize in advance to those who object to constant repetition of the same truths. Which, as is known, is the mother of learning :)

So, first and foremost, bear in mind that within this testing we are not dealing with components in isolation: we are testing the systems built from them. Processors are tested separately in the "main line" of articles, always in a fixed configuration with a powerful video card, a large amount of RAM, and so on. Our site also has direct testing of video cards in gaming applications, updated monthly: within i3D-Speed, all video cards (from simple budget models to multi-GPU) are tested on a powerful configuration, chosen to be sufficient for a graphics subsystem of any power. From the standpoint of traditional "component" testing, we believe these two article lines are quite enough.

But to apply the results obtained within them in practice, a connecting link is needed. The fact is that applications whose performance does not depend on the central processor at all do not exist. There are, of course, cases where performance is limited by other components, but this very often happens at different levels for different processors. Games and similar applications depend significantly on GPU performance, yet they also put a considerable load on the CPU. If the task turns out to be too "easy" for the graphics, everything is determined solely by the processor; if it is "heavy", the processor's influence, on the contrary, becomes minimal and can sometimes even be ignored. Between these extremes both components matter, and the degree of their importance can vary, a priori in an unknown way. That is, the fact that one processor is faster than another with a powerful video card does not mean the ratio will hold if the card is replaced with a budget one. More precisely, in some modes it will hold, in some it will change, and in some the results will simply be equal. Video cards have the same problem: the required level of CPU "sufficiency" changes depending on the GPU and its operating mode.
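The interplay described above can be caricatured with a toy model: each frame costs some CPU time and some GPU time, and, assuming the two stages overlap perfectly (which real engines only approximate), the slower stage sets the frame rate. All the numbers below are invented for illustration.

```python
def frame_rate(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    # With CPU and GPU work fully pipelined, throughput is limited
    # by whichever stage takes longer per frame.
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

fast_cpu, slow_cpu = 5.0, 12.0    # hypothetical ms of CPU work per frame
light_gpu, heavy_gpu = 8.0, 40.0  # hypothetical ms of GPU work per frame

# Under a light GPU load the choice of CPU matters...
print(frame_rate(fast_cpu, light_gpu))  # 125.0
print(frame_rate(slow_cpu, light_gpu))  # ~83.3
# ...but under a heavy GPU load both CPUs deliver the same result.
print(frame_rate(fast_cpu, heavy_gpu))  # 25.0
print(frame_rate(slow_cpu, heavy_gpu))  # 25.0
```

This is exactly why a processor ranking measured with a powerful video card cannot be blindly transferred to a budget one: change the GPU term and the `max()` may flip which component dominates.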

It would seem enough to simply test every processor + video card combination. The solution is obvious and correct in theory, but unworkable in practice, since the volume of work grows multiplicatively: 40 video cards on one system is 40 test configurations, 40 processors with one video card is also 40 configurations, but combine them and you get 1600. If all that work could be done, the results would be truly invaluable, yet by the time they arrived no one would need them any more, since they would already be obsolete (looking ahead: even the "simplified" method we chose allows no more than a dozen configurations per working week, so 1600 is a three-year task for a single test bench).

But you can approach it from the other side: instead of seeking exact answers to every question, limit yourself to qualitative assessments. For at least some processors you can try to probe the lower performance bound, which is the integrated graphics, since it has recently become an integral part of most modern processors. There are also junior discrete adapters that are at best on its level, many times simpler and slower than the top solutions: the graphics market still has a wider spread of characteristics than the processor market. With this choice of hardware we can significantly shorten the list of test configurations and modes. The results will be most relevant to buyers of budget computers: with a system unit priced around $1000, you can spend 10% of that on a video card slightly more powerful than the entry level and not bother with integrated video at all. So mid-range and higher processors usually do not need testing with weak video; sometimes we will still do it, to have reference points, but only sometimes. Besides, systems of this class do not call for tests in extreme modes such as 2560x1600 with heavy full-screen anti-aliasing :) In short, the work can be simplified considerably.

The workload is reduced further by the fact that 90% of standard processor applications do not depend on video performance at all; we used the full set of programs in the previous series, so its four parts amply prove this fact. For anyone still unconvinced, there is nothing we can do :) Be that as it may, GPGPU remains little more than a curious experiment, and all work in this direction shows that it is not particularly relevant for systems with weak GPUs: powerful video cards can genuinely accelerate "suitable" tasks, but when you try to squeeze anything worthwhile out of an entry-level discrete card, the complication of the algorithms and the extra data transfers eat up all the potential gain. This does not mean we would pass by a curious and popular application that actively uses GPU resources; we would gladly add one to this experimental methodology. The main problem so far is that nothing of the kind has turned up: "curious" programs exist, but for one reason or another they never become popular. Even video transcoding, over which so many spears have been broken, is in fact regularly needed by very few, the quality produced by the enthusiast-written programs leaves much to be desired (to put it very mildly), and (here is the grimace of fate) it runs fastest on the specialized hardware units available in integrated Intel GPUs, not on general-purpose pipelines at all.

Thus, we are left with not so many programs that make sense to run on systems with weak graphics. In fact, the "standard" methodology shrinks to literally five groups, three of which are experimental:

  • Interactive work in three-dimensional packages: no changes.
  • Mathematical and engineering calculations: MAPLE and MATLAB were dropped, since they do not display anything on the screen, but the remaining three applications are of interest to readers, judging by the feedback (skimping this heavily on a workstation is hardly advisable, but you may suddenly have to work on a weak computer). The composition of these two groups ends up identical, but in the first case the "graphics" score of the corresponding test is taken, and in the second the "processor" score: as testing practice has shown, both in fact depend on the processor and the video card alike, which is exactly what we need.
  • Games: no changes.
  • Games at low resolution and quality settings: within the "basic" methodology this group is practically unused and does not affect the overall score, but it was created precisely for systems with weak graphics. Those are primarily mobile systems, but they are not so different from what we are testing in this series.
  • High-definition video playback: no special comments needed.

Since there are not many groups and all of them are quite specific, we will not compute an overall score; we are primarily interested in the raw results. As usual, they are fully compatible with those obtained on the main-line configurations, since we already know for certain that the video card has no effect on the other applications. So, if you wish, you can simply substitute the corresponding piece into the "big" table: we do not hide the numbers in any way. Keep in mind, however, that the scores of this testing are in no way compatible with the main line: here we take a system with a Celeron G540 and a Radeon HD 6450 512 MB GDDR3 as the unit of the scale. For your own calculations, download the table in Microsoft Excel format, which gives all the results both converted to points and in "natural" form.
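Converting raw results into points relative to a reference system can be sketched as follows. This is an illustration of the general approach only: the choice of 100 points for the reference configuration and all the raw numbers below are assumptions, not the article's actual scoring formula.

```python
def to_points(result: float, reference_result: float,
              higher_is_better: bool = True) -> float:
    """Express a raw result as points, with the reference system = 100.

    For throughput-style results (FPS) higher raw values score higher;
    for duration-style results (seconds) the ratio is inverted.
    """
    ratio = (result / reference_result) if higher_is_better \
        else (reference_result / result)
    return 100.0 * ratio

# Hypothetical raw results against the reference configuration:
print(to_points(45.0, 30.0))                            # 150.0 (FPS, higher is better)
print(to_points(120.0, 180.0, higher_is_better=False))  # 150.0 (seconds, lower is better)
```

Inverting the ratio for duration-style metrics keeps the scale consistent: 150 points always means "1.5 times the reference performance", whichever direction the raw number runs.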

Testbed configuration

CPU                            | Pentium G2140 | Pentium G3430 | Core i3-3245  | Core i3-4130 | Core i3-3250  | Core i3-4330
Core name                      | Ivy Bridge DC | Haswell DC    | Ivy Bridge DC | Haswell DC   | Ivy Bridge DC | Haswell DC
Cores / threads                | 2/2           | 2/2           | 2/4           | 2/4          | 2/4           | 2/4
Core frequency, GHz            | 3.3           | 3.3           | 3.4           | 3.4          | 3.5           | 3.5
L3 cache, MiB                  | 3             | 3             | 3             | 3            | 3             | 4
RAM                            | 2 × DDR3-1600 (all configurations)
Video core                     | HDG           | HDG           | HDG 4000      | HDG 4400     | HDG 2500      | HDG 4600
Shader processors              | 24            | 40            | 64            | 80           | 24            | 80
Video frequency (std/max), MHz | 650/1050      | 350/1100      | 650/1050      | 350/1150     | 650/1050      | 350/1150
TDP, W                         | 55            | 53            | 55            | 54           | 55            | 54

Desktop Celerons on the Haswell microarchitecture were announced only recently and have not yet reached us, and Bay Trail is a different story altogether: BGA-only packaging and TDPs up to 10 W make these models competitors at most to CULV processors, not to the "standard desktop" platform. But Pentium and Core i3 in various modifications are widely available for both LGA1155 and the new LGA1150. Accordingly, three pairs of processors take part in our testing: two Pentiums and four Core i3s. With the Pentiums everything is simple: we took two processors with equal core clock speeds, the old G2140 and the new G3430. Note that the graphics core of the junior models is still called simply HD Graphics, although this is already the fourth GPU with that name; it differs from the previous two not only architecturally but also in the number of pipelines, which has grown from 6 to 10. So there will certainly be a difference from Ivy Bridge, but there is no point comparing with the Sandy Bridge Pentiums and Celerons still on sale: the functionality is too different, as we noted a little over a year ago.

There is no confusion with names in the Core i3 family either; if anything, order has increased. Previously the company offered both processors with the HDG 2500 core (the most widespread one in desktop Ivy Bridge) and several modifications with the HDG 4000, charging less for the models with the lower graphics core. The new generation is divided into two families. The heirs of the old Core i3s are the 41x0 series models, similar to them in frequency and cache capacity and equipped with the HDG 4400. The more expensive 43x0 series is relatively new: on board is not only the top GPU among "socketed" processors, the HDG 4600, but also the full 4 MiB of L3 cache, as in the first-generation Core i3 or the mobile dual-core Core i7. In general, the positioning of the new processors has become simpler and more logical: pay more, get more, in all respects. There are also overlaps with the previous generation in clock frequency, which gives us two more matched pairs: 3245 vs. 4130 and 3250 vs. 4330.

CPU                           | A6-6400K        | A8-6600K
Core name                     | Richland        | Richland
Modules / threads             | 1/2             | 2/4
Core frequency (std/max), GHz | 3.9/4.1         | 3.9/4.2
L3 cache, MiB                 | —               | —
RAM                           | 2 × DDR3-1866   | 2 × DDR3-1866
Video core                    | Radeon HD 8470D | Radeon HD 8570D
Shader processors             | 192             | 256
Video frequency, MHz          | 800             | 844
TDP, W                        | 65              | 100

The fourth pair of participants are AMD APUs, cheaper than the Intel processors, but... As we found out earlier, in graphics performance the Core i3-3225 (with the HDG 4000) was roughly equivalent only to the A4 of the Trinity line. The latter has already been replaced by Richland in the low-end segment (the A8 will have to wait for Kaveri) with a slight increase in performance. Intel's growth is more significant, but even the company's top desktop model could not reach the level of a modern A8 this summer. The drivers have been updated since then, with some curious effects, but we were still sure a priori that the A8 would remain out of reach for the lower Intel processors; the only questions are by how much, and how the graphics performance compares with the more affordable A6. The A4 is not interesting: as mentioned above, that level of graphics performance was already available with the old Core i3. The latter is noticeably more expensive, but the performance of the processor part also differs greatly, so you simply have to choose what matters more. We hope that today's testing will simplify that choice.

Another visitor from a different world is a video card based on the GeForce GT 630. We already tested something under this name a year ago, but the name is all they share: the old products were based on the GF108, while the new ones use the GK208 chip. NVIDIA claims this is a new design; in practice the GPU looks very much like a partially cut-down GK107 (previously used in the GT 640 and above), with the same die area and compatible wiring. Why partially? Because the GK208 lacks one memory channel, and the bus interface is only PCIe x8 rather than x16. So at comparable frequencies the GT 630 is clearly no rival to the old GT 640, despite the same number of shader processors. Compared with the old GT 630 DDR3, however, things should not be so bad: the "narrow" memory bus is partly compensated by its higher clock frequency (1800 MHz versus the official 1600 MHz, which in real products often dropped as low as 1400 MHz), and the arithmetic capabilities of the chip are much higher, at the level of the GT 640. Whether such a level is needed in a modern computer at all, or whether integrated video would do, is another question :) But, importantly, GK208-based cards are compact and passively cooled (the GPU simply does not run hot), and in price they can compete with the GT 610/620, which offer essentially no performance at all. So these solutions do have a certain niche, if only upgrades of old compact systems. We will determine their exact performance level using a card from ASUS with 2 GB of DDR3 (we did not test the 1 GB modification, since at this level of video card the memory size makes no difference), paired with the Core i3-4330 so that the processor certainly does not get in the way.

Interactive work in 3D packages

As we already wrote, Intel programmers fixed another batch of errors in driver version 9.18.10.3257, which led to an interesting effect: even a Pentium on Ivy Bridge (adding 20% to last year's results) now reaches the level of almost any AMD APU (except, perhaps, Kaveri, but those models are only beginning to reach retail). Moreover, this is the level of the junior discrete gaming chips from NVIDIA, even when those are paired with a faster processor. In short, there is no longer any reason to be afraid of Intel integrated graphics here, especially after the release of Haswell, which raises performance further still. And, as we can see, installing a junior gaming discrete card (practically mandatory for such programs in the Sandy Bridge era) now noticeably reduces performance, so it is better not to.

Mathematical and engineering calculations

Here, as before, HD Graphics did not get in the way: the results depend mainly on single-threaded processor performance, which puts Intel's devices in an advantageous position, and that is now even more pronounced. Note, though, that a discrete video card does improve the results slightly, simply because it competes with the processor for neither cache memory nor thermal budget. The gain is tiny, however, and together with the drop in the "graphics" score it does not change the conclusion: if you buy a discrete video card for professional programs, it should certainly not be a junior gaming one.

Aliens vs. Predator

As expected, the third-generation HDG and the HDG 2500 are identical; we will see this more than once, so we will not dwell on such results again. The 4400 is only slightly faster than the 4000, which is forgivable: a junior solution of the new line against the former flagship. The HDG 4600, on the other hand, almost reaches the performance of the A6, a noticeable step forward given that, as we said, the HDG 4000 was only enough to fight the A4. And the gap between the two generations of HDG is greater still. In practice, though, in this mode everything founders on the fact that even the A8-6600K (faster than the GT 630, by the way) is not enough for a comfortable frame rate, so the settings will have to be lowered.

At minimum settings, of course, everything flies, apart from the junior graphics configuration of Ivy Bridge: even in this mode it barely crossed the 30 FPS mark. It is good that the new graphics at least have no such problems. Only the Pentium now lags behind a discrete card of the GT 630 level, and only slightly; installing such cards in a computer based on any new Core i3 is definitely a bad idea. The APUs, meanwhile, are ahead of everyone else by a wide margin. That result was not unexpected, although we had hoped the older Core i3s would at least roughly match the much cheaper A6. Then again, we once saw lower results even from very old A8s; AMD's engineers and programmers have not been idle this past year either :)

Batman: Arkham Asylum GOTY Edition

The high-quality (within our test suite) mode of this game "surrendered" to Intel integrated graphics with the appearance of the HDG 4000, and the company's newer GPUs are, of course, even faster. Even the Pentium fell just short of 30 FPS. An achievement, though one that pales against the fact that even the old A4-5300 or the truly ancient A6-3500 is still faster: AMD has set the bar high, no question. There is nothing surprising, then, in the fact that this company's APUs are already squeezing junior discrete cards out of the market, while Intel, despite rapid progress, remains a notch below. In any case, it is already clear that there is no point installing GT 630 class solutions (let alone lower ones) in systems based on Intel's new processors: there will be no fundamental performance gain.

With low picture quality and an old graphics engine, this largely turns into a comparison of the processors themselves, with small variations: the HDG 2500 (and its relatives in the budget families) is too weak a solution after all, and a discrete card interferes less with the processor part working at full strength. On the whole, though, this mode was playable even on a Celeron G555, and the progress since then means we no longer need to limit ourselves so much.

Crysis: Warhead x64

An example of the opposite situation: so far, no integrated graphics solution copes with this game at the selected settings. And judging by the steady but gradual growth in performance, the next year is unlikely to change much, which is not surprising, since even a discrete Radeon HD 7750 GDDR5 manages it with almost no speed margin. But if we look not only at the absolute results but at the dynamics of their growth, the picture changes somewhat. Modern Pentiums have already reached the level that only some Core i3 modifications offered just a year ago, while the senior Core i3s now perform, in graphics terms, at the level of the A6 family of APUs, or of recent discrete video cards such as the Radeon HD 6670 DDR3, or the quite modern GeForce GT 630. That is, the border between the senior (or even mid-range) integrated GPUs and the junior discrete models is becoming ever more blurred.

Drop the picture quality to the level of games from ten years ago, and it turns out that almost anything is enough, which squares with "worldly wisdom". That also makes such modes not very indicative, of course; as we have said more than once, they were chosen at the time in an attempt to let lower-class graphics deliver acceptable performance, for example the graphics integrated into low-power Celerons three years ago. Still, some interesting information can be squeezed out of them even now. In particular, the progress of Intel's drivers is clearly visible: a little over a year ago the Pentium G2120 produced less than 50 frames per second here, and with the new drivers the G2140 has become one and a half times faster. That is still not enough to keep up even with the cheap AMD A6, but the new Pentiums in games with simple graphics (either simple by design or simplified by settings) can already "wrestle" with the A8. And, again, the one advantage of a weak graphics part is that it does not prevent the processor from giving its best, although with inexpensive video cards, as we can see, the effect of that can hardly be called significant.

F1 2010

Although the game will soon be four years old, it is still a tough nut for integrated graphics, but in a slightly different way than Crysis: there the entire load fell on the GPU, whereas here processor performance also matters, and ideally the processor should support more than two computation threads. As a result, most of the junior solutions hold at 12.5 FPS thanks to the engine itself, which tries as far as possible not to fall below that by further simplifying the picture. The HDG 4000 and above, like the integrated Radeon HD, work "honestly", but are still too slow. No wonder: as we already know, only the top-end A10 copes with this mode. Better still, use a discrete card, as before; a Radeon HD 7730 GDDR5 or better is desirable.

In light mode, even with weak graphics, the disadvantages of dual-thread processors are visible. Once again this is most noticeable with AMD processors; at Intel the difference between Pentium and Core i3 is small (and a new Pentium can overtake an old i3). So the minimum here should be considered something of A8 class, or a discrete video card: the specifics of the EGO engine (used in the entire Formula One series) are such that even lowering the graphics quality does not make a discrete card useless.

Far cry 2

Far Cry 2 is even older, so in high-quality mode only the Intel solutions and AMD's A4/A6 fail to cope. In general, the qualitative gap between Intel HD Graphics and the APUs or junior discrete cards persists, as we can see, despite the very noticeable performance increase in the new generation of GPUs.

In light mode only Sandy Bridge fell short, while with more modern devices we are almost testing the performance of the processors themselves, with an entirely predictable result.

Metro 2033

In effect, one more stress test for integrated graphics: nothing even remotely acceptable will be coaxed out of it for a long time yet. But for evaluating actual GPU performance it serves well. There is almost nothing new here for us, except perhaps the most visible difference yet between the two generations of Intel IGPs: Haswell really is a big step forward, letting the company nearly catch up with the integrated Radeons. More precisely, the HDG 4000 could already compete with the A4, which hardly counted as an achievement, since that level was too low for relatively expensive solutions; approximate parity with the A8 is another matter entirely. In theory, of course; in practice, as we already know, even a $100 discrete card is too little here.

Integrated graphics "learned" to cope with this game's low-quality mode (not so low by this game's standards, it should be noted: the minimum supported resolution of 1024 × 768 was in practical use not so long ago) only recently, and not all of it: the Llano-based A6 was the first to break through the barrier, and the transition to Trinity even proved a step back within that family (because the game can make full use of multi-core processors). In general, though, capable solutions are now plentiful, and slower ones are disappearing. Note again, however, that within the new Intel platform even a Pentium is enough, whereas most products for the previous one failed because of the weakness of the mass-market HDG 2500. In effect we see a transition from quantity to quality: what many Core i5s "could not do" a year ago is now managed by a Pentium, or by any Core i3 rather than just individual models of that family. That, too, is welcome.

Summary results

What do we have in the bottom line? If you remember that 100 points is a Radeon HD 6450 paired with a Celeron, quite a lot. The mass graphics of LGA1155 (the HDG 2500 and its analogue in Celeron/Pentium, let alone the functionally weaker IGP of Sandy Bridge) could not even reach that level, while the new Pentium surpasses it; that is, the built-in GPU easily outperforms such discrete products as the aforementioned Radeon HD 6450 or the GeForce GT 610/620. Granted, all of these can be called gaming solutions only out of politeness, but they exist and are still being sold (not to mention older video cards of comparable level that many thrifty computer users continue to run). In addition, the A4 for the FM1 platform has been left behind: also a basic level, of course, and on an outdated two-year-old platform at that, but a couple of years ago few believed Intel could catch up with AMD in the foreseeable future, since Sandy Bridge graphics in any version were simply not comparable with any desktop APU.

At first glance the Core i3 "grew" less: the HDG 4400 is faster than the HDG 4000 by only 20%, not 1.5 times. That is easy to explain: in the budget segment the number of execution units rose from 6 to 10, while "one floor up" it went only from 16 to 20. But do not forget that the 4000 was the top GPU of the previous generation and appears only in a small fraction of desktop processors, whereas the 4400 is the junior option of the new desktop Core line: most models already use the HDG 4600, which performs slightly better. In fact, here too we can speak of a transition from quantity to quality: just a year ago only the HDG 4000 (a rare variant) could deliver frame rates in games at the level of AMD's A4 line, but now the standard option is on a par with the faster A6. Naturally, this does not look like victory: even the A8 is priced at Pentium level, and the Core i3s are faster but also noticeably more expensive processors. Still, the positions are gradually levelling out. The release of Kaveri-based APUs may well restore the status quo, but their mass availability (and their descent into the lower segments of AMD's range) will take time, and the replacement of Trinity by Richland, as we already wrote, was only a cosmetic update, nothing like the transition from Ivy Bridge to Haswell.

Of course, the "build-up of integrated muscle" in both vendors' products keeps narrowing the potential applications of junior discrete solutions. The new GT 630 is only slightly faster than the old one (the memory system is the bottleneck) and still lags behind the A8/A10. And its lead over the junior solutions from AMD and Intel has shrunk so much that buying a discrete video adapter of this level is no longer justified at all: the performance gain does not compensate for the extra cost and the other drawbacks of the approach. In practice, the only thing video cards of this segment can still claim is the modernization of old computers, and even there the more attractive solution is usually either a faster discrete card or simply a platform replacement.

And we can gradually stop paying attention to the minimum-settings modes: all modern solutions cope with them. Desktop surrogate systems, meanwhile, still cannot achieve comfortable results even with graphics simplified to the level of ten years ago.

OpenCL

Despite active talk about heterogeneous computing, its practical scope remains very limited, especially in the areas relevant to integrated graphics: discrete GPUs have been used for "heavy" HPC computations for several years now, but that has little to do with the mass market. And the main problem for the latter, it seems to us, is that OpenCL is not nearly as "open" as declared. In practice, programmers are forced to take into account the specifics of each of the three vendors' implementations, that is, to work at too low a level. WinZip proved a typical example of the technology's immaturity: amid the triumphant reports about the release of at least one general-purpose application with OpenCL support, not everyone noticed that it supported only AMD's implementation, not Intel's or NVIDIA's.
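The vendor fragmentation described above shows up at the very first step of any OpenCL program: enumerating platforms, where Intel, AMD and NVIDIA each expose their own entry with their own supported version. A minimal sketch using the third-party pyopencl package (an assumption on our part; the tests in this article used native benchmarks, not this code):

```python
# Sketch: enumerate OpenCL platforms and devices per vendor.
# Requires the third-party pyopencl package; degrades to an empty
# list when no OpenCL runtime is available.
def list_opencl_devices():
    try:
        import pyopencl as cl
    except ImportError:
        return []  # pyopencl (or any OpenCL runtime) is not installed
    found = []
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            # platform.version is e.g. "OpenCL 1.2 ..." and differs per
            # vendor -- exactly what portable code has to account for
            found.append((platform.vendor, device.name, platform.version))
    return found

if __name__ == "__main__":
    for vendor, name, version in list_opencl_devices():
        print(f"{vendor}: {name} ({version})")
```

On a machine with both an Intel IGP and a discrete card, two separate platforms typically appear, each with its own OpenCL version string, which is why a benchmark can end up executing different code branches on different hardware.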

Curiously, these peculiarities surface even in synthetic benchmarks, many of which simply execute different code branches on different solutions. One such case is Basemark CL, which we started using some time ago in this test line. What this leads to in practice is clearly seen in our studies of real programs: this utility is plainly partial to AMD GPUs. And if you also remember that until fairly recently Intel processors executed OpenCL code only on the main cores, without using the GPU, it becomes clear why this particular program became AMD's favourite benchmark, recommended to all testers. Recently, however, the recommendations have stopped. Let us try to understand why, bearing in mind, naturally, that Basemark CL should be used very cautiously for cross-platform comparison.

On the diagram we have collected the results of all processors tested in this program, and the picture is extremely interesting. First, as we can see, the HDG 2500 and the nameless relative of this GPU deliver performance only at the level of junior mobile solutions. It is clear why: the code parallelizes well, so six execution units are six execution units, whether in a CULV Celeron or in a desktop Core i3. The Haswell Pentium is already much faster, though it still cannot be regarded as a serious OpenCL accelerator: it does not reach the A6 or processors with the HDG 4000 (mobile or desktop, again, does not matter). Still, it does confer some benefit when OpenCL is used, more, at least, than the buyer of any processor on the AMD Kabini core will get. The HDG 4400 is a much more attractive option: as you can see, a new-generation Core i3 turns out to equal the top Core i7 of the previous one! And against competing products this is not bad either, the level of some A8s. The A8s are cheaper, it is true, but the price difference against the junior Core i3 is far smaller than against the older Core i7 :) And the HDG 4600 is already A10 level. Moreover, it is easy to see that thrifty buyers, not only those choosing AMD products, can now benefit from OpenCL: the difference between i3 and i7 is less than 10%. The only thing spoiling the victorious reports is Kaveri's results: AMD has once again managed to jump over its own head. But those APUs are still scarce, unlike the Core i3s available on every corner, which are, moreover, cheaper and more productive on classic x86 code, which matters a great deal in the current state of OpenCL adoption (a processor that is faster in most programs and slower in a handful looks more attractive than one that wins only in an exotic, specially selected environment).

The GT 630's results need no comment: as noted more than once, this benchmark does not like NVIDIA's solutions (and in this case OpenCL 1.1 code is used rather than 1.2). On the other hand, no one is immune from such a situation repeating in real programs. And here, as we can see, a junior discrete card can easily fall behind even inexpensive integrated graphics, which is one more nail in its coffin :)

Total

If, when choosing a high-end processor (especially assuming a discrete video card), nobody could find any particular advantages of Haswell over Ivy Bridge, then in the budget segment with integrated graphics the situation is the opposite: there is no point in buying the "old" processors. Except, perhaps, to upgrade a Sandy Bridge system while keeping the motherboard, though even then simply buying a video card is cheaper and more effective. A new system should be built exclusively on LGA1150, if, of course, you are choosing among Intel solutions: as we have seen, the lag behind AMD's APUs has shrunk greatly but has not disappeared. So if you want to save money and care primarily about graphics-core performance, the FM2/FM2+ platform is still a good choice: the A8-6600K is cheaper than any Core i3, and the A8-5600K can compete with the Pentium on price. Naturally, this saving is not free: the processor part differs greatly, which often matters (even in this segment), and if a discrete video card is bought later, the premium paid for a "good" integrated GPU is simply wasted. In addition, the "appetites" of AMD's APUs are somewhat higher than those of Intel's dual-core processors. In short, these are not direct competitors; but, we repeat, if integrated graphics performance comes first, AMD's products still deserve the attention: Intel's new generation has narrowed the gap but by no means closed it, even leaving the price difference aside.

In the global sense, though, we are certainly pleased with the progress, especially at the baseline level. One can, of course, once again scold Intel for some confusion (this is already the fourth graphics core bearing the faceless name "HD Graphics"), but the important thing is that its performance has grown by half. That does not make the HDG a gaming solution, but the very fact of "raising the bar" is already a good signal to programmers. The lineup has also become tidier: up to and including Ivy Bridge, the "mainstream" level of Intel desktop graphics coincided with the "basic" one, the most widespread GPU being the HDG 2500. Now a Core i3 differs from a Pentium not only in Hyper-Threading support but also in more powerful graphics: at least the HDG 4400, a video core already better than any Ivy Bridge GPU. Not by half again, true, but this level of graphics capability (and higher) now comes to every buyer, with no need to hunt for special processor models. That, again, lets us count on fuller utilization by programmers.

And, of course, this growth in the graphics capabilities of low-end processors is another nail in the coffin of budget discrete video cards. Even in the $60 segment, the remaining performance advantage is already too small to justify buying a standalone device rather than using the "free" IGP. In practice that leaves video cards priced at $100 and up, and only for gaming use: in all other areas integrated graphics are no worse, and, importantly, that now applies to any integrated graphics, not just a few select models as two or three years ago.

Any modern laptop has at least one video card that comes "by default". Since the vast majority of laptops ship with Intel processors, the graphics system is from the same manufacturer. AMD processors naturally use a video core of AMD's own making, but here we will talk about Intel and the fact that each of its CPUs carries an integrated video core (GPU): Intel HD Graphics or Iris Graphics. For modern games, serious 3D modelling, animation, or heavy graphics packages the capabilities of such graphics systems are insufficient, but for the vast majority of everyday tasks their performance is more than adequate.

What is an integrated graphics card

Integrated means that the video core sits on the same substrate as the processor and shares system RAM with it. The amount of RAM taken by the integrated video card is within about 5% of the total and depends on the tasks being performed. The graphics driver interacts with the operating system to maintain optimal performance and to balance memory allocation between the graphics subsystem and the processor.
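The ~5% figure above is easy to put into absolute numbers. A trivial sketch (the percentage is this article's estimate, not a fixed specification; the real driver allocates memory dynamically):

```python
def igpu_share_mb(total_ram_gb, share_pct=5.0):
    """RAM in MB that an integrated GPU would take at a given share.

    share_pct defaults to the ~5% figure quoted in the text; treat the
    result as a rough upper estimate, not a driver guarantee.
    """
    return total_ram_gb * 1024 * share_pct / 100

# On an 8 GB laptop ~5% is about 410 MB; on 16 GB, about 820 MB.
print(igpu_share_mb(8))
print(igpu_share_mb(16))
```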

According to Intel representatives, the goal is not to catch up with discrete solutions: the integrated video card aims to provide maximum stability, to reduce system cost by removing the need for an additional video card, and to cut heat generation and power consumption. The last two arguments are especially relevant for laptops.

In the latest generation of Kaby Lake processors, the integrated video core has been updated, which exists in two varieties and is called Intel HD Graphics and Intel Iris Plus Graphics. In the previous generation of Skylake, they were called Intel HD Graphics and Intel Iris Graphics, respectively.

The integrated graphics card model depends on the processor used, as shown in the table.

CPU generation | Intel GPU model              | CPUs
Skylake        | Intel HD Graphics 500        | Celeron N3350, Celeron N3450
               | Intel HD Graphics 510        | Pentium 4405U, Celeron 3955U, Celeron 3855U
               | Intel HD Graphics 515        | Pentium N4200, Core m7-6Y75, Core m5-6Y57, Core m5-6Y54, Core m3-6Y30
               | Intel HD Graphics 520        | Core i7-6600U, Core i7-6500U, Core i5-6300U, Core i5-6200U, Core i3-6100U, Core i3-6006U
               | Intel HD Graphics 530        | Core i7-6920HQ, Core i7-6820HQ, Core i7-6820HK, Core i7-6700HQ, Core i5-6440HQ, Core i5-6300HQ, Core i3-6100H
               | Intel Iris Graphics 540      | Core i7-6660U, Core i7-6650U, Core i7-6560U, Core i5-6360U, Core i5-6260U
               | Intel Iris Graphics 550      | Core i7-6567U, Core i3-6157U, Core i3-6167U
               | Intel Iris Pro Graphics 580  | Core i7-6970HQ, Core i7-6870HQ, Core i7-6770HQ, Core i5-6350HQ
Kaby Lake      | Intel HD Graphics 610        | Pentium 4415U, Celeron 3965U, Celeron 3865U
               | Intel HD Graphics 615        | Pentium 4410Y, Core i7-7Y75, Core i5-7Y54, Core i5-7Y57, Core m3-7Y30
               | Intel HD Graphics 620        | Core i7-7600U, Core i7-7500U, Core i5-7300U, Core i5-7200U, Core i3-7100U
               | Intel HD Graphics 630        | Core i7-7920HQ, Core i7-7820HQ, Core i7-7820HK, Core i7-7700HQ, Core i5-7300HQ, Core i5-7440HQ, Core i3-7100H
               | Intel Iris Plus Graphics 640 | Core i7-7660U, Core i5-7360U, Core i5-7260U
               | Intel Iris Plus Graphics 650 | Core i5-7287U, Core i5-7267U

What is the difference between Intel HD Graphics and Intel Iris Plus Graphics

It should be said right away that an integrated video card is not the best choice for working in AutoCAD or for games such as DOOM or Rise of the Tomb Raider; do not expect miracles. Older games, or those with low hardware requirements, run on such video cards without trouble.

Unlike the regular Intel HD Graphics, a number of processors are equipped with a more "advanced" video core, Intel Iris Plus Graphics, as it is called in the Kaby Lake generation. In the Skylake generation such video cards were called Iris (and Iris Pro), and in the 5th generation, Broadwell, the name was simply Iris, with no suffix at all.

What distinguishes the Iris cores from the regular ones? The former use twice the number of execution units, 48 versus 24 in HD Graphics (Intel Iris Pro Graphics 580 uses 72), and also add a small 64 MB eDRAM cache (128 MB in the Iris Pro Graphics 580), which significantly raises performance. According to tests, such solutions can compete with the entry-level lines of discrete video cards; for example, the Iris Plus 650 is roughly on a par with the GeForce 930M.
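The effect of doubling the execution units can be roughed out with the usual peak-FLOPS estimate for these GPUs. The 16-FLOPS-per-EU-per-clock factor is the commonly cited figure for this architecture generation (two SIMD-4 FP32 ALUs per EU with fused multiply-add), so treat the numbers as a back-of-the-envelope sketch, not Intel's official specifications:

```python
def peak_gflops(eu_count, max_clock_ghz, flops_per_eu_clock=16):
    # Assumed: each EU retires up to 2 (FMA) * 4 (SIMD) * 2 (ALUs) = 16
    # FP32 operations per clock on Gen9-class graphics.
    return eu_count * flops_per_eu_clock * max_clock_ghz

# HD Graphics 620 (24 EU @ 1.05 GHz) vs Iris Plus 650 (48 EU @ 1.10 GHz):
# roughly 403 vs 845 GFLOPS, i.e. a bit more than double thanks to the
# slightly higher clock; the eDRAM cache adds further real-world gains
# in bandwidth-bound workloads on top of the raw FLOPS ratio.
print(peak_gflops(24, 1.05))
print(peak_gflops(48, 1.10))
```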

The trouble is that laptops with Iris integrated graphics can be counted on one hand; it is essentially a niche product used in just a few models. The Apple MacBook Pro 13 uses Intel Core i5-6267U processors with Intel Iris Graphics 550, and one modification of the Dell XPS 13, one of the hits of its class, uses the Intel Core i5-6560U with Iris Graphics 540. Lenovo and HP have similar offerings, but again only a handful of models. Incidentally, I did not find modifications with Iris graphics in the updated Dell XPS 13 line, although I may have overlooked something.

Main characteristics of integrated video cards:

Model                        | Execution units | Base frequency, MHz | Max frequency, GHz | eDRAM, MB
Intel HD Graphics 500        | 12              | 200                 | 0.70               | —
Intel HD Graphics 510        | 12              | 350                 | 1.05               | —
Intel HD Graphics 515        | 24              | 300                 | 1.00               | —
Intel HD Graphics 520        | 24              | 300                 | 1.05               | —
Intel HD Graphics 530        | 24              | 300                 | 1.15               | —
Intel Iris Graphics 540      | 48              | 300                 | 1.05               | 64
Intel Iris Graphics 550      | 48              | 300                 | 1.10               | 64
Intel Iris Pro Graphics 580  | 72              | 300                 | 1.15               | 128
Intel HD Graphics 610        | 24              | 350                 | 0.95               | —
Intel HD Graphics 615        | 24              | 300                 | 1.05               | —
Intel HD Graphics 620        | 24              | 300                 | 1.05               | —
Intel HD Graphics 630        | 24              | 300                 | 1.10               | —
Intel Iris Plus Graphics 640 | 48              | 300                 | 1.05               | 64
Intel Iris Plus Graphics 650 | 48              | 300                 | 1.10               | 64

Support for multiple monitors and 4K resolution

The latest processor generations, in particular the 6th and 7th, support 4K monitors. The one exception is the integrated Intel HD Graphics 500, which lacks this support. In fact, the maximum resolution of these graphics chips is 4096 × 2304, which is even higher than 4K UHD's 3840 × 2160.
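The "higher than 4K" claim is simple pixel arithmetic:

```python
def megapixels(width, height):
    # total pixel count, in millions
    return width * height / 1e6

# The chips' maximum of 4096 x 2304 is about 9.4 MP, versus roughly
# 8.3 MP for consumer 4K UHD (3840 x 2160).
print(megapixels(4096, 2304))
print(megapixels(3840, 2160))
```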

As for connecting multiple monitors, with laptops it matters how they will be connected, i.e. which interfaces are used. Laptops equipped with DisplayPort or USB Type-C / Thunderbolt 3 ports can drive three FullHD (1920 × 1080) displays, two monitors at 2K resolution, or one at 4K. In the absence of such ports, USB adapters can be used.
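Why those particular combinations come out roughly equivalent is easiest to see from raw pixel bandwidth. A sketch at 60 Hz with 24-bit color, ignoring blanking intervals and link-layer encoding overhead (so the figures are indicative only, not the exact DisplayPort link budget):

```python
def stream_gbps(width, height, refresh_hz=60, bits_per_pixel=24):
    # Raw pixel data rate in Gbit/s; real links also carry blanking
    # and coding overhead, so actual requirements are somewhat higher.
    return width * height * refresh_hz * bits_per_pixel / 1e9

fhd = stream_gbps(1920, 1080)  # ~3.0 Gbit/s per FullHD display
qhd = stream_gbps(2560, 1440)  # ~5.3 Gbit/s per 2K display
uhd = stream_gbps(3840, 2160)  # ~11.9 Gbit/s for a single 4K display

# Three FullHD streams, two 2K streams, and one 4K stream all land
# in the same ~9-12 Gbit/s ballpark, which is why the supported
# configurations trade off against each other.
print(3 * fhd, 2 * qhd, uhd)
```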

Conclusion

So are integrated video cards good or not? For games and serious graphics software, no, unless we are talking about simple or old games; for everyday work, more than good enough. At the same time, I do not really see the point of low-power discrete cards of GeForce 920M(X) class combined with the latest generations of processors.

For example, the ASUS A541UV laptop pairs a Core i7-6500U with a GeForce 920M. Yes, the discrete card is 30-40 percent faster, but its capabilities still fall short of comfortable gaming, while an extra consumer of electricity and an additional source of heat is very much present.