
I am not a graphics / engine programmer but I know a decent amount of what goes into programming a game, and I have to call "suspect" on the claim that low level graphics API programming was heretofore only available on game consoles like the Microsoft Xbox, Microsoft Xbox 360 and Microsoft Xbox One!


From Microsoft's warped perspective, where dogfooding is a religion, it's more or less correct.

You've always been able to get a lot more out of consoles considering their specs; the 360 was marginally better than the state of the art of PC hardware, for a few months, but being able to code right to the metal (not as much as on an Amiga or Nintendo, but relative to the PC) gave an efficiency that made the games unmatched for years.

AMD's recently released Mantle is the first exception on the PC, and DirectX 12 is reportedly quite similar. Bing it on your Zune for further reading.


Unmatched? Compared to what? Those games look like shit compared to a PC that came out on the same day, let alone years later. Not to mention things like limitations on map sizes, player numbers, etc.


The issue with PC hardware is that it's just too varied to get really close to the metal. The deeper you get, after all, the more different the various GPU architectures become. Consoles, on the other hand, are all identical, so you can do the most unportable bitfucking to get the absolute most out of the hardware. This is also the reason why it takes years for games to really start to shine on a console: it takes that long for game developers to really get to know all the nitty-gritty details that just do not exist on the PC.

Console games often look mediocre despite the above because console hardware is far cheaper, and thus simply less powerful, than the hardware in high end gaming PCs, despite the fact that consoles benefit from economies of scale, and are priced at a loss to boot. It is not fair to compare the way a game looks on a $2000 PC to the way it looks on a $400 console.


That's what I've always heard too, but it seems that the tide is turning: http://blogs.nvidia.com/blog/2014/03/20/opengl-gdc2014/

(I can't answer for the technical details, though. It reads like hieroglyphics to me.)


> Those games look like shit compared to a PC that came out on the same day, let alone years later.

1. Developers had no experience with the hardware during the initial years of the consoles.

They had to switch from an out-of-order and forgiving x86 to an in-order and unforgiving PowerPC that had substantially less cache (32 KB/32 KB vs 64 KB/2 MB/8 MB) than its PC counterparts of the day. Just ask any PC-gone-console developer of that age about LHS[1] (load-hit-store; there's a sketch at the end of this comment), or the off-the-wall Cell Broadband Engine[2].

2. Developers had to manage 512MiB between the GPU and CPU.

Everything has to fit into that, including the operating system... and I found 1GiB uncomfortable for PCs in 2006!

+ It was split on PS3, and you had to DMA into 256 KB of local store for the SPUs.

+ EDRAM (10 MiB on the 360) was slightly too small to fit 1280x720x32x4 render targets.

3. Developers had no guarantee of permanent storage.

So everything had to be streamed from disk... which is unsavoury for various reasons.

Contortionist programming springs to mind[3].

4. PCs get upgraded.

'Nuff said.

[1]: http://www.gamasutra.com/view/feature/132084/sponsored_featu...

[2]: https://www.youtube.com/watch?v=bR8CVLVmKQs&t=4m

[3]: http://doublebuffered.com/2010/03/17/gdc-2010-streaming-mass...

(I'm assuming Gen 7, i.e., the Xbox 360 and PS3.)
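
For anyone who hasn't hit LHS (point 1), here's a minimal sketch of the trap. The code is hypothetical, not from any shipped title, and the cycle counts are ballpark:

    /* On the in-order PowerPC cores of that generation (Xenon, the Cell
       PPU) there is no direct move between the float and integer register
       files, so a float->int cast compiles to: convert in an FPR, store to
       memory, load straight back into a GPR. That load hits the
       still-pending store and stalls the pipeline (on the order of 40+
       cycles), every iteration. On x86 the same code is harmless.        */
    int sum_as_ints(const float *v, int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += (int)v[i];   /* fctiwz + store + immediate reload = LHS */
        }
        return total;
    }

Which is exactly why PC developers kept walking into it: nothing in the source code looks expensive.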


Compared to an equivalently specced PC. There's a huge platform overhead on PC that consoles don't have to bear:

https://twitter.com/ID_AA_Carmack/status/436012673243693056

https://twitter.com/ID_AA_Carmack/status/436012724791681024


Compared to a PC with similar specs and/or a reasonable price point? The $10,000 PC has always been able to outperform the $400 Xbox, but nobody cares about that.


A $500 PC can match or outperform both the Xbone and the PS4, depending on the game. In 2014.


But we're talking about a $400 Xbox in 2005. What PC hardware configuration from 2005 would still run modern games at a playable (not good) framerate on low?


On low and at 30fps? Probably most. You really think console hardware is some magical beast?


(Former AAA game dev, including a stint at Sony)

No, but console OSes and drivers are (well, compared to PC drivers).

There's an enormous amount going on between your code and the metal on a PC, even when writing C++ w/ OpenGL or DirectX. Driver overhead for graphics is HUGE (which is what this is about).

PC hardware comparable to PS3/Xbox360 performs significantly worse under real world conditions due to the way the graphics stack is set up and programmed against. The new Direct3D 12 (as well as AMD's Mantle) is an attempt to tackle this.
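
To make "driver overhead" concrete, here's a rough sketch of an utterly ordinary GL render loop (hypothetical object data, assumes a GL 3.x function loader like GLEW). Every call in it is CPU work inside the driver before the GPU sees a single triangle:

    #include <GL/glew.h>

    /* Hypothetical per-object data; field names are illustrative only. */
    typedef struct {
        GLuint  program, vao, texture;
        GLint   mvp_loc;
        GLsizei index_count;
        float   mvp[16];
    } Object;

    void draw_scene(const Object *objects, int count) {
        for (int i = 0; i < count; i++) {
            const Object *o = &objects[i];
            glUseProgram(o->program);                  /* driver: program/state validation */
            glBindVertexArray(o->vao);                 /* driver: vertex layout re-check    */
            glBindTexture(GL_TEXTURE_2D, o->texture);  /* driver: residency/hazard tracking */
            glUniformMatrix4fv(o->mvp_loc, 1, GL_FALSE, o->mvp);
            glDrawElements(GL_TRIANGLES, o->index_count, GL_UNSIGNED_INT, 0);
        }
        /* A few microseconds of CPU per call adds up fast once you want
           thousands of draws per frame. D3D12/Mantle move that validation
           out of the per-draw path (pre-built pipeline state, reusable
           command lists) instead of paying it on every call, every frame. */
    }

On console you're writing something much closer to the GPU's own command buffers, which is the gap being described here.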


> PC hardware comparable to PS3/Xbox360 performs significantly worse under real world conditions due to the way the graphics stack is set up and programmed against.

I doubt the "significant" part. And even if this was true 7 years ago, it's definitely not true now. Comparable hardware would mean something like a GTX 760 (both the PS4's APU and the 760 deliver around 1800 GFLOPS). That card can do everything the current consoles can.

Mantle is for lower-end cards anyway; mid-tier hardware like the GTX 760 and what's inside the current consoles won't see a change that dramatic.


No, Mantle is for lower-end CPUs. It's all about reducing the CPU bottleneck in feeding a graphics card effectively.

Which, incidentally, is why the initiative came from AMD (hence also the strong emphasis on multithreading, since AMD sells cheap 8-core CPUs). If you can suddenly game on a weak CPU, then Intel chips will look less attractive in comparison.

Where you're seeing only small improvements with higher-end cards, it's because the benchmarks are pumping up the resolution, AA, etc., with the same number of draw calls. If you instead ramp the draw calls up - for example, by putting a ton more objects in your scene - you'll be able to get much more out of high-end cards.

Mantle isn't going to help at all in the move to 4K, but it really will allow for much more complex games on the PC, akin to when Total War was released.
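
Not Mantle itself, but the OpenGL-side route to the same end (the "AZDO" approach from the NVIDIA GDC talk linked earlier in the thread) gives a feel for it: pack per-object draw parameters into a buffer and submit the whole scene in one call, so the per-draw CPU cost is paid once instead of thousands of times. A minimal sketch, with illustrative names, assuming GL 4.3+:

    #include <GL/glew.h>

    /* Layout of each record in the indirect buffer (per the GL 4.3 spec). */
    typedef struct {
        GLuint count;           /* indices for this object                     */
        GLuint instance_count;  /* usually 1                                   */
        GLuint first_index;     /* offset into the shared index buffer         */
        GLuint base_vertex;
        GLuint base_instance;   /* lets the shader fetch per-object transforms */
    } DrawElementsIndirectCommand;

    void draw_scene_indirect(GLuint vao, GLuint indirect_buf, GLsizei object_count) {
        glBindVertexArray(vao);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
        /* One API call for the whole scene: the per-draw driver overhead is
           no longer multiplied by the number of objects on screen.         */
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, object_count, 0);
    }

Mantle/D3D12 attack the same cost from the other side, by making each submission far cheaper for the CPU.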

I'd also not take the launch titles as a good indication of what the hardware of these consoles is capable of. They're usually rushed, ported from other platforms, etc. The hardware is capable of much more.

This is a good demo that goes into lots of detail:

http://www.youtube.com/watch?v=QIWyf8Hyjbg


The 360 was pretty close to the X1800 XT, as far as I could ever ascertain. The X1800 XT launched around November 2005 (http://www.bit-tech.net/hardware/graphics/2005/11/11/ati_x18...), and with the 360 released on November 22, 2005, we can easily say there was at least parity with PC hardware at launch.


You have to take the hardware specs into account. It's easy to overlook the crappiness of the hardware of those consoles.


"Bing it on your Zune for further reading." I lol'd.


I don't believe they are saying it was never available, just that previous versions of DirectX didn't provide it.


True, but it still seems like lying by omission, or at least spinning it for PR gain...


Xbox APIs have never been as low-level as other consoles'.


Isn't it just abstractions?

Xbox APIs haven't given direct access to the GPU for the same reason you wouldn't do this on Windows: the API gives a safe way, preferably with low overhead, to access resources that might already be in use. With the original Xbox, this was done to keep programming for the Xbox more or less the same as programming in DirectX on a Windows machine. Having comparable APIs makes porting significantly easier. The Xbox 360 maintained this paradigm.

If you consider the PIP (picture-in-picture) type of gaming that the Xbox One supports, there's no way a game can have direct access, because it would be fighting the kernel. Instead you are actually coding against a virtual device, so that the kernel can decide what instructions actually get executed.


I am a graphics engine programmer. This is not an absolute. Rather, a graphics API is an abstraction of a generic GPU, so it is never as low-level as it could be. The more directly an API exposes the underlying hardware, the more it can be said to enable "low-level" programming. It's relative. You can hear Carmack talk about this issue in some of his QuakeCon keynotes.


There are tradeoffs with that, too. The moment you provide low-level access, you make whatever low-level interface is supplied a standard to be supported now and forever. See: the VGA, or any of the Amiga chipsets. One thing that may fall out of this is that future GPU vendors would provide the low-level interface as an abstraction over what's really happening under the hood. And then developers will complain that they can't take advantage of the real chip's theoretical capabilities.


A question I generally ask graphics programmers (no offense intended): what will you do once a graphics "singularity" is reached? People say video game graphics are going to be photorealistic in 10 years.


Movie CGI is already photorealistic. Minecraft is not. You can add an arbitrary (and exponentially growing) computational cost to your game by making the world more dynamic. The games we look at today as paragons of CG advancement are really just static meshes with a few small entities running around. Any step away from pre-baked, pre-compiled and pre-made content will easily consume as many years of tech advancement as you let it. (For evidence, look at how long a modern level editor takes to bake in something as relatively simple as lighting.)


Right now, there is no shortage of things to work on, and things that have not been done yet. I am not currently aware of any horizon past which that won't be true. I don't see any problem yet, and at any rate, there are tons of other kinds of programming to do.



