
Not at all, unless we are speaking of CUDA up to version 3.0.

CUDA is a polyglot programming model for NVIDIA GPUs, with first-party support for C, C++, and Fortran, and for anything else that can target PTX bytecode.

PTX allows many other languages with suitable toolchains to also target CUDA in some form; .NET, Java, Haskell, Julia, and Python all have some kind of NVIDIA-sponsored implementation.

https://developer.nvidia.com/language-solutions

While CUDA originally had its own hardware memory model, NVIDIA decided to make it follow C++11 memory semantics and went through a decade of hardware redesign to make that possible.

- CppCon 2017: Olivier Giroux "Designing (New) C++ Hardware"

https://www.youtube.com/watch?v=86seb-iZCnI

- The CUDA C++ Standard Library

https://www.youtube.com/watch?v=g78qaeBrPl8

It is also driving many of the use cases in parallel programming for C++:

- Future of Standard and CUDA C++

https://www.youtube.com/watch?v=wtsnoUDFmWw

You will only find brief mentions of C here:

https://developer.nvidia.com/hpc-compilers

This is partly why OpenCL lost the race: it focused too much on its C dialect, only going polyglot when it was too late for the research community to care.


