On November 15, 2016, at the SC16 supercomputing conference in Salt Lake City, NVIDIA announced that it will work with Microsoft to accelerate artificial intelligence in the enterprise. Thanks to the first purpose-built AI framework running on NVIDIA Tesla GPUs either in the Microsoft Azure cloud or on-premises, enterprises can now deploy an AI platform that spans both the data center and the Microsoft cloud. The optimized platform runs Microsoft's Cognitive Toolkit on NVIDIA GPUs, including the NVIDIA DGX-1 supercomputer with Pascal-based GPUs and NVLink interconnect technology, and on Azure N-series virtual machines, which are currently in preview.
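The framework the announcement refers to is Microsoft's Cognitive Toolkit (CNTK). As a rough illustration only, not part of the announced platform, the sketch below assumes a CNTK 2.x Python installation with GPU support and shows how a script can pin computation to an NVIDIA GPU and evaluate a small dense layer; the device index and layer sizes are arbitrary.

# Minimal sketch: select an NVIDIA GPU in CNTK and evaluate a tiny layer on it.
# Assumes CNTK 2.x with GPU support installed; falls back to CPU if no GPU is found.
import numpy as np
import cntk as C

if not C.device.try_set_default_device(C.device.gpu(0)):
    # No usable GPU; run on the CPU instead.
    C.device.try_set_default_device(C.device.cpu())

# A small dense layer evaluated on whichever device was selected above.
x = C.input_variable(4)
model = C.layers.Dense(2, activation=C.relu)(x)
print(model.eval({x: np.random.rand(1, 4).astype(np.float32)}))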