NVIDIA RTX T1000
Power and Performance in a Small Form Factor
The NVIDIA® T1000, built on the NVIDIA Turing™ GPU architecture, is a powerful, low-profile solution that delivers the full-size features, performance, and capabilities required by demanding professional applications in a compact graphics card. Featuring 896 CUDA cores and 8 GB or 4 GB of GDDR6 memory, the T1000 enables professionals to tackle multi-app workflows, from 3D modeling to video editing. Support for up to four 5K displays gives you an expansive visual workspace to view your work in stunning detail.
Turing GPU Architecture
Based on a state-of-the-art 12nm FFN (FinFET NVIDIA) high-performance manufacturing process customized for NVIDIA, and incorporating 896 CUDA cores, the NVIDIA T1000 GPU is the most powerful single-slot professional solution for CAD, DCC, financial services industry (FSI), and visualization professionals seeking excellent performance in a compact and efficient form factor. The Turing GPU architecture enables the biggest leap in real-time computer graphics rendering since NVIDIA’s invention of programmable shaders in 2001.
Advanced Streaming Multiprocessor (SM) Architecture
Combined shared memory and L1 cache improve performance significantly while simplifying programming and reducing the tuning required to attain best application performance. Each SM contains 96 KB of combined L1 cache and shared memory, which can be configured in different ways depending on the compute or graphics workload. For compute workloads, up to 64 KB can be allocated to the L1 cache or to shared memory, while graphics workloads can allocate 48 KB to shared memory, 32 KB to L1, and 16 KB to the texture units. Combining the L1 data cache with the shared memory reduces latency and provides higher bandwidth.
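The memory splits described above can be sketched as a small consistency check. This is an illustrative model only, not a real API: the dictionary name and entries are assumptions that mirror the configurations listed in the text, each summing to the 96 KB per-SM budget.

```python
# Hypothetical sketch of the Turing SM's 96 KB combined L1/shared memory
# carveout. The splits mirror the configurations described in the text;
# the names are illustrative, not a real NVIDIA API.

SM_TOTAL_KB = 96  # combined L1 + shared memory per SM

# (shared_memory_KB, l1_cache_KB, texture_KB) per workload type
carveout_options = {
    "compute_shared_heavy": (64, 32, 0),   # up to 64 KB shared memory
    "compute_l1_heavy":     (32, 64, 0),   # up to 64 KB L1 cache
    "graphics":             (48, 32, 16),  # 48 KB shared, 32 KB L1, 16 KB texture
}

for name, split in carveout_options.items():
    assert sum(split) == SM_TOTAL_KB, name
    print(name, split)
```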
Single Instruction, Multiple Thread (SIMT)
New independent thread scheduling enables finer-grain synchronization and cooperation between parallel threads, allowing resources to be shared among small jobs.
Graphics Preemption
Pixel-level preemption provides more granular control to better support time-sensitive tasks such as VR motion tracking.
H.264 and HEVC Encode/Decode Engines
Deliver faster-than-real-time performance for transcoding, video editing, and other encoding applications with two dedicated H.264 and HEVC encode engines and a dedicated decode engine that operate independently of the 3D/compute pipeline.
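One common way to drive these dedicated engines is through FFmpeg's NVENC encoders and NVDEC hardware decoding. The sketch below only builds the command line; it assumes an FFmpeg build with NVENC/NVDEC support, and the file names and preset are examples, not values from this datasheet.

```python
# Sketch: build an FFmpeg command that offloads decode to NVDEC and
# encode to NVENC, leaving the 3D/compute pipeline free.
# Assumes an FFmpeg build with CUDA/NVENC support; paths are examples.

def nvenc_transcode_cmd(src: str, dst: str, codec: str = "hevc") -> list:
    encoder = {"h264": "h264_nvenc", "hevc": "hevc_nvenc"}[codec]
    return [
        "ffmpeg",
        "-hwaccel", "cuda",   # decode on the dedicated NVDEC engine
        "-i", src,
        "-c:v", encoder,      # encode on the dedicated NVENC engine
        dst,
    ]

print(" ".join(nvenc_transcode_cmd("input.mp4", "output.mp4")))
```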
Advanced Shading Technologies
The Turing GPU architecture features the following new advanced shader technologies.
Mesh Shading: Compute-based geometry pipeline to speed geometry processing and culling on geometrically complex models and scenes. Mesh shading provides up to 2x performance improvement on geometry-bound workloads.
Variable Rate Shading (VRS): Gain rendering efficiency by varying the shading rate based on scene content, direction of gaze, and motion. Variable rate shading provides similar image quality with 50% reduction in shaded pixels.
Texture Space Shading: Object/texture space shading to improve the performance of pixel shader-heavy workloads such as depth-of-field and motion blur. Texture space shading provides greater throughput with increased fidelity by reusing pre-shaded texels for pixel-shader heavy VR workloads.
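The 50% figure quoted for variable rate shading follows directly from the shading-rate arithmetic. As an illustrative example (the resolution and 2x1 rate are assumptions for the calculation, not T1000 specifics): a 2x1 coarse shading rate covers two pixels per shader invocation, halving the shaded-pixel count.

```python
# Illustrative VRS arithmetic: at a 2x1 shading rate, one shader
# invocation covers two pixels, so shaded pixels drop by half.

width, height = 1920, 1080                # example resolution
full_rate = width * height                # one invocation per pixel
coarse_2x1 = (width // 2) * height        # 2x1 rate: half the invocations

reduction = 1 - coarse_2x1 / full_rate
print(f"shaded-pixel reduction: {reduction:.0%}")  # 50%
```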
High Performance GDDR6 Memory
Built with Turing’s highly optimized GDDR6 memory subsystem for the industry’s fastest graphics performance, the NVIDIA T1000 features 4GB of frame buffer capacity and 160 GB/s of peak bandwidth, double the throughput of the previous generation. The NVIDIA T1000 is the ideal platform for demanding 3D professionals working with large datasets and multi-display environments.
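The quoted 160 GB/s peak bandwidth can be reproduced from the usual formula of bus width times effective data rate. The 128-bit interface and 10 Gbps per-pin rate below are assumptions consistent with the quoted figure; check the board specification for the exact values.

```python
# Sketch of the peak-bandwidth arithmetic behind the quoted 160 GB/s,
# assuming a 128-bit memory interface and a 10 Gbps effective GDDR6
# per-pin data rate (illustrative values, not from this datasheet).

bus_width_bits = 128
data_rate_gbps = 10   # effective transfers per pin, in Gbps

peak_gb_per_s = bus_width_bits / 8 * data_rate_gbps
print(peak_gb_per_s)  # 160.0
```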
Mixed-Precision Computing
Double the throughput and reduce storage requirements with 16-bit floating-point (FP16) computing to enable the training and deployment of larger neural networks. With independent parallel integer and floating-point data paths, the Turing SM is also much more efficient on workloads that mix computation with addressing calculations.
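The storage half of that claim is easy to verify: an FP16 value occupies 2 bytes versus 4 bytes for FP32. A minimal standard-library sketch using Python's half-precision `struct` format:

```python
import struct

# FP16 halves storage relative to FP32: 2 bytes vs 4 bytes per value.
# 'f' packs IEEE 754 single precision, 'e' packs half precision.
weights = [0.5, -1.25, 3.0]  # example values, exactly representable in FP16

fp32_bytes = len(struct.pack(f"{len(weights)}f", *weights))
fp16_bytes = len(struct.pack(f"{len(weights)}e", *weights))

print(fp32_bytes, fp16_bytes)  # 12 6
```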
Compute Preemption
Preemption at the instruction-level provides finer grain control over compute and graphics tasks to prevent longer-running applications from either monopolizing system resources or timing out.
NVIDIA GPU BOOST 4.0
Automatically maximize application performance without exceeding the power and thermal envelope of the card. GPU Boost 4.0 allows applications to remain at boost clocks longer by running up to a higher temperature threshold before dropping to a secondary, lower base-clock temperature setting.
IMAGE QUALITY
Full-Scene Antialiasing (FSAA)
Dramatically reduce visual aliasing artifacts or "jaggies" with up to 64X FSAA (128X with SLI) for unparalleled image quality and highly realistic scenes.
- FSAA turned off: It's difficult to see the details of your model.
- FSAA turned on: "Jaggies" are removed from the contours of geometry for smoother, more realistic models.
32K Texture and Render Processing
Texture from and render to 32K x 32K surfaces to support applications that demand the highest resolution and quality image processing.
- Standard 3D mode: The image lacks real-world reflections and textures.
- Enhanced 3D mode: A more realistic and detailed model. Shadows, reflections, and textures appear as they would in real life, with much smoother edges.
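The scale of a 32K x 32K surface is worth quantifying: at 4 bytes per pixel (assuming a standard 8-bit RGBA format, an illustrative choice), a single such render target already occupies 4 GiB.

```python
# Memory footprint of a full 32K x 32K render target, assuming a
# 4-bytes-per-pixel RGBA8 format (illustrative; other formats differ).

side = 32 * 1024        # 32K pixels per edge
bytes_per_pixel = 4     # RGBA8

size_gib = side * side * bytes_per_pixel / 2**30
print(size_gib)  # 4.0
```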
Source: Gigabyte NVIDIA T1000