


🤖 Power your AI edge with Tesla P4 — where speed meets efficiency!
The NVIDIA Tesla P4 is a purpose-built AI inference accelerator featuring 8 GB of GDDR5 memory and delivering up to 22 TOPS of INT8 performance. Built on NVIDIA's Pascal architecture in a low-profile PCIe form factor, it reduces inference latency by up to 15X and improves energy efficiency by up to 60X compared with CPU-only servers, making it well suited to scalable, responsive AI workloads in hyperscale server environments.
| Specification | Detail |
| --- | --- |
| ASIN | B073V8MVK9 |
| Batteries Included | No |
| Batteries Required | No |
| Brand | NVIDIA |
| Compatible Devices | Desktop |
| Customer Reviews | 2.6 out of 5 stars (5 ratings) |
| Date First Available | 9 August 2023 |
| Form Factor | Low Profile |
| Graphics Card Description | Dedicated |
| Graphics Card Interface | PCI Express |
| Graphics Card Ram Size | 8 GB |
| Graphics Coprocessor | NVIDIA Tesla P4 |
| Graphics RAM Type | GDDR5 |
| Item Weight | 431 g |
| Item model number | 900-2G414-0000-000 |
| Manufacturer | Nvidia Tesla |
| Model | 900-2G414-0000-000 |
| Model Name | Tesla P4 |
| Package Dimensions | 25.65 x 18.03 x 8.13 cm; 431 g |
| Resolution | 3840 x 2160 |
| Video output interface | DisplayPort |
C**Z
Not sure I'd dissuade potential buyers, but maybe I got a dud unit... This thing constantly overheats. I have run it in two different Dell R630s, and unless the fans are over 60% the card thermal throttles. Even tried it in a T630 and results were worse (larger case, more airflow volume but less air pressure). While I was able to stabilize temps with the R630 fans at 60-70%, I feel that is quite an excessive requirement, not to mention excessively loud. Intake air temps were between 70-75F, which is fine for every server I have ever owned; maybe I need to run this card in the freezer... lol... Naw, it's just a bad unit. Sending it back and will look for an alternate solution.
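For context on the temperatures quoted in this review: the reviewer's 70-75 °F intake air converts to roughly 21-24 °C, comfortably within typical data-center inlet recommendations, which supports the conclusion that ambient air was not the problem. A minimal sketch of the conversion (the 27 °C ceiling below is an assumed typical recommended limit, not a figure from this listing):

```python
# Sanity-check the reviewer's intake temperatures (70-75 °F).
# RECOMMENDED_MAX_INLET_C is an assumption, roughly the upper end of
# commonly cited data-center inlet guidelines; adjust for your environment.

def f_to_c(temp_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5 / 9

RECOMMENDED_MAX_INLET_C = 27  # assumed typical ceiling for server inlet air

for temp_f in (70, 75):
    temp_c = f_to_c(temp_f)
    status = "OK" if temp_c <= RECOMMENDED_MAX_INLET_C else "too warm"
    print(f"{temp_f} °F = {temp_c:.1f} °C ({status})")
```

Both endpoints land well under the assumed ceiling, so by this rough check the thermal throttling described above would point at card cooling (the P4 is passively cooled and depends entirely on chassis airflow) rather than at room temperature.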
S**S
We used the NVIDIA GPU for Windows Remote Desktop under VMware.