
Leadtek NVIDIA A100 80GB

1.6TB/sec Memory Bandwidth - PCIe Passive - HBM2 - OEM Pack

PB Tech price: $23,999.00 +GST ($27,598.85 inc GST)

Out of stock

Limited Supply: only stock on hand is available; once sold out, we may not be able to source this product again.


  • Brand: Leadtek
  • MPN: 126S3000540
  • Part #: VGALTK8101
  • UPC:

No store stock available.

Click & Collect: Unavailable
Delivery: Out of stock

Product Model: A100
Memory Size: 80GB
Base Clock Speed: Not Specified
Boost Clock Speed: Not Specified
Max Displays: N/A
Length: Not Specified
DisplayPorts: None
Mini DisplayPorts: None

Features

NVIDIA A100

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world's toughest computing challenges.

Unprecedented Acceleration at Every Scale

GROUNDBREAKING INNOVATIONS

  • NVIDIA AMPERE ARCHITECTURE
  • HBM2 MEMORY
  • THIRD-GENERATION TENSOR CORES
  • MULTI-INSTANCE GPU (MIG)
  • STRUCTURAL SPARSITY
  • NEXT-GENERATION NVLINK

Accelerating the Most Important Work of Our Time

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world's toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into seven GPU instances to accelerate workloads of all sizes. And third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.

The Most Powerful End-to-End AI and HPC Data Center Platform

A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.

STRUCTURAL SPARSITY

AI networks are big, with millions to billions of parameters. Not all of these parameters are needed for accurate predictions, and some can be converted to zeros to make the models "sparse" without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training.
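Concretely, A100 uses a 2:4 fine-grained structured pattern: in every contiguous group of four weights, at most two are non-zero. The NumPy sketch below is our own illustration of magnitude-based 2:4 pruning, not NVIDIA's tooling (production flows typically use NVIDIA's sparsity libraries, such as ASP in Apex, and TensorRT):

    import numpy as np

    def prune_2_4(weights: np.ndarray) -> np.ndarray:
        """Zero the two smallest-magnitude values in every group of four.

        Mimics the 2:4 structured-sparsity pattern A100 Tensor Cores
        accelerate; assumes the last dimension is a multiple of four.
        """
        w = weights.reshape(-1, 4).copy()
        drop = np.argsort(np.abs(w), axis=1)[:, :2]   # two smallest per group
        np.put_along_axis(w, drop, 0.0, axis=1)
        return w.reshape(weights.shape)

    w = np.random.randn(2, 8).astype(np.float32)
    sparse_w = prune_2_4(w)
    # Every group of four now holds at most two non-zero weights.
    assert (sparse_w.reshape(-1, 4) != 0).sum(axis=1).max() <= 2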


MULTI-INSTANCE GPU (MIG)

An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer rightsized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
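For a concrete feel of the workflow, MIG is typically driven through nvidia-smi. The sketch below is our illustration (it requires root, and profile IDs are driver-dependent, so always check the -lgip listing first); it enables MIG mode and carves GPU 0 into seven of the smallest 1g instances:

    import subprocess

    def run(cmd: str) -> str:
        """Run a command and return its stdout, raising on failure."""
        return subprocess.run(cmd.split(), check=True,
                              capture_output=True, text=True).stdout

    run("nvidia-smi -i 0 -mig 1")          # enable MIG mode (may need a GPU reset)
    print(run("nvidia-smi mig -lgip"))     # list supported GPU instance profiles

    # Create seven 1g GPU instances (profile ID 19 is typical for the
    # smallest A100 slice) plus the default compute instance in each (-C).
    run("nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C")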


NVIDIA AMPERE ARCHITECTURE

A100 accelerates workloads big and small. Whether using MIG to partition an A100 GPU into smaller instances, or NVLink to connect multiple GPUs to accelerate large-scale workloads, A100 can readily handle different sized acceleration needs, from the smallest job to the biggest multi-node workload. A100's versatility means IT managers can maximize the utility of every GPU in their data center around the clock.

NEXT-GENERATION NVLINK

NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec) to unleash the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.
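The 600 GB/sec figure is just per-link arithmetic; a quick check, assuming the publicly stated third-generation NVLink configuration (12 links per A100 at 50 GB/sec of bidirectional bandwidth each):

    # 12 NVLink links per A100, 50 GB/s bidirectional each.
    links, gb_per_link = 12, 50
    print(links * gb_per_link, "GB/s")  # 600 GB/s, as quoted above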

THIRD-GENERATION TENSOR CORES

A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That's 20X Tensor FLOPS for deep learning training and 20X Tensor TOPS for deep learning inference compared to NVIDIA Volta™ GPUs.
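The 312 teraFLOPS figure can be reproduced from the published Ampere configuration; a back-of-envelope check, assuming 108 SMs, 4 Tensor Cores per SM, 256 FP16 FMAs per Tensor Core per clock, and a ~1.41 GHz boost clock:

    # Dense FP16/BF16 Tensor Core peak for A100.
    sms, tc_per_sm, fma_per_tc, boost_hz = 108, 4, 256, 1.41e9
    flops = sms * tc_per_sm * fma_per_tc * 2 * boost_hz  # each FMA = 2 FLOPs
    print(f"{flops / 1e12:.0f} TFLOPS")  # ~312; 2:4 sparsity doubles it to 624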

HBM2 MEMORY

With its high-bandwidth memory (HBM2), A100 delivers raw bandwidth of 1.6TB/sec, as well as higher dynamic random-access memory (DRAM) utilization efficiency at 95 percent. A100 delivers 1.7X higher memory bandwidth over the previous generation.
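The headline number is interface width times per-pin data rate. A rough check against the 1,555 GB/s figure in the spec table below, assuming the 5120-bit HBM2 interface at an effective 2.43 Gbps per pin:

    # Memory bandwidth = interface width (bytes) x effective data rate per pin.
    bus_bits, gbps_per_pin = 5120, 2.43
    print(f"{bus_bits / 8 * gbps_per_pin:.0f} GB/s")  # ~1555 GB/s, i.e. ~1.6 TB/sec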

DEEP LEARNING TRAINING

AI models are exploding in complexity as they take on next-level challenges such as conversational AI. Training them requires massive compute power and scalability.

NVIDIA A100 Tensor Cores with Tensor Float 32 (TF32) provide up to 20X higher performance over NVIDIA Volta with zero code changes, plus an additional 2X boost with automatic mixed precision and FP16. When combined with NVIDIA® NVLink®, NVIDIA NVSwitch™, PCIe Gen4, NVIDIA® Mellanox® InfiniBand®, and the NVIDIA Magnum IO™ SDK, it's possible to scale to thousands of A100 GPUs.
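"Zero code changes" refers to FP32 matmuls being routed through TF32 Tensor Cores automatically on Ampere; the extra 2X from mixed precision is a few lines in most frameworks. A minimal PyTorch sketch of both (PyTorch is our example choice here, not something the listing specifies):

    import torch

    torch.backends.cuda.matmul.allow_tf32 = True  # TF32 for matmuls (Ampere default)
    torch.backends.cudnn.allow_tf32 = True        # TF32 for cuDNN convolutions

    model = torch.nn.Linear(1024, 1024).cuda()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()          # loss scaling for FP16

    x = torch.randn(64, 1024, device="cuda")
    with torch.cuda.amp.autocast():               # automatic mixed precision region
        loss = model(x).square().mean()
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()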

A training workload like BERT can be solved at scale in under a minute by 2,048 A100 GPUs, a world record for time to solution.

For the largest models with massive data tables like deep learning recommendation models (DLRM), A100 80GB reaches up to 1.3 TB of unified memory per node and delivers up to a 3X throughput increase over A100 40GB.

NVIDIA holds the leadership position in MLPerf, having set multiple performance records in the industry-wide benchmark for AI training.


DEEP LEARNING INFERENCE

A100 introduces groundbreaking features to optimize inference workloads. It accelerates the full range of precisions, from FP32 down to INT4. Multi-Instance GPU (MIG) technology lets multiple networks operate simultaneously on a single A100 for optimal utilization of compute resources. And structural sparsity support delivers up to 2X more performance on top of A100's other inference gains.

On state-of-the-art conversational AI models like BERT, A100 accelerates inference throughput up to 249X over CPUs.

On the most complex models that are batch-size constrained like RNN-T for automatic speech recognition, A100 80GB's increased memory capacity doubles the size of each MIG and delivers up to 1.25X higher throughput over A100 40GB.

NVIDIA's market-leading performance was demonstrated in MLPerf Inference. A100 brings 20X more performance to further extend that leadership.


HIGH-PERFORMANCE COMPUTING

To unlock next-generation discoveries, scientists look to simulations to better understand the world around us.

NVIDIA A100 introduces double precision Tensor Cores to deliver the biggest leap in HPC performance since the introduction of GPUs. Combined with 80GB of the fastest GPU memory, researchers can reduce a 10-hour, double-precision simulation to under four hours on A100. HPC applications can also leverage TF32 to achieve up to 11X higher throughput for single-precision, dense matrix-multiply operations.

For HPC applications with the largest datasets, A100 80GB's additional memory delivers up to a 2X throughput increase with Quantum Espresso, a materials simulation. This massive memory and unprecedented memory bandwidth make the A100 80GB the ideal platform for next-generation workloads.


HIGH-PERFORMANCE DATA ANALYTICS

Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers.

Accelerated servers with A100 provide the needed compute power, along with massive memory, over 2 TB/sec of memory bandwidth, and scalability with NVIDIA® NVLink® and NVSwitch™, to tackle these workloads. Combined with InfiniBand, NVIDIA Magnum IO™, and the RAPIDS™ suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark for GPU-accelerated data analytics, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency.
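Enabling the RAPIDS Accelerator for Apache Spark is a plugin configuration rather than a code rewrite. A minimal PySpark sketch (the jar path and GPU resource amount below are illustrative placeholders for a real cluster setup):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("gpu-analytics")
        # RAPIDS Accelerator: plans eligible Spark SQL operators onto the GPU.
        .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
        .config("spark.rapids.sql.enabled", "true")
        .config("spark.executor.resource.gpu.amount", "1")
        .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark.jar")  # placeholder
        .getOrCreate()
    )
    spark.range(1_000_000).selectExpr("sum(id)").show()  # runs on GPU when eligible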

On a big data analytics benchmark, A100 80GB delivered insights with 83X higher throughput than CPUs and a 2X increase over A100 40GB, making it ideally suited for emerging workloads with exploding dataset sizes.


ENTERPRISE-READY UTILIZATION

A100 with MIG maximizes the utilization of GPU-accelerated infrastructure. With MIG, an A100 GPU can be partitioned into as many as seven independent instances, giving multiple users access to GPU acceleration. With A100 40GB, each MIG instance can be allocated up to 5GB, and with A100 80GB's increased memory capacity, that size is doubled to 10GB.

MIG works with Kubernetes, containers, and hypervisor-based server virtualization. MIG lets infrastructure managers offer a right-sized GPU with guaranteed quality of service (QoS) for every job, extending the reach of accelerated computing resources to every user.


NVIDIA A100

The flagship product of the NVIDIA data center platform for deep learning, HPC, and data analytics.

The platform accelerates over 700 HPC applications and every major deep learning framework. It's available everywhere, from desktops to servers to cloud services, delivering both dramatic performance gains and cost-saving opportunities.

Specifications

GPU Architecture: NVIDIA Ampere
Peak FP64: 9.7 TF
Peak FP64 Tensor Core: 19.5 TF
Peak FP32: 19.5 TF
Peak TF32 Tensor Core: 156 TF | 312 TF*
Peak BFLOAT16 Tensor Core: 312 TF | 624 TF*
Peak FP16 Tensor Core: 312 TF | 624 TF*
Peak INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
Peak INT4 Tensor Core: 1,248 TOPS | 2,496 TOPS*
GPU Memory: 80 GB
GPU Memory Bandwidth: 1,555 GB/s
Interconnect: NVIDIA NVLink 600 GB/s**; PCIe Gen4 64 GB/s
Multi-Instance GPU: Various instance sizes, with up to 7 MIGs @ 5 GB
Form Factor: PCIe
Max TDP Power: 250W
Delivered Performance of Top Apps: 90%

Notes

* With sparsity
** SXM GPUs via HGX A100 server boards; PCIe GPUs via NVLink Bridge for up to 2 GPUs
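To verify an installed card against this table, NVIDIA's NVML bindings expose the relevant fields. A short sketch using the pynvml package (assumes the NVIDIA driver and the nvidia-ml-py package are installed):

    import pynvml

    pynvml.nvmlInit()
    h = pynvml.nvmlDeviceGetHandleByIndex(0)
    name = pynvml.nvmlDeviceGetName(h)                  # e.g. an A100 product string
    mem = pynvml.nvmlDeviceGetMemoryInfo(h)             # totals reported in bytes
    tdp = pynvml.nvmlDeviceGetPowerManagementLimit(h)   # reported in milliwatts
    print(name, f"{mem.total / 1024**3:.0f} GiB", f"{tdp / 1000:.0f} W")
    pynvml.nvmlShutdown()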

Manufacturer Part No: 126S3000540
Brand: Leadtek
Product Type: -
UPC: -
Product Family: NVIDIA
Shipping Weight: 0.8 kg
PB Part No: VGALTK8101
Product Model: A100
Warranty: 36 months *

* Warranty period is as stated above unless the manufacturer has chosen to specify a longer period. All warranties are return to base unless otherwise specified.

Reviews

There are currently no reviews for this product

Do you own this product?

Write a review and go into the monthly draw to win a $100 PB Tech Gift Card!

Delivery & Pick-up

Potential courier delays: Please note that due to the high volumes currently within the NZ courier network there is potential for some deliveries to be delayed.

Delivery Estimates

The estimated time to ship for each product we sell is detailed on the individual product page just underneath the price. From when your items ship, products typically arrive within 1-2 working days for North Island deliveries and 2-3 working days for South Island deliveries. Rural deliveries may take an extra working day. Bulk & hazard deliveries may take an extra 2-4 working days.

Next Day Delivery

Need your order in a hurry? PB Tech offers next day delivery for local Auckland addresses. Simply place your order before 4pm (provided your items are in stock) and select our next day delivery option in the checkout. T&Cs apply. Learn more about Next Day Delivery.

Same Day Delivery

Need your order in a hurry? PB Tech offers same day delivery for Auckland, Hamilton, Wellington and Christchurch. Simply place your order before 1pm (provided your items are in stock) and select one of our same day delivery options, 'Evening Delivery' or 'Urgent 3-hour Delivery', in the checkout. T&Cs apply. Learn more about Same Day Delivery.

Shipping Costs

Shipping costs vary based on your location and the items being shipped, and in some cases shipping may even be FREE.

To calculate what the shipping costs will be for your order, add the items you are interested in to your cart, view the Shopping Cart page, and select your 'Delivery Area' to calculate the shipping cost.

Shipping Security & Insurance

All orders shipped by PB Tech are sent via a courier with a signature required for each delivery. In some cases, and only where you have given the courier company permission to leave orders at a designated location, your order may be delivered without requiring a signature. All orders sent by PB Tech are fully insured in the unlikely event that your item(s) are damaged or go missing in transit.

1 Hour Store Pick-up / Click & Collect

You can pick up your online order from any of our stores nationwide, and you can select which store at the checkout. Provided the store you select has stock and there are no hold-ups with payment, we will have your order ready within 1 hour (during normal trading hours for that store); otherwise it may take up to 5 working days to transfer the stock to the store so your order can be fulfilled. Learn more about our Click & Collect process.

Overseas Shipments

PB Tech regularly ships overseas to Australia and beyond. If you are located in Australia, you can order directly from our Australian site www.pbtech.com/au. If you are from another country, you can order from www.pbtech.com.

Returns & Warranty

7 day right of exchange

If you change your mind after making a purchase, or realise you have ordered the incorrect item, you can enjoy the peace of mind that we offer a 7 day exchange policy.

To exchange a product, goods must be sealed and unopened, with packaging in original condition, accompanied by a valid receipt dated no more than 7 calendar days from when you request an exchange. A 20% restocking fee may be applied where goods are not returned in sealed, original packaging condition.

If there is not a suitable product that can be exchanged for your returned item you will be offered a credit on your account or gift card based on the value paid at the time of purchase. Items purchased on finance cannot be exchanged for a gift card.


Hassle free warranty service

If your product develops a fault within the manufacturer warranty period, you can either contact the manufacturer directly (some manufacturers provide a high level of warranty service, including free pickup or, in some cases, onsite repair), or return it to one of our service centres / stores. Where the product has been directly imported by PB Tech, you need to contact us directly or present the product at any one of our service centres / stores together with your proof of purchase.

If your product develops a fault outside of the manufacturer warranty or PB Tech warranty period, we offer a full repair service and are an authorised repair agent for leading brands such as Samsung, HP, Toshiba, Lenovo and more.


Returning a product / making a warranty claim

To contact the manufacturer directly to troubleshoot your product or to request a warranty repair, please view the list of manufacturer / brand warranty contacts (for products imported directly by PB Tech please return to us directly by completing our request a return form).

To return a product to PB Tech directly, please complete our request a return form.

Or view our returns policy for more information.

Branch Stock On Display (New)

  • Auckland - Albany: 0
  • Auckland - Glenfield: 0
  • Auckland - Queen Street: 0
  • Auckland - Auckland Uni: 0
  • Auckland - Newmarket: 0
  • Auckland - Westgate: 0
  • Auckland - Penrose: 0
  • Auckland - Henderson (Express): 0
  • Auckland - St Lukes: 0
  • Auckland - Manukau: 0
  • Hamilton: 0
  • Tauranga: 0
  • New Plymouth: 0
  • Palmerston North: 0
  • Petone: 0
  • Wellington: 0
  • Auckland - Head Office: 0
  • Auckland - East Tamaki Warehouse: 0
  • Christchurch - Hornby: 0
  • Christchurch - Christchurch Central: 0
  • Dunedin: 0

Date Created: 14:50, 11-09-2025
Product URL: https://www.pbtech.co.nz/product/VGALTK8101/Leadtek-NVIDIA-A100-80GB-16TBsec-Memory-Bandwidth?srsltid=AfmBOopMcWqwPOTHOV44D6SyhhqCsvtjSTfhco1VEx1jVtZQBm-Xjwsp