
FLASHATTENTION-4

FlashAttention-4 Hits 71% GPU Utilization on NVIDIA Blackwell B200

Together AI's FlashAttention-4 achieves 1,605 TFLOPS on B200 GPUs, up to 2.7x faster than Triton. A new pipelining scheme overcomes bottlenecks caused by asymmetric hardware scaling.
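
The 71% utilization figure in the headline is consistent with the 1,605 TFLOPS number. As a rough sanity check (a sketch assuming a B200 dense BF16 peak of about 2,250 TFLOPS, a commonly cited spec that the article itself does not state), dividing achieved throughput by peak gives roughly that utilization:

# Back-of-the-envelope check of the reported utilization figure.
# Assumption: B200 dense BF16 peak of ~2,250 TFLOPS; the article does not
# say which peak the 71% figure is measured against.
achieved_tflops = 1605.0  # reported FlashAttention-4 throughput on B200
peak_tflops = 2250.0      # assumed B200 dense BF16 peak
utilization = achieved_tflops / peak_tflops
print(f"Estimated GPU utilization: {utilization:.1%}")  # prints 71.3%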

FlashAttention-4 Hits 1,605 TFLOPS on NVIDIA Blackwell GPUs

Together AI's FlashAttention-4 achieves 71% hardware efficiency on NVIDIA Blackwell chips, delivering a 3.6x speedup over FlashAttention-2 for AI training workloads.