datacenter AI News List | Blockchain.News

List of AI News about datacenter

Time Details
2026-03-27
14:36
Meta Ray Ban AI Glasses Leak, $10B Texas Datacenter Push, and Shield AI’s $12.7B Valuation: 2026 AI Business Analysis

According to TheRundownAI, Meta’s next-generation Ray-Ban AI glasses appeared in FCC filings, signaling imminent hardware with on-device AI and improved connectivity that could accelerate multimodal assistant adoption in consumer wearables; the filings indicate pre-launch compliance steps. TheRundownAI also reports that Meta is investing $10 billion in a Texas mega datacenter, a move consistent with hyperscale AI infrastructure expansion to train and serve large-scale foundation models and recommendation systems; the spend reflects intensifying GPU and power procurement, with potential benefits for AI inference latency in North America. Defense startup Shield AI has reached a $12.7 billion valuation, per TheRundownAI, underscoring rising demand for autonomous systems and AI-powered mission autonomy software across defense and dual-use markets, and positioning the company to scale swarming, navigation, and edge inference capabilities. TheRundownAI further notes that Elon Musk aims to take SpaceX public on his own terms; while not directly an AI story, SpaceX’s satellite and launch scale can support AI edge connectivity and global data backhaul for inference workloads. Taken together, these moves highlight 2026 AI trends: multimodal assistants in smart glasses, hyperscale datacenter buildouts for training and inference, and defense autonomy platforms reaching unicorn-plus scale.

Source
2026-03-19
18:49
Nvidia CEO Jensen Huang Discusses Orbital Datacenters: Cooling Limits, Radiation Surfaces, and AI Infrastructure Outlook

According to Sawyer Merritt on X, Nvidia CEO Jensen Huang said orbital datacenters face a core thermal challenge: space offers no convection and little practical conduction, leaving only radiative cooling, which demands very large surface areas; he noted, however, that it is not impossible to engineer around these limits. Huang’s comments imply that any space-based AI compute would require novel heat-rejection architectures (e.g., deployable radiators) and power-density tradeoffs, affecting GPU packaging, interconnect choices, and uptime assumptions for large-scale training. Per the interview clip shared by Merritt, this could shift investment toward thermal management R&D, lightweight materials, and modular radiator designs, while favoring compute architectures optimized for lower waste heat per FLOP, influencing future Nvidia data center roadmaps and partner ecosystems.
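To see why radiative cooling "demands very large surface areas," a back-of-envelope Stefan-Boltzmann calculation helps. The sketch below is illustrative only: the 1 MW load, 0.9 emissivity, and 300 K radiator temperature are assumed figures, not numbers from Huang's remarks, and sunlight and Earth albedo absorption are ignored, so the result is a lower bound on radiator size.

```python
# Back-of-envelope radiator sizing for an orbital datacenter,
# using the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# All input numbers are illustrative assumptions.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(waste_heat_w: float, emissivity: float, temp_k: float) -> float:
    """Radiating area needed to reject waste_heat_w at surface temp temp_k.

    Ignores absorbed sunlight and Earth albedo, so this is a lower bound.
    """
    return waste_heat_w / (emissivity * SIGMA * temp_k ** 4)

# Assume a 1 MW GPU cluster, emissivity 0.9, radiator surface at 300 K.
area = radiator_area_m2(1e6, 0.9, 300.0)
print(f"~{area:,.0f} m^2 of radiator")  # on the order of 2,400 m^2
```

Even under these generous assumptions, a single megawatt of compute needs radiators roughly the size of a soccer pitch, which is why Huang points to deployable radiators and lower waste heat per FLOP as the engineering levers.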

Source