Search Results for "learning"
NVIDIA Unveils Pruning and Distillation Techniques for Efficient LLMs
NVIDIA introduces structured pruning and distillation methods to create efficient language models, significantly reducing resource demands while maintaining performance.
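The two techniques named here can be sketched in a few lines: structured pruning removes whole units (here, weight-matrix rows) ranked by L2 norm, and distillation trains the smaller model against the larger one's temperature-softened output distribution via a KL-divergence loss. This is a generic pure-Python illustration of the ideas, not NVIDIA's published recipe; all function names are hypothetical.

```python
import math

def l2_norm(row):
    return math.sqrt(sum(w * w for w in row))

def structured_prune(weight, keep_ratio):
    """Structured pruning sketch: drop whole rows (output neurons)
    with the smallest L2 norms, keeping the top keep_ratio fraction."""
    n_keep = max(1, int(len(weight) * keep_ratio))
    ranked = sorted(range(len(weight)),
                    key=lambda i: l2_norm(weight[i]), reverse=True)
    keep = sorted(ranked[:n_keep])          # preserve original row order
    return [weight[i] for i in keep]

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the classic knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Tiny demo: keep the 2 highest-norm rows of a 4-row weight matrix,
# then score a student against a teacher on one set of logits.
weight = [[0.9, 0.8], [0.01, 0.02], [0.5, -0.6], [0.001, 0.0]]
pruned = structured_prune(weight, keep_ratio=0.5)
loss = distillation_loss([2.0, 0.5, -1.0], [1.8, 0.7, -0.9])
```

In a real pipeline the pruned model would then be fine-tuned with this distillation loss (often combined with the ordinary task loss) to recover accuracy.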
OpenAI Introduces Fine-Tuning for GPT-4o to Enhance Application Performance
OpenAI launches fine-tuning for GPT-4o, allowing users to customize and improve model performance and accuracy for specific applications.
NVIDIA Opens Applications for $60,000 Graduate Fellowship Awards
NVIDIA's Graduate Fellowship Program is now accepting applications, offering up to $60,000 for doctoral students in AI, machine learning, and more.
Enhancing Recommender Systems with Co-Visitation Matrices and RAPIDS cuDF
Learn how to build efficient recommender systems using co-visitation matrices and RAPIDS cuDF for faster data processing and improved personalization.
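A co-visitation matrix just counts, for each item, how often every other item appears in the same session; the most co-visited items become candidate recommendations. The pure-Python sketch below shows the counting logic; in practice RAPIDS cuDF performs the equivalent groupby/merge aggregation on the GPU over much larger clickstreams. Function and variable names here are illustrative.

```python
from collections import defaultdict
from itertools import permutations

def co_visitation_matrix(sessions, top_k=2):
    """For each item, count co-occurrences with every other item in the
    same session, then keep the top_k most co-visited items as candidates.
    Ties are broken alphabetically for determinism."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        # set() so repeated views within one session count once per pair
        for a, b in permutations(set(session), 2):
            counts[a][b] += 1
    return {
        item: [other for other, _ in
               sorted(nbrs.items(), key=lambda kv: (-kv[1], kv[0]))[:top_k]]
        for item, nbrs in counts.items()
    }

# Tiny demo: three browsing sessions over a toy catalog.
sessions = [["shoes", "socks"],
            ["shoes", "socks", "laces"],
            ["shoes", "laces"]]
recs = co_visitation_matrix(sessions)
```

The cuDF version of this workload replaces the nested dictionaries with a pair-explosion DataFrame plus `groupby(...).count()`, which is where the GPU speedup comes from.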
TEAL Introduces Training-Free Activation Sparsity to Boost LLM Efficiency
TEAL offers a training-free approach to activation sparsity, significantly enhancing the efficiency of large language models (LLMs) with minimal accuracy degradation.
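Training-free activation sparsity boils down to zeroing out low-magnitude activations at inference time, so the matrix-multiply work tied to those entries can be skipped, with no retraining required. The sketch below uses a simple per-tensor magnitude quantile as the threshold; that choice is an illustrative assumption, not TEAL's actual per-layer calibration, and the function name is hypothetical.

```python
def sparsify_activations(activations, sparsity=0.5):
    """Zero the lowest-magnitude fraction of activations.
    sparsity=0.5 means half the entries become exact zeros,
    letting a sparse kernel skip the corresponding weight columns."""
    n_zero = int(len(activations) * sparsity)
    if n_zero == 0:
        return list(activations)
    # Magnitude threshold: the n_zero-th smallest absolute value.
    threshold = sorted(abs(a) for a in activations)[n_zero - 1]
    return [0.0 if abs(a) <= threshold else a for a in activations]

# Demo: at 50% sparsity, the two smallest-magnitude entries are dropped.
dense = [0.9, -0.05, 0.4, 0.01]
sparse = sparsify_activations(dense, sparsity=0.5)
```

The "training-free" part is that the thresholds are set by calibrating on activation statistics alone; the model weights are never updated.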
Gemini Introduces New Features to Enhance Student Learning
Gemini's new features aim to help students study smarter with resources like OpenStax. Available for students 18 years and older.
Innovators Harry Grieve and Ben Fielding Discuss Building Gensyn
Harry Grieve and Ben Fielding discuss the significance of building Gensyn, a decentralized machine learning compute protocol, and its impact on the future of tech.
Mistral AI Unveils Pixtral 12B: A Groundbreaking Multimodal Model
Mistral AI introduces Pixtral 12B, a state-of-the-art multimodal model excelling in text and image tasks, with notable performance in instruction following and reasoning.
IBM Unveils Breakthroughs in PyTorch for Faster AI Model Training
IBM Research reveals advancements in PyTorch, including a high-throughput data loader and enhanced training throughput, aiming to revolutionize AI model training.
Enhancing LLMs: Memory Augmentation Shows Promise
IBM Research explores memory augmentation techniques to improve large language models (LLMs), enhancing accuracy and efficiency without retraining.
NVIDIA Unveils Llama 3.1-Nemotron-70B-Reward to Enhance AI Alignment with Human Preferences
NVIDIA introduces Llama 3.1-Nemotron-70B-Reward, a leading reward model that improves AI alignment with human preferences using RLHF, topping the RewardBench leaderboard.
NVIDIA Modulus Revolutionizes CFD Simulations with Machine Learning
NVIDIA Modulus is transforming computational fluid dynamics by integrating machine learning, offering significant computational efficiency and accuracy enhancements for complex fluid simulations.