LLM
NVIDIA Launches DriveOS LLM SDK for Autonomous Vehicle Innovation
NVIDIA introduces the DriveOS LLM SDK to facilitate the deployment of large language models in autonomous vehicles, enhancing AI-driven applications with optimized performance.
OpenEvals Simplifies LLM Evaluation Process for Developers
LangChain introduces OpenEvals and AgentEvals to streamline evaluation processes for large language models, offering pre-built tools and frameworks for developers.
Exploring LLM Red Teaming: A Crucial Aspect of AI Security
LLM red teaming involves testing AI models to identify vulnerabilities and ensure security. Learn about its practices, motivations, and significance in AI development.
NVIDIA Introduces Nemotron-CC: A Massive Dataset for LLM Pretraining
NVIDIA debuts Nemotron-CC, a 6.3-trillion-token English dataset, enhancing pretraining for large language models with innovative data curation methods.
Enhancing AI Workflow Security with WebAssembly Sandboxing
Explore how WebAssembly provides a secure environment for executing AI-generated code, mitigating risks and enhancing application security.
NVIDIA Megatron-LM Powers 172 Billion Parameter LLM for Japanese Language Proficiency
NVIDIA's Megatron-LM aids in developing a 172-billion-parameter large language model focused on Japanese language capabilities, enhancing AI's multilingual proficiency.
Optimizing LLMs: Enhancing Data Preprocessing Techniques
Explore data preprocessing techniques essential for improving large language model (LLM) performance, focusing on quality enhancement, deduplication, and synthetic data generation.
Innovative SCIPE Tool Enhances LLM Chain Fault Analysis
SCIPE offers developers a powerful tool to analyze and improve performance in LLM chains by identifying problematic nodes and enhancing decision-making accuracy.
NVIDIA Develops RAG-Based LLM Workflows for Enhanced AI Solutions
NVIDIA is advancing AI capabilities by developing RAG-based question-and-answer LLM workflows, offering insights into system architecture and performance improvements.
The Crucial Role of Communication in AI and LLM Development
Explore the significance of communication in AI and LLM applications, highlighting the importance of prompt engineering, agent frameworks, and UI/UX innovations.
LangChain Celebrates Two Years: Reflecting on Milestones and Future Directions
LangChain marks its second anniversary, highlighting its evolution from a Python package to a leading company in LLM applications, and introduces LangSmith and LangGraph.
Boosting LLM Performance on RTX: Leveraging LM Studio and GPU Offloading
Explore how GPU offloading with LM Studio enables efficient local execution of large language models on RTX-powered systems, improving the performance of AI applications.
NVIDIA Unveils Llama 3.1-Nemotron-70B-Reward to Enhance AI Alignment with Human Preferences
NVIDIA introduces Llama 3.1-Nemotron-70B-Reward, a leading reward model that improves AI alignment with human preferences using RLHF, topping the RewardBench leaderboard.
NVIDIA and Outerbounds Revolutionize LLM-Powered Production Systems
NVIDIA and Outerbounds collaborate to streamline the development and deployment of LLM-powered production systems with advanced microservices and MLOps platforms.
Ollama Enables Local Running of Llama 3.2 on AMD GPUs
Ollama makes it easier to run Meta's Llama 3.2 model locally on AMD GPUs, offering support for both Linux and Windows systems.