Search Results for "llms"
The Impact of AI and LLMs on the Future of Cybersecurity
An exploration into the transformative potential of generative AI and LLMs in the cybersecurity realm.
IBM and Red Hat Introduce InstructLab for Collaborative LLM Customization
IBM and Red Hat launch InstructLab, enabling collaborative LLM customization without full retraining.
NVIDIA NeMo Enhances Customization of Large Language Models for Enterprises
NVIDIA NeMo enables enterprises to customize large language models for domain-specific needs, enhancing deployment efficiency and performance.
Oracle Introduces In-Database LLMs and Automated Vector Store with HeatWave GenAI
Oracle's HeatWave GenAI now offers in-database LLMs and an automated vector store, enabling generative AI applications without requiring AI expertise or incurring additional cost.
NVIDIA NIM Enhances Multilingual LLM Deployment
NVIDIA NIM introduces support for multilingual large language models, improving global business communication and efficiency with LoRA-tuned adapters.
NVIDIA Explores Cyber Language Models to Enhance Cybersecurity
NVIDIA's research into cyber language models aims to address cybersecurity challenges by training models on raw cyber logs, enhancing threat detection and defense.
AMD Instinct MI300X Accelerators Boost Performance for Large Language Models
AMD's MI300X accelerators, with high memory bandwidth and capacity, enhance the performance and efficiency of large language models.
Circle and Berkeley Utilize AI for Blockchain Transactions with TXT2TXN
Circle and Blockchain at Berkeley introduce TXT2TXN, an AI-driven tool that uses large language models to simplify blockchain transactions through intent-based applications.
Anyscale Explores Direct Preference Optimization Using Synthetic Data
Anyscale's latest blog post examines Direct Preference Optimization (DPO) with synthetic data, outlining its methodology and applications in tuning language models.
AI21 Labs Unveils Jamba 1.5 LLMs with Hybrid Architecture for Enhanced Reasoning
AI21 Labs introduces Jamba 1.5, a new family of large language models leveraging hybrid architecture for superior reasoning and long context handling.
NVIDIA NIM Microservices Enhance LLM Inference Efficiency at Scale
NVIDIA NIM microservices optimize throughput and latency for large language models, improving efficiency and user experience for AI applications.
NVIDIA GH200 NVL32: Revolutionizing Time-to-First-Token Performance with NVLink Switch
NVIDIA's GH200 NVL32 system shows significant improvements in time-to-first-token performance for large language models, enhancing real-time AI applications.